Alternatively, we can create different subplots for each time-series
|
fig, ax = plt.subplots(2, 2)
# ax is now an array!
ax[0, 0].plot(data[32, 32, 15, :])
ax[0, 1].plot(data[32, 32, 14, :])
ax[1, 0].plot(data[32, 32, 13, :])
ax[1, 1].plot(data[32, 32, 12, :])
ax[1, 0].set_xlabel('Time (TR)')
ax[1, 1].set_xlabel('Time (TR)')
ax[0, 0].set_ylabel('MRI signal (a.u.)')
ax[1, 0].set_ylabel('MRI signal (a.u.)')
# Note that we now set the title through the fig object!
fig.suptitle('Time-series from a few voxels')
fig.set_size_inches([12, 6])
|
beginner-python/002-plots.ipynb
|
ohbm/brain-hacking-101
|
apache-2.0
|
Another kind of plot is an image. For example, we can take a look at the mean and standard deviation of the time-series for one entire slice:
|
fig, ax = plt.subplots(1, 2)
# We'll use a reasonable colormap, and no smoothing:
ax[0].matshow(np.mean(data[:, :, 15], -1), cmap=mpl.cm.hot)
ax[0].axis('off')
ax[1].matshow(np.std(data[:, :, 15], -1), cmap=mpl.cm.hot)
ax[1].axis('off')
fig.set_size_inches([12, 6])
# You can save the figure to file:
fig.savefig('mean_and_std.png')
|
beginner-python/002-plots.ipynb
|
ohbm/brain-hacking-101
|
apache-2.0
|
There are many other kinds of figures you could create:
|
fig, ax = plt.subplots(2, 2)
# Note the use of `ravel` to create a 1D array:
ax[0, 0].hist(np.ravel(data))
ax[0, 0].set_xlabel("fMRI signal")
ax[0, 0].set_ylabel("# voxels")
# Bars are 0.8 wide:
ax[0, 1].bar([0.6, 1.6, 2.6, 3.6], [np.mean(data[:, :, 15]), np.mean(data[:, :, 14]), np.mean(data[:, :, 13]), np.mean(data[:, :, 12])])
ax[0, 1].set_ylabel("Average signal in the slice")
ax[0, 1].set_xticks([1,2,3,4])
ax[0, 1].set_xticklabels(["15", "14", "13", "12"])
ax[0, 1].set_xlabel("Slice #")
# Compares subsequent time-points:
ax[1, 0].scatter(data[:, :, 15, 0], data[:, :, 15, 1])
ax[1, 0].set_xlabel("fMRI signal (time-point 0)")
ax[1, 0].set_ylabel("fMRI signal (time-point 1)")
# `.T` denotes a transposition
ax[1, 1].boxplot(data[32, 32].T)
fig.set_size_inches([12, 12])
ax[1, 1].set_xlabel("Position")
ax[1, 1].set_ylabel("fMRI signal")
|
beginner-python/002-plots.ipynb
|
ohbm/brain-hacking-101
|
apache-2.0
|
\newpage
About me
Research Fellow, University of Nottingham: orcid
Director, Geolytics Limited - A spatial data analytics consultancy
About this presentation
Available on GitHub - https://github.com/AntArch/Presentations_Github/
Fully referenced PDF
\newpage
A potted history of mapping
In the beginning was the geoword
and the word was cartography
\newpage
Cartography was king. Static representations of spatial knowledge with the cartographer deciding what to represent.
\newpage
And then there was data .........
\newpage
Restrictive data
\newpage
Making data interoperable and open
\newpage
Technical interoperability - levelling the field
\newpage
Facilitating data-driven visualization
From Map to Model: the changing paradigm of map creation, from cartography to data-driven visualization
\newpage
What about non-technical interoperability issues?
Issues surrounding non-technical interoperability include:
Policy interoperability
Licence interoperability
Legal interoperability
Social interoperability
We will focus on licence interoperability
\newpage
There is a multitude of formal and informal data.
\newpage
Each of these data objects can be licensed in a different way. This shows some of the licences described by the RDFLicence ontology.
\newpage
What is a licence?
Wikipedia states:
A license may be granted by a party ("licensor") to another party ("licensee") as an element of an agreement between those parties.
A shorthand definition of a license is "an authorization (by the licensor) to use the licensed material (by the licensee)."
Concepts (derived from Formal Concept Analysis) surrounding licences
\newpage
Two lead organisations have developed legal frameworks for content licensing:
Creative Commons (CC) and
Open Data Commons (ODC).
Until the release of CC version 4, published in November 2013, the CC licence did not cover data. Between them, CC and ODC licences can cover all forms of digital work.
There are many other licence types
Many are bespoke
Bespoke licences are difficult to manage
Many legacy datasets have bespoke licences
I'll describe CC in more detail
\newpage
Creative Commons Zero
Creative Commons Zero (CC0) is essentially public domain which allows:
Reproduction
Distribution
Derivations
Constraints on CC licences
The following clauses constrain the other CC licences (CC0 itself carries none of them):
Permissions
ND – No derivatives: the licensee cannot derive new content from the resource.
Requirements
BY – By attribution: the licensee must attribute the source.
SA – Share-alike: if the licensee adapts the resource, it must be released under the same licence.
Prohibitions
NC – Non commercial: the licensee must not use the work commercially without prior approval.
CC license combinations
License|Reproduction|Distribution|Derivation|ND|BY|SA|NC
----|----|----|----|----|----|----|----
CC0|X|X|X||||
CC-BY-ND|X|X||X|X||
CC-BY-NC-ND|X|X||X|X||X
CC-BY|X|X|X||X||
CC-BY-SA|X|X|X||X|X|
CC-BY-NC|X|X|X||X||X
CC-BY-NC-SA|X|X|X||X|X|X
Table: Creative Commons license combinations
\newpage
Why are licenses important?
They tell you what you can and can't do with 'stuff'
Very significant when multiple datasets are combined
It then becomes an issue of license compatibility
\newpage
Which is important when we mash up data
Certain licences when combined:
Are incompatible
Creating data islands
Inhibit commercial exploitation (NC)
Force the adoption of certain licences
If you want people to commercially exploit your stuff don't incorporate CC-BY-NC-SA data!
Stops the derivation of new works
A conceptual licence processing workflow. The licence processing service analyses the incoming licence metadata and determines if the data can be legally integrated and any resulting licence implications for the derived product.
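As a toy illustration of such a licence-processing service, the table above can be encoded and queried in a few lines of Python (the rules below are simplified assumptions for illustration, not legal analysis):

```python
# Simplified pairwise compatibility check for a few CC licences.
# The clause table and the rules are illustrative assumptions, not legal advice.
LICENCES = {
    "cc0":         {"NC": False, "SA": False, "ND": False},
    "cc-by":       {"NC": False, "SA": False, "ND": False},
    "cc-by-sa":    {"NC": False, "SA": True,  "ND": False},
    "cc-by-nc":    {"NC": True,  "SA": False, "ND": False},
    "cc-by-nc-sa": {"NC": True,  "SA": True,  "ND": False},
    "cc-by-nd":    {"NC": False, "SA": False, "ND": True},
}

def can_combine(a, b):
    """Return (ok, reason) for deriving a new work from data under licences a and b."""
    la, lb = LICENCES[a], LICENCES[b]
    # ND forbids derivative works altogether.
    if la["ND"] or lb["ND"]:
        return False, "an ND licence blocks derivative works"
    # Two different share-alike licences each demand their own licence on the result.
    if la["SA"] and lb["SA"] and a != b:
        return False, "conflicting share-alike requirements"
    return True, "compatible"

print(can_combine("cc-by-sa", "cc-by-nc-sa"))  # the data-island case discussed below
print(can_combine("cc0", "cc-by"))
```

This reproduces the incompatibility highlighted later: nothing can simultaneously carry cc-by-sa and cc-by-nc-sa.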
\newpage
A rudimentary logic example
```text
Data1 hasDerivedContentIn NewThing.
Data1 hasLicence a cc-by-sa.
What hasLicence a cc-by-sa? #reason here
If X hasDerivedContentIn Y and hasLicence Z then Y hasLicence Z. #reason here
Data2 hasDerivedContentIn NewThing.
Data2 hasLicence a cc-by-nc-sa.
What hasLicence a cc-by-nc-sa? #reason here
Nothing hasLicence a cc-by-nc-sa and hasLicence a cc-by-sa. #reason here
```
And processing this within the Protege reasoning environment
|
from IPython.display import YouTubeVideo
YouTubeVideo('jUzGF401vLc')
|
20150916_OGC_Reuse_under_licence/.ipynb_checkpoints/20150916_OGC_Reuse_under_licence-checkpoint_conflict-20150915-181258.ipynb
|
AntArch/Presentations_Github
|
cc0-1.0
|
Extraction rate vs. extraction size
Current default is 25 spectra x 50 wavelengths extracted at a time.
On both KNL and Haswell, we could do better with smaller sub-extractions.
|
xlabels = sorted(set(hsw['nspec']))
ylabels = sorted(set(knl['nwave']))
set_cmap('viridis')
figure(figsize=(12,4))
subplot(121); plotimg(rate_hsw, xlabels, ylabels); title('Haswell rate')
subplot(122); plotimg(rate_knl, xlabels, ylabels); title('KNL rate')
|
doc/extract-size.ipynb
|
sbailey/knltest
|
bsd-3-clause
|
3x improvement is possible
Going to 5-10 spectra x 20 wavelengths gains a factor of 3x speed on both KNL and Haswell.
|
figure(figsize=(12,4))
subplot(121); plotimg(rate_hsw/rate_hsw[-1,-1], xlabels, ylabels); title('Haswell rate improvement')
subplot(122); plotimg(rate_knl/rate_knl[-1,-1], xlabels, ylabels); title('KNL rate improvement')
|
doc/extract-size.ipynb
|
sbailey/knltest
|
bsd-3-clause
|
Haswell to KNL performance
The best parameters for Haswell are ~7x faster than the best parameters for KNL,
and for a given extraction size (nspec,nwave), Haswell is 5x-8x faster than KNL
per process.
|
r = np.max(rate_hsw) / np.max(rate_knl)
print("Haswell/KNL = {}".format(r))
plotimg(rate_hsw/rate_knl, xlabels, ylabels)
title('Haswell / KNL performance')
|
doc/extract-size.ipynb
|
sbailey/knltest
|
bsd-3-clause
|
First we look into the era5-pds bucket's zarr folder to find out what variables are available. Assuming that all the variables are available for all the years, we inspect the data for one arbitrary year-month.
|
bucket = 'era5-pds'
# Make sure you provide / at the end
prefix = 'zarr/2008/01/data/'
client = boto3.client('s3')
result = client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')
for o in result.get('CommonPrefixes'):
    print(o.get('Prefix'))
client = Client()
client
fs = s3fs.S3FileSystem(anon=False)
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
Here we define some functions to read in zarr data.
|
def inc_mon(indate):
    if indate.month < 12:
        return datetime.datetime(indate.year, indate.month + 1, 1)
    else:
        return datetime.datetime(indate.year + 1, 1, 1)

def gen_d_range(start, end):
    rr = []
    while start <= end:
        rr.append(start)
        start = inc_mon(start)
    return rr

def get_z(dtime, var):
    f_zarr = 'era5-pds/zarr/{year}/{month:02d}/data/{var}.zarr/'.format(
        year=dtime.year, month=dtime.month, var=var)
    return xr.open_zarr(s3fs.S3Map(f_zarr, s3=fs))

def gen_zarr_range(start, end, var):
    return [get_z(tt, var) for tt in gen_d_range(start, end)]
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
This is where we read in the data. We need to define the time range and variable name. In this example, we also choose to select only the area over Australia.
|
%%time
tmp_a = gen_zarr_range(datetime.datetime(1979,1,1), datetime.datetime(2020,3,31),'air_temperature_at_2_metres')
tmp_all = xr.concat(tmp_a, dim='time0')
tmp = tmp_all.air_temperature_at_2_metres.sel(lon=slice(110, 160), lat=slice(-10, -45)) - 273.15  # Kelvin to Celsius
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
Here we read in another variable, this time only for a single month, since we only need it for masking.
|
sea_data = gen_zarr_range(datetime.datetime(2018,1,1), datetime.datetime(2018,1,1),'sea_surface_temperature')
sea_data_all = xr.concat(sea_data, dim='time0').sea_surface_temperature.sel(lon=slice(110,160),lat=slice(-10,-45))
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
We decided to use sea surface temperature data for making a sea-land mask.
|
sea_data_all0 = sea_data_all[0].values
mask = np.isnan(sea_data_all0)
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
Mask out the data over the sea. To compute average temperatures over land, it is important to mask out data over the ocean.
|
tmp_masked = tmp.where(mask)
tmp_mean = tmp_masked.mean('time0').compute()
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
Now we plot the all-time (1980-2019) average temperature over Australia. This time we use only xarray's plotting tools.
|
ax = plt.axes(projection=ccrs.Orthographic(130, -20))
tmp_mean.plot.contourf(ax=ax, transform=ccrs.PlateCarree())
ax.set_global()
ax.coastlines();
plt.draw()
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
Now we compute the yearly average temperature over the Australian land area.
|
yearly_tmp_AU = tmp_masked.groupby('time0.year').mean('time0').mean(dim=['lon','lat'])
f, ax = plt.subplots(1, 1)
yearly_tmp_AU.plot.line();
plt.draw()
|
api-examples/ERA5_zarr_example.ipynb
|
planet-os/notebooks
|
mit
|
Topic 4: Data Structures in Python
4.1 Sequences
Strings, tuples, and lists are all considered sequences in Python, which is why there are many operations that work on all three of them.
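For instance, length, indexing, slicing, and membership testing behave uniformly across all three sequence types:

```python
# The same sequence operations work on strings, tuples, and lists.
s = "abcde"
t = (1, 2, 3, 4, 5)
l = ['a', 'b', 'c', 'd', 'e']

for seq in (s, t, l):
    print(len(seq))    # all have a length
    print(seq[0])      # all support indexing...
    print(seq[1:3])    # ...and slicing (which returns the same type)
print('c' in s, 3 in t, 'e' in l)  # membership testing works on all three
```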
4.1.1 Iterating
|
# When at the top of a loop, the 'in' keyword in Python will iterate through all of the sequence's
# members in order. For strings, members are individual characters; for lists and tuples, they're
# the items contained.
# Task: Given a list of lowercase words, print whether the word has a vowel. Example: if the input is
# ['rhythm', 'owl', 'hymn', 'aardvark'], you should output the following:
# rhythm has no vowels
# owl has a vowel
# hymn has no vowels
# aardvark has a vowel
# HINT: The 'in' keyword can also test whether something is a member of another object.
# Also, don't forget about break and continue!
vowels = ['a', 'e', 'i', 'o', 'u']
words = ['rhythm', 'owl', 'hymn', 'aardvark']
for word in words:
    has_vowel = False
    for letter in word:
        if letter in vowels:
            has_vowel = True
            break  # Not necessary, but more efficient
    if has_vowel:
        print(word, 'has a vowel')
    else:
        print(word, 'has no vowels')

# Given a tuple, write a program to check if the value at index i is equal to the square of i.
# Example: If the input is nums = (0, 2, 4, 6, 8), then the desired output is
#
# True
# False
# True
# False
# False
#
# Because nums[0] = 0^2 and nums[2] = 4 = 2^2. HINT: Use enumerate!
nums = (0, 2, 4, 6, 8)
for i, num in enumerate(nums):
    print(num == i * i)

# Equivalent solution using range-based indexing:
for i in range(len(nums)):
    num = nums[i]
    print(num == i * i)
|
events/code-at-night/archive/python_talk/intro_to_python_soln.ipynb
|
PrincetonACM/princetonacm.github.io
|
mit
|
Create an 'economy' object with N skill-level groups.
|
N = 10  # number of skill-level groups
E = Economy(N)
E.GAMMA = 0.8
E.ALPHA = 0.6
Xbar = [E.TBAR, E.LBAR]
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Let's summarize the parameters as they now stand:
|
E.print_params()
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
When Lucas = False, farms don't have to specialize in farm management. When True, then as in Lucas (1978), household labor must be dedicated to management and labor must be hired.
Assumed skill distribution
Let's assume population is uniform across the N groups but skill is rising.
|
E.s = np.linspace(1, 5, num=N)
x = np.arange(N)  # group index for plotting
plt.title('skill distribution')
plt.xlabel('group index')
plt.plot(x, E.s, marker='o');
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Not a Lucas economy
|
E.Lucas = False
rwc, (Tc,Lc) = E.smallhold_eq([E.TBAR,E.LBAR],E.s)
rwc
plt.title('Eqn factor use')
plt.xlabel('group index')
plt.plot(x,Tc,marker='o');
plt.plot(x,Lc,marker='o');
plt.title('induced farm size (factor use) distribution')
plt.plot(Tc,marker='o')
plt.plot(Lc, marker='x');
E.excessD(rwc,Xbar,E.s) # should be essentially 0
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
A Lucas Economy
|
E.Lucas = True
rwc_L, (Tc_L,Lc_L) = E.smallhold_eq([E.TBAR,E.LBAR],E.s)
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
In the Lucas equilibrium there is less unskilled labor (since managers cannot be laborers) so all else equal we would expect higher wages.
|
rwc, rwc_L
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
In this sample equilibrium the two lowest skill groups become pure laborers.
|
plt.title('Eqn factor use')
plt.xlabel('group index')
plt.plot(x,Tc_L, marker='o', linestyle='None',label='land')
plt.plot(x,Lc_L, marker='o', linestyle='None', label='labor')
plt.legend();
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Note that now about 50% of labor is going into management. However, this is an artifact of the uniform distribution of skill, which puts so much population in the higher skill groups.
|
E.LBAR-sum(Lc_L)
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Note that the two economies (one Lucas style the other non-Lucas) are not comparable because they have fundamentally different production technologies.
|
E.prodn([Tc_L,Lc_L],E.s)
sum(E.prodn([Tc_L,Lc_L],E.s))
E.prodn([Tc,Lc],E.s)
sum(E.prodn([Tc,Lc],E.s))
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Just out of interest, how much lower would output be in the non-Lucas economy if every household were self-sufficient?
|
Tce = np.ones(N)*(E.TBAR/N)
Lce = np.ones(N)*(E.LBAR/N)
sum(E.prodn([Tce,Lce],E.s))
E.prodn([Tce,Lce],E.s)
sum(rwc*[E.TBAR/N, E.LBAR/N]) + E.prodn([Tce, Lce], E.s) - rwc*[Tc, Lc]
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Market Power distortions
|
(Xrc,Xr,wc,wr) = scene_print(E,10, detail=True)
factor_plot(E,Xrc,Xr)
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Lucas = True
|
E.Lucas = True
rwcl, (Tcl,Lcl) = E.smallhold_eq([E.TBAR,E.LBAR],E.s)
Lcl
E.excessD(rwcl,Xbar,E.s)
plt.title('Competitive: Induced farm size (factor use) distribution')
plt.plot(Tcl,marker='o',label='land (Lucas)')
plt.plot(Lcl, marker='x',label='labor (Lucas)')
plt.plot(Tc, '-o',label='land ')
plt.plot(Lc, marker='x',label='labor ')
plt.legend();
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Not that the two economies are directly comparable (technologies are not the same)... but in the Lucas economy there will be less operating farms and a lower supply of farm labor (since the more skilled become full-time managers). The farms that do operate will therefore use more land and less labor compared to the non-Lucas economy.
|
E.Lucas = True
E.smallhold_eq([E.TBAR,E.LBAR/2], E.s)
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Cartel equilibria
|
(Xrcl,Xrl,wcl,wrl) = scene_print(E, numS=10, detail=True)
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Compared to the original scenario, the landlord with market power in the Lucas scenario faces a countervailing force: if she pushes the wage too low, she makes self-managed production more attractive for a would-be medium-sized farmer who is not now in production.
From the solution it would appear that beyond a certain theta there is no available way for the landlord to distort further.
|
E.Lucas = True
factor_plot(E,Xrcl,Xrl)
E.Lucas = False
factor_plot(E,Xrc,Xr)
E.Lucas = True
E.cartel_eq(0.5)
E.cartel_eq(0.6)
E.cartel_eq(0.2)
Lr
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
Something is still not right... labor used/demanded exceeds labor supply
|
(r,w),(Tr,Lr)= E.cartel_eq(0.5)
sum(Lr),np.count_nonzero(Lr)*(E.LBAR)/E.N
sum(Lr)
fringe = E.smallhold_eq([E.TBAR,E.LBAR/2], E.s)
fringe.w
fringe.X
fringe.X[0]
|
LucasSpanControl.ipynb
|
jhconning/geqfarm
|
gpl-3.0
|
We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func, which associates a probability $\alpha$ with a function that computes the empirical quantile of a given sample.
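Conceptually, `quantile_func` acts as a factory that fixes $\alpha$ and returns an empirical-quantile estimator. A numpy sketch of the idea (the actual `dependence.quantile_func` may differ in detail):

```python
import numpy as np

def quantile_func(alpha):
    """Return a function computing the empirical alpha-quantile of a sample.

    Illustrative sketch only; the library's quantile_func may be implemented
    differently.
    """
    def q(sample, axis=None):
        return np.percentile(sample, 100.0 * alpha, axis=axis)
    return q

q05 = quantile_func(0.05)
sample = np.arange(1, 101)   # 1..100
print(q05(sample))           # empirical 5% quantile of the sample
```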
|
from dependence import quantile_func
alpha = 0.05
q_func = quantile_func(alpha)
indep_result.q_func = q_func
|
examples/archive/grid-search-Copy1.ipynb
|
NazBen/impact-of-dependence
|
mit
|
Grid Search Approach
Firstly, we consider a grid search approach in order to compare its performance with the iterative algorithm. The discretization can be made on the parameter space or on another concordance measure such as Kendall's tau. The example below shows a grid search on the parameter space.
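For reference, Latin hypercube sampling itself can be sketched in a few lines of numpy (a generic illustration; the library's `grid_type='lhs'` option may be implemented differently):

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Draw n points in [0, 1)^d with exactly one point per equal-width
    stratum along each axis. Generic sketch, not the library's code."""
    rng = np.random.default_rng(seed)
    # One uniform draw inside each of the n strata, per dimension...
    pts = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # ...then permute the strata independently in each dimension.
    for j in range(d):
        pts[:, j] = rng.permutation(pts[:, j])
    return pts

pts = latin_hypercube(500, 2, seed=0)
# Each 1D projection of pts hits every stratum [k/n, (k+1)/n) exactly once.
```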
|
%%snakeviz
K = 500
n = 10000
grid_type = 'lhs'
dep_measure = 'parameter'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, dep_measure=dep_measure,
random_state=random_state)
|
examples/archive/grid-search-Copy1.ipynb
|
NazBen/impact-of-dependence
|
mit
|
As for the individual problem, we can also do a bootstrap, for each parameter. Because we have $K$ parameters, we can bootstrap the $K$ samples, compute the $K$ quantiles for each bootstrap replicate, and take the minimum quantile in each replicate.
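The idea can be sketched in plain numpy (an illustrative stand-alone version with synthetic samples, not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
K, n, alpha, n_boot = 5, 200, 0.05, 1000
# K hypothetical samples with increasing location, so sample 0 has the lowest quantile.
samples = rng.normal(loc=np.arange(K)[:, None], scale=1.0, size=(K, n))

boot_min_q = np.empty(n_boot)
boot_argmin = np.empty(n_boot, dtype=int)
for b in range(n_boot):
    # Resample each of the K samples with replacement...
    idx = rng.integers(0, n, size=(K, n))
    resampled = np.take_along_axis(samples, idx, axis=1)
    # ...compute the K empirical alpha-quantiles, and keep the minimum.
    q = np.percentile(resampled, 100 * alpha, axis=1)
    boot_min_q[b] = q.min()
    boot_argmin[b] = q.argmin()
```

The distributions of `boot_min_q` and `boot_argmin` then play the roles of the minimum quantiles and the parameters of the minimum plotted below.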
|
grid_result.compute_bootstraps(n_bootstrap=5000)
boot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)
boot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()
boot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
sns.distplot(boot_min_quantiles, axlabel="Minimum quantiles", ax=axes[0])
sns.distplot(boot_min_params, axlabel="Parameters of the minimum", ax=axes[1])
|
examples/archive/grid-search-Copy1.ipynb
|
NazBen/impact-of-dependence
|
mit
|
Plotting the results from the visual search experiment
The results (fabricated data) are stored in a csv file where each row corresponds to one trial (observation). Each trial contains information about the participant ID (p_id), the trial number (trial), the reaction time in ms (rt), the condition (BB or RB; they indicate the colors of the target and distractors), and the number of distractors (n_distractors).
|
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Read data into pandas dataframe
df = pd.read_csv('img\\results_visual_search.csv', sep = '\t')
print(df.head())
# Plot reaction times across participants
sns.barplot(x = 'p_id', y = 'rt', data = df)
plt.xlabel('Participant ID')
plt.ylabel('Reaction time (ms)')
plt.show()
# Plot reaction times by number of distractors and condition
sns.factorplot(x="n_distractors", y="rt",hue='condition',data = df)
plt.xlabel('Number of distractors')
plt.ylabel('Search time (ms)')
plt.show()
|
Week7_lecture.ipynb
|
marcus-nystrom/python_course
|
gpl-3.0
|
Perhaps it would make more sense to construct the plot from the average value of each participant for each condition.
|
# Take the mean over different trials
df_avg = df.groupby(['p_id', 'n_distractors', 'condition']).mean()
df_avg.reset_index(inplace=True)
# Plot reaction times by number of distractors and condition
sns.factorplot(x="n_distractors", y="rt",hue='condition',data = df_avg)
plt.xlabel('Number of distractors')
plt.ylabel('Search time (ms)')
plt.show()
|
Week7_lecture.ipynb
|
marcus-nystrom/python_course
|
gpl-3.0
|
Derived Features
Another common feature type is derived features, where some pre-processing step is
applied to the data to generate features that are somehow more informative. Derived
features may be based on dimensionality reduction (such as PCA or manifold learning),
may be linear or nonlinear combinations of features (such as in polynomial regression),
or may be some more sophisticated transform of the features. The latter is often used
in image processing.
For example, scikit-image provides a variety of feature
extractors designed for image data: see the skimage.feature submodule.
We will see some dimensionality-based feature extraction routines later in the tutorial.
Combining Numerical and Categorical Features
As an example of how to work with both categorical and numerical data, we will perform survival prediction for the passengers of the RMS Titanic.
We will use a version of the Titanic (titanic3.xls) dataset from Thomas Cason, as retrieved from Frank Harrell's webpage here. We converted the .xls to .csv for easier manipulation without involving external libraries, but the data is otherwise unchanged.
We need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.
|
import os
f = open(os.path.join('datasets', 'titanic', 'titanic3.csv'))
print(f.readline())
lines = []
for i in range(3):
    lines.append(f.readline())
print(lines)
|
notebooks/03.6 Case Study - Titanic Survival.ipynb
|
Capepy/scipy_2015_sklearn_tutorial
|
cc0-1.0
|
The site linked here gives a broad description of the keys and what they mean - we show it here for completeness
pclass Passenger Class
(1 = 1st; 2 = 2nd; 3 = 3rd)
survival Survival
(0 = No; 1 = Yes)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare
cabin Cabin
embarked Port of Embarkation
(C = Cherbourg; Q = Queenstown; S = Southampton)
boat Lifeboat
body Body Identification Number
home.dest Home/Destination
In general, it looks like name, sex, cabin, embarked, boat, body, and home.dest may be candidates for categorical features, while the rest appear to be numerical features. We can now write a function to extract features from a text line, shown below.
Let's process an example line using the process_titanic_line function from helpers to see the expected output.
|
from helpers import process_titanic_line
print(process_titanic_line(lines[0]))
|
notebooks/03.6 Case Study - Titanic Survival.ipynb
|
Capepy/scipy_2015_sklearn_tutorial
|
cc0-1.0
|
Now that we see the expected format from the line, we can call a dataset helper which uses this processing to read in the whole dataset. See helpers.py for more details.
|
from helpers import load_titanic
keys, train_data, test_data, train_labels, test_labels = load_titanic(
test_size=0.2, feature_skip_tuple=(), random_state=1999)
print("Key list: %s" % keys)
|
notebooks/03.6 Case Study - Titanic Survival.ipynb
|
Capepy/scipy_2015_sklearn_tutorial
|
cc0-1.0
|
With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. Setting up the simplest possible model, we want to see what the simplest score can be with DummyClassifier.
|
from sklearn.metrics import accuracy_score
from sklearn.dummy import DummyClassifier
clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data, train_labels)
pred_labels = clf.predict(test_data)
print("Prediction accuracy: %f" % accuracy_score(pred_labels, test_labels))
|
notebooks/03.6 Case Study - Titanic Survival.ipynb
|
Capepy/scipy_2015_sklearn_tutorial
|
cc0-1.0
|
Exercise:
Clean the countries data and save it as a valid csv without a header.
|
import pandas as pd
import numpy as np
df_country_raw = pd.read_csv("../data/countries_data.csv",sep=";")
df_country_raw.head(15)
df_country_raw.to_csv("../data/countries_data_clean.csv",header=False)
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Build a function that generates a dataframe with N user ids plus a list of a random number of random news topics from news_topics.csv.
|
import pandas as pd
import numpy as np

def generate_users_df(num_users, num_topics):
    # generate num_users usernames
    usernames_df = pd.Series(["user"]*num_users).str.cat(pd.Series(np.arange(num_users)).map(str))
    # read topics csv
    news_topics = pd.read_csv("../data/news_topics.csv", header=None)
    # generate a list of num_users ints picked uniformly at random from 1 .. num_topics
    # WARNING: is it really a uniform distribution?
    rand_ints = pd.Series(np.random.randint(1, num_topics+1, num_users))
    # WARNING: what happens if x > len(news_topics)?
    topics_df = rand_ints.apply(lambda x: "|".join(np.random.choice(news_topics.T[0], x, replace=False)))
    return pd.concat({'username': usernames_df, 'topics': topics_df}, axis=1)

M = 5
N = 100
users_df = generate_users_df(N, M)
users_df.head(10)
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Save the info generated with the previous function as csv so that it can be easily loaded as a Pair RDD in pyspark.
|
import csv
M = 20
N = 1000
users_df = generate_users_df(N,M)
users_df.to_csv("../data/users_events_example/user_info_%susers_%stopics.csv" % (N,M),
columns=["username","topics"],
header=None,
index=None)
#quoting=csv.QUOTE_MINIMAL)
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Build a function that generates N csv files containing users' web browsing information. This function takes a max number of users M (from user0 to userM) and generates K user information logs for randomly picked users (with repetition). The function returns this information with a timestamp. Each file represents 5 minutes of activity, so consecutive events are 300/K seconds apart. The activity information is a random selection of 1 element over news topics.
|
import datetime

def generate_user_events(date_start, num_files, num_users, num_events):
    # generate usernames
    usernames_df = pd.Series(["user"]*num_users).str.cat(pd.Series(np.arange(num_users)).map(str))
    # read topics
    news_topics = pd.read_csv("../data/news_topics.csv", header=None, lineterminator="\n").T
    # create a time index spanning 5 minutes
    df_index = pd.date_range(date_start,
                             periods=num_events,
                             freq=pd.DateOffset(seconds=float(5*60)/num_events))
    # generate data
    event_data = {"user": np.random.choice(usernames_df, num_events, replace=True),
                  "event": np.random.choice(news_topics[0], num_events, replace=True)}
    # generate df
    return pd.DataFrame(event_data, index=df_index, columns=["user", "event"])

num_files = 10
num_users = 100
num_events = 1000
date_start = datetime.datetime.strptime('1/1/2016', '%d/%m/%Y')
for idx, i in enumerate(range(num_files)):
    print("File", idx+1, "of", num_files, "at", date_start)
    userevent_df = generate_user_events(date_start, num_files, num_users, num_events)
    file_name = "../data/users_events_example/userevents_" + date_start.strftime("%d%m%Y%H%M%S") + ".log"
    userevent_df.to_csv(file_name, header=None)
    date_start = date_start + datetime.timedelta(0, 300)
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Generate a unique id for each paper in the papers.lst file and save it as a csv file. Then generate a csv file with random references among papers from papers.lst.
|
import csv, re
import pandas as pd
import numpy as np

f = open("../data/papers.lst", "r")
papers = []
for idx, l in enumerate(f.readlines()):
    t = re.match(r"(\d+)(\s*)(.\d*)(\s*)(\w+)(\s*)(.*)", l)
    if t:
        # print("|", t.group(1), "|", t.group(3), "|", t.group(5), "|", t.group(7), "|")
        papers.append([t.group(1), t.group(3), t.group(5), t.group(7)])
papers_df = pd.DataFrame(papers)
papers_df.to_csv("../data/papers.csv", header=None)
N = papers_df.shape[0]
# let's assume that a paper can have 30 references at max and 5 at min
M = 30
papers_references = pd.DataFrame(np.arange(N))
papers_references[1] = papers_references[0].apply(
    lambda x: ";".join(
        [str(r) for r in np.random.choice(papers_references[0], np.random.randint(5, M))]))
papers_references.columns = ["paper_id", "references"]
papers_references.to_csv("../data/papers_references.csv", header=None, index=None)
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Read "../data/country_info_worldbank.xls", delete wrong rows, and set proper column names.
|
import pandas as pd
import numpy as np

cc_df0 = pd.read_excel("../data/country_info_worldbank.xls")
# delete unnecessary rows
cc_df1 = cc_df0[cc_df0["Unnamed: 2"].notnull()]
# get column names and set them on the dataframe
cc_df1 = cc_df1.copy()
colnames = cc_df1.iloc[0].tolist()
colnames[0] = "Order"
cc_df1.columns = colnames
# delete empty columns
cc_df2 = cc_df1.loc[:, cc_df1.iloc[1].notnull()]
# delete the first row, as it holds the column names
cc_df3 = cc_df2.iloc[1:]
# reindex
cc_df3.index = np.arange(cc_df3.shape[0])
cc_df3["Economy"] = cc_df3.Economy.str.encode('utf-8')
cc_df3.to_csv("../data/worldbank_countrycodes_clean.csv")
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Convert lat lon to UTM
|
import pandas as pd
est_df = pd.read_csv("../data/estacions_meteo.tsv",sep="\t")
est_df.head()
est_df.columns = est_df.columns.str.lower().\
str.replace("\[codi\]","").\
str.replace("\(m\)","").str.strip()
est_df.longitud = est_df.longitud.str.replace(",",".")
est_df.latitud = est_df.latitud.str.replace(",",".")
est_df.longitud = pd.to_numeric(est_df.longitud)
est_df.latitud = pd.to_numeric(est_df.latitud)
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Exercise:
Convert to Technically Correct Data: iqsize.csv
|
import pandas as pd

df = pd.read_csv("../data/iqsize.csv", na_values="n/a")
df.dtypes
# clean piq
errors = pd.to_numeric(df.piq, errors="coerce")
print(df["piq"][errors.isnull()])
df["piq"] = pd.to_numeric(df["piq"].str.replace("'", "."))
df.dtypes
# clean height
errors = pd.to_numeric(df.height, errors="coerce")
print(df["height"][errors.isnull()])
df["height"] = pd.to_numeric(df["height"].str.replace("'", "."))
df.dtypes
# normalize the sex column; replace the longer patterns first,
# since "Woman" contains "man"
df.sex.unique()
df.sex = df.sex.str.replace("Woman", "Female")
df.sex = df.sex.str.replace("woman", "Female")
df.sex = df.sex.str.replace("man", "Male")
df.sex = df.sex.str.replace("Man", "Male")
df.sex.unique()
df.to_csv("../data/iqsize_clean.csv", index=None)
df = pd.read_csv("../data/iqsize_clean.csv")
df
|
notes/99 - Exercices.ipynb
|
f-guitart/data_mining
|
gpl-3.0
|
Create a feature reader
We create a feature reader to obtain the minimal distances between all residues which are not close neighbours. Feel free to map these distances to binary contacts or to use inverse minimal residue distances instead. These choices usually work quite well.
|
traj_files = [f for f in sorted(glob('../../../DESHAWTRAJS/CLN025-0-protein/CLN025-0-protein-*.dcd'))]
pdb_file = '../../../DESHAWTRAJS/CLN025-0-protein/chig_pdb_166.pdb'
features = pyemma.coordinates.featurizer(pdb_file)
features.add_residue_mindist()
source = pyemma.coordinates.source(traj_files, features=features, chunk_size=10000)
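The mapping to binary contacts or inverse distances mentioned above can be applied to the extracted distance arrays directly; a minimal numpy sketch (the distance values and the 0.5 nm cutoff are made-up assumptions for illustration):

```python
import numpy as np

# Hypothetical array of minimal residue distances (n_frames x n_pairs), in nm
d = np.array([[0.31, 0.72],
              [0.45, 0.49]])

cutoff = 0.5  # nm; arbitrary contact cutoff for illustration
contacts = (d < cutoff).astype(np.float64)  # binary contacts: 1 if in contact
inv_d = 1.0 / d                             # inverse minimal distances

print(contacts)
```

Either transformed array can then be fed to TICA in place of the raw distances.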
|
test/Chignoling_FS_after_clustering.ipynb
|
ZuckermanLab/NMpathAnalysis
|
gpl-3.0
|
Discretization and MSM estimation
We start the actual analysis with a TICA projection onto two components on which we perform a k-means clustering. Then, we take a quick view on the implied timescale convergence, the 2D representation, and the clustering:
|
tica = pyemma.coordinates.tica(data=source, lag=5, dim=2).get_output()[0]
cluster = pyemma.coordinates.cluster_kmeans(tica, k=45, max_iter=100)
lags = np.asarray([1, 5, 10, 20, 50] + [i * 100 for i in range(1, 21)])
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
pyemma.plots.plot_implied_timescales(
    pyemma.msm.its(cluster.dtrajs, lags=lags, errors=None, nits=6),
    ylog=False, ax=axes[0], units='us', dt=2.0E-4)
pyemma.plots.plot_free_energy(*tica.T, ax=axes[1])
axes[1].scatter(*cluster.clustercenters.T, marker='x', c='grey', s=30, label='centers')
axes[1].legend()
axes[1].set_xlabel('TIC 1 / a.u.')
axes[1].set_ylabel('TIC 2 / a.u.')
fig.tight_layout()
# MSM estimation
msm = [pyemma.msm.estimate_markov_model(cluster.dtrajs, lag=lag, dt_traj='0.0002 us') for lag in lags]
lag = get_lagtime_from_array(lags, 0.2, dt=2.0E-4)[1]
pyemma.plots.plot_cktest(pyemma.msm.bayesian_markov_model(cluster.dtrajs, lag=lag, dt_traj='0.0002 us').cktest(2))
print('Estimated at lagtime %d steps' % lag)
|
test/Chignoling_FS_after_clustering.ipynb
|
ZuckermanLab/NMpathAnalysis
|
gpl-3.0
|
Agglomerative Clustering from the transition matrix
Hierarchical agglomerative clustering using the Markovian commute time: $t_{ij} = \mathrm{MFPT}(i \rightarrow j)+ \mathrm{MFPT}(j \rightarrow i)$.
IMPORTANT: The goal of this clustering is to identify macrostates, not to use the best lag time; the lag time used for clustering may differ from the one that would be appropriate for the final Markov model.
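The commute time above is symmetric by construction, which is what makes it usable as a distance for agglomerative clustering. A small numpy sketch with an invented MFPT matrix (the values are made up for illustration):

```python
import numpy as np

# Hypothetical MFPT matrix: mfpt[i, j] = mean first-passage time i -> j
mfpt = np.array([[0.0, 10.0, 40.0],
                 [12.0, 0.0, 25.0],
                 [35.0, 30.0, 0.0]])

# Markovian commute time: t_ij = MFPT(i -> j) + MFPT(j -> i)
commute = mfpt + mfpt.T

print(commute[0, 1])  # 22.0 = 10.0 + 12.0
```

Pairs with commute time below a chosen t_cut are then merged into the same cluster.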
Lag times to use
|
#lag_to_use = [1, 10, 100, 1000]
lag_to_use = [1]
lag_index = [ get_lagtime_from_array(lags, element*0.2)[0] for element in lag_to_use ]
# These are the t_cut intervals to explore (in lag-time units) with the lag times in "lag_to_use"
#range_per_lag = [[200,600], [200,350], [100,250], [30,200]]
range_per_lag = [[400,900]]
|
test/Chignoling_FS_after_clustering.ipynb
|
ZuckermanLab/NMpathAnalysis
|
gpl-3.0
|
Clustering
|
for k, index in enumerate(lag_index):
    K = msm[index].P
    dt = 0.2
    # ---------------------
    printmd("### Lag-time: " + str(dt) + "ns")
    t_min_list = []
    t_max_list = []
    t_AB_list = []
    big_clusters_list = []
    # t_cut range
    min_ = range_per_lag[k][0]
    max_ = range_per_lag[k][1]
    interval = (max_ - min_) // 15
    t_cut_values = [min_ + i for i in range(0, max_ - min_, interval)][0:15]
    fig_n_cols = 3
    fig_n_rows = int(math.ceil(len(t_cut_values) / fig_n_cols))
    fig = plt.figure(figsize=(6 * fig_n_cols, 4.8 * fig_n_rows))
    printmd("#### t_values:")
    for ii, t_cut in enumerate(t_cut_values):
        big_clusters = []
        big_clusters_index = []
        # clustering
        #clusters, t_min, t_max, clustered_tmatrix = kinetic_clustering_from_tmatrix(K, t_cut=t_cut, verbose=False)
        clusters, t_min, t_max = kinetic_clustering2_from_tmatrix(K, t_cut=t_cut, verbose=False)
        t_min_list.append(t_min)
        t_max_list.append(t_max)
        for i, cluster_i in enumerate(clusters):
            if len(cluster_i) > 1:
                big_clusters.append(cluster_i)
                big_clusters_index.append(i)
        n_big = len(big_clusters)
        macrostates = biggest_clusters_indexes(big_clusters, n_pick=2)
        #macrostates_list.append([clusters[macrostates[i]] for i in range(len(macrostates))])
        big_clusters_list.append(big_clusters)
        if n_big > 1:
            #tAB = markov_commute_time(clustered_tmatrix, [macrostates[0]], [macrostates[1]])
            tAB = markov_commute_time(K, [macrostates[0]], [macrostates[1]])
        else:
            tAB = 0.0
        t_AB_list.append(tAB)
        print("t_cut: {:.2f}ns, t_min: {:.2f}ns, t_max: {:.2e}ns, tAB: {:.2f}ns".format(t_cut*dt, t_min*dt, t_max*dt, tAB*dt))
        plt.subplot(fig_n_rows, fig_n_cols, ii + 1)
        pyemma.plots.plot_free_energy(*tica.T)
        plt.scatter(*cluster.clustercenters.T, marker='x', c='grey', s=30, label='centers')
        plt.annotate("t_cut: {:.2f}ns".format(t_cut*dt), xy=(-0.8, -4))
        colors = ['red', 'blue', 'green', 'black', 'orange'] + color_sequence
        for i, cluster_i in enumerate(big_clusters):
            cluster_i_tica_xy = []
            for center_index in cluster_i:
                cluster_i_tica_xy.append(cluster.clustercenters[center_index])
            cluster_i_tica_xy = np.array(cluster_i_tica_xy)
            plt.scatter(*cluster_i_tica_xy.T, marker='o', c=colors[i], s=30, label='cluster-' + str(i))
        plt.legend(loc='upper left')
        plt.xlabel('TIC 1 / a.u.')
        plt.ylabel('TIC 2 / a.u.')
    printmd("#### Observed clusters vs t_cut")
    plt.show()
    printmd("#### t_AB plots:")
    plot_t_AB(t_cut_values, t_min_list, t_max_list, t_AB_list)
    # printmd("#### RMSD of the Macrostates:")
    # plot_rmsd_histogram_clusters(t_cut_values, big_clusters_list, rmsd, dt, dtrajs=cluster.dtrajs)
|
test/Chignoling_FS_after_clustering.ipynb
|
ZuckermanLab/NMpathAnalysis
|
gpl-3.0
|
Selecting t_cut = 119ns for MFPT calculations
|
dt = 0.0002  # in micro-sec
if 0 in big_clusters_list[12][1]:
    stateA = big_clusters_list[12][0]  # Unfolded
    stateB = big_clusters_list[12][1]  # Folded
else:
    stateA = big_clusters_list[12][1]  # Unfolded
    stateB = big_clusters_list[12][0]  # Folded
lag_to_use = lags[0:16:2]
lag_index = [ get_lagtime_from_array(lags, element*0.2)[0] for element in lag_to_use ]
msm_mfptAB = np.asarray([msm[lag_index[i]].mfpt(stateA, stateB) for i in range(len(lag_to_use))])
msm_mfptBA = np.asarray([msm[lag_index[i]].mfpt(stateB, stateA) for i in range(len(lag_to_use))])
|
test/Chignoling_FS_after_clustering.ipynb
|
ZuckermanLab/NMpathAnalysis
|
gpl-3.0
|
Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?
Answer:
* Increasing the value of the feature RM would lead to an increase in the value of MEDV, because houses with more rooms are worth more than houses with fewer rooms.
* The class of homeowners in a neighborhood usually indicates the overall desirability of that location. An area with a larger "lower class" population usually indicates that houses in that area are less desirable, so an increase in the value of LSTAT would lead to a decrease in the value of MEDV.
* A high student-to-teacher ratio usually indicates poorer quality of education, so an increase in PTRATIO would also lead to a decrease in the value of MEDV.
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable.
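These properties can be checked numerically with a small sketch using the standard R² definition directly (not the project's helper function; the sample values are made up):

```python
import numpy as np

def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, -0.5, 2.0, 7.0])
print(r2(y, y))                          # perfect predictions -> 1.0
print(r2(y, np.full_like(y, y.mean())))  # naive mean predictor -> 0.0
```

Any predictor worse than the mean yields a negative score, matching the description above.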
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
|
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
Answer:
It seems that the model works well at making predictions: it has successfully captured the variation of the target variable, as predictions are quite close to the true values, which is confirmed by an R^2 score of 0.923 being close to 1.
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
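What the shuffle-and-split step does under the hood can be sketched with a permutation over row indices (illustrative only; the implementation itself should still use the library call, and the toy arrays here are invented):

```python
import numpy as np

rng = np.random.RandomState(0)       # fixed seed, like random_state
X = np.arange(20).reshape(10, 2)     # toy feature matrix
y = np.arange(10)                    # toy target vector

idx = rng.permutation(len(X))        # shuffle the row indices
n_train = int(0.8 * len(X))          # 80/20 split point
train_idx, test_idx = idx[:n_train], idx[n_train:]
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]

print(len(X_train), len(X_test))  # 8 2
```

Shuffling before splitting removes any ordering bias in the dataset, which is exactly why `train_test_split` shuffles by default.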
|
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)
# Success
print("Training and testing split was successful.")
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer:
One of the reasons we need a testing subset is to check whether the model is overfitting the training set and failing to generalize to unseen data. In general, the testing subset is used to see how well the model makes predictions on unseen data.
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
|
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer:
With reference to the graph in the upper right, i.e. the graph with max_depth = 3: the learning curve for the training set slowly declines as the number of points in the training set increases. On its slow decline, it appears to approach roughly an R^2 score of 0.80. A decline (or at best stagnation) is expected at any depth level as we go from one point in the training set to more. Basically, one rule can easily classify a few points; it is when there are many points, with variation among them, that more rules are needed.
In this case, adding training points doesn't do much to change the R^2 score of the training set, especially once we have around 200 points. Turning to the testing set's learning curve, we see a ramp up to an R^2 score above 0.60 after adding just 50 points to the training set.
From there, the testing set learning curve slowly increases to just below an R^2 score of 0.80. The initial ramp up makes sense, because with only a few training points the model would fail to capture the variation in the data, so new data would most likely not be accurately predicted.
Overall, max_depth = 3 seems to be the sweet spot among these four graphs. The training and testing learning curves converge at about 0.80, which is the best converged R^2 score of the four.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
|
vs.ModelComplexity(X_train, y_train)
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer:
From the above graph, we can see that both the training score and the validation score are quite low when the model is trained with a maximum depth of 1, which means it suffers from high bias.
On the other hand, when the model is trained with a maximum depth of 10, the training score is very high (close to 1.0) and the gap between the training score and the validation score is also quite significant. This suggests that the model suffers from high variance.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?
Answer:
A max_depth of 4 results in a model that best generalizes to unseen data: the validation score is at its highest there, and the training score is also high, which means the model isn't overfitting the training data and still predicts well on unseen data.
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how it can be applied to optimize a learning algorithm?
Answer:
When we optimize/train a learning algorithm, we are minimizing/maximizing some objective function by solving for model parameters (e.g. the slope and intercept in a simple univariate linear regression). Sometimes, the methods we use to fit these models have other "parameters" we need to choose/set. One example would be the learning rate in stochastic gradient descent. These "parameters" are called hyperparameters. They can be set manually or chosen through some external model-selection mechanism.
One such mechanism is grid search. Grid search is a traditional way of performing hyperparameter optimization (sometimes called a parameter sweep): it is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. Grid search is guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.
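A manual sweep makes the idea concrete; this sketch tunes a polynomial degree on synthetic data (the data, the candidate grid, and the hold-out scheme are all invented for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
x = np.linspace(-1, 1, 40)
y = 1.5 * x ** 2 - 0.5 * x + 0.1 * rng.randn(40)  # quadratic + noise

# hold out every 4th point as a validation set
val_mask = np.zeros(40, dtype=bool)
val_mask[::4] = True

def val_mse(degree):
    # fit on the training points, score on the held-out points
    coeffs = np.polyfit(x[~val_mask], y[~val_mask], degree)
    pred = np.polyval(coeffs, x[val_mask])
    return np.mean((pred - y[val_mask]) ** 2)

grid = [1, 2, 3, 5, 9]         # manually specified hyperparameter grid
best = min(grid, key=val_mse)  # exhaustive search guided by validation error
print(best)
```

`GridSearchCV` does exactly this sweep, but with cross-validation as the scoring scheme and in parallel over the grid.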
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer:
In k-fold cross-validation we divide the training set into k equal-size subsets, train the model on k-1 of them, and evaluate the error on the remaining one, doing so for each of the k subsets in turn. We then average the error across the k held-out subsets, which gives us the model's average error over all the training data. We run the same procedure for different sets of parameters, obtain an estimated error for each parameter set, and choose the parameter set that gives the lowest average error.
One advantage of this method is how the data is split: each data point is in a test set exactly once and in a training set k-1 times, so the variance of the resulting estimate is reduced as k is increased. One disadvantage is that the training algorithm has to be rerun k times, which makes it more costly.
Using k-fold cross-validation as the performance metric for grid search gives us more confidence (less variance) in our hyperparameter choice than a single train/validation split would. It lets us evaluate each hyperparameter setting over k instances, leading to a better (less variant) choice, rather than choosing a parameter based on one instance of the learned model.
Also, not using any form of cross-validation for grid search would leave us unaware of whether our chosen hyperparameter would be any good with new data (most likely, it would not be).
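The index bookkeeping described above can be sketched directly; a hand-rolled 5-fold split over 20 invented data points (real code would use the library's KFold):

```python
import numpy as np

n, k = 20, 5
indices = np.arange(n)
folds = np.array_split(indices, k)  # k roughly equal test folds

for i, test_idx in enumerate(folds):
    # everything outside fold i is the training set
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # each point is in the test fold exactly once and in training k-1 times
    assert len(test_idx) + len(train_idx) == n
```

Averaging a model's score over these k train/test pairs is what grid search then repeats for every candidate parameter set.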
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
|
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.cross_validation import ShuffleSplit
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn.grid_search import GridSearchCV

def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]. """
    # Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(X.shape[0], n_iter=10, test_size=0.20, random_state=0)
    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor()
    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': range(1, 11)}
    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)
    # TODO: Create the grid search object
    grid = GridSearchCV(regressor, param_grid=params, scoring=scoring_fnc, cv=cv_sets)
    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)
    # Return the optimal model after fitting the data
    return grid.best_estimator_
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
Answer:
The optimal model has a maximum depth of 4. My initial guess that a maximum depth of 4 is the optimal parameter for the model was right.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
|
# Produce a matrix for client data
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i + 1, price))
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
Answer:
The predicted selling prices would be: Client 1's home at USD 391,183.33, Client 2's at USD 189,123.53, and Client 3's at USD 942,666.67. These prices look reasonable given the number of rooms each house has and the other features discussed earlier: the student-to-teacher ratio is lower for the higher-priced homes, and the neighborhood poverty level is lower for the higher-priced homes. Looking at the descriptive statistics for house prices, the predicted price for Client 3's home is quite close to the maximum price, which is explained by its high number of rooms, low student-to-teacher ratio, and wealthy neighbourhood. Client 1's home price is within one standard deviation of the mean, i.e. close to the average price, which makes sense given its features; and Client 2's home price is within two standard deviations of the mean, so it is not an outlier but is closer to the low-priced homes, which conforms with its feature values.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
|
vs.PredictTrials(features, prices, fit_model, client_data)
|
boston_housing/boston_housing.ipynb
|
alirsamar/MLND
|
mit
|
First, we must connect to our data cube. We can then query the contents of the data cube we have connected to, including both the metadata and the actual data.
|
dc = datacube.Datacube(app='dc-water-analysis')
api = datacube.api.API(datacube=dc)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
Obtain the metadata of our cube... Initially, we need to get the platforms and products in the cube. The rest of the metadata will be dependent on these two options.
|
# Get available products
products = dc.list_products()
platform_names = list(set(products.platform))
product_names = list(products.name)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
Execute the following code and then use the generated form to choose your desired platform and product.
|
product_values = create_platform_product_gui(platform_names, product_names)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
With the platform and product, we can get the rest of the metadata. This includes the resolution of a pixel, the latitude/longitude extents, and the minimum and maximum dates available of the chosen platform/product combination.
|
# Save the form values
platform = product_values[0].value
product = product_values[1].value
# Get the pixel resolution of the selected product
resolution = products.resolution[products.name == product]
lat_dist = resolution.values[0][0]
lon_dist = resolution.values[0][1]
# Get the extents of the cube
descriptor = api.get_descriptor({'platform': platform})[product]
min_date = descriptor['result_min'][0]
min_lat = descriptor['result_min'][1]
min_lon = descriptor['result_min'][2]
min_date_str = str(min_date.year) + '-' + str(min_date.month) + '-' + str(min_date.day)
min_lat_rounded = round(min_lat, 3)
min_lon_rounded = round(min_lon, 3)
max_date = descriptor['result_max'][0]
max_lat = descriptor['result_max'][1]
max_lon = descriptor['result_max'][2]
max_date_str = str(max_date.year) + '-' + str(max_date.month) + '-' + str(max_date.day)
max_lat_rounded = round(max_lat, 3) #calculates latitude of the pixel's center
max_lon_rounded = round(max_lon, 3) #calculates longitude of the pixel's center
# Display metadata
generate_metadata_report(min_date_str, max_date_str,
                         min_lon_rounded, max_lon_rounded, lon_dist,
                         min_lat_rounded, max_lat_rounded, lat_dist)
show_map_extents(min_lon_rounded, max_lon_rounded, min_lat_rounded, max_lat_rounded)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
Execute the following code and then use the generated form to choose the extents of your desired data.
|
extent_values = create_extents_gui(min_date_str, max_date_str,
                                   min_lon_rounded, max_lon_rounded,
                                   min_lat_rounded, max_lat_rounded)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
Now that we have filled out the above two forms, we have enough information to query our data cube. The following code snippet ends with the actual Data Cube query, which will return the dataset with all the data matching our query.
|
# Save form values
start_date = datetime.strptime(extent_values[0].value, '%Y-%m-%d')
end_date = datetime.strptime(extent_values[1].value, '%Y-%m-%d')
min_lon = extent_values[2].value
max_lon = extent_values[3].value
min_lat = extent_values[4].value
max_lat = extent_values[5].value
# Query the Data Cube
dataset_in = dc.load(platform=platform,
                     product=product,
                     time=(start_date, end_date),
                     lon=(min_lon, max_lon),
                     lat=(min_lat, max_lat))
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
At this point, we have finished accessing our data cube and can turn to analyzing the data. In this example, we will run the WOfS algorithm. The wofs_classify function, seen below, returns a modified dataset in which a value of 1 indicates that a pixel has been classified as water by the WOfS algorithm and 0 indicates a non-water pixel.
For more information on the WOfS algorithm, refer to:
Mueller, et al. (2015) "Water observations from space: Mapping surface water from 25 years of Landsat imagery across Australia." Remote Sensing of Environment.
|
water_class = wofs_classify(dataset_in)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
Execute the following code and then use the generated form to choose your desired acquisition date. The following two code blocks are only necessary if you would like to see the water mask of a single acquisition date.
|
acq_dates = list(water_class.time.values.astype(str))
acq_date_input = create_acq_date_gui(acq_dates)
# Save form value
acq_date = acq_date_input.value
acq_date_index = acq_dates.index(acq_date)
# Get water class for selected acquisition date and mask no data values
water_class_for_acq_date = water_class.wofs[acq_date_index]
water_class_for_acq_date.values = water_class_for_acq_date.values.astype('float')
water_class_for_acq_date.values[water_class_for_acq_date.values == -9999] = np.nan
water_observations_for_acq_date_plot = water_class_for_acq_date.plot(cmap='BuPu')
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
With all of the pixels classified as either water/non-water, let's perform a time series analysis over our derived water class. The function, perform_timeseries_analysis, takes in a dataset of 3 dimensions (time, latitude, and longitude), then sums the values of each pixel over time. It also keeps track of the number of clear observations we have at each pixel. We can then normalize each pixel to determine areas at risk of flooding. The normalization calculation is simply:
$$\text{normalized\_water\_observations} = \dfrac{\text{total\_water\_observations}}{\text{total\_clear\_observations}}$$
The output of each of the three calculations can be seen below.
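The per-pixel bookkeeping can be sketched in numpy; the wofs values here are invented (1 = water, 0 = dry, -9999 = no data):

```python
import numpy as np

NODATA = -9999
# Tiny (time, lat, lon) stack: 3 acquisitions over a 2x2 scene
wofs = np.array([[[1, 0], [NODATA, 1]],
                 [[1, 1], [0, NODATA]],
                 [[0, NODATA], [1, 1]]])

clear = wofs != NODATA                        # valid observations
total_water = np.where(clear, wofs, 0).sum(axis=0)  # water counts per pixel
total_clear = clear.sum(axis=0)               # clear-observation counts
normalized = total_water / total_clear        # fraction of clear obs that were wet

print(normalized)
```

A pixel seen wet in 2 of its 3 clear observations gets a normalized value of 2/3; pixels that are always wet get 1.0.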
|
time_series = perform_timeseries_analysis(water_class)
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
The following plots visualize the results of our time-series analysis. You may change the color scales with the cmap option. For the color scales available for use with cmap, see http://matplotlib.org/examples/color/colormaps_reference.html. You can also define discrete color scales by using the levels and colors options. For example:
normalized_water_observations_plot = normalized_water_observations.plot(levels=3, colors=['#E5E5FF', '#4C4CFF', '#0000FF'])
normalized_water_observations_plot = normalized_water_observations.plot(levels=[0.00, 0.50, 1.01], colors=['#E5E5FF', '#0000FF'])
For more examples on how you can modify plots, see http://xarray.pydata.org/en/stable/plotting.html.
|
normalized_water_observations_plot = time_series.normalized_data.plot(cmap='dc_au_WaterSummary')
total_water_observations_plot = time_series.total_data.plot(cmap='dc_au_WaterObservations')
total_clear_observations_plot = time_series.total_clean.plot(cmap='dc_au_ClearObservations')
|
dc_notebooks/water_detection_3.ipynb
|
ceos-seo/Data_Cube_v2
|
apache-2.0
|
Cyclic Rotation
|
def solution(A, K):
    if len(A) == 0:
        return A
    K = K % len(A)  # rotating by a multiple of len(A) is a no-op
    if K == 0:
        return A
    return A[-K:] + A[:-K]

print(solution([3, 8, 9, 7, 6], 3))  # [9, 7, 6, 3, 8]
|
2. Arrays/Cyclic Rotation.ipynb
|
SimplifyData/Codality
|
mit
|
File I/O
Fixing encoding in CSV file
The file nobel-prize-winners.csv contains some odd-looking character sequences in the name column, such as '&egrave;' (the HTML code for 'è'). These are HTML codes for characters outside the limited ASCII set. Python handles Unicode/UTF-8 very well, so let's convert the characters to something more pleasant to the eye.
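As a quick illustration of the decoding we're after (the name below is just an example string, not one from the file):

```python
import html  # part of the Python 3 standard library

raw = "Ren&eacute; Descartes"     # HTML entity for 'é'
fixed = html.unescape(raw)        # decodes entities back to Unicode
```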
|
import html # part of the Python 3 standard library
with open('nobel-prize-winners.csv', 'rt') as fp:
orig = fp.read() # read the entire file as a single hunk of text
orig[727:780] # show some characters, note the '\n'
print(orig[727:780]) # see how the '\n' gets converted to a newline
|
src/00-Solutions-to-exercises.ipynb
|
MadsJensen/intro_to_scientific_computing
|
bsd-3-clause
|
With some Googling, we find this candidate function to fix the character
|
html.unescape?
fixed = html.unescape(orig) # one line, less than a second...
print(fixed[727:780]) # much better
with open('nobel-prize-winners-fixed.csv', 'wt') as fp:
fp.write(fixed) # write back to disk, and we're done!
|
src/00-Solutions-to-exercises.ipynb
|
MadsJensen/intro_to_scientific_computing
|
bsd-3-clause
|
Part B
Write a function which takes a NumPy array and returns another NumPy array with all its elements squared.
Your function should:
be named squares
take 1 argument: a NumPy array
return 1 value: a NumPy array where each element is the squared version of the input array
You cannot use any loops, built-in functions, or NumPy functions.
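One possible solution under these constraints (a sketch, not the graded answer, and it assumes plain arithmetic operators are allowed) relies on NumPy's element-wise multiplication:

```python
import numpy as np

def squares(arr):
    # NumPy applies '*' to every element at once (vectorization),
    # so no loops or function calls are needed
    return arr * arr

result = squares(np.array([1.0, 2.0, 3.0]))
```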
|
import numpy as np
np.random.seed(13735)
x1 = np.random.random(10)
y1 = np.array([ 0.10729775, 0.01234453, 0.37878359, 0.12131263, 0.89916465,
0.50676134, 0.9927178 , 0.20673811, 0.88873398, 0.09033156])
np.testing.assert_allclose(y1, squares(x1), rtol = 1e-06)
np.random.seed(7853)
x2 = np.random.random(35)
y2 = np.array([ 7.70558043e-02, 1.85146792e-01, 6.98666869e-01,
9.93510847e-02, 1.94026134e-01, 8.43335268e-02,
1.84097846e-04, 3.74604155e-03, 7.52840504e-03,
9.34739871e-01, 3.15736597e-01, 6.73512540e-02,
9.61011706e-02, 7.99394100e-01, 2.18175433e-01,
4.87808337e-01, 5.36032332e-01, 3.26047002e-01,
8.86429452e-02, 5.66360150e-01, 9.06164054e-01,
1.73105310e-01, 5.02681242e-01, 3.07929118e-01,
7.08507520e-01, 4.95455022e-02, 9.89891434e-02,
8.94874125e-02, 4.56261817e-01, 9.46454001e-01,
2.62274636e-01, 1.79655411e-01, 3.81695141e-01,
5.66890651e-01, 8.03936029e-01])
np.testing.assert_allclose(y2, squares(x2))
|
assignments/A4/A4_Q2.ipynb
|
eds-uga/csci1360e-su17
|
mit
|
Part C
Write a function which computes the sum of the elements of a NumPy array.
Your function should:
be named sum_of_elements
take 1 argument: a NumPy array
return 1 floating-point value: the sum of the elements in the NumPy array
You cannot use any loops, but you can use the numpy.sum function.
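A sketch of one solution (not the graded answer):

```python
import numpy as np

def sum_of_elements(arr):
    # np.sum collapses the array to a single scalar; float() guarantees
    # a plain floating-point return value
    return float(np.sum(arr))

total = sum_of_elements(np.array([0.5, 1.5, 2.0]))
```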
|
import numpy as np
np.random.seed(7631)
x1 = np.random.random(483)
s1 = 233.48919473752667
np.testing.assert_allclose(s1, sum_of_elements(x1))
np.random.seed(13275)
x2 = np.random.random(23)
s2 = 12.146235770777777
np.testing.assert_allclose(s2, sum_of_elements(x2))
|
assignments/A4/A4_Q2.ipynb
|
eds-uga/csci1360e-su17
|
mit
|
Part D
You may not have realized it yet, but in the previous three parts, you've implemented almost all of what's needed to compute the Euclidean distance between two vectors, as represented with NumPy arrays. All you have to do now is link the code you wrote in the previous three parts together in the right order.
Write a function which takes two NumPy arrays and computes their distance. Your function should:
be named distance
take 2 arguments: both NumPy arrays of the same length
return 1 number: a non-negative floating point value that is the distance between the two arrays
Remember how Euclidean distance $d$ between two vectors $\vec{a}$ and $\vec{b}$ is calculated:
$$
d(\vec{a}, \vec{b}) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + ... + (a_n - b_n) ^2}
$$
where $a_1$ and $b_1$ are the first elements of the arrays $\vec{a}$ and $\vec{b}$; $a_2$ and $b_2$ are the second elements, and so on.
You've already implemented everything except the square root; in addition to that, you just need to arrange the functions you've written in the correct order inside your distance function. Aside from calling your functions from Parts A-C, there is VERY LITTLE ORIGINAL CODE you'll need to write here! The tricky part is understanding how to make all these parts work together.
You cannot use any functions aside from those you've already written.
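Linking the pieces in the right order might look like the following sketch; the only new ingredient is the square root (np.sqrt), and the inlined helpers stand in for the functions you wrote in Parts A-C:

```python
import numpy as np

def distance(a, b):
    diff = a - b                        # element-wise subtraction
    sq = diff * diff                    # Part B: square each element
    return float(np.sqrt(np.sum(sq)))   # Part C sum, then the square root

d = distance(np.array([0.0, 0.0]), np.array([3.0, 4.0]))  # classic 3-4-5 triangle
```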
|
import numpy as np
import numpy.linalg as nla
np.random.seed(477582)
x11 = np.random.random(10)
x12 = np.random.random(10)
np.testing.assert_allclose(nla.norm(x11 - x12), distance(x11, x12))
np.random.seed(54782)
x21 = np.random.random(584)
x22 = np.random.random(584)
np.testing.assert_allclose(nla.norm(x21 - x22), distance(x21, x22))
|
assignments/A4/A4_Q2.ipynb
|
eds-uga/csci1360e-su17
|
mit
|
Part E
Now, you'll use your distance function to find the pair of vectors that are closest to each other. This is a very, very common problem in data science: finding a data point that is most similar to another data point.
In this problem, you'll write a function that takes two arguments: the data point you have (we'll call this the "reference data point"), and a list of data points you want to search. You'll loop through this list and, using your distance() function defined in Part D, compute the distance between the reference data point and each data point in the list, hunting for the one that gives you the smallest distance (meaning here that it is most similar to your reference data point).
Your function should:
be named similarity_search
take 2 arguments: a reference point (NumPy array), and a list of data points (list of NumPy arrays)
return 1 value: the smallest distance you could find between your reference data point and one of the data points in the list
For example, similarity_search([1, 1], [ [1, 1], [2, 2], [3, 3] ]) should return 0, since the smallest distance that can be found between the reference data point [1, 1] and a data point in the list is the list's first element: an exact copy of the reference data point. The distance between a 2D point and itself will always be 0, so this is pretty much as small as you can get.
HINT: This really isn't much code at all! Conceptually it's nothing you haven't done before, either--it's very much like the question in Assignment 3 that asked you to write code to find the minimum value in a list. This just looks intimidating, because now you're dealing with NumPy arrays. If your solution goes beyond 10-15 lines of code, consider re-thinking the problem.
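A sketch under these constraints, assuming a distance helper like the one from Part D:

```python
import numpy as np

def distance(a, b):
    return float(np.sqrt(np.sum((a - b) ** 2)))

def similarity_search(ref, points):
    ref = np.asarray(ref)
    best = None
    for p in points:                      # classic find-the-minimum loop
        d = distance(ref, np.asarray(p))
        if best is None or d < best:
            best = d
    return best

closest = similarity_search([1, 1], [[1, 1], [2, 2], [3, 3]])
```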
|
import numpy as np
r1 = np.array([1, 1])
l1 = [np.array([1, 1]), np.array([2, 2]), np.array([3, 3])]
a1 = 0.0
np.testing.assert_allclose(a1, similarity_search(r1, l1))
np.random.seed(7643)
r2 = np.random.random(2) * 100
l2 = [np.random.random(2) * 100 for i in range(100)]
a2 = 1.6077074397123927
np.testing.assert_allclose(a2, similarity_search(r2, l2))
|
assignments/A4/A4_Q2.ipynb
|
eds-uga/csci1360e-su17
|
mit
|
Input
|
from sklearn.datasets import load_files
corpus = load_files("../data/")
doc_count = len(corpus.data)
print("Doc count:", doc_count)
assert doc_count == 56, "Wrong number of documents loaded, should be 56 (56 stories)"
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Vectorizer
|
from helpers.tokenizer import TextWrangler
from sklearn.feature_extraction.text import CountVectorizer
bow = CountVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="lemma"))
X_bow = bow.fit_transform(corpus.data)
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Decided on BOW vectors containing lemmatized words. In this case, BOW results in better cluster performance than tf-idf vectors, and lemmatization worked slightly better than stemming (see the KElbow plots in the plots/ dir).
Models
|
from sklearn.cluster import KMeans
kmeans = KMeans(n_jobs=-1, random_state=23)
from yellowbrick.cluster import KElbowVisualizer
viz = KElbowVisualizer(kmeans, k=(2, 28), metric="silhouette")
viz.fit(X_bow)
#viz.poof(outpath="plots/KElbow_bow_lemma_silhoutte.png")
viz.poof()
from yellowbrick.cluster import SilhouetteVisualizer
def plot_silhoutte_plots(max_n):
for i in range(2, max_n + 1):
plt.clf()
n_cluster = i
viz = SilhouetteVisualizer(KMeans(n_clusters=n_cluster, random_state=23))
viz.fit(X_bow)
path = "plots/SilhouetteViz" + str(n_cluster)
viz.poof(outpath=path)
#plot_silhoutte_plots(28)
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Decided on 3 clusters, because it yields the highest average Silhouette score compared to other cluster sizes.
|
from yellowbrick.cluster import SilhouetteVisualizer
n_clusters = 3
model = KMeans(n_clusters=n_clusters, n_jobs=-1, random_state=23)
viz = SilhouetteVisualizer(model)
viz.fit(X_bow)
viz.poof()
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Nonetheless, the assignment isn't perfect. Cluster #1 looks good, but the many negative values in clusters #0 and #2 suggest that there exists a cluster with docs more similar than those in the actually assigned cluster. Since a cluster size of 2 also leads to an inhomogeneous cluster and has a lower average Silhouette score, we go with a size of 3. In general, though, these findings suggest that the Sherlock Holmes stories should be represented in a single collection only.
Training
|
from sklearn.pipeline import Pipeline
pipe = Pipeline([("bow", bow),
("kmeans", model)])
pipe.fit(corpus.data)
pred = pipe.predict(corpus.data)
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Evaluation
Cluster density
Silhouette coefficient: in [-1, 1], where 1 means a dense, well-separated cluster and negative values indicate poor separation.
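For intuition, the coefficient is simple enough to compute directly; a naive O(n²) toy implementation (the notebook itself uses scikit-learn's silhouette_score):

```python
import numpy as np

def silhouette_values(X, labels):
    # per sample: a = mean intra-cluster distance,
    # b = mean distance to the nearest other cluster, s = (b - a) / max(a, b)
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise distances
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False                    # exclude the point itself
        a = D[i, same].mean()
        b = min(D[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return np.array(scores)

# two tight, well-separated clusters -> scores near 1
s = silhouette_values([[0, 0], [0, 1], [10, 10], [10, 11]], [0, 0, 1, 1])
```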
|
from sklearn.metrics import silhouette_score
print("Avg Silhouette score:", silhouette_score(X_bow, pred), "(novel collections)")
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Compared to original collections by Sir Arthur Conan Doyle:
|
print("Avg Silhouette score:", silhouette_score(X_bow, corpus.target), "(original collections)")
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
The average Silhouette coefficient is at least slightly positive and much better than the score of the original assignment (which is even negative). Success.
Visual Inspection
We come from the original assignment by Sir Arthur Conan Doyle...
|
from yellowbrick.text import TSNEVisualizer
# Map target names of original collections to target vals
collections_map = {}
for i, collection_name in enumerate(corpus.target_names):
collections_map[i] = collection_name
# Plot
tsne_original = TSNEVisualizer()
labels = [collections_map[c] for c in corpus.target]
tsne_original.fit(X_bow, labels)
tsne_original.poof()
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
... to the novel collection assignment:
|
# Plot
tsne_novel = TSNEVisualizer()
labels = ["c{}".format(c) for c in pipe.named_steps.kmeans.labels_]
tsne_novel.fit(X_bow, labels)
tsne_novel.poof()
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
This confirms the findings from the Silhouette plot above (in the Models section): cluster #1 looks very coherent, cluster #2 is separated, and the two documents of cluster #0 float somewhere in between.
Nonetheless, compared to the original collections, this looks far better. Success.
Document-Cluster Assignment
Finally, we want to assign the Sherlock Holmes stories to the novel collection created by clustering, right?
Create artificial titles for the collections created from clusters.
|
# Novel titles, can be more creative ;>
novel_collections_map = {0: "The Unassignable Adventures of Cluster 0",
1: "The Adventures of Sherlock Holmes in Cluster 1",
2: "The Case-Book of Cluster 2"}
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
Let's see how the books are assigned differently to collections by Sir Arthur Conan Doyle (Original Collection) and by the clustering algorithm (Novel Collection).
|
orig_assignment = [collections_map[c] for c in corpus.target]
novel_assignment = [novel_collections_map[p] for p in pred]
titles = [" ".join(f_name.split("/")[-1].split(".")[0].split("_"))
for f_name in corpus.filenames]
# Final df, compares original with new assignment
df_documents = pd.DataFrame([orig_assignment, novel_assignment],
columns=titles, index=["Original Collection", "Novel Collection"]).T
df_documents.to_csv("collections.csv")
df_documents
df_documents["Novel Collection"].value_counts()
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
The collections are unevenly assigned: cluster #1 is the predominant one, and it looks like cluster #0 subsumes the (rationally) unassignable stories.
The t-SNE plot eventually looks like this:
|
tsne_novel_named = TSNEVisualizer(colormap="Accent")
tsne_novel_named.fit(X_bow, novel_assignment)
tsne_novel_named.poof(outpath="plots/Novel_Sherlock_Holmes_Collections.png")
|
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
|
donK23/pyData-Projects
|
apache-2.0
|
The following box is useless if you're not using a notebook - it just enables the online notebook drawing stuff.
|
#the following are to do with this interactive notebook code
%matplotlib inline
from matplotlib import pyplot as plt # this lets you draw inline pictures in the notebooks
import pylab # this allows you to control figure size
pylab.rcParams['figure.figsize'] = (10.0, 8.0) # this controls figure size in the notebook
|
0 The I don't have a notebook notebook.ipynb
|
handee/opencv-gettingstarted
|
mit
|
Its three main tables are vis.Session, vis.Condition, and vis.Trial. Furthermore, vis.Condition has many tables below it specifying parameters specific to each type of stimulus condition.
|
(dj.ERD(Condition)+1+Session+Trial).draw()
|
jupyter/tutorial/pipeline_vis.ipynb
|
fabiansinz/pipeline
|
lgpl-3.0
|
Each vis.Session comprises multiple trials, each of which has only one condition. The trial has timing information and refers to the general stimulus condition.
The type of condition is determined by the dependent tables of vis.Condition (e.g. vis.Monet, vis.Trippy, vis.MovieClipCond) that describe details that are specific to each type of visual stimulus. Some of these tables have lookup tables with cached data-intensive stimuli such as noise movies.
|
(dj.ERD(MovieClipCond)+Monet-1).draw()
|
jupyter/tutorial/pipeline_vis.ipynb
|
fabiansinz/pipeline
|
lgpl-3.0
|
Distribution of constructiveness (Check if it's skewed)
|
df['constructive_nominal'] = df['constructive'].apply(nominalize_constructiveness)
cdict = df['constructive_nominal'].value_counts().to_dict()
# Plot constructiveness distribution in the data
# The slices will be ordered and plotted counter-clockwise.
labels = 'Constructive', 'Non constructive', 'Not sure'
items =[cdict['yes'], cdict['no'], cdict['not_sure']]
total = sum(cdict.values())
size =[round(item/float(total) * 100) for item in items]
print(size)
colors = ['xkcd:green', 'xkcd:red', 'xkcd:orange']
plot_donut_chart(size,labels,colors, 'Constructiveness distribution (Total = ' + str(total) + ')')
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
|
conversationai/conversationai-crowdsource
|
apache-2.0
|
Distribution of toxicity (Check if skewed)
|
df['crowd_toxicity_level_nominal'] = df['crowd_toxicity_level'].apply(nominalize_toxicity)
# Plot toxicity distribution with context (avg score)
toxicity_counts_dict = {'Very toxic':0, 'Toxic':0, 'Mildly toxic':0, 'Not toxic':0}
toxicity_counts_dict.update(df['crowd_toxicity_level_nominal'].value_counts().to_dict())
print(toxicity_counts_dict)
total = sum(toxicity_counts_dict.values())
# The slices will be ordered and plotted counter-clockwise.
labels = 'Very toxic', 'Toxic', 'Mildly toxic', 'Not toxic'
size=[toxicity_counts_dict['Very toxic'],toxicity_counts_dict['Toxic'],toxicity_counts_dict['Mildly toxic'],toxicity_counts_dict['Not toxic']]
colors = ['xkcd:red', 'xkcd:orange', 'xkcd:blue', 'xkcd:green']
plot_donut_chart(size,labels,colors, 'Toxicity distribution (avg score) (Total = ' + str(total) + ')')
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
|
conversationai/conversationai-crowdsource
|
apache-2.0
|
Distribution of toxicity in constructive and non-constructive comments (Check if the dists are very different)
Plot toxicity distribution for constructive comments
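The cell-by-cell filtering in the next few cells can also be expressed in a single pandas.crosstab call; a sketch on a hypothetical toy frame whose column names mirror the ones used here:

```python
import pandas as pd

# hypothetical toy frame with the two nominal columns used in this notebook
toy = pd.DataFrame({
    "constructive_nominal": ["yes", "yes", "no", "not_sure", "no"],
    "crowd_toxicity_level_nominal":
        ["Toxic", "Not toxic", "Very toxic", "Toxic", "Not toxic"],
})
# rows = constructiveness, columns = toxicity level, values = counts
counts = pd.crosstab(toy["constructive_nominal"],
                     toy["crowd_toxicity_level_nominal"])
```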
|
toxicity_column_name = 'crowd_toxicity_level_nominal'
constructive_very_toxic = df[(df['constructive_nominal'] == 'yes') & (df[toxicity_column_name] == 'Very toxic')].shape[0]
print('Constructive very toxic: ', constructive_very_toxic)
constructive_toxic = df[(df['constructive_nominal'] == 'yes') & (df[toxicity_column_name] == 'Toxic')].shape[0]
print('Constructive toxic: ', constructive_toxic)
constructive_mildly_toxic = df[(df['constructive_nominal'] == 'yes') & (df[toxicity_column_name] == 'Mildly toxic')].shape[0]
print('Constructive mildly toxic: ', constructive_mildly_toxic)
constructive_not_toxic = df[(df['constructive_nominal'] == 'yes') & (df[toxicity_column_name] == 'Not toxic')].shape[0]
print('Constructive non toxic: ', constructive_not_toxic)
labels = 'Very toxic', 'Toxic', 'Mildly toxic', 'Not toxic'
size=[constructive_very_toxic, constructive_toxic, constructive_mildly_toxic, constructive_not_toxic]
total = sum(size)
colors = ['xkcd:red', 'xkcd:orange', 'xkcd:blue', 'xkcd:green']
plot_donut_chart(size,labels,colors, 'Toxicity in constructive comments (Total = ' + str(total) + ')')
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
|
conversationai/conversationai-crowdsource
|
apache-2.0
|
Plot toxicity distribution for non-constructive comments
|
# Plot toxicity (with context) distribution for non constructive comments
nconstructive_very_toxic = df[(df['constructive_nominal'] == 'no') & (df[toxicity_column_name] == 'Very toxic')].shape[0]
print('Non constructive very toxic: ', nconstructive_very_toxic)
nconstructive_toxic = df[(df['constructive_nominal'] == 'no') & (df[toxicity_column_name] == 'Toxic')].shape[0]
print('Non constructive toxic: ', nconstructive_toxic)
nconstructive_mildly_toxic = df[(df['constructive_nominal'] == 'no') & (df[toxicity_column_name] == 'Mildly toxic')].shape[0]
print('Non constructive mildly toxic: ', nconstructive_mildly_toxic)
nconstructive_not_toxic = df[(df['constructive_nominal'] == 'no') & (df[toxicity_column_name] == 'Not toxic')].shape[0]
print('Non constructive non toxic: ', nconstructive_not_toxic)
labels = 'Very toxic', 'Toxic', 'Mildly toxic', 'Not toxic'
size=[nconstructive_very_toxic, nconstructive_toxic, nconstructive_mildly_toxic, nconstructive_not_toxic]
total = sum(size)
colors = ['xkcd:red', 'xkcd:orange', 'xkcd:blue', 'xkcd:green']
plot_donut_chart(size,labels,colors,'Toxicity in non-constructive comments (Total = ' + str(total) + ')')
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
|
conversationai/conversationai-crowdsource
|
apache-2.0
|
Plot toxicity distribution for ambiguous comments
|
# Plot toxicity (with context) distribution for ambiguous comments
ns_very_toxic = df[(df['constructive_nominal'] == 'not_sure') & (df[toxicity_column_name] == 'Very toxic')].shape[0]
print('Ambiguous very toxic: ', ns_very_toxic)
ns_toxic = df[(df['constructive_nominal'] == 'not_sure') & (df[toxicity_column_name] == 'Toxic')].shape[0]
print('Ambiguous toxic: ', ns_toxic)
ns_mildly_toxic = df[(df['constructive_nominal'] == 'not_sure') & (df[toxicity_column_name] == 'Mildly toxic')].shape[0]
print('Ambiguous mildly toxic: ', ns_mildly_toxic)
ns_not_toxic = df[(df['constructive_nominal'] == 'not_sure') & (df[toxicity_column_name] == 'Not toxic')].shape[0]
print('Ambiguous non toxic: ', ns_not_toxic)
labels = 'Very toxic', 'Toxic', 'Mildly toxic', 'Not toxic'
size=[ns_very_toxic, ns_toxic, ns_mildly_toxic, ns_not_toxic]
total = sum(size)
colors = ['xkcd:red', 'xkcd:orange', 'xkcd:blue', 'xkcd:green']
plot_donut_chart(size,labels,colors, 'Toxicity (no context) in ambiguous comments (Total = ' + str(total) + ')')
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
|
conversationai/conversationai-crowdsource
|
apache-2.0
|
Check how the annotators did on internal gold questions
|
# Starting from batch three we have included internal gold questions for toxicity and constructiveness (20 each) with
# internal_gold_constructiveness flag True. In the code below, we examine to what extent the annotators agreed with
# these internal gold questions.
# Get a subset dataframe with internal gold questions for constructiveness
internal_gold_con_df = df[df['constructive_internal_gold'].notnull()].copy()
# Call secret_gold_evaluation_constructiveness function from data_quality_analysis_functions
print('Disagreement on constructiveness secret gold questions (%): ', secret_gold_evaluation_constructiveness(internal_gold_con_df))
# Get a subset dataframe with internal gold questions for toxicity
internal_gold_tox_df = df[df['crowd_toxicity_level_internal_gold'].notnull()].copy()
# Call secret_gold_evaluation_toxicity function from data_quality_analysis_functions
print('Disagreement on toxicity secret gold questions (%): ', secret_gold_evaluation_toxicity(internal_gold_tox_df))
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
|
conversationai/conversationai-crowdsource
|
apache-2.0
|
2. RL-Algorithms based on Temporal Difference TD(0)
2a. Load the "Temporal Difference" Python class
Load the Python class PlotUtils(), which provides various plotting utilities, and start a new instance.
|
%run ../PlotUtils.py
plotutls = PlotUtils()
|
Reinforcement-Learning/TD0-models/02.CliffWalking.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|