The **operator** Module
import operator

dir(operator)
Apache-2.0
dd_1/Part 1/Section 06 - First-Class Functions/10 - The operator Module.ipynb
rebekka-halal/bg
Arithmetic Operators

A variety of arithmetic operators are implemented:
operator.add(1, 2)
operator.mul(2, 3)
operator.pow(2, 3)
operator.mod(13, 2)
operator.floordiv(13, 2)
operator.truediv(3, 2)
These would have been very handy in our previous section:
from functools import reduce

reduce(lambda x, y: x*y, [1, 2, 3, 4])
Instead of defining a lambda, we could simply use **operator.mul**:
reduce(operator.mul, [1, 2, 3, 4])
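As a quick sanity check (a small sketch added here, not part of the original notebook), the operator-based reduce gives the same result as the lambda version:

```python
import operator
from functools import reduce

data = [1, 2, 3, 4]
# Both compute 1 * 2 * 3 * 4
via_lambda = reduce(lambda x, y: x * y, data)
via_operator = reduce(operator.mul, data)
assert via_lambda == via_operator == 24
```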
Comparison and Boolean Operators

Comparison and Boolean operators are also implemented as functions:
operator.lt(10, 100)
operator.le(10, 10)
operator.is_('abc', 'def')
We can even get the truthiness of an object:
operator.truth([1, 2])
operator.truth([])
operator.and_(True, False)
operator.or_(True, False)
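One point worth spelling out (a sketch added here, not from the original notebook): **operator.and_** and **operator.or_** correspond to the bitwise **&** and **|** operators, not the short-circuiting `and`/`or` keywords — on booleans the results happen to coincide:

```python
import operator

# operator.and_ / operator.or_ are the bitwise & and | operators
print(operator.and_(6, 3))   # 6 & 3 == 2
print(operator.or_(6, 3))    # 6 | 3 == 7
# On booleans the results match logical and/or
print(operator.and_(True, False))  # False
```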
Element and Attribute Getters and Setters

We generally select an item by index from a sequence using **[n]**:
my_list = [1, 2, 3, 4]
my_list[1]
We can do the same thing using:
operator.getitem(my_list, 1)
If the sequence is mutable, we can also set or remove items:
my_list = [1, 2, 3, 4]
my_list[1] = 100
del my_list[3]
print(my_list)

my_list = [1, 2, 3, 4]
operator.setitem(my_list, 1, 100)
operator.delitem(my_list, 3)
print(my_list)
[1, 100, 3]
We can also do the same thing using the **operator** module's **itemgetter** function. The difference is that it returns a callable:
f = operator.itemgetter(2)
Now, **f(my_list)** will return **my_list[2]**
f(my_list)

x = 'python'
f(x)
Furthermore, we can pass more than one index to **itemgetter**:
f = operator.itemgetter(2, 3)
my_list = [1, 2, 3, 4]
f(my_list)

x = 'python'
f(x)
Similarly, **operator.attrgetter** does the same thing, but with object attributes.
class MyClass:
    def __init__(self):
        self.a = 10
        self.b = 20
        self.c = 30

    def test(self):
        print('test method running...')

obj = MyClass()
obj.a, obj.b, obj.c

f = operator.attrgetter('a')
f(obj)

my_var = 'b'
operator.attrgetter(my_var)(obj)

my_var = 'c'
operator.attrgetter(my_var)(obj)

f = operator.attrgetter('a', 'b', 'c')
f(obj)
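**attrgetter** also accepts dotted names for nested attribute lookup. A minimal sketch (the `Engine`/`Car` classes here are made up for illustration):

```python
import operator

class Engine:
    def __init__(self):
        self.power = 120

class Car:
    def __init__(self):
        self.engine = Engine()

# A dotted name walks the attribute chain: car.engine.power
get_power = operator.attrgetter('engine.power')
print(get_power(Car()))  # 120
```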
Of course, attributes can also be methods. In this case, **attrgetter** will return the object's **test** method - a callable that can then be called using **()**:
f = operator.attrgetter('test')
obj_test_method = f(obj)
obj_test_method()
test method running...
Just like lambdas, we do not need to assign them to a variable name in order to use them:
operator.attrgetter('a', 'b')(obj)
operator.itemgetter(2, 3)('python')
Of course, we can achieve the same thing using functions or lambdas:
f = lambda x: (x.a, x.b, x.c)
f(obj)

f = lambda x: (x[2], x[3])
f([1, 2, 3, 4])
f('python')
Use Case Example: Sorting

Suppose we want to sort a list of complex numbers based on their real parts:
a = 2 + 5j
a.real

l = [10+1j, 8+2j, 5+3j]
sorted(l, key=operator.attrgetter('real'))
Or if we want to sort a list of strings based on the last character of each string:
l = ['aaz', 'aad', 'aaa', 'aac']
sorted(l, key=operator.itemgetter(-1))
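**itemgetter** works with any key usable in **[]**, including dict keys, which makes it a handy sort key for lists of dicts. A small sketch (the sample data is made up):

```python
import operator

people = [{'name': 'Bob', 'age': 35}, {'name': 'Ann', 'age': 28}]
# itemgetter('age') returns a callable doing d['age']
print(sorted(people, key=operator.itemgetter('age')))
# Ann (28) sorts before Bob (35)
```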
Or maybe we want to sort a list of tuples based on the first item of each tuple:
l = [(2, 3, 4), (1, 2, 3), (4, ), (3, 4)]
sorted(l, key=operator.itemgetter(0))
Slicing
l = [1, 2, 3, 4]
l[0:2]

l[0:2] = ['a', 'b', 'c']
print(l)

del l[3:5]
print(l)
['a', 'b', 'c']
We can do the same thing this way:
l = [1, 2, 3, 4]
operator.getitem(l, slice(0, 2))

operator.setitem(l, slice(0, 2), ['a', 'b', 'c'])
print(l)

operator.delitem(l, slice(3, 5))
print(l)
['a', 'b', 'c']
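A nice side effect of using explicit **slice** objects (a sketch added here, not from the original notebook) is that a slice can be built once and reused across different sequences:

```python
import operator

# One slice object, reused on a list and a string
first_two = slice(0, 2)
print(operator.getitem([1, 2, 3, 4], first_two))  # [1, 2]
print(operator.getitem('python', first_two))      # 'py'
```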
Calling another Callable
x = 'python'
x.upper()

operator.methodcaller('upper')('python')
Of course, since **upper** is just an attribute of the string object **x**, we could also have used:
operator.attrgetter('upper')(x)()
If the callable takes more than one parameter, the extra arguments can be specified in **methodcaller**:
class MyClass:
    def __init__(self):
        self.a = 10
        self.b = 20

    def do_something(self, c):
        print(self.a, self.b, c)

obj = MyClass()
obj.do_something(100)

operator.methodcaller('do_something', 100)(obj)

class MyClass:
    def __init__(self):
        self.a = 10
        self.b = 20

    def do_something(self, *, c):
        print(self.a, self.b, c)

obj = MyClass()
obj.do_something(c=100)

operator.methodcaller('do_something', c=100)(obj)
10 20 100
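**methodcaller** pairs naturally with **map** and **sorted** when the same method must be called on every element (a small sketch, not from the original notebook):

```python
import operator

lines = ['  alpha ', ' beta  ']
# Call .strip() on each element
print(list(map(operator.methodcaller('strip'), lines)))  # ['alpha', 'beta']
# Case-insensitive sort via .lower()
print(sorted(['Banana', 'apple'], key=operator.methodcaller('lower')))
```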
Data types

---

**EXERCISES**

_1. What type of value is 3.4? How can you find out?_

**Solution**

_It is a floating-point number (often abbreviated "float")._
print(type(3.4))
<class 'float'>
MIT
semester2/notebooks/1.2-data-types-solutions.ipynb
pedrohserrano/global-studies
This is the basic load-and-clean code for the Divvy trip data.
# %load ~/dataviz/ExplorePy/clean-divvy-explore.py
import pandas as pd
import numpy as np
import datetime as dt
import pandas.api.types as pt
import pytz as pytz
from astral import LocationInfo
from astral.sun import sun
from astral.geocoder import add_locations, database, lookup
from dateutil import parser as du_pr
from pathlib import Path

db = database()
TZ = pytz.timezone('US/Central')
chi_town = lookup('Chicago', db)
print(chi_town)

rev = "5"
input_dir = '/mnt/d/DivvyDatasets'
input_divvy_basename = "divvy_trip_history_201909-202108"
input_divvy_base = input_dir + "/" + input_divvy_basename
input_divvy_raw = input_divvy_base + ".csv"
input_divvy_rev = input_dir + "/rev" + rev + "-" + input_divvy_basename + ".csv"
input_chitemp = input_dir + "/" + "ChicagoTemperature.csv"

#
# returns true if the rev file is already present
#
def rev_file_exists():
    path = Path(input_divvy_rev)
    return path.is_file()

def update_dow_to_category(df):
    # we need to get the dow properly set
    cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    cats_type = pt.CategoricalDtype(categories=cats, ordered=True)
    df['day_of_week'] = df['day_of_week'].astype(cats_type)
    return df

def update_start_cat_to_category(df):
    cats = ['AM_EARLY', 'AM_RUSH', 'AM_MID', 'LUNCH',
            'PM_EARLY', 'PM_RUSH', 'PM_EVENING', 'PM_LATE']
    cats_type = pt.CategoricalDtype(categories=cats, ordered=True)
    df['start_cat'] = df['start_cat'].astype(cats_type)
    return df

#
# loads and returns the rev file as a data frame. It handles
# the need to specify some column types
#
# filename : the filename to load
#
def load_divvy_dataframe(filename):
    print("Loading " + filename)
    # so need to set the type on a couple of columns
    col_names = pd.read_csv(filename, nrows=0).columns
    types_dict = {
        'ride_id': str, 'start_station_id': str, 'end_station_id': str,
        'avg_temperature_celsius': float, 'avg_temperature_fahrenheit': float,
        'duration': float, 'start_lat': float, 'start_lng': float,
        'end_lat': float, 'end_lng': float,
        'avg_rain_intensity_mm/hour': float, 'avg_wind_speed': float,
        'max_wind_speed': float, 'total_solar_radiation': int,
        'is_dark': bool
    }
    types_dict.update({col: str for col in col_names if col not in types_dict})
    date_cols = ['started_at', 'ended_at', 'date']
    df = pd.read_csv(filename, dtype=types_dict, parse_dates=date_cols)
    if 'start_time' in df:
        print("Converting start_time")
        df['start_time'] = df['start_time'].apply(lambda x: dt.datetime.strptime(x, "%H:%M:%S"))
    return df

def yrmo(year, month):
    return "{}-{}".format(year, month)

def calc_duration_in_minutes(started_at, ended_at):
    diff = ended_at - started_at
    return diff.total_seconds() / 60

#
# load the chicago temperature into a data frame
#
def load_temperature_dataframe():
    print("Loading " + input_chitemp)
    df = pd.read_csv(input_chitemp)
    print("Converting date")
    df['date'] = df['date'].apply(lambda x: dt.datetime.strptime(x, "%Y-%m-%d"))
    return df

def add_start_time(started_at):
    return started_at.time()

def add_start_cat(started_at):
    start_time = started_at.time()
    time_new_day = dt.time(0, 0)
    time_am_rush_start = dt.time(7, 0)
    time_am_rush_end = dt.time(9, 0)
    time_lunch_start = dt.time(11, 30)
    time_lunch_end = dt.time(13, 0)
    time_pm_rush_start = dt.time(15, 30)
    time_pm_rush_end = dt.time(19, 0)
    time_evening_end = dt.time(23, 0)
    if start_time >= time_new_day and start_time < time_am_rush_start:
        return 'AM_EARLY'
    if start_time >= time_am_rush_start and start_time < time_am_rush_end:
        return 'AM_RUSH'
    if start_time >= time_am_rush_end and start_time < time_lunch_start:
        return 'AM_MID'
    if start_time >= time_lunch_start and start_time < time_lunch_end:
        return 'LUNCH'
    # slight change on Chi rush from 15:00 to 15:30
    if start_time >= time_lunch_end and start_time < time_pm_rush_start:
        return 'PM_EARLY'
    if start_time >= time_pm_rush_start and start_time < time_pm_rush_end:
        return 'PM_RUSH'
    if start_time >= time_pm_rush_end and start_time < time_evening_end:
        return 'PM_EVENING'
    return 'PM_LATE'

def add_is_dark(started_at):
    st = started_at.replace(tzinfo=TZ)
    chk = sun(chi_town.observer, date=st, tzinfo=chi_town.timezone)
    return st >= chk['dusk'] or st <= chk['dawn']

#
# handles loading and processing the divvy raw data by
# adding columns, removing bad data, etc.
#
def process_raw_divvy(filename):
    df_divvy = load_divvy_dataframe(filename)

    print("Creating additional columns")
    data = pd.Series(df_divvy.apply(lambda x: [
        add_start_time(x['started_at']),
        add_is_dark(x['started_at']),
        yrmo(x['year'], x['month']),
        calc_duration_in_minutes(x['started_at'], x['ended_at']),
        add_start_cat(x['started_at'])
    ], axis=1))
    new_df = pd.DataFrame(data.tolist(), data.index,
                          columns=['start_time', 'is_dark', 'yrmo', 'duration', 'start_cat'])
    df_divvy = df_divvy.merge(new_df, left_index=True, right_index=True)

    # (the earlier column-by-column row applies for start_time, start_cat,
    #  is_dark, yrmo and duration are superseded by the single apply above)

    # add the temperature
    df_chitemp = load_temperature_dataframe()
    print("Merging in temperature")
    df_divvy = pd.merge(df_divvy, df_chitemp, on="date")
    print(df_divvy.shape)
    print(df_divvy.head())
    # 2020-02-21 was missing in org. temp

    #
    # clean the dataframe to remove invalid durations
    # which are really only (about) < 1 minute, or > 12 hours
    #
    print("Removing invalid durations")
    df_divvy = df_divvy[(df_divvy.duration >= 1.2) & (df_divvy.duration < 60 * 12)]

    df_divvy = update_dow_to_category(df_divvy)
    df_divvy = update_start_cat_to_category(df_divvy)

    #
    # drop some bogus columns
    #
    print("Dropping columns")
    df_divvy.drop(df_divvy.columns[[0, -1]], axis=1, inplace=True)
    return df_divvy

#
# writes the dataframe to the specified filename
#
def save_dataframe(df, filename):
    print("Saving dataframe to " + filename)
    df_out = df.copy()
    df_out['date'] = df_out['date'].map(lambda x: dt.datetime.strftime(x, '%Y-%m-%d'))
    df_out.to_csv(filename, index=False, date_format="%Y-%m-%d %H:%M:%S")

#
# load the divvy csv into a data frame
#
if rev_file_exists():
    df_divvy = load_divvy_dataframe(input_divvy_rev)
    df_divvy = update_dow_to_category(df_divvy)
    df_divvy = update_start_cat_to_category(df_divvy)
else:
    df_divvy = process_raw_divvy(input_divvy_raw)
    save_dataframe(df_divvy, input_divvy_rev)

print(df_divvy)
df_divvy.info()

# btw, can just pass the row and let the function figure it out
LocationInfo(name='Chicago', region='USA', timezone='US/Central', latitude=41.833333333333336, longitude=-87.68333333333334)
Loading /mnt/d/DivvyDatasets/rev5-divvy_trip_history_201909-202108.csv
Converting start_time
(abridged output: DataFrame preview of 8,190,307 rows x 32 columns, followed by df.info() listing the 32 columns; dtypes: bool(1), category(2), datetime64[ns](4), float64(10), int64(1), object(14); memory usage: 1.8+ GB)
MIT
Notebook/HereItIs.ipynb
soddencarpenter/dataviz
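The `update_dow_to_category` helper above relies on ordered categoricals so weekdays sort chronologically rather than alphabetically. A minimal sketch of the idea on toy data:

```python
import pandas as pd
import pandas.api.types as pt

days = pd.Series(['Sunday', 'Monday', 'Saturday'])
cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
        'Friday', 'Saturday', 'Sunday']
cats_type = pt.CategoricalDtype(categories=cats, ordered=True)
# After the cast, sorting follows the category order, not the alphabet
print(days.astype(cats_type).sort_values().tolist())
# ['Monday', 'Saturday', 'Sunday']
```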
Look at the average duration by rider type and day of week.
type(df_divvy['duration'])
df_divvy.info()
df_divvy.shape

df_rider_by_dow = df_divvy.groupby(['member_casual', 'day_of_week']).agg(mean_time=('duration', 'mean')).round(2)
df_rider_by_dow
df_rider_by_dow.sort_values(by=['member_casual', 'day_of_week'])
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8190307 entries, 0 to 8190306
(abridged: 32 columns; dtypes: bool(1), category(2), datetime64[ns](4), float64(10), int64(1), object(14); memory usage: 1.8+ GB)
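The groupby call above uses pandas named aggregation (`mean_time=('duration', 'mean')`). A small self-contained sketch of the same pattern on made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    'member_casual': ['member', 'member', 'casual'],
    'duration': [10.0, 20.0, 30.0],
})
# Named aggregation: output column name = (input column, agg function)
out = df.groupby('member_casual').agg(mean_time=('duration', 'mean')).round(2)
print(out)
# member -> 15.0, casual -> 30.0
```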
Now we want to plot
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
bar plot of Duration by Rider Type and Day of Week
df_rider_by_dow.unstack('member_casual').plot(kind='bar')

df_rider_by_dow.reset_index(inplace=True)
sns.set(rc={"figure.figsize": (16, 8)})
sns.barplot(data=df_rider_by_dow, x="day_of_week", y="mean_time", hue="member_casual")
Look at the number of riders by type and day of week grouping
df_rider_by_dow = df_divvy.groupby(['member_casual', 'day_of_week']).agg(num_rides=('ID', 'count'))
df_rider_by_dow
#df_rider_by_dow['day_of_week'] = df_rider_by_dow['day_of_week'].astype(cats_type)
df_rider_by_dow.sort_values(by=['member_casual', 'day_of_week'])
Plot of number of rides by rider type and day of week
df_rider_by_dow.unstack('member_casual').plot(kind='bar')
df_rider_by_dow.reset_index(inplace=True)
sns.set(rc={"figure.figsize": (16, 8)})
sns.barplot(data=df_rider_by_dow, x="day_of_week", y="num_rides", hue="member_casual")

df_member_by_yr_dow = df_divvy[df_divvy['member_casual'] == 'member'].groupby(['year', 'day_of_week']).agg(mean_time=('duration', 'mean')).round(2)
df_casual_by_yr_dow = df_divvy[df_divvy['member_casual'] == 'casual'].groupby(['year', 'day_of_week']).agg(mean_time=('duration', 'mean')).round(2)
df_member_by_yr_dow.unstack('year').plot(kind='bar', title='Member Rider mean time by year and day of week')
df_casual_by_yr_dow.unstack('year').plot(kind='bar', title='Casual Rider mean time by year and day of week')

df_rider_by_yrmo = df_divvy.groupby(['member_casual', 'yrmo']).agg(mean_time=('duration', 'mean')).round(2)
df_rider_by_yrmo.unstack('member_casual').plot(kind='bar', title='Rider mean time by yrmo')

df_rider_count_by_yrmo = df_divvy.groupby(['member_casual', 'yrmo']).agg(count=('ID', 'count'))
df_rider_count_by_yrmo.unstack('member_casual').plot(kind='bar', title='Rider count by yrmo')
df_rider_count_by_yrmo.unstack('member_casual').plot(kind='line', title='Rider count by yrmo')
Let's look at starting in the dark by rider
df_rider_count_by_is_dark = df_divvy.groupby(['member_casual', 'is_dark']).agg(count=('ID', 'count'))
df_rider_count_by_is_dark.unstack('member_casual').plot(kind='bar', title='Rider count by starting in the dark')

df_rider_by_time = df_divvy.groupby(['member_casual', 'start_cat']).agg(count=('ID', 'count'))
df_rider_by_time.unstack('start_cat').plot(kind='bar')

weekdays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
weekends = ['Saturday', 'Sunday']
weekday_riders = df_divvy[df_divvy.day_of_week.isin(weekdays)]
weekend_riders = df_divvy[df_divvy.day_of_week.isin(weekends)]
weekday_riders.shape
weekend_riders.shape

df_rider_by_time_weekday = weekday_riders.groupby(['member_casual', 'start_cat']).agg(count=('ID', 'count'))
df_rider_by_time_weekday.unstack('start_cat').plot(kind='bar', title="Weekday times")
df_rider_by_time_weekday.to_csv(date_format="%Y-%m-%d %H:%M:%S")

df_rider_by_time_weekend = weekend_riders.groupby(['member_casual', 'start_cat']).agg(count=('ID', 'count'))
df_rider_by_time_weekend.unstack('start_cat').plot(kind='bar', title="Weekend times")
df_rider_by_time_weekend.to_csv()
Starting stations -- member
df_starting_member = df_divvy[df_divvy['member_casual'] == 'member'].groupby(['start_station_name']).agg(count=('ID', 'count'))
df_starting_member = df_starting_member.sort_values(by='count', ascending=False)
df_starting_member_top = df_starting_member.iloc[0:19]
df_starting_member_top.plot(kind='bar', title="Starting Stations - Member")

df_starting_member_weekday = weekday_riders[weekday_riders.member_casual == 'member'].groupby(['start_station_name']).agg(count=('ID', 'count'))
df_starting_member_weekday = df_starting_member_weekday.sort_values(by='count', ascending=False)
df_starting_member_weekday_top = df_starting_member_weekday.iloc[0:19]
df_starting_member_weekday_top.plot(kind='bar', title="Starting Stations Weekday - Member")

from io import StringIO
output = StringIO()
df_starting_member_weekday_top.to_csv(output)
print(output.getvalue())

df_starting_member_weekend = weekend_riders[weekend_riders.member_casual == 'member'].groupby(['start_station_name']).agg(count=('ID', 'count'))
df_starting_member_weekend = df_starting_member_weekend.sort_values(by='count', ascending=False)
df_starting_member_weekend_top = df_starting_member_weekend.iloc[0:19]
df_starting_member_weekend_top.plot(kind='bar', title="Starting Stations Weekend - Member")

output = StringIO()
df_starting_member_weekend_top.to_csv(output)
print(output.getvalue())
start_station_name,count
Clark St & Elm St,11804
Theater on the Lake,11412
Wells St & Concord Ln,11294
Broadway & Barry Ave,9624
Clark St & Lincoln Ave,9435
Clark St & Armitage Ave,9319
Lake Shore Dr & North Blvd,9245
Wells St & Elm St,8896
Streeter Dr & Grand Ave,8820
Dearborn St & Erie St,8260
Larrabee St & Webster Ave,8083
Desplaines St & Kinzie St,8021
Kingsbury St & Kinzie St,7777
Wabash Ave & Grand Ave,7440
Wilton Ave & Belmont Ave,7427
Wells St & Huron St,7309
Clark St & Wrightwood Ave,7251
Wells St & Evergreen Ave,7215
Broadway & Cornelia Ave,7171
Starting Stations - casual
df_starting_casual = df_divvy[df_divvy['member_casual'] == 'casual'].groupby(['start_station_name']).agg(count=('ID', 'count'))
df_starting_casual = df_starting_casual.sort_values(by='count', ascending=False)
df_starting_casual_top = df_starting_casual.iloc[0:19]
df_starting_casual_top.head()

df_starting_casual_weekday = weekday_riders[weekday_riders.member_casual == 'casual'].groupby(['start_station_name']).agg(count=('ID', 'count'))
df_starting_casual_weekday = df_starting_casual_weekday.sort_values(by='count', ascending=False)
df_starting_casual_weekday_top = df_starting_casual_weekday.iloc[0:19]

output = StringIO()
df_starting_casual_weekday_top.to_csv(output)
print(output.getvalue())
df_starting_casual_weekday_top.shape

df_starting_casual_weekend = weekend_riders[weekend_riders.member_casual == 'casual'].groupby(['start_station_name']).agg(count=('ID', 'count'))
df_starting_casual_weekend = df_starting_casual_weekend.sort_values(by='count', ascending=False)
df_starting_casual_weekend_top = df_starting_casual_weekend.iloc[0:19]

output = StringIO()
df_starting_casual_weekend_top.to_csv(output)
print(output.getvalue())
start_station_name,count
Streeter Dr & Grand Ave,42683
Lake Shore Dr & Monroe St,24422
Millennium Park,23273
Michigan Ave & Oak St,19327
Theater on the Lake,17348
Lake Shore Dr & North Blvd,14798
Shedd Aquarium,14373
Indiana Ave & Roosevelt Rd,12122
Clark St & Lincoln Ave,11290
Dusable Harbor,10879
Wells St & Concord Ln,10573
Clark St & Armitage Ave,10339
Wabash Ave & Grand Ave,10207
Michigan Ave & Washington St,10082
Michigan Ave & Lake St,9802
Clark St & Elm St,9795
Buckingham Fountain,9458
Michigan Ave & 8th St,9345
Fairbanks Ct & Grand Ave,8833
osumapper: create osu! map using Tensorflow and Colab

-- For osu!mania game mode --

For mappers who don't know how this colaboratory thing works:
- Press Ctrl+Enter in code blocks to run them one by one
- It will ask you to upload .osu file and audio.mp3 after the third block of code
- .osu file needs to have correct timing (you can use [statementreply](https://osu.ppy.sh/users/126198)'s TimingAnlyz tool)
- After uploading them, wait for a few minutes until the download pops

Github: https://github.com/kotritrona/osumapper

Step 1: Installation

First of all, check the Notebook Settings under the Edit tab. Activate GPU to make the training faster. Then, clone the git repository and install dependencies.
%cd /content/
!git clone https://github.com/kotritrona/osumapper.git
%cd osumapper/v7.0
!apt install -y ffmpeg
!apt install -y nodejs
!cp requirements_colab.txt requirements.txt
!cp package_colab.json package.json
!pip install -r requirements.txt
!npm install
Apache-2.0
v7.0/mania_Colab.ipynb
jsstwright/osumapper
Step 2: Choose a pre-trained model

Set the select_model variable to one of:
- "default": default model (choose only after training it)
- "lowkey": model trained with 4-key and 5-key maps (☆2.5-5.5)
- "highkey": model trained with 6-key to 9-key maps (☆2.5-5.5)
from mania_setup_colab import *

select_model = "highkey"
model_params = load_pretrained_model(select_model);
Step 3: Upload map and music file

Map file = .osu file with correct timing (**Important:** set to mania mode and the wished key count!)
Music file = the mp3 file in the osu folder
from google.colab import files

print("Please upload the map file:")
mapfile_upload = files.upload()
for fn in mapfile_upload.keys():
    uploaded_osu_name = fn
    print('Uploaded map file: "{name}" {length} bytes'.format(name=fn, length=len(mapfile_upload[fn])))

print("Please upload the music file:")
music_upload = files.upload()
for fn in music_upload.keys():
    print('Uploaded music file: "{name}" {length} bytes'.format(name=fn, length=len(music_upload[fn])))
Step 4: Read the map and convert to python readable format
from act_newmap_prep import *

step4_read_new_map(uploaded_osu_name);
Step 5: Use model to calculate map rhythm

Parameters:
- "note_density" determines how many notes will be placed on the timeline, ranges from 0 to 1.
- "hold_favor" determines how the model favors holds against circles, ranges from -1 to 1.
- "divisor_favor" determines how the model favors notes to be on X divisors starting from a beat (white, blue, red, blue), ranges from -1 to 1 each.
- "hold_max_ticks" determines the max amount of time a hold can hold off, ranges from 1 to +∞.
- "hold_min_return" determines the final granularity of the pattern dataset, ranges from 1 to +∞.
- "rotate_mode" determines how the patterns from the dataset get rotated. Modes (0,1,2,3,4):
  - 0 = no rotation
  - 1 = random
  - 2 = mirror
  - 3 = circulate
  - 4 = circulate + mirror
from mania_act_rhythm_calc import *

model = step5_load_model(model_file=model_params["rhythm_model"]);
npz = step5_load_npz();
params = model_params["rhythm_param"]
# Or set the parameters here...
# params = step5_set_params(note_density=0.6, hold_favor=0.2, divisor_favor=[0] * divisor, hold_max_ticks=8, hold_min_return=1, rotate_mode=4);

predictions = step5_predict_notes(model, npz, params);
notes_each_key = step5_build_pattern(predictions, params, pattern_dataset=model_params["pattern_dataset"]);
Do a little modding to the map.

Parameters:
- key_fix: remove continuous notes on single-key modes (0,1,2,3): 0=inactive, 1=remove late note, 2=remove early note, 3=divert
modding_params = model_params["modding"]
# modding_params = {
#     "key_fix": 3
# }

notes_each_key = mania_modding(notes_each_key, modding_params);
notes, key_count = merge_objects_each_key(notes_each_key)
Finally, save the data into an .osu file!
from google.colab import files
from mania_act_final import *

saved_osu_name = step8_save_osu_mania_file(notes, key_count);
files.download(saved_osu_name)

# clean up if you want to make another map!
# colab_clean_up(uploaded_osu_name)
_____no_output_____
Apache-2.0
v7.0/mania_Colab.ipynb
jsstwright/osumapper
First let's figure out how to generate an AR process
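For reference, the processes generated in the code below have these standard definitions (with \(\varepsilon_t\) denoting white noise); the AR processes are stationary only when the \(\phi\) coefficients are small enough (for AR(1), \(|\phi| < 1\)), which is why large values of phi blow up:

```latex
\begin{aligned}
\text{AR(1):}\quad & x_t = \phi\, x_{t-1} + \varepsilon_t \\
\text{AR(2):}\quad & x_t = \phi_1\, x_{t-1} + \phi_2\, x_{t-2} + \varepsilon_t \\
\text{MA(1):}\quad & x_t = \varepsilon_t + \theta\, \varepsilon_{t-1}
\end{aligned}
```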
# (imported in earlier cells of the notebook)
import numpy as np
import pandas as pd

def ar1(phi = .9, n = 100, init = 0):
    time_series = [init]
    error = np.random.randn(n)
    for period in range(n):
        time_series.append(error[period] + phi*time_series[-1])
    return pd.Series(time_series[1:], index = range(n))

def ar2(phi1 = .9, phi2 = .8, n = 100, init = 0):
    time_series = [init, init]
    error = np.random.randn(n)
    for period in range(2,n):
        time_series.append(error[period] + phi1*time_series[-1] + phi2*time_series[-2])
    return pd.Series(time_series[1:], index = range(1,n))

# try out different values of phi >= 1 as compared to < 1
# sometimes you need to make a large n to see lack of stationarity
a1 = ar1(phi = .5, n = 10)
a1.plot()

# try out different values of phi >= 1 as compared to < 1
# sometimes you need to make a large n to see lack of stationarity
a2 = ar2(n = 100)
a2.plot()
_____no_output_____
MIT
TimeSeriesAnalysisWithPython-master/SciPyTimeSeries/.ipynb_checkpoints/09a. AR + MA processes-checkpoint.ipynb
sunny2309/scipy_conf_notebooks
Now let's generate an MA process
def ma1(theta = .5, n = 100):
    time_series = []
    error = np.random.randn(n)
    for period in range(1,n):
        time_series.append(error[period] + theta*error[period-1])
    return pd.Series(time_series[1:], index = range(1,n-1))

m1 = ma1(theta = -1000)
m1.plot()
_____no_output_____
MIT
TimeSeriesAnalysisWithPython-master/SciPyTimeSeries/.ipynb_checkpoints/09a. AR + MA processes-checkpoint.ipynb
sunny2309/scipy_conf_notebooks
Let's look at ACF + PACF for each kind of process
# (imported in earlier cells of the notebook)
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf, pacf

a1 = ar1(phi = .5, n = 1000)
a1_acf = acf(a1, nlags = 50)
plt.plot(a1_acf)
plt.axhline(y=0,linestyle='--', color = 'black')
plt.axhline(y=-1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')
plt.axhline(y=1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')

a1 = ar1(phi = .5, n = 1000)
a1_pacf = pacf(a1, nlags = 50)
plt.plot(a1_pacf)
plt.axhline(y=0,linestyle='--', color = 'black')
plt.axhline(y=-1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')
plt.axhline(y=1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')

m1 = ma1(n = 1000, theta = .9)
m1_acf = acf(m1, nlags = 50)
plt.plot(m1_acf)
plt.axhline(y=0,linestyle='--', color = 'black')
plt.axhline(y=-1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')
plt.axhline(y=1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')

m1 = ma1(n = 1000, theta = .9)
m1_pacf = pacf(m1, nlags = 50)
plt.plot(m1_pacf)
plt.axhline(y=0,linestyle='--', color = 'black')
plt.axhline(y=-1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')
plt.axhline(y=1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')
_____no_output_____
MIT
TimeSeriesAnalysisWithPython-master/SciPyTimeSeries/.ipynb_checkpoints/09a. AR + MA processes-checkpoint.ipynb
sunny2309/scipy_conf_notebooks
__Pitfall__: if you get a dimension like `(134,)`, be careful! For linear regression and some models, this works just fine, but for some other models such as CNN/RNN, this dimension will produce something unexpected and very hard to debug. As a good habit, you should always check your one-dimensional arrays and make sure that the 2nd shape parameter is not missing.
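A minimal sketch of the shape pitfall described above, using plain NumPy (the array names here are illustrative, not from the Titanic data):

```python
import numpy as np

labels = np.arange(134)          # one-dimensional: shape (134,)
print(labels.shape)              # (134,) -- note the missing 2nd dimension

# Many estimators expect a 2-D column vector instead:
column = labels.reshape(-1, 1)   # shape (134, 1)
print(column.shape)
```

Calling `reshape(-1, 1)` (or selecting columns in a way that preserves the frame, as `filter()` does below) keeps the 2nd shape parameter explicit.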
# (relies on df, df_x, and the sklearn imports from earlier cells, e.g.:
#  from sklearn import linear_model
#  from sklearn.model_selection import train_test_split
#  from sklearn.metrics import mean_squared_error)

df_y.head()

df_y = df.filter(items=['survived'])  # to get the right shape, use filter()
df_y.shape
df_y.head()

reg = linear_model.LinearRegression()
x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size=0.2, random_state=0)
reg.fit(x_train, y_train)
reg.predict(x_test)
mean_squared_error(y_test, reg.predict(x_test))

# age: 25,
# class: 1,
# fare_paid: 45,
# gender: 1 ('male')
# parents_children: 0,
# point_of_embarkation: 1 ('C')
# siblings_spouse: 1
fake_passenger = [[25, 1, 45, 1, 0, 1, 1]]
reg.predict(fake_passenger)
_____no_output_____
CC0-1.0
database/MongoDB notebooks/06_linear regression on titanic data set.ipynb
neo-mashiro/BU
Evaluate a privacy policy

Today, virtually every organization with which you interact will collect or use some data about you. Most typically, the collection and use of these data will be disclosed according to an organization's privacy policy. We encounter these privacy policies all the time, when we create an account on a website, open a new credit card, or even sign up for a grocery store loyalty program. Yet despite (or perhaps because of) their ubiquity, most people have never read a privacy policy from start to finish. Moreover, even if we took the time to read privacy policies, many of us would struggle to fully understand them due to their frequent use of complex, legalistic, and opaque language. These considerations raise many potential ethical questions regarding whether organizations are sufficiently transparent about the increasingly vast sums of data they collect about their users, customers, employees, and other stakeholders.

The purpose of this notebook is to help you gain a better understanding of the landscape of contemporary privacy policies, using a data-driven approach. We'll leverage a [new dataset](https://github.com/ansgarw/privacy) that provides the full text of privacy policies for hundreds of publicly-traded companies, which we'll analyze using some techniques from natural language processing. By the time you make your way through this notebook, you should have a better understanding of the diverse form and content of modern privacy policies, their linguistic characteristics, and a few neat tricks for analyzing large textual data with Python. Without further ado, let's get started!

Roadmap

* Preliminaries (packages + data wrangling)
* Topic models
* Keywords in context
* Named entities
* Readability
* Embeddings
* Exercises

Preliminaries

Let's start out by loading some packages. We'll be using pandas to help with data wrangling and holding the data in an easy-to-work-with data frame format. The json package is part of the Python Standard Library and will help us with reading the raw data. Matplotlib is for plotting; umap is for clustering policies and is not completely necessary. Finally, we'll use several natural language processing packages, spacy, textacy, and gensim, for the actual text analysis.
# run the following commands to install the needed packages
"""
pip install pandas
pip install spacy
python -m spacy download en_core_web_lg
pip install textacy
pip install gensim
pip install umap
pip install matplotlib
"""

# load some packages
import pandas as pd
import json
import spacy
import textacy
import gensim
import matplotlib.pyplot as plt
import umap
import umap.plot
from bokeh.plotting import show, output_notebook

import tqdm
tqdm.tqdm.pandas()

# for umap warnings
from matplotlib.axes._axes import _log as matplotlib_axes_logger
matplotlib_axes_logger.setLevel("ERROR")

# load spacy nlp model
nlp = spacy.load("en_core_web_lg", disable=["parser"])
nlp.max_length = 2000000
/Users/rfunk/.pyenv/versions/anaconda3-2019.10/lib/python3.7/site-packages/tqdm/std.py:697: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version from pandas import Panel
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Now, let's go ahead and load the data.
# load the data
with open("data/policies.json") as f:
    policies_df = pd.DataFrame({k: " ".join(v) for k, v in json.load(f).items()}.items(),
                               columns=["url", "policy_text"])

# check out the results
policies_df.head()
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Looks pretty reasonable. We have one column for the URL and one for the full text of the privacy policy. Note that the original data come in a json format, and there, each URL is associated with a set of paragraphs that constitute each privacy policy. In the code above, when we load the data, we concatenate these paragraphs into a single text string, which will be easier for us to work with in what follows. Our next step will be to process the documents with spacy. We'll add a column to our data frame with the processed documents (that way we still have the raw text handy). This might take a minute. If it takes too long on your machine, you can just look at a random sample of policies. Just uncomment the code below.
#policies_df = policies_df.sample(frac=0.20) # set frac to some fraction that will run in a reasonable time on your machine

policies_df["policy_text_processed"] = policies_df.policy_text.progress_apply(nlp)
100%|██████████| 4062/4062 [09:28<00:00, 7.15it/s]
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
With that simple line of code, spacy has done a bunch of hard work for us, including things like tokenization, part-of-speech tagging, entity parsing, and other stuff that go well beyond our needs today. Let's take a quick look.
policies_df.head()
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Okay, at this point, we've loaded all the packages we need, and we've done some of the basic wrangling necessary to get the data into shape. We'll need to do a little more data wrangling to prepare for a few of the analyses in store below, but we've already done enough to let us get started. So without further ado, let's take our first peek at the data.

Topic models

We'll start out by trying to get a better sense for __what__ is discussed in corporate privacy policies. To do so, we'll make use of an approach in natural language processing known as topic models. Given our focus, we're not going to go into any of the methodological details of how these models work, but in essence, what they're going to do is search for a set of latent topics in our corpus of documents (here, privacy policies). You can think of topics as clusters of related words on a particular subject (e.g., if we saw the words "homework", "teacher", "student", "lesson" we might infer that the topic was school); documents can contain discussions of multiple topics.

To start out, we'll do some more processing on the privacy policies to make them more usable for our topic modeling library (called gensim).
# define a processing function
process_gensim = lambda tokens: [token.lemma_.lower() for token in tokens
                                 if not (token.is_punct or token.is_stop or token.is_space or token.is_digit)]

# apply the function
policies_df["policy_text_gensim"] = policies_df.policy_text_processed.apply(process_gensim)

# create a gensim dictionary
gensim_dict = gensim.corpora.dictionary.Dictionary(policies_df["policy_text_gensim"])

# create a gensim corpus
gensim_corpus = [gensim_dict.doc2bow(policy_text) for policy_text in policies_df["policy_text_gensim"]]

# fit the topic model
lda_model = gensim.models.LdaModel(gensim_corpus, id2word=gensim_dict, num_topics=10)

# show the results
lda_model.show_topics(num_topics=-1, num_words=8)
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
As a bonus, we can also check the coherence, essentially a measure of model fit (generally, these measures look at similarity among high-scoring words in topics). If you're so inclined, you can re-run the topic model above with different hyperparameters to see if you can get a better fit; I didn't spend a whole lot of time tuning.
# get coherence
coherence_model_lda = gensim.models.CoherenceModel(model=lda_model,
                                                   texts=policies_df["policy_text_gensim"],
                                                   dictionary=gensim_dict,
                                                   coherence="c_v")
coherence_model_lda.get_coherence()
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Take a look at the topics identified by the models above. Can you assign human-interpretable labels to them? What can you learn about the different topics of discussion in privacy policies?

Key words in context

Topic models are nice, but they're a bit abstract. They give us an overview of interesting clusters of words, but they don't tell us much about how particular words are used or the details of the topics. For that, we can actually learn a lot just by picking out particular words of interest and pulling out their context from the document, known as a "keyword in context" approach. As an illustration, the code below pulls out uses of a chosen keyword (e.g., "third party") from the policies of 20 random firms. There's no random seed set, so if you run the code again you'll get a different set of results. In the comment on the first line, I've given you a few additional words you may want to check.
KEYWORD = "right"  # "third party" # privacy, right, duty, selling, disclose, trust, inform
NUM_FIRMS = 20

with pd.option_context("display.max_colwidth", 100,
                       "display.min_rows", NUM_FIRMS,
                       "display.max_rows", NUM_FIRMS):
    display(
        pd.DataFrame(policies_df.sample(n=NUM_FIRMS).apply(
            lambda row: list(textacy.text_utils.KWIC(row["policy_text"],
                                                     keyword=KEYWORD,
                                                     window_width=35,
                                                     print_only=False)),
            axis=1).explode()).head(NUM_FIRMS)
    )
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Run the code for some different words, not just the ones in my list, but also those that interest you. Can you learn anything about corporate mindsets on privacy? What kind of rights are discussed?

Named entities

Another way we can gain some insight into the content of privacy policies is by seeing who exactly they discuss. Once again, spacy gives us an easy (if sometimes rough) way to do this. Specifically, when we process a document using spacy, it will automatically extract several different categories of named entities (e.g., person, organization, place; you can find the full list [here](https://spacy.io/api/annotation)). In the code, we'll pull out all the organization and person entities.
# extract named entities from the privacy policies
pull_entities = lambda policy_text: list(set([entity.text.lower() for entity in policy_text.ents
                                              if entity.label_ in ("ORG", "PERSON")]))
policies_df["named_entities"] = policies_df.policy_text_processed.apply(pull_entities)
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Let's take a quick peek at our data frame and see what the results look like.
# look at the entities
with pd.option_context("display.max_colwidth", 100, "display.min_rows", 50, "display.max_rows", 50):
    display(policies_df[["url", "named_entities"]].head(50))
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Now let's add a bit more structure. We'll run a little code to help us identify the most frequently discussed organizations and people in the corporate privacy policies.
# pull the most frequent entities
entities = policies_df["named_entities"].explode("named_entities")

NUM_WANTED = 50
with pd.option_context("display.min_rows", NUM_WANTED, "display.max_rows", NUM_WANTED):
    display(entities.groupby(entities).size().sort_values(ascending=False).head(NUM_WANTED))
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
What do you make of the most frequent entities? Are you surprised? Do they fit with what you expected? Can we make any inferences about the kind of data sharing companies might be engaging in by looking at these entities?

Readability

Next, we'll evaluate the privacy policies according to their readability. There are many different measures of readability, but the basic idea is to evaluate a text according to various metrics (e.g., words per sentence, number of syllables per word) that correlate with, well, how easy it is to read. The textacy package makes it easy to quickly evaluate a bunch of different metrics of readability. Let's compute them and then do some exploration.
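To build intuition for what these metrics measure, here is a hand-rolled Flesch-Kincaid grade level (textacy computes this for us below; the word, sentence, and syllable counts here are supplied manually because automatic syllable counting is its own problem):

```python
def flesch_kincaid_grade(n_words, n_sentences, n_syllables):
    # Standard Flesch-Kincaid grade-level formula: longer sentences and
    # longer words both push the estimated grade level up.
    return 0.39 * (n_words / n_sentences) + 11.8 * (n_syllables / n_words) - 15.59

# e.g. a 2-sentence, 30-word passage with 45 syllables:
grade = flesch_kincaid_grade(n_words=30, n_sentences=2, n_syllables=45)
print(round(grade, 2))  # roughly an 8th-grade reading level
```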
# compute a bunch of text statistics (including readability)
policies_df["TextStats"] = policies_df.policy_text_processed.apply(textacy.text_stats.TextStats)
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
You can now access the various statistics for individual documents as follows (e.g., for the document at index 0).
policies_df.iloc[0]["TextStats"].flesch_kincaid_grade_level
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
This tells us that the Flesch-Kincaid grade level for the policy is just under 12th grade. We're probably not terribly interested in the readability of any given policy. We can do a little wrangling with pandas to extract various metrics for all policies and add them to the data frame. Below, I'll pull out the Flesch-Kincaid grade level and the Gunning-Fog index (both are grade-level measures).
# pull out a few readability metrics
policies_df["flesch_kincaid_grade_level"] = policies_df.TextStats.apply(lambda ts: ts.flesch_kincaid_grade_level)
policies_df["gunning_fog_index"] = policies_df.TextStats.apply(lambda ts: ts.gunning_fog_index)

# let's also clean up some extreme values
policies_df.loc[(policies_df.flesch_kincaid_grade_level < 0) |
                (policies_df.flesch_kincaid_grade_level > 20), "flesch_kincaid_grade_level"] = None
policies_df.loc[(policies_df.gunning_fog_index < 0) |
                (policies_df.gunning_fog_index > 20), "gunning_fog_index"] = None
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
I would encourage you to adapt the code above to pull out some other readability-related features that seem interesting. You can find the full list available in our `TextStats` object [here](https://textacy.readthedocs.io/en/stable/api_reference/misc.html), in the textacy documentation. Let's plot the values we just extracted.
# plot with matplotlib
fig, axes = plt.subplots(1, 2)
policies_df["flesch_kincaid_grade_level"].hist(ax=axes[0])
policies_df["gunning_fog_index"].hist(ax=axes[1])
plt.tight_layout()
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
These results are pretty striking, especially when you consider them alongside statistics on the literacy rate in the United States. According to [surveys](https://www.oecd.org/skills/piaac/Country%20note%20-%20United%20States.pdf) by the OECD, about half of adults in the United States can read at an 8th grade level or lower.

Embeddings

Yet another way that we can gain some intuition on privacy policies is by seeing how similar or different particular policies are from one another. For example, we might not be all that surprised if we saw that Google's privacy policy was quite similar to Facebook's. We might raise an eyebrow if we saw that Nike and Facebook also had very similar privacy policies. What kind of data are they collecting on us when we buy our sneakers?

One way we can compare the similarity among documents (here, privacy policies) is by embedding them in some high dimensional vector space, and then using linear algebra to find the distance between vectors. Classically, we would do this by representing documents as vectors of words, where entries represent word frequencies, and perhaps weighting those frequencies (e.g., using TF-IDF). Here, we'll use a slightly more sophisticated approach. When we process the privacy policies using spacy, we get a vector representation of each document, which is based on the word embeddings for its constituent terms. Again, given the focus of this class, we're not going to go into the methodological details of word embeddings, but you can think of them as a vectorization that aims to capture semantic relationships.

Below, we'll pull the document embeddings from spacy. We'll then do some dimension reduction using a cool algorithm from topological data analysis known as [Uniform Manifold Approximation and Projection](https://arxiv.org/abs/1802.03426) (UMAP), and visualize the results using an interactive plot.
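A minimal sketch of the similarity comparison described above, using plain NumPy and cosine similarity (the toy vectors here are illustrative; in the notebook the vectors come from spacy's document embeddings):

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 for identical direction,
    # 0.0 for orthogonal vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0])   # same direction as a -> similarity ~ 1.0
c = np.array([0.0, 1.0, 0.0])   # orthogonal to a -> similarity ~ 0.0

print(cosine_similarity(a, b))
print(cosine_similarity(a, c))
```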
# pull the document embeddings from spacy and format for clustering
embeddings_df = policies_df[["url", "policy_text_processed"]]
embeddings_df = embeddings_df.set_index("url")
embeddings_df["policy_text_processed"] = embeddings_df["policy_text_processed"].apply(lambda text: text.vector)
embeddings_df = embeddings_df.policy_text_processed.apply(pd.Series)

# non-interactive plot
mapper = umap.UMAP().fit(embeddings_df.to_numpy())
umap.plot.points(mapper)

# interactive plot
output_notebook()
hover_df = embeddings_df.reset_index()
hover_df["index"] = hover_df.index
p = umap.plot.interactive(mapper, labels=hover_df["index"],
                          hover_data=hover_df[["index", "url"]], point_size=2)
umap.plot.show(p)
_____no_output_____
MIT
sessions/privacy/privacy_policy.ipynb
russellfunk/data_privacy
Colab KSO Tutorial 12: Display movies available on the server

Written by @jannesgg and @vykanton

Last updated: Jun 19, 2022

Set up and requirements

Install kso_data_management and its requirements
# Clone koster_data_management repo
!git clone --recurse-submodules https://github.com/ocean-data-factory-sweden/koster_data_management.git
!pip install -r koster_data_management/requirements.txt

# Restart the session to load the latest packages
exit()
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Import Python packages
# Set the directory of the libraries
import sys, os
from pathlib import Path

# Enables testing changes in utils
%load_ext autoreload
%autoreload 2

# Specify the path of the tutorials
os.chdir("koster_data_management/tutorials")
sys.path.append('..')

# Import required modules
import kso_utils.tutorials_utils as t_utils
import kso_utils.project_utils as p_utils
import kso_utils.server_utils as s_utils

print("Packages loaded successfully")
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Choose your project
project_name = t_utils.choose_project()
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Initiate database
# Initiate db
project = p_utils.find_project(project_name = project_name.value)
db_info_dict = t_utils.initiate_db(project)
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Retrieve info of movies available on the server
available_movies_df = s_utils.retrieve_movie_info_from_server( project = project, db_info_dict = db_info_dict )
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Select the movie of interest
movie_selected = t_utils.select_movie(available_movies_df)
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Display the movie
movie_display, movie_path = t_utils.preview_movie(
    project = project,
    db_info_dict = db_info_dict,
    available_movies_df = available_movies_df,
    movie_i = movie_selected.value
)
movie_display

#END
_____no_output_____
MIT
colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb
ocean-data-factory-sweden/koster_zooniverse
Create a pandas series having values 4, 7, -5, 3, NAN and their index as d, b, a, c, e
# Write a code here
import numpy as np
import pandas as pd

series = pd.Series([4, 7, -5, 3, np.nan], index=['d', 'b', 'a', 'c', 'e'])
series
_____no_output_____
MIT
Arvind/Introduction to Pandas Series and Creating Series.ipynb
Arvind-collab/Data-Science
Import modules
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

from handwritten_digits.utils_np import test_prediction
from handwritten_digits.data import one_hot
_____no_output_____
MIT
src/executables/model_tf.ipynb
frederik-schmidt/Handwritten-digits
Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = tf.keras.utils.normalize(X_train, axis=1)
X_test = tf.keras.utils.normalize(X_test, axis=1)
_____no_output_____
MIT
src/executables/model_tf.ipynb
frederik-schmidt/Handwritten-digits
Define model architecture
model = Sequential([
    Flatten(),
    Dense(784, activation=tf.nn.relu),
    Dense(128, activation=tf.nn.relu),
    Dense(32, activation=tf.nn.relu),
    Dense(10, activation=tf.nn.softmax),
])
_____no_output_____
MIT
src/executables/model_tf.ipynb
frederik-schmidt/Handwritten-digits
Train model
model.compile(
    optimizer="SGD",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)
model.fit(X_train, y_train, epochs=15)
Epoch 1/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.9401 - accuracy: 0.7679 Epoch 2/15 1875/1875 [==============================] - 22s 12ms/step - loss: 0.3231 - accuracy: 0.9072 Epoch 3/15 1875/1875 [==============================] - 22s 12ms/step - loss: 0.2497 - accuracy: 0.9272 Epoch 4/15 1875/1875 [==============================] - 22s 12ms/step - loss: 0.2080 - accuracy: 0.9402 Epoch 5/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.1786 - accuracy: 0.9483 Epoch 6/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.1565 - accuracy: 0.9546 Epoch 7/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.1386 - accuracy: 0.9598 Epoch 8/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.1239 - accuracy: 0.9646 Epoch 9/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.1116 - accuracy: 0.96810s - l Epoch 10/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.1012 - accuracy: 0.9712 Epoch 11/15 1875/1875 [==============================] - 21s 11ms/step - loss: 0.0922 - accuracy: 0.9740 Epoch 12/15 1875/1875 [==============================] - 23s 12ms/step - loss: 0.0841 - accuracy: 0.9759 Epoch 13/15 1875/1875 [==============================] - 20s 11ms/step - loss: 0.0769 - accuracy: 0.9784 Epoch 14/15 1875/1875 [==============================] - 21s 11ms/step - loss: 0.0705 - accuracy: 0.9803 Epoch 15/15 1875/1875 [==============================] - 22s 12ms/step - loss: 0.0648 - accuracy: 0.9820
MIT
src/executables/model_tf.ipynb
frederik-schmidt/Handwritten-digits
Evaluate model performance
training_loss, training_accuracy = model.evaluate(x=X_train, y=y_train)
test_loss, test_accuracy = model.evaluate(x=X_test, y=y_test)
1875/1875 [==============================] - 14s 7ms/step - loss: 0.0609 - accuracy: 0.9833 313/313 [==============================] - 3s 8ms/step - loss: 0.0963 - accuracy: 0.9692
MIT
src/executables/model_tf.ipynb
frederik-schmidt/Handwritten-digits
Evaluate predictions
# bring preds to same shape as in numpy model
probs = model.predict(X_test)
preds = probs == np.amax(probs, axis=1, keepdims=True)
preds = preds.T.astype(float)

# bring data to same shape as in numpy model
X_test_reshaped = X_test.reshape(X_test.shape[0], -1).T
y_test_reshaped = one_hot(y_test)

random_index = np.random.randint(0, preds.shape[1])
test_prediction(
    X=X_test_reshaped,
    Y=y_test_reshaped,
    pred=preds,
    index=random_index,
)
Prediction: 1 True label: 1
MIT
src/executables/model_tf.ipynb
frederik-schmidt/Handwritten-digits
Copyright 2020 Google LLC.Licensed under the Apache License, Version 2.0 (the "License");
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
DDSP Processor Demo

This notebook provides an introduction to the signal `Processor()` object. The main object type in the DDSP library, it is the base class used for Synthesizers and Effects, which share the methods:

* `get_controls()`: inputs -> controls.
* `get_signal()`: controls -> signal.
* `__call__()`: inputs -> signal. (i.e. `get_signal(**get_controls())`)

Where:

* `inputs` is a variable number of tensor arguments (depending on processor). Often the outputs of a neural network.
* `controls` is a dictionary of tensors scaled and constrained specifically for the processor
* `signal` is an output tensor (usually audio or control signal for another processor)

Let's see why this is a helpful approach by looking at the specific example of the `Additive()` synthesizer processor.
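As a rough sketch of the interface described above, here is a hypothetical toy "gain" processor in plain NumPy (illustrative only, not DDSP's actual base class or any of its processors):

```python
import numpy as np

class GainProcessor:
    """Toy processor: inputs -> get_controls() -> get_signal() -> signal."""

    def get_controls(self, audio, raw_gain):
        # Constrain an unbounded network output to a valid (positive) gain.
        gain = np.log1p(np.exp(raw_gain))  # softplus: always > 0
        return {"audio": audio, "gain": gain}

    def get_signal(self, audio, gain):
        return audio * gain

    def __call__(self, audio, raw_gain):
        # Same pattern as described above: get_signal(**get_controls()).
        return self.get_signal(**self.get_controls(audio, raw_gain))

audio = np.ones(4)
out = GainProcessor()(audio, raw_gain=0.0)  # softplus(0) = log(2)
```

The value of the split is that `get_controls()` isolates the "make the raw inputs valid" step, so the same constraint logic runs whether you call the processor directly or inspect the controls first.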
#@title Install and import dependencies

%tensorflow_version 2.x
!pip install -qU ddsp

# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")

import ddsp
import ddsp.training
from ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

sample_rate = DEFAULT_SAMPLE_RATE  # 16000
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
Example: additive synthesizer

The additive synthesizer models a sound as a linear combination of harmonic sinusoids. Amplitude envelopes are generated with 50% overlapping hann windows. The final audio is cropped to n_samples.

`__init__()`

All member variables are initialized in the constructor, which makes it easy to change them as hyperparameters using the [gin](https://github.com/google/gin-config) dependency injection library. All processors also have a `name` that is used by `ProcessorGroup()`.
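A bare-bones NumPy sketch of additive synthesis as described above — a sum of harmonic sinusoids. This uses constant per-harmonic amplitudes for simplicity; the real synthesizer interpolates framewise envelopes with overlapping hann windows:

```python
import numpy as np

def additive_sketch(f0_hz, harmonic_amps, n_samples, sample_rate):
    # Sum sinusoids at integer multiples (harmonics) of the fundamental.
    t = np.arange(n_samples) / sample_rate
    audio = np.zeros(n_samples)
    for k, amp in enumerate(harmonic_amps, start=1):
        audio += amp * np.sin(2 * np.pi * k * f0_hz * t)
    return audio

# One second of a 440 Hz tone with three decaying harmonics.
audio = additive_sketch(f0_hz=440.0, harmonic_amps=[0.5, 0.25, 0.125],
                        n_samples=16000, sample_rate=16000)
```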
n_frames = 1000
hop_size = 64
n_samples = n_frames * hop_size

# Create a synthesizer object.
additive_synth = ddsp.synths.Additive(n_samples=n_samples,
                                      sample_rate=sample_rate,
                                      name='additive_synth')
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
`get_controls()`

The outputs of a neural network are often not properly scaled and constrained. The `get_controls` method gives a dictionary of valid control parameters based on neural network outputs.

**3 inputs (amps, hd, f0)**

* `amplitude`: Amplitude envelope of the synthesizer output.
* `harmonic_distribution`: Normalized amplitudes of each harmonic.
* `fundamental_frequency`: Frequency in Hz of base oscillator
# Generate some arbitrary inputs.

# Amplitude [batch, n_frames, 1].
# Make amplitude linearly decay over time.
amps = np.linspace(1.0, -3.0, n_frames)
amps = amps[np.newaxis, :, np.newaxis]

# Harmonic Distribution [batch, n_frames, n_harmonics].
# Make harmonics decrease linearly with frequency.
n_harmonics = 30
harmonic_distribution = (np.linspace(-2.0, 2.0, n_frames)[:, np.newaxis] +
                         np.linspace(3.0, -3.0, n_harmonics)[np.newaxis, :])
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]

# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = 440.0 * np.ones([1, n_frames, 1], dtype=np.float32)

# Plot it!
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, amps[0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, harmonic_distribution[0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, f0_hz[0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
Consider the plots above as outputs of a neural network. These outputs violate the synthesizer's expectations:

* Amplitude is not >= 0 (avoids phase shifts)
* Harmonic distribution is not normalized (factorizes timbre and amplitude)
* Fundamental frequency * n_harmonics > nyquist frequency (440 * 30 > 8000), which will lead to [aliasing](https://en.wikipedia.org/wiki/Aliasing).
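A small NumPy demo of the aliasing mentioned above, under an assumed setup: a pure 9 kHz sine sampled at 16 kHz, i.e. above the 8 kHz Nyquist frequency:

```python
import numpy as np

fs = 16000
n = fs  # one second of audio -> FFT bins are spaced 1 Hz apart
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 9000 * t)  # 9 kHz tone, above Nyquist (fs / 2 = 8 kHz)

spectrum = np.abs(np.fft.rfft(x))
peak_hz = int(np.argmax(spectrum))  # bin index equals frequency in Hz here
print(peak_hz)  # the 9 kHz tone folds down to 16000 - 9000 = 7000 Hz
```

This is why `get_controls` zeroes out harmonics above Nyquist: energy placed there would fold back into the audible band at the wrong frequency.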
controls = additive_synth.get_controls(amps, harmonic_distribution, f0_hz)
print(controls.keys())

# Now let's see what they look like...
time = np.linspace(0, n_samples / sample_rate, n_frames)
plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, controls['amplitudes'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')
plt.subplot(132)
plt.plot(time, controls['harmonic_distribution'][0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')
plt.subplot(133)
plt.plot(time, controls['f0_hz'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
Notice that:

* Amplitudes are now all positive
* The harmonic distribution sums to 1.0
* All harmonics that are above the Nyquist frequency now have an amplitude of 0.

The amplitudes and harmonic distribution are scaled by an "exponentiated sigmoid" function (`ddsp.core.exp_sigmoid`). There is nothing particularly special about this function (other functions can be specified as `scale_fn=` during construction), but it has several nice properties:

* Output scales logarithmically with input (as does human perception of loudness).
* Centered at 0, with max and min in a reasonable range for normalized neural network outputs.
* Max value of 2.0 to prevent the signal getting too loud.
* Threshold value of 1e-7 for numerical stability during training.
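For reference, a small NumPy sketch of the formula behind `ddsp.core.exp_sigmoid`, assuming its default arguments (`exponent=10.0`, `max_value=2.0`, `threshold=1e-7`); see the library source for the authoritative definition:

```python
import numpy as np

def exp_sigmoid(x, exponent=10.0, max_value=2.0, threshold=1e-7):
    # Sigmoid raised to a log-exponent power: output scales roughly
    # logarithmically with the input and is bounded to
    # (threshold, max_value + threshold).
    sigmoid = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=np.float64)))
    return max_value * sigmoid ** np.log(exponent) + threshold

print(exp_sigmoid(np.array([-10.0, 0.0, 10.0])))
```

The value at 0 is about 0.405, and the curve approaches the 1e-7 threshold on the left and 2.0 on the right, matching the plots below.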
x = tf.linspace(-10.0, 10.0, 1000)
y = ddsp.core.exp_sigmoid(x)

plt.figure(figsize=(18, 4))
plt.subplot(121)
plt.plot(x, y)
plt.subplot(122)
_ = plt.semilogy(x, y)
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
`get_signal()` Synthesizes audio from controls.
audio = additive_synth.get_signal(**controls)
play(audio)
specplot(audio)
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
`__call__()` Synthesizes audio directly from the raw inputs. `get_controls()` is called internally to turn them into valid control parameters.
audio = additive_synth(amps, harmonic_distribution, f0_hz)
play(audio)
specplot(audio)
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
Example: Just for fun... Let's run another example where we tweak some of the controls...
## Some weird control envelopes...

# Amplitude [batch, n_frames, 1].
amps = np.ones([n_frames]) * -5.0
amps[:50] += np.linspace(0, 7.0, 50)
amps[50:200] += 7.0
amps[200:900] += (7.0 - np.linspace(0.0, 7.0, 700))
amps *= np.abs(np.cos(np.linspace(0, 2*np.pi * 10.0, n_frames)))
amps = amps[np.newaxis, :, np.newaxis]

# Harmonic Distribution [batch, n_frames, n_harmonics].
n_harmonics = 20
harmonic_distribution = np.ones([n_frames, 1]) * np.linspace(1.0, -1.0, n_harmonics)[np.newaxis, :]
for i in range(n_harmonics):
    harmonic_distribution[:, i] = 1.0 - np.linspace(i * 0.09, 2.0, 1000)
    harmonic_distribution[:, i] *= 5.0 * np.abs(np.cos(np.linspace(0, 2*np.pi * 0.1 * i, n_frames)))
    if i % 2 != 0:
        harmonic_distribution[:, i] = -3
harmonic_distribution = harmonic_distribution[np.newaxis, :, :]

# Fundamental frequency in Hz [batch, n_frames, 1].
f0_hz = np.ones([n_frames]) * 200.0
f0_hz[:100] *= np.linspace(2, 1, 100)**2
f0_hz[200:1000] += 20 * np.sin(np.linspace(0, 8.0, 800) * 2 * np.pi * np.linspace(0, 1.0, 800)) * np.linspace(0, 1.0, 800)
f0_hz = f0_hz[np.newaxis, :, np.newaxis]

# Get valid controls
controls = additive_synth.get_controls(amps, harmonic_distribution, f0_hz)

# Plot!
time = np.linspace(0, n_samples / sample_rate, n_frames)

plt.figure(figsize=(18, 4))
plt.subplot(131)
plt.plot(time, controls['amplitudes'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Amplitude')

plt.subplot(132)
plt.plot(time, controls['harmonic_distribution'][0, :, :])
plt.xticks([0, 1, 2, 3, 4])
plt.title('Harmonic Distribution')

plt.subplot(133)
plt.plot(time, controls['f0_hz'][0, :, 0])
plt.xticks([0, 1, 2, 3, 4])
_ = plt.title('Fundamental Frequency')

audio = additive_synth.get_signal(**controls)
play(audio)
specplot(audio)
_____no_output_____
Apache-2.0
ddsp/colab/tutorials/0_processor.ipynb
kyjohnso/ddsp
Brazil
plot_bra2 = pd.read_csv('sensi_withhold_bra.csv')
eff_new = pd.DataFrame(
    np.array([np.repeat(list(plot_bra2['intervention']), 320),
              plot_bra2[plot_bra2.columns[2:]].values.reshape(1, -1)[0]]).T,
    columns=['intervention', 'x'])
eff_new['x'] = eff_new['x'].astype(float)
eff_new['color'] = ([1]*319 + [0.1]) * 10

fig1, ax = plt.subplots(figsize=(10, 6))
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.axvline(x=0, ls="-", linewidth=1, c="black")
sns.scatterplot(data=eff_new, x='x', y='intervention', hue='color', s=200,
                palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax)
plt.xlabel('Brazil', c="black", fontsize=24, fontname='Helvetica')
plt.ylabel('')
#plt.xlim(-1.5, 1)
fig1.savefig("sensi_withhold_bra", bbox_inches='tight', dpi=300)
_____no_output_____
MIT
plot/plot_robust_sample.ipynb
lunliu454/infect_place
Japan
plot_jp2 = pd.read_csv('sensi_withhold_jp.csv')
plot_jp2
eff_new = pd.DataFrame(
    np.array([np.repeat(list(plot_jp2['intervention']), 46),
              plot_jp2[plot_jp2.columns[2:]].values.reshape(1, -1)[0]]).T,
    columns=['intervention', 'x'])
eff_new['x'] = eff_new['x'].astype(float)
eff_new['color'] = ([1]*45 + [0.1]) * 10

fig1, ax = plt.subplots(figsize=(10, 6))
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.axvline(x=0, ls="-", linewidth=1, c="black")
sns.scatterplot(data=eff_new, x='x', y='intervention', hue='color', s=200,
                palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax)
plt.xlabel('Japan', c="black", fontsize=24, fontname='Helvetica')
plt.ylabel('')
fig1.savefig("sensi_withhold_jp", bbox_inches='tight', dpi=300)
_____no_output_____
MIT
plot/plot_robust_sample.ipynb
lunliu454/infect_place
UK
plot_uk2 = pd.read_csv('sensi_withhold_uk.csv')
eff_new4 = pd.DataFrame(
    np.array([np.repeat(list(plot_uk2['intervention']), 235),
              plot_uk2[plot_uk2.columns[2:]].values.reshape(1, -1)[0]]).T,
    columns=['intervention', 'x'])
eff_new4['x'] = eff_new4['x'].astype(float)
eff_new4['color'] = ([1]*234 + [0.1]) * 5

fig4, ax = plt.subplots(figsize=(10, 4))
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.axvline(x=0, ls="-", linewidth=1, c="black")
sns.scatterplot(data=eff_new4, x='x', y='intervention', hue='color', s=200,
                palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax)
plt.xlabel('United Kingdom', c="black", fontsize=24, fontname='Helvetica')
plt.ylabel('')
#plt.ylim(-0.6, 2.5)
fig4.savefig("sensi_withhold_uk", bbox_inches='tight', dpi=300)
_____no_output_____
MIT
plot/plot_robust_sample.ipynb
lunliu454/infect_place
US
plot_us2 = pd.read_csv('sensi_withhold_us.csv')
eff_new = pd.DataFrame(
    np.array([np.repeat(list(plot_us2['intervention']), 310),
              plot_us2[plot_us2.columns[2:]].values.reshape(1, -1)[0]]).T,
    columns=['intervention', 'x'])
eff_new['x'] = eff_new['x'].astype(float)
eff_new['color'] = ([1]*309 + [0.1]) * 9

fig4, ax = plt.subplots(figsize=(10, 5.5))
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.axvline(x=0, ls="-", linewidth=1, c="black")
sns.scatterplot(data=eff_new, x='x', y='intervention', hue='color', s=200,
                palette=new_gray, alpha=0.3, legend=False, edgecolor=None, ax=ax)
plt.xlabel('United States', c="black", fontsize=24, fontname='Helvetica')
plt.ylabel('')
fig4.savefig("sensi_withhold_us", bbox_inches='tight', dpi=300)
_____no_output_____
MIT
plot/plot_robust_sample.ipynb
lunliu454/infect_place
Tutorial: Parameterized Hypercomplex Multiplication (PHM) Layer

Author: Eleonora Grassucci

Original paper: Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters. Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu. [ArXiv link](https://arxiv.org/pdf/2102.08597.pdf).
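The core idea of the paper is to build a large weight matrix as a sum of Kronecker products, H = sum_i A_i kron S_i, which needs roughly 1/n of the parameters of a dense layer. A small NumPy sketch of the construction and the parameter count (note this follows the paper's general form with n learned S matrices; the class below differs slightly in that S is passed from the data and only n-1 A matrices are learned):

```python
import numpy as np

n = 4          # hypercomplex dimension (quaternion-like)
k, d = 8, 8    # per-component output / input sizes

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n, n))   # n small "rule" matrices, each n x n
S = rng.standard_normal((n, k, d))   # n component weight matrices, each k x d

# The full (n*k, n*d) weight is a sum of Kronecker products.
H = sum(np.kron(A[i], S[i]) for i in range(n))

dense_params = (n * k) * (n * d)      # what a plain dense layer would need
phm_params = n * n * n + n * k * d    # roughly a 1/n reduction when k*d >> n
print(H.shape, dense_params, phm_params)
```

With these sizes the dense layer needs 1024 parameters while the PHM parameterization needs 320, and the ratio approaches 1/n as k*d grows.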
# Imports
import numpy as np
import math
import time

import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.utils.data as Data
from torch.nn import init

# Check PyTorch version: torch.kron is available from 1.8.0
torch.__version__

# Define the PHM class
class PHM(nn.Module):
    '''
    Simple PHM Module, the only parameter is A, since S is passed from the trainset.
    '''
    def __init__(self, n, kernel_size, **kwargs):
        super().__init__(**kwargs)
        self.n = n
        A = torch.empty((n - 1, n, n))
        self.A = nn.Parameter(A)
        self.kernel_size = kernel_size

    def forward(self, X, S):
        H = torch.zeros((self.n * self.kernel_size, self.n * self.kernel_size))
        # Sum of Kronecker products (self.n, not the global n, so the
        # module works regardless of outer scope)
        for i in range(self.n - 1):
            H = H + torch.kron(self.A[i], S[i])
        return torch.matmul(X, H.T)
_____no_output_____
MIT
tutorials/PHM tutorial.ipynb
eleGAN23/Hyper
Learn the Hamilton product between two pure quaternions

A pure quaternion is a quaternion with scalar part equal to 0.
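For reference, the Hamilton product the layer is meant to recover can be written out directly. For two pure quaternions the scalar part of the product is minus the dot product of the vector parts, and the vector part is their cross product. A small NumPy sketch:

```python
import numpy as np

def hamilton(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# i * j = k
print(hamilton([0, 1, 0, 0], [0, 0, 1, 0]))  # [0, 0, 0, 1]
```

This non-commutative multiplication rule is exactly what the learned A matrices should encode, as checked at the end of the tutorial.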
# Setup the training set
x = torch.FloatTensor([0, 1, 2, 3]).view(4, 1)  # Scalar part equal to 0
W = torch.FloatTensor([[0,-1,-1,-1], [1,0,-1,1], [1,1,0,-1], [1,-1,1,0]])  # Scalar parts equal to 0
y = torch.matmul(W, x)

num_examples = 1000
batch_size = 1

X = torch.zeros((num_examples, 16))
S = torch.zeros((num_examples, 16))
Y = torch.zeros((num_examples, 16))

for i in range(num_examples):
    x = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)
    s = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)
    s1, s2, s3, s4 = torch.FloatTensor([0]*4), s[0:4], s[4:8], s[8:12]
    s1 = s1.view(2, 2)
    s2 = s2.view(2, 2)
    s3 = s3.view(2, 2)
    s4 = s4.view(2, 2)
    s_1 = torch.cat([s1, -s2, -s3, -s4])
    s_2 = torch.cat([s2, s1, -s4, s3])
    s_3 = torch.cat([s3, s4, s1, -s2])
    s_4 = torch.cat([s4, -s3, s2, s1])
    W = torch.cat([s_1, s_2, s_3, s_4], dim=1)
    x = torch.cat([torch.FloatTensor([0]*4), x])
    s = torch.cat([torch.FloatTensor([0]*4), s])
    x_mult = x.view(2, 8)
    y = torch.matmul(x_mult, W.T)
    y = y.view(16, )
    X[i, :] = x
    S[i, :] = s
    Y[i, :] = y

X = torch.FloatTensor(X).view(num_examples, 16, 1)
S = torch.FloatTensor(S).view(num_examples, 16, 1)
Y = torch.FloatTensor(Y).view(num_examples, 16, 1)
data = torch.cat([X, S, Y], dim=2)
train_iter = torch.utils.data.DataLoader(data, batch_size=batch_size)

### Setup the test set
num_examples = 1
batch_size = 1

X = torch.zeros((num_examples, 16))
S = torch.zeros((num_examples, 16))
Y = torch.zeros((num_examples, 16))

for i in range(num_examples):
    x = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)
    s = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)
    s1, s2, s3, s4 = torch.FloatTensor([0]*4), s[0:4], s[4:8], s[8:12]
    s1 = s1.view(2, 2)
    s2 = s2.view(2, 2)
    s3 = s3.view(2, 2)
    s4 = s4.view(2, 2)
    s_1 = torch.cat([s1, -s2, -s3, -s4])
    s_2 = torch.cat([s2, s1, -s4, s3])
    s_3 = torch.cat([s3, s4, s1, -s2])
    s_4 = torch.cat([s4, -s3, s2, s1])
    W = torch.cat([s_1, s_2, s_3, s_4], dim=1)
    x = torch.cat([torch.FloatTensor([0]*4), x])
    s = torch.cat([torch.FloatTensor([0]*4), s])
    x_mult = x.view(2, 8)
    y = torch.matmul(x_mult, W.T)
    y = y.view(16, )
    X[i, :] = x
    S[i, :] = s
    Y[i, :] = y

X = torch.FloatTensor(X).view(num_examples, 16, 1)
S = torch.FloatTensor(S).view(num_examples, 16, 1)
Y = torch.FloatTensor(Y).view(num_examples, 16, 1)
data = torch.cat([X, S, Y], dim=2)
test_iter = torch.utils.data.DataLoader(data, batch_size=batch_size)

# Define training function
def train(net, lr, phm=True):
    # Squared loss
    loss = nn.MSELoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for epoch in range(5):
        for data in train_iter:
            optimizer.zero_grad()
            X = data[:, :, 0]
            S = data[:, 4:, 1]
            Y = data[:, :, 2]
            if phm:
                out = net(X.view(2, 8), S.view(3, 2, 2))
            else:
                out = net(X)
            l = loss(out, Y.view(2, 8))
            l.backward()
            optimizer.step()
        print(f'epoch {epoch + 1}, loss {float(l.sum() / batch_size):.6f}')

# Initialize model parameters
def weights_init_uniform(m):
    m.A.data.uniform_(-0.07, 0.07)

# Create layer instance
n = 4
phm_layer = PHM(n, kernel_size=2)
phm_layer.apply(weights_init_uniform)

# Train the model
train(phm_layer, 0.005)

# Check that the parameters of the layer require grad
for name, param in phm_layer.named_parameters():
    if param.requires_grad:
        print(name, param.data)

# Take a look at the product performed on the test set
for data in test_iter:
    X = data[:, :, 0]
    S = data[:, 4:, 1]
    Y = data[:, :, 2]
    y_phm = phm_layer(X.view(2, 8), S.view(3, 2, 2))

print('Hamilton product result from test set:\n', Y.view(2, 8))
print('Performing Hamilton product learned by PHM:\n', y_phm)

# Check that the PHM layer has learnt the proper algebra for the matrix A
W = torch.FloatTensor([[0,-1,-1,-1], [1,0,-1,1], [1,1,0,-1], [1,-1,1,0]])
print('Ground-truth Hamilton product matrix:\n', W)
print()
print('Learned A in PHM:\n', phm_layer.A)
print()
print('Learned A sum in PHM:\n', sum(phm_layer.A).T)
Ground-truth Hamilton product matrix: tensor([[ 0., -1., -1., -1.], [ 1., 0., -1., 1.], [ 1., 1., 0., -1.], [ 1., -1., 1., 0.]]) Learned A in PHM: Parameter containing: tensor([[[-6.0884e-08, 1.0000e+00, -1.6100e-08, 2.6916e-08], [-1.0000e+00, -1.8684e-08, -2.1245e-08, -8.8355e-08], [-1.2780e-08, 1.2693e-07, -3.8119e-08, 1.0000e+00], [-1.0182e-07, 4.7619e-08, -1.0000e+00, 3.8946e-08]], [[ 1.5405e-08, -3.1784e-08, 1.0000e+00, 2.9003e-08], [-3.5486e-08, -3.5375e-08, 3.3766e-08, -1.0000e+00], [-1.0000e+00, -2.9093e-08, -5.3595e-08, 3.2789e-08], [ 6.2255e-09, 1.0000e+00, 3.7168e-08, 8.2059e-09]], [[-3.9100e-08, -5.8766e-09, 2.8090e-09, 1.0000e+00], [-1.5466e-07, 5.3471e-08, 1.0000e+00, 3.3222e-08], [ 3.3584e-08, -1.0000e+00, -6.5275e-08, 1.9724e-07], [-1.0000e+00, -3.0299e-08, 1.3472e-08, -2.8102e-08]]], requires_grad=True) Learned A sum in PHM: tensor([[-8.4579e-08, -1.0000e+00, -1.0000e+00, -1.0000e+00], [ 1.0000e+00, -5.8745e-10, -1.0000e+00, 1.0000e+00], [ 1.0000e+00, 1.0000e+00, -1.5699e-07, -1.0000e+00], [ 1.0000e+00, -1.0000e+00, 1.0000e+00, 1.9049e-08]], grad_fn=<PermuteBackward>)
MIT
tutorials/PHM tutorial.ipynb
eleGAN23/Hyper
Explore feature-to-feature relationship in Boston
import pandas as pd
import seaborn as sns
from sklearn import datasets
import discover
import matplotlib.pyplot as plt

# watermark is optional - it shows the versions of installed libraries
# so it is useful to confirm your library versions when you submit bug reports to projects
# install watermark using
# %install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%load_ext watermark

# show a watermark for this environment
%watermark -d -m -v -p numpy,matplotlib,sklearn -g

example_dataset = datasets.load_boston()
df_boston = pd.DataFrame(example_dataset.data, columns=example_dataset.feature_names)
df_boston['target'] = example_dataset.target
df = df_boston

cols = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'target']
classifier_overrides = set()
df = df_boston
df.head()
_____no_output_____
MIT
example_boston_discover_feature_relationships.ipynb
PeteBleackley/discover_feature_relationships
Discover non-linear relationships

_Github note_: colours for `style` don't show up in Github, you'll have to grab a local copy of the Notebook.

* NOX predicts RAD, INDUS, TAX and DIS
* RAD predicts DIS poorly, NOX better, TAX better
* CRIM predicts RAD but RAD poorly predicts CRIM
%time df_results = discover.discover(df[cols].sample(frac=1), classifier_overrides)

fig, ax = plt.subplots(figsize=(12, 8))
sns.heatmap(df_results.pivot(index='target', columns='feature', values='score').fillna(1),
            annot=True, center=0, ax=ax, vmin=-0.1, vmax=1, cmap="viridis");

# we can also output a DataFrame using style (note - doesn't render on github with colours, look at a local Notebook!)
df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
    .style.background_gradient(cmap="viridis", low=0.7, axis=1) \
    .set_precision(2)
_____no_output_____
MIT
example_boston_discover_feature_relationships.ipynb
PeteBleackley/discover_feature_relationships
We can drill in to some of the discovered relationships
print(example_dataset.DESCR)

# NOX (pollution) predicts AGE of properties - lower pollution means more houses built after 1940 than before
df.plot(kind="scatter", x="NOX", y="AGE", alpha=0.1);

# NOX (pollution) predicts DIStance, lower pollution means larger distance to places of work
df.plot(kind="scatter", x="NOX", y="DIS", alpha=0.1);

# More lower-status people means lower house prices
ax = df.plot(kind="scatter", x="LSTAT", y="target", alpha=0.1);

# closer to employment centres means higher proportion of owner-occupied residences built prior to 1940 (i.e. more older houses)
ax = df.plot(kind="scatter", x="DIS", y="AGE", alpha=0.1);
_____no_output_____
MIT
example_boston_discover_feature_relationships.ipynb
PeteBleackley/discover_feature_relationships
Try correlations

Correlations can give us a direction and information about linear and rank-based relationships which we won't get from RF.

Pearson (linear)
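As a reminder of the difference between the two measures before running them: Pearson captures only linear association, while Spearman works on ranks, so a monotonic but non-linear relationship still scores 1.0 under Spearman. A small sketch using `scipy.stats` (scipy is already a dependency of scikit-learn):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 10, 200)
y = x ** 3  # strictly monotonic but non-linear

pearson, _ = stats.pearsonr(x, y)
spearman, _ = stats.spearmanr(x, y)
# Spearman is exactly 1.0 for any strictly monotonic relationship;
# Pearson falls below 1.0 because the relationship is not linear.
print(f'pearson={pearson:.3f}  spearman={spearman:.3f}')
```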
df_results = discover.discover(df[cols], classifier_overrides, method='pearson')
df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
    .style.background_gradient(cmap="viridis", axis=1) \
    .set_precision(2)
_____no_output_____
MIT
example_boston_discover_feature_relationships.ipynb
PeteBleackley/discover_feature_relationships
Spearman (rank-based)
df_results = discover.discover(df[cols], classifier_overrides, method='spearman')
df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
    .style.background_gradient(cmap="viridis", axis=1) \
    .set_precision(2)

ax = df.plot(kind="scatter", x="CRIM", y="LSTAT", alpha=0.1);
ax = df.plot(kind="scatter", x="CRIM", y="NOX", alpha=0.1);
_____no_output_____
MIT
example_boston_discover_feature_relationships.ipynb
PeteBleackley/discover_feature_relationships
Mutual Information

Mutual information represents the amount of information that each column predicts about the others.
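`discover`'s internals aside, the same quantity can be estimated directly with scikit-learn's `mutual_info_regression` (a k-nearest-neighbours estimator). A small sketch showing that a noisy non-linear dependency still yields high mutual information, while independent noise yields roughly zero:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))
y_dependent = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=1000)  # non-linear link
y_independent = rng.normal(size=1000)                            # no link at all

mi_dep = mutual_info_regression(x, y_dependent, random_state=0)[0]
mi_ind = mutual_info_regression(x, y_independent, random_state=0)[0]
print(f'dependent: {mi_dep:.2f} nats, independent: {mi_ind:.2f} nats')
```

Unlike correlation, mutual information is always non-negative and carries no sign, which is why the direction of a relationship still has to be read off a scatter plot.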
df_results = discover.discover(df[cols], classifier_overrides, method='mutual_information')
df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
    .style.background_gradient(cmap="viridis", axis=1) \
    .set_precision(2)

ax = df.plot(kind="scatter", x="TAX", y="INDUS", alpha=0.1)
ax = df.plot(kind="scatter", x="TAX", y="NOX", alpha=0.1)
_____no_output_____
MIT
example_boston_discover_feature_relationships.ipynb
PeteBleackley/discover_feature_relationships
Just Neural Network
submission = pd.DataFrame()
submission['LAP_TIME'] = y_predicted_nn.ravel()
submission
submission.to_csv('../Submissions/Dare_In_Reality NN Only.csv', index=False)
y_predicted_nn
df
_____no_output_____
MIT
Notebooks/Production.ipynb
MikeAnderson89/Dare_In_Reality_Hackathon
Amazon Fine Food Reviews Analysis

Data Source: https://www.kaggle.com/snap/amazon-fine-food-reviews

EDA: https://nycdatascience.com/blog/student-works/amazon-fine-foods-visualization/

The Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.

Number of reviews: 568,454
Number of users: 256,059
Number of products: 74,258
Timespan: Oct 1999 - Oct 2012
Number of Attributes/Columns in data: 10

Attribute Information:

1. Id
2. ProductId - unique identifier for the product
3. UserId - unique identifier for the user
4. ProfileName
5. HelpfulnessNumerator - number of users who found the review helpful
6. HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not
7. Score - rating between 1 and 5
8. Time - timestamp for the review
9. Summary - brief summary of the review
10. Text - text of the review

Objective:
Given a review, determine whether the review is positive (rating of 4 or 5) or negative (rating of 1 or 2).

[Q] How to determine if a review is positive or negative?

[Ans] We could use the Score/Rating. A rating of 4 or 5 can be considered a positive review, and a rating of 1 or 2 a negative one. A review with a rating of 3 is considered neutral, and such reviews are ignored in our analysis. This is an approximate and proxy way of determining the polarity (positivity/negativity) of a review.

[1]. Reading Data

[1.1] Loading the data

The dataset is available in two forms:

1. .csv file
2. SQLite database

In order to load the data, we have used the SQLite database as it is easier to query and visualise the data efficiently. Since we only want the global sentiment of the recommendations (positive or negative), we purposefully ignore all Scores equal to 3. If the score is above 3, the recommendation is set to "positive"; otherwise, it is set to "negative".
%matplotlib inline

import warnings
warnings.filterwarnings("ignore")

import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from wordcloud import WordCloud, STOPWORDS

import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle

from tqdm import tqdm
import os

from google.colab import drive
drive.mount('/content/drive')

# using SQLite Table to read data.
con = sqlite3.connect('drive/My Drive/database.sqlite')

# filtering only positive and negative reviews, i.e.
# not taking into consideration those reviews with Score=3
# SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000 will give the top 500000 data points
# you can change the number to any other number based on your computing power
# filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000""", con)
# for the tsne assignment you can take 5k data points
filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 200000""", con)

# Give reviews with Score > 3 a positive rating (1), and reviews with a score < 3 a negative rating (0).
def partition(x):
    if x < 3:
        return 0
    return 1

# mapping reviews with score less than 3 to negative (0) and greater than 3 to positive (1)
actualScore = filtered_data['Score']
positiveNegative = actualScore.map(partition)
filtered_data['Score'] = positiveNegative

print("Number of data points in our data", filtered_data.shape)
filtered_data.head(3)

display = pd.read_sql_query("""
SELECT UserId, ProductId, ProfileName, Time, Score, Text, COUNT(*)
FROM Reviews
GROUP BY UserId
HAVING COUNT(*) > 1
""", con)

print(display.shape)
display.head()

display[display['UserId'] == 'AZY10LLTJ71NX']

display['COUNT(*)'].sum()
_____no_output_____
MIT
#11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb
wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset
[2] Exploratory Data Analysis

[2.1] Data Cleaning: Deduplication

It is observed (as shown in the table below) that the reviews data had many duplicate entries. Hence it was necessary to remove duplicates in order to get unbiased results for the analysis of the data. Following is an example:
display = pd.read_sql_query("""
SELECT *
FROM Reviews
WHERE Score != 3 AND UserId="AR5J8UI46CURR"
ORDER BY ProductID
""", con)
display.head()
_____no_output_____
MIT
#11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb
wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset
As can be seen above, the same user has multiple reviews with the same values for HelpfulnessNumerator, HelpfulnessDenominator, Score, Time, Summary and Text, and on doing analysis it was found that:

* ProductId=B000HDOPZG was Loacker Quadratini Vanilla Wafer Cookies, 8.82-Ounce Packages (Pack of 8)
* ProductId=B000HDL1RQ was Loacker Quadratini Lemon Wafer Cookies, 8.82-Ounce Packages (Pack of 8)

and so on. It was inferred that reviews with the same parameters other than ProductId belonged to the same product, just in a different flavour or quantity. Hence, in order to reduce redundancy, it was decided to eliminate the rows having the same parameters.

The method used was to first sort the data according to ProductId and then keep only the first review among a set of duplicates and delete the others. For example, in the table above only the review for ProductId=B000HDL1RQ remains. This method ensures that there is only one representative for each product, whereas deduplication without sorting could leave different representatives for the same product.
# Sorting data according to ProductId in ascending order
sorted_data = filtered_data.sort_values('ProductId', axis=0, ascending=True, inplace=False,
                                        kind='quicksort', na_position='last')

# Deduplication of entries
final = sorted_data.drop_duplicates(subset={"UserId", "ProfileName", "Time", "Text"},
                                    keep='first', inplace=False)
final.shape

# Checking to see how much % of data still remains
(final['Id'].size * 1.0) / (filtered_data['Id'].size * 1.0) * 100
_____no_output_____
MIT
#11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb
wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset
Observation: It was also seen that in the two rows given below the value of HelpfulnessNumerator is greater than HelpfulnessDenominator, which is not practically possible; hence these two rows are also removed from the calculations.
display = pd.read_sql_query("""
SELECT *
FROM Reviews
WHERE Score != 3 AND Id=44737 OR Id=64422
ORDER BY ProductID
""", con)
display.head()

final = final[final.HelpfulnessNumerator <= final.HelpfulnessDenominator]

# Before starting the next phase of preprocessing, let's see the number of entries left
print(final.shape)

# How many positive and negative reviews are present in our dataset?
final['Score'].value_counts()
(160176, 10)
MIT
#11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb
wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset