Run function to create masks

labels = "/Users/Tyler-SFG/Desktop/Box Sync/SFG Centralized Resources/Projects/Aquaculture/Waitt Aquaculture/aqua-mapping/aqua-mapping-data/aqua-images/vgg/labeled/label_china/20180410_020421_0f31_labels.json"
prepped_dir = '/Users/Tyler-SFG/Desktop/Box Sync/SFG Centralized Resources/Projects/Aquaculture/Waitt Aquaculture/aqua-mapping/aqua-mapping-data/aqua-images/prepped_planet/china_20180918'

masks_from_labels(labels=labels, prepped_dir=prepped_dir)
/Users/Tyler-SFG/anaconda/envs/planet/lib/python3.6/site-packages/ipykernel/__main__.py:1: FutureWarning: The value of this property will change in version 1.0. Please see https://github.com/mapbox/rasterio/issues/86 for details.
  if __name__ == '__main__':
/Users/Tyler-SFG/anaconda/envs/planet/lib/python3.6/site-packages/rasterio/__init__.py:160: FutureWarning: GDAL-style transforms are deprecated and will not be supported in Rasterio 1.0.
  transform = guard_transform(transform)
MIT
notebooks/process_planet_scenes.ipynb
tclavelle/aqua_python
4.2 Positional Arguments
def add(operand_1, operand_2):
    print(f"The sum of {operand_1} and {operand_2} is {operand_1 + operand_2}")
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
Yipeee! We have created a new function called `add`, which is expected to add two integer values. Just kidding 😜: thanks to Python's dynamic typing, our `add` function can also add floats, concatenate strings, and much more. For now, though, let's stick with the addition of integers 😎
add(1, 3)
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
Yup, we got our result ⭐️. What if we forget to pass a value? We'd see a `TypeError` exception raised 👻
add(1)
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
The name **positional arguments** itself says that arguments are matched to parameters according to their position in the function signature. But here's the deal: we can change the order of the arguments being passed, as long as we pass them with their respective keywords 🙂 For example:
def difference(a, b):
    print(f"The difference of {b} from {a} is {a - b}")

difference(5, 8)
difference(b=8, a=5)  # Positions are swapped, but the objects are passed as keywords.
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
We can see in the above example that even though the positions are changed, the result remains the same, because we pass the arguments by keyword. ⭐️

Positional-only arguments

We do have the power ✊ to force callers to pass certain arguments positionally, thanks to [PEP 570](https://www.python.org/dev/peps/pep-0570/) for Python >= 3.8. The syntax defined by PEP 570 for positional-only arguments is:

```Python
def name(positional_only_parameters, /, positional_or_keyword_parameters, *, keyword_only_parameters):
```
def greet(greet_word, /, name_of_the_user):
    print(f"{greet_word} {name_of_the_user}!")
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
In the above example, we have two arguments, `greet_word` and `name_of_the_user`. We used **`/`** to say: **Hey Python! Treat `greet_word` as a positional-only argument.** When we try to call our function `greet` with `greet_word` passed by keyword, boom 💣, we get a `TypeError` exception.
greet(greet_word="Hello", name_of_the_user="Pythonist")
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
Now let's call `greet` with `greet_word` as a positional-only argument, i.e., without passing it by keyword name. This time we can expect that no exception will be raised. 😁
# Calling the greet function with name_of_the_user passed positionally.
greet("Hello", "Pythonist")

# Calling the greet function with name_of_the_user passed by keyword.
greet("Hello", name_of_the_user="Pythoneer😍")
_____no_output_____
MIT
Chapters_1-5/Chapter_4/2_Positional_Arguments.ipynb
NaveenKumarReddy8/MyPython
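The PEP 570 signature shown earlier also names `keyword_only_parameters`: everything after a bare `*` must be passed by keyword, the mirror image of `/` for positional-only. A small sketch of that side of the syntax (the `farewell` function is hypothetical, not from this notebook):

```python
# Hypothetical example: parameters after a bare * are keyword-only.
def farewell(word, *, name):
    return f"{word} {name}!"

print(farewell("Bye", name="Pythonist"))   # OK: name passed by keyword

try:
    farewell("Bye", "Pythonist")           # positional -> TypeError
except TypeError as exc:
    print(f"TypeError: {exc}")
```

Combining `/` and `*` in one signature gives three zones: positional-only, positional-or-keyword, and keyword-only.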
Read data function examples for basic DEA models. This demonstrates how to use `csv2dict()` and `csv2dict_sep()` to read the data to be evaluated, so that it can later be fed into the main DEA program, and shows the final form of the resulting data. ※ The demo code and csv data are stored [here](https://github.com/wurmen/DEA/tree/master/Functions/basic_DEA_data%26code); feel free to download and test them.
# Import the .py file (here named DEA) that contains csv2dict() and csv2dict_sep()
import DEA
_____no_output_____
MIT
Functions/basic_DEA_data&code/read_data_example.ipynb
PO-LAB/DEA
csv2dict()

Example 1: the input and output data are placed in the same csv file, **without** assigning specific columns: columns 2~4 are taken as the input data and columns 5~8 as the output data.
DMU, X, Y = DEA.csv2dict("data.csv", in_range=[2,4], out_range=[5,8], assign=False)  # returns DMU, X, Y
print(DMU)  # DMU list
print(X)    # input data dict
print(Y)    # output data dict
{'A': [23756.0, 4.0, 2.0, 870.0], 'B': [24183.0, 5.0, 3.0, 1359.0], 'C': [163483.0, 6.0, 4.0, 12449.0], 'D': [10370.0, 7.0, 5.0, 509.0], 'E': [99047.0, 8.0, 6.0, 3726.0], 'F': [128635.0, 9.0, 7.0, 9214.0], 'G': [11962.0, 10.0, 8.0, 536.0], 'I': [32436.0, 11.0, 9.0, 1462.0], 'J': [83862.0, 12.0, 10.0, 6337.0], 'K': [14618.0, 13.0, 11.0, 785.0], 'L': [99636.0, 14.0, 12.0, 6597.0], 'M': [135480.0, 15.0, 13.0, 10928.0], 'O': [74106.0, 16.0, 14.0, 4258.0]}
MIT
Functions/basic_DEA_data&code/read_data_example.ipynb
PO-LAB/DEA
Example 2: the input and output data are placed in the same csv file, and columns 2 and 4 are **specifically assigned** as the input data, with columns 5 and 8 as the output data.
DMU, X, Y = DEA.csv2dict("data.csv", in_range=[2,4], out_range=[5,8], assign=True)
print(DMU)
print(X)
print(Y)
{'A': [23756.0, 870.0], 'B': [24183.0, 1359.0], 'C': [163483.0, 12449.0], 'D': [10370.0, 509.0], 'E': [99047.0, 3726.0], 'F': [128635.0, 9214.0], 'G': [11962.0, 536.0], 'I': [32436.0, 1462.0], 'J': [83862.0, 6337.0], 'K': [14618.0, 785.0], 'L': [99636.0, 6597.0], 'M': [135480.0, 10928.0], 'O': [74106.0, 4258.0]}
MIT
Functions/basic_DEA_data&code/read_data_example.ipynb
PO-LAB/DEA
csv2dict_sep()

Example 1: the input and output data are placed in separate files, **without** assigning specific input or output columns.
DMU, X = DEA.csv2dict_sep("data_input.csv")
print(DMU)
print(X)
{'A': [392.0, 8259.0], 'B': [381.0, 9628.0], 'C': [2673.0, 70923.0], 'D': [282.0, 9683.0], 'E': [1608.0, 40630.0], 'F': [2074.0, 47420.0], 'G': [75.0, 7115.0], 'I': [458.0, 10177.0], 'J': [1722.0, 29124.0], 'K': [400.0, 8987.0], 'L': [1217.0, 34680.0], 'M': [2532.0, 51536.0], 'O': [1303.0, 32683.0]}
MIT
Functions/basic_DEA_data&code/read_data_example.ipynb
PO-LAB/DEA
Example 2: the input and output data are placed in separate files, **with** specific columns assigned.
DMU, Y = DEA.csv2dict_sep("data_output.csv", vrange=[2,4], assign=True)
print(X)  # note: this prints the input dict loaded in the previous example
{'A': [392.0, 8259.0], 'B': [381.0, 9628.0], 'C': [2673.0, 70923.0], 'D': [282.0, 9683.0], 'E': [1608.0, 40630.0], 'F': [2074.0, 47420.0], 'G': [75.0, 7115.0], 'I': [458.0, 10177.0], 'J': [1722.0, 29124.0], 'K': [400.0, 8987.0], 'L': [1217.0, 34680.0], 'M': [2532.0, 51536.0], 'O': [1303.0, 32683.0]}
MIT
Functions/basic_DEA_data&code/read_data_example.ipynb
PO-LAB/DEA
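`csv2dict()` and `csv2dict_sep()` live in the repository's own `DEA.py`, so their exact behavior is defined there. As a rough mental model, a minimal stand-in consistent with the examples above might look like this (the column handling and the `assign` semantics are assumptions, not the library's actual code):

```python
import csv
import io

def csv2dict_sketch(text, in_range, out_range, assign=False):
    """Column 1 holds the DMU name; return (DMU list, input dict, output dict).

    assign=False: each range is an inclusive span of columns;
    assign=True:  each range lists the exact columns to pick.
    """
    def cols(rng):
        return rng if assign else list(range(rng[0], rng[1] + 1))

    DMU, X, Y = [], {}, {}
    for row in csv.reader(io.StringIO(text)):
        name = row[0]
        DMU.append(name)
        X[name] = [float(row[c - 1]) for c in cols(in_range)]
        Y[name] = [float(row[c - 1]) for c in cols(out_range)]
    return DMU, X, Y

data = "A,10,2,3,40\nB,20,4,6,80\n"
DMU, X, Y = csv2dict_sketch(data, in_range=[2, 3], out_range=[4, 5], assign=False)
print(DMU, X, Y)
```

The returned dicts keyed by DMU name match the shape of the printed outputs above, which is the form the DEA solver consumes.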
Import packages & Connect the database
# Install the MySQL client
!pip install PyMySQL

import sklearn
print('The scikit-learn version is {}.'.format(sklearn.__version__))

%load_ext autoreload
%autoreload 2
%matplotlib inline

import numpy as np
import pandas as pd
import datetime as dt

# Connect to the database
import pymysql
conn = pymysql.connect(
    host='34.69.136.137',
    port=int(3306),
    user='root',
    passwd='rtfgvb77884',
    db='valenbisi',
    charset='utf8mb4')
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Prepare data
# Get stations
df_station_snapshot = pd.read_sql_query("SELECT station_number, station_service_available, creation_date FROM station_snapshot WHERE station_number=31", conn)

def substractTime(x):
    date = dt.datetime(x.year, x.month, x.day, x.hour)
    return date - dt.timedelta(hours=1)

def addTime(x):
    date = dt.datetime(x.year, x.month, x.day, x.hour)
    return date + dt.timedelta(hours=1)

def getPrevAvailable(d_f, row):
    new_dateTime = substractTime(row['datetime'])
    try:
        return d_f[(d_f['id'] == row['id'])
                   & (d_f['year'] == new_dateTime.year)
                   & (d_f['month'] == new_dateTime.month)
                   & (d_f['day'] == new_dateTime.day)
                   & (d_f['hour'] == new_dateTime.hour)].iloc[0, d_f.columns.get_loc('available')]
    except IndexError:
        return 0

def getNextAvailable(d_f, row):
    new_dateTime = addTime(row['datetime'])
    try:
        return d_f[(d_f['id'] == row['id'])
                   & (d_f['year'] == new_dateTime.year)
                   & (d_f['month'] == new_dateTime.month)
                   & (d_f['day'] == new_dateTime.day)
                   & (d_f['hour'] == new_dateTime.hour)].iloc[0, d_f.columns.get_loc('available')]
    except IndexError:
        return 0

# Rename columns
df_stations = df_station_snapshot.rename(index=str, columns={
    "station_number": "id",
    "station_service_available": "available",
    "creation_date": "datetime"})
df_stations['id'] = df_stations['id'].astype(str).astype(int)

# Transform the date string to a date without minutes/seconds
df_stations['datetime'] = pd.to_datetime(df_stations['datetime'], infer_datetime_format=True)
df_stations['datetime'] = df_stations['datetime'].dt.floor('H')

# Sort by datetime
df_stations.sort_values(by=['datetime'], inplace=True, ascending=True)

# Split datetime into columns
df_stations['date'] = df_stations['datetime'].dt.date
df_stations['hour'] = df_stations['datetime'].dt.hour
df_stations['year'] = df_stations['datetime'].dt.year
df_stations['month'] = df_stations['datetime'].dt.month
df_stations['day'] = df_stations['datetime'].dt.day
df_stations['dayofweek'] = df_stations['datetime'].dt.dayofweek

# Group by hour and average the availability
df_stations['available'] = df_stations.groupby(['id', 'date', 'hour'])['available'].transform('mean').astype(int)
df_stations.drop_duplicates(subset=['id', 'date', 'hour'], keep='first', inplace=True)

# Add previous/next availability columns
df_stations['available_prev'] = df_stations.apply(lambda x: getPrevAvailable(df_stations, x), axis=1)
df_stations['available_next'] = df_stations.apply(lambda x: getNextAvailable(df_stations, x), axis=1)

# Clean columns
df_stations.drop(['datetime', 'day'], axis=1, inplace=True)
df_stations.tail()

# Get holidays
df_holiday_snapshot = pd.read_sql_query("SELECT date, enabled FROM holiday", conn)
df_holiday = df_holiday_snapshot.rename(index=str, columns={"enabled": "holiday"})
df_holiday.sort_values(by=['date'], inplace=True, ascending=True)

# Get sport events
df_event_snapshot = pd.read_sql_query("SELECT date, football, basketball FROM sport_event", conn)
df_event = df_event_snapshot
df_event.sort_values(by=['date'], inplace=True, ascending=True)

# Get weather
df_weather_snapshot = pd.read_sql_query("SELECT temperature, humidity, wind_speed, cloud_percentage, creation_date FROM weather", conn)
df_weather = df_weather_snapshot.rename(index=str, columns={
    "wind_speed": "wind", "cloud_percentage": "cloud", "creation_date": "datetime"})

# Transform the date string to a date without minutes/seconds
df_weather['datetime'] = pd.to_datetime(df_weather['datetime'], infer_datetime_format=True)
df_weather['datetime'] = df_weather['datetime'].dt.floor('H')
df_weather['date'] = df_weather['datetime'].dt.date
df_weather['hour'] = df_weather['datetime'].dt.hour

# Group by datetime and average the measurements
df_weather['temperature'] = df_weather.groupby(['hour', 'date'])['temperature'].transform('mean')
df_weather['humidity'] = df_weather.groupby(['hour', 'date'])['humidity'].transform('mean')
df_weather['wind'] = df_weather.groupby(['hour', 'date'])['wind'].transform('mean')
df_weather['cloud'] = df_weather.groupby(['hour', 'date'])['cloud'].transform('mean')
df_weather.drop_duplicates(subset=['date', 'hour'], keep='first', inplace=True)
df_weather.drop(['datetime'], axis=1, inplace=True)

# Merge stations with holidays
df = pd.merge(df_stations, df_holiday, how='left', on=['date'])
df['holiday'] = df['holiday'].fillna(0)

# Merge with sport events
df = pd.merge(df, df_event, how='left', on=['date'])
df['football'] = df['football'].fillna(0)
df['basketball'] = df['basketball'].fillna(0)

# Merge with weather
df = pd.merge(df, df_weather, how='left', on=['date', 'hour'])
df['temperature'] = df['temperature'].fillna(0)
df['humidity'] = df['humidity'].fillna(0)
df['wind'] = df['wind'].fillna(0)
df['cloud'] = df['cloud'].fillna(0)

# Show latest data
station = '31'
print('DATA AGGREGATED FOR STATION: ' + station)
df.tail(10)
DATA AGGREGATED FOR STATION: 31
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
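The cell above leans on one pandas pattern throughout: floor timestamps to the hour, broadcast the hourly mean back onto every row with `transform('mean')`, then keep a single row per hour. A self-contained toy illustration (invented data, not the Valenbisi tables):

```python
import pandas as pd

df = pd.DataFrame({
    'datetime': pd.to_datetime(['2020-01-06 10:05', '2020-01-06 10:45', '2020-01-06 11:10']),
    'available': [4, 6, 8],
})

df['datetime'] = df['datetime'].dt.floor('h')   # drop minutes and seconds
df['hour'] = df['datetime'].dt.hour

# every row in the same hour gets the hourly mean, then keep one row per hour
df['available'] = df.groupby('hour')['available'].transform('mean')
df = df.drop_duplicates(subset='hour', keep='first')
print(df[['hour', 'available']])
```

`transform` (unlike `agg`) returns a result aligned to the original index, which is what makes the assign-then-deduplicate step work.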
Visualize the data
# Load libraries
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerLine2D
import seaborn as sns

# HEATMAP CHART PER HOUR AND DATE
heatmap_data = pd.pivot_table(df[df['id'] == 31], values='available', index='hour', columns='date')
fig, ax = plt.subplots(figsize=(20, 5))
sns.heatmap(heatmap_data, cmap='RdBu', ax=ax)

# HEATMAP CHART PER WEEK DAY
heatmap_data_week_day = pd.pivot_table(df[df['id'] == 31], values='available', index='hour', columns='dayofweek')
fig, ax = plt.subplots(figsize=(20, 5))
sns.heatmap(heatmap_data_week_day, cmap='RdBu', ax=ax)
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Start prediction
# Load libraries
import math
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression, Lasso, LassoLars, Ridge
from sklearn.tree import DecisionTreeRegressor
from scipy.stats import randint as sp_randint
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn import metrics
from sklearn.metrics import explained_variance_score
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer

# Evaluate model
def evaluate(model, train_features, train_labels, test_features, test_labels):
    print('MODEL PERFORMANCE')
    train_pred = model.predict(train_features)
    print('Train set')
    print('| Mean Absolute Error:', metrics.mean_absolute_error(train_labels, train_pred))
    print('| Mean Square Error:', metrics.mean_squared_error(train_labels, train_pred))
    print('| Root Mean Square Error:', np.sqrt(metrics.mean_squared_error(train_labels, train_pred)))
    print('| Train Score:', model.score(train_features, train_labels))

    y_pred = model.predict(test_features)
    print('Test set')
    print('| Mean Absolute Error:', metrics.mean_absolute_error(test_labels, y_pred))
    print('| Mean Square Error:', metrics.mean_squared_error(test_labels, y_pred))
    print('| Root Mean Square Error:', np.sqrt(metrics.mean_squared_error(test_labels, y_pred)))
    print('| Test Score:', model.score(test_features, test_labels))
    print('| Explained Variance:', explained_variance_score(test_labels, y_pred))

    if hasattr(model, 'oob_score_'):
        print('OOB Score:', model.oob_score_)
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Find the best algorithm for our data
def quick_eval(pipeline, X_train, y_train, X_test, y_test, verbose=True):
    """
    Quickly trains a modeling pipeline and evaluates it on test data.
    Returns the original model, training RMSE, and testing RMSE as a tuple.
    """
    pipeline.fit(X_train, y_train)
    y_train_pred = pipeline.predict(X_train)
    y_test_pred = pipeline.predict(X_test)

    train_score = np.sqrt(metrics.mean_squared_error(y_train, y_train_pred))
    test_score = np.sqrt(metrics.mean_squared_error(y_test, y_test_pred))

    if verbose:
        print(f"Regression algorithm: {pipeline.named_steps['regressor'].__class__.__name__}")
        print(f"Train RMSE: {train_score}")
        print(f"Test RMSE: {test_score}")
        print(f"----------------------------")

    return pipeline.named_steps['regressor'], train_score, test_score
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
After reviewing the results, we see that **RandomForestRegressor** is the best option for predicting our data.

Random Forest
# Create a new dataframe for random forest
df_rf = df[['id', 'year', 'month', 'dayofweek', 'hour', 'holiday', 'football', 'basketball',
            'temperature', 'humidity', 'wind', 'cloud', 'available_prev', 'available', 'available_next']]

# Prepare data for train and test; we want to predict "available_next"
X = df_rf.drop('available_next', axis=1)
y = df_rf['available_next']

# Split data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
X_train.shape, y_train.shape, X_test.shape, y_test.shape

# Create an imputer to replace missing values with the mean
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp = imp.fit(X_train)

# Impute our data, then train
X_train = imp.transform(X_train)

regressors = [
    LinearRegression(),
    Lasso(alpha=.5),
    Ridge(alpha=.1),
    LassoLars(alpha=.1),
    DecisionTreeRegressor(),
    RandomForestRegressor(),
    AdaBoostRegressor(),
    GradientBoostingRegressor(),
]

for r in regressors:
    pipe = Pipeline(steps=[('regressor', r)])
    quick_eval(pipe, X_train, y_train, X_test, y_test)
Regression algorithm: LinearRegression
Train RMSE: 2.121484327745336
Test RMSE: 2.065697137562799
----------------------------
Regression algorithm: Lasso
Train RMSE: 2.179074981399763
Test RMSE: 2.106610689982148
----------------------------
Regression algorithm: Ridge
Train RMSE: 2.1215248913825184
Test RMSE: 2.0663307505301884
----------------------------
Regression algorithm: LassoLars
Train RMSE: 4.050793742718888
Test RMSE: 4.128790234244102
----------------------------
Regression algorithm: DecisionTreeRegressor
Train RMSE: 0.0
Test RMSE: 2.7594483608258895
----------------------------
Regression algorithm: RandomForestRegressor
Train RMSE: 0.7917822755765574
Test RMSE: 2.1275180102873477
----------------------------
Regression algorithm: AdaBoostRegressor
Train RMSE: 2.2039223581670533
Test RMSE: 2.3418349911346517
----------------------------
Regression algorithm: GradientBoostingRegressor
Train RMSE: 1.6317788438198744
Test RMSE: 2.026063179178271
----------------------------
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Find best params for Random Forest

Check each property
# Find N_ESTIMATORS
n_estimators = [int(x) for x in np.linspace(start=1, stop=200, num=50)]
train_results = []
test_results = []
for estimator in n_estimators:
    rf = RandomForestRegressor(n_estimators=estimator, n_jobs=-1)
    rf.fit(X_train, y_train)
    train_pred = rf.predict(X_train)
    train_results.append(np.sqrt(metrics.mean_squared_error(y_train, train_pred)))
    y_pred = rf.predict(X_test)
    test_results.append(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
line1, = plt.plot(n_estimators, train_results, 'b', label='Train RMSE')
line2, = plt.plot(n_estimators, test_results, 'r', label='Test RMSE')
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('RMSE')
plt.xlabel('n_estimators')
plt.show()

# Find MAX_DEPTH
max_depths = np.linspace(start=1, stop=100, num=50, endpoint=True)
train_results = []
test_results = []
for max_depth in max_depths:
    rf = RandomForestRegressor(max_depth=int(max_depth), n_jobs=-1)
    rf.fit(X_train, y_train)
    train_pred = rf.predict(X_train)
    train_results.append(np.sqrt(metrics.mean_squared_error(y_train, train_pred)))
    y_pred = rf.predict(X_test)
    test_results.append(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
line1, = plt.plot(max_depths, train_results, 'b', label='Train RMSE')
line2, = plt.plot(max_depths, test_results, 'r', label='Test RMSE')
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('RMSE')
plt.xlabel('Tree depth')
plt.show()

# Find MIN_SAMPLES_SPLIT
min_samples_splits = np.linspace(start=0.01, stop=1.0, num=10, endpoint=True)
train_results = []
test_results = []
for min_samples_split in min_samples_splits:
    rf = RandomForestRegressor(min_samples_split=min_samples_split)
    rf.fit(X_train, y_train)
    train_pred = rf.predict(X_train)
    train_results.append(np.sqrt(metrics.mean_squared_error(y_train, train_pred)))
    y_pred = rf.predict(X_test)
    test_results.append(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
line1, = plt.plot(min_samples_splits, train_results, 'b', label='Train RMSE')
line2, = plt.plot(min_samples_splits, test_results, 'r', label='Test RMSE')
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('RMSE')
plt.xlabel('min samples split')
plt.show()

# Find MIN_SAMPLES_LEAF
min_samples_leafs = np.linspace(start=0.01, stop=0.5, num=5, endpoint=True)
train_results = []
test_results = []
for min_samples_leaf in min_samples_leafs:
    rf = RandomForestRegressor(min_samples_leaf=min_samples_leaf)
    rf.fit(X_train, y_train)
    train_pred = rf.predict(X_train)
    train_results.append(np.sqrt(metrics.mean_squared_error(y_train, train_pred)))
    y_pred = rf.predict(X_test)
    test_results.append(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
line1, = plt.plot(min_samples_leafs, train_results, 'b', label='Train RMSE')
line2, = plt.plot(min_samples_leafs, test_results, 'r', label='Test RMSE')
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('RMSE')
plt.xlabel('min samples leaf')
plt.show()

# Find MAX_FEATURES
max_features = list(range(1, X.shape[1]))
train_results = []
test_results = []
for max_feature in max_features:
    rf = RandomForestRegressor(max_features=max_feature)
    rf.fit(X_train, y_train)
    train_pred = rf.predict(X_train)
    train_results.append(np.sqrt(metrics.mean_squared_error(y_train, train_pred)))
    y_pred = rf.predict(X_test)
    test_results.append(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
line1, = plt.plot(max_features, train_results, 'b', label='Train RMSE')
line2, = plt.plot(max_features, test_results, 'r', label='Test RMSE')
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('RMSE')
plt.xlabel('max features')
plt.show()
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Find the best combination of params

**TRY ALL PARAMS TO FIND THE BEST PARAMS FOR OUR DATA**

Now that we know where to concentrate our search, we can explicitly specify every combination of settings to try.
def searchBestParamsForRF(params, train_features, train_labels):
    # First create the base model to tune
    rf = RandomForestRegressor()
    # Instantiate the grid search model
    grid_search = GridSearchCV(estimator=rf, param_grid=params,
                               scoring='neg_mean_squared_error',
                               cv=5, n_jobs=-1, verbose=2)
    # Fit the grid search to the data
    grid_search.fit(train_features, train_labels)
    print(f"The best estimator had RMSE {np.sqrt(-grid_search.best_score_)} and the following parameters:")
    print(grid_search.best_params_)

# Create the parameter grid
max_depth = [int(x) for x in np.linspace(10, 20, num=3)]
max_depth.append(None)
param_grid = {
    'bootstrap': [False, True],
    'n_estimators': [int(x) for x in np.linspace(start=40, stop=60, num=4)],
    'max_depth': max_depth,
    'min_samples_split': [float(x) for x in np.linspace(0.1, 0.2, num=2)],
    'min_samples_leaf': [float(x) for x in np.linspace(0.1, 0.2, num=2)],
    'max_features': [X.shape[1]],
}

# Comment or uncomment this line to search for the best params
searchBestParamsForRF(param_grid, X_train, y_train)
Fitting 5 folds for each of 128 candidates, totalling 640 fits
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Train and evaluate model
m = RandomForestRegressor(n_estimators=60, max_features=X.shape[1])
m.fit(X_train, y_train)
evaluate(m, X_train, y_train, X_test, y_test)

# MODEL PERFORMANCE
# Train set
# | Mean Absolute Error: 0.5758625862586259
# | Mean Square Error: 0.6365449044904491
# | Root Mean Square Error: 0.7978376429389936
# | Train Score: 0.9807615052050999
# Test set
# | Mean Absolute Error: 1.5209793351302785
# | Mean Square Error: 4.284529050613956
# | Root Mean Square Error: 2.0699103967597137
# | Test Score: 0.8757254225805797
# | Explained Variance: 0.8758109846903823

X_test.tail()
y_test.tail()

# Predict one row of features: id, year, month, dayofweek, hour, holiday, football,
# basketball, temperature, humidity, wind, cloud, available_prev, available
# (the station id, 31, is prepended so the row matches the training columns)
m.predict([[31, 2020, 1, 6, 10, 0, 0, 0, 11.57, 70.50, 0.93, 0, 0, 1]])

# Show the importance of each variable in the prediction
def rf_feat_importance(m, df):
    return pd.DataFrame({'cols': df.columns, 'imp': m.feature_importances_}).sort_values('imp', ascending=False)

fi = rf_feat_importance(m, X)
fi.plot('cols', 'imp', 'barh', figsize=(12, 7), legend=False)
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
Download model
# Import package
import pickle

# Generate file
with open('model.pkl', 'wb') as model_file:
    pickle.dump(m, model_file)
_____no_output_____
MIT
Valencia/Equipo3-IAmobilitat/Grupo3SaturdaysAI_IAmobilitat.ipynb
tozanni/Projects
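Loading the pickled model back, for example in a serving process, is the symmetric step. A minimal sketch, using a plain dict as a stand-in for the fitted regressor so the snippet needs only the standard library:

```python
import os
import pickle
import tempfile

# a plain dict stands in for the fitted RandomForestRegressor
model = {"name": "stand-in for the fitted RandomForestRegressor"}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as model_file:
    pickle.dump(model, model_file)      # same dump call as in the notebook

with open(path, "rb") as model_file:    # later, e.g. in the serving process
    restored = pickle.load(model_file)

print(restored == model)  # → True: the round trip preserves the object
```

With a real scikit-learn estimator, `pickle.load` returns a fitted model whose `predict` works immediately, provided the unpickling environment has a compatible scikit-learn version installed.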
In Depth: Principal Component Analysis

In this section, we explore what is perhaps one of the most broadly used of unsupervised algorithms, principal component analysis (PCA). PCA is fundamentally a dimensionality reduction algorithm, but it can also be useful as a tool for visualization, for noise filtering, for feature extraction and engineering, and much more. After a brief conceptual discussion of the PCA algorithm, we will see a couple of examples of these further applications. We begin with the standard imports:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
Introducing Principal Component Analysis

Principal component analysis is a fast and flexible unsupervised method for dimensionality reduction in data. Its behavior is easiest to visualize by looking at a two-dimensional dataset. Consider the following 200 points:
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal');
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
By eye, it is clear that there is a nearly linear relationship between the x and y variables. This is reminiscent of the linear regression data, but the problem setting here is slightly different: rather than attempting to *predict* the y values from the x values, the unsupervised learning problem attempts to learn about the *relationship* between the x and y values. In principal component analysis, this relationship is quantified by finding a list of the *principal axes* in the data, and using those axes to describe the dataset. Using Scikit-Learn's ``PCA`` estimator, we can compute this as follows:
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
The fit learns some quantities from the data, most importantly the "components" and "explained variance":
print(pca.components_)
print(pca.explained_variance_)
[0.7625315 0.0184779]
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
To see what these numbers mean, let's visualize them as vectors over the input data, using the "components" to define the direction of the vector, and the "explained variance" to define the squared-length of the vector:
def draw_vector(v0, v1, ax=None):
    ax = ax or plt.gca()
    arrowprops = dict(arrowstyle='->', linewidth=2, shrinkA=0, shrinkB=0)
    ax.annotate('', v1, v0, arrowprops=arrowprops)

# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
    v = vector * 3 * np.sqrt(length)
    draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
These vectors represent the *principal axes* of the data, and the length of each vector is an indication of how "important" that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis. The projection of each data point onto the principal axes gives the "principal components" of the data. If we plot these principal components beside the original data, we see the plots shown here:

![](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/figures/05.09-PCA-rotation.png?raw=1)

This transformation from data axes to principal axes is an *affine transformation*, which basically means it is composed of a translation, rotation, and uniform scaling. While this algorithm to find principal components may seem like just a mathematical curiosity, it turns out to have very far-reaching applications in the world of machine learning and data exploration.

PCA as dimensionality reduction

Using PCA for dimensionality reduction involves zeroing out one or more of the smallest principal components, resulting in a lower-dimensional projection of the data that preserves the maximal data variance. Here is an example of using PCA as a dimensionality reduction transform:
pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("original shape:   ", X.shape)
print("transformed shape:", X_pca.shape)
original shape: (200, 2) transformed shape: (200, 1)
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
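The affine-transformation view can be made concrete without scikit-learn: center the data, take its SVD, and the right singular vectors are the principal axes, so projecting is a single matrix product. A sketch mirroring the `rng`/`X` setup used above:

```python
import numpy as np

rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T

X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

components = Vt                              # rows are the principal axes
explained_variance = S ** 2 / (len(X) - 1)   # variance captured along each axis

# projecting = rotating into the principal-axis frame (the affine view above)
projected = X_centered @ components.T
print(explained_variance)
```

The columns of `projected` have exactly the variances in `explained_variance`, which is the sense in which the principal axes "describe" the data.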
The transformed data has been reduced to a single dimension. To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data:
X_new = pca.inverse_transform(X_pca)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal');
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
The light points are the original data, while the dark points are the projected version. This makes clear what a PCA dimensionality reduction means: the information along the least important principal axis or axes is removed, leaving only the component(s) of the data with the highest variance. The fraction of variance that is cut out (proportional to the spread of points about the line formed in this figure) is roughly a measure of how much "information" is discarded in this reduction of dimensionality.

This reduced-dimension dataset is in some senses "good enough" to encode the most important relationships between the points: despite reducing the dimension of the data by 50%, the overall relationship between the data points is mostly preserved.

PCA for visualization: Hand-written digits

The usefulness of dimensionality reduction may not be entirely apparent in only two dimensions, but becomes much clearer when looking at high-dimensional data. To see this, let's take a quick look at the application of PCA to the digits data. We start by loading the data:
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
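The "fraction of variance that is cut out" has a direct numeric handle: each axis's share of the total variance (scikit-learn exposes this as `explained_variance_ratio_`). A plain-numpy illustration on an invented elongated cloud:

```python
import numpy as np

rng = np.random.RandomState(42)
# an elongated 2-D cloud: far more spread along x than along y
X = rng.randn(500, 2) * np.array([3.0, 0.5])

Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc.T))[::-1]  # axis variances, descending
ratio = eigvals / eigvals.sum()                   # share of total variance per axis
print(ratio)
```

Keeping only the first axis here discards the small second entry of `ratio`, which is the variance lost by the projection.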
Recall that the data consists of 8×8 pixel images, meaning that they are 64-dimensional. To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two:
pca = PCA(2) # project from 64 to 2 dimensions projected = pca.fit_transform(digits.data) print(digits.data.shape) print(projected.shape)
(1797, 64) (1797, 2)
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
We can now plot the first two principal components of each point to learn about the data:
plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('Accent', 10) ) plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar();
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
Recall what these components mean: the full data is a 64-dimensional point cloud, and these points are the projection of each data point along the directions with the largest variance. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits in two dimensions, and have done this in an unsupervised manner—that is, without reference to the labels. What do the components mean? We can go a bit further here, and begin to ask what the reduced dimensions *mean*. This meaning can be understood in terms of combinations of basis vectors. For example, each image in the training set is defined by a collection of 64 pixel values, which we will call the vector $x$: $$x = [x_1, x_2, x_3 \cdots x_{64}]$$ One way we can think about this is in terms of a pixel basis. That is, to construct the image, we multiply each element of the vector by the pixel it describes, and then add the results together to build the image: $${\rm image}(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots x_{64} \cdot{\rm (pixel~64)}$$ One way we might imagine reducing the dimension of this data is to zero out all but a few of these basis vectors. For example, if we use only the first eight pixels, we get an eight-dimensional projection of the data, but it is not very reflective of the whole image: we've thrown out nearly 90% of the pixels! ![](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/figures/05.09-digits-pixel-components.png?raw=1) The upper row of panels shows the individual pixels, and the lower row shows the cumulative contribution of these pixels to the construction of the image. Using only eight of the pixel-basis components, we can only construct a small portion of the 64-pixel image. Were we to continue this sequence and use all 64 pixels, we would recover the original image. But the pixel-wise representation is not the only choice of basis.
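As an aside, the eight-pixel truncation just described can be sketched directly. The snippet below uses a synthetic 64-dimensional vector rather than an actual digit, purely as a hypothetical illustration:

```python
import numpy as np

# Synthetic 8x8 "image" flattened into a 64-dimensional vector x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 16, size=64)

# In the pixel basis, image(x) = sum_i x_i * e_i, with e_i the
# standard basis vectors.  Zeroing out all but the first eight
# basis vectors leaves an eight-dimensional projection of the data.
x_trunc = np.zeros_like(x)
x_trunc[:8] = x[:8]

print(np.count_nonzero(x_trunc))  # at most 8 surviving pixel values
```

Reshaping `x_trunc` back to 8×8 and plotting it would show how little of the image such a projection retains.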
We can also use other basis functions, which each contain some pre-defined contribution from each pixel, and write something like $$image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots$$ PCA can be thought of as a process of choosing optimal basis functions, such that adding together just the first few of them is enough to suitably reconstruct the bulk of the elements in the dataset. The principal components, which act as the low-dimensional representation of our data, are simply the coefficients that multiply each of the elements in this series. This figure shows a similar depiction of reconstructing this digit using the mean plus the first eight PCA basis functions: ![](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/figures/05.09-digits-pca-components.png?raw=1) Unlike the pixel basis, the PCA basis allows us to recover the salient features of the input image with just a mean plus eight components! The amount of each pixel in each component is the corollary of the orientation of the vector in our two-dimensional example. This is the sense in which PCA provides a low-dimensional representation of the data: it discovers a set of basis functions that are more efficient than the native pixel basis of the input data. Choosing the number of components A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative *explained variance ratio* as a function of the number of components:
pca = PCA().fit(digits.data) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance');
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
This curve quantifies how much of the total, 64-dimensional variance is contained within the first $N$ components. For example, we see that with the digits the first 10 components contain approximately 75% of the variance, while you need around 50 components to describe close to 100% of the variance. Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations. PCA as Noise Filtering PCA can also be used as a filtering approach for noisy data. The idea is this: any components with variance much larger than the effect of the noise should be relatively unaffected by the noise. So if you reconstruct the data using just the largest subset of principal components, you should be preferentially keeping the signal and throwing out the noise. Let's see how this looks with the digits data. First we will plot several of the noise-free input images:
def plot_digits(data): fig, axes = plt.subplots(4, 10, figsize=(10, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(data[i].reshape(8, 8), cmap='binary', interpolation='nearest', clim=(0, 16)) plot_digits(digits.data)
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
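As a quick aside, the earlier claim that roughly 20 components retain 90% of the variance can be checked directly: when `PCA` is given a float between 0 and 1, it picks the smallest number of components reaching that fraction of explained variance. This short check is an addition, not part of the original notebook:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()

# A float in (0, 1) asks for the smallest number of components
# whose cumulative explained variance reaches that fraction.
pca = PCA(0.90).fit(digits.data)
print(pca.n_components_)  # on the order of 20 components
```

The same mechanism is used later in this notebook with `PCA(0.50)` on the noisy data.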
Now let's add some random noise to create a noisy dataset, and re-plot it:
np.random.seed(42) noisy = np.random.normal(digits.data, 4) plot_digits(noisy)
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
It's clear by eye that the images are noisy, and contain spurious pixels. Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance:
pca = PCA(0.50).fit(noisy) pca.n_components_
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
Here 50% of the variance amounts to 12 principal components. Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits:
components = pca.transform(noisy) filtered = pca.inverse_transform(components) plot_digits(filtered)
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs. Example: Eigenfaces Earlier we explored an example of using a PCA projection as a feature selector for facial recognition with a support vector machine. Here we will take a look back and explore a bit more of what went into that. Recall that we were using the Labeled Faces in the Wild dataset made available through Scikit-Learn:
from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=60) print(faces.target_names) print(faces.images.shape)
Downloading LFW metadata: https://ndownloader.figshare.com/files/5976012 Downloading LFW metadata: https://ndownloader.figshare.com/files/5976009 Downloading LFW metadata: https://ndownloader.figshare.com/files/5976006 Downloading LFW data (~200MB): https://ndownloader.figshare.com/files/5976015
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
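As a reminder of how such a projection feeds a downstream classifier (the facial-recognition example mentioned above), a PCA-then-SVM pipeline can be assembled as below. The component count and SVM hyperparameters here are illustrative assumptions, not necessarily the exact values used in the earlier example:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# PCA acts as a feature selector feeding an RBF-kernel SVM.
pca = PCA(n_components=150, svd_solver='randomized',
          whiten=True, random_state=42)
svc = SVC(kernel='rbf', C=10, gamma=0.001)
model = make_pipeline(pca, svc)

# model.fit(faces.data, faces.target) would train both steps at once.
print([name for name, _ in model.steps])
```

Wrapping both steps in a pipeline ensures the PCA projection learned on the training folds is applied consistently at prediction time.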
Let's take a look at the principal axes that span this dataset. Because this is a large dataset, we will use ``RandomizedPCA``—it contains a randomized method to approximate the first $N$ principal components much more quickly than the standard ``PCA`` estimator, and thus is very useful for high-dimensional data (here, a dimensionality of nearly 3,000). We will take a look at the first 150 components:
from sklearn.decomposition import PCA as RandomizedPCA pca = RandomizedPCA(150) pca.fit(faces.data)
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors," so these types of images are often called "eigenfaces"). As you can see in this figure, they are as creepy as they sound:
fig, axes = plt.subplots(3, 8, figsize=(9, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
The results are very interesting, and give us insight into how the images vary: for example, the first few eigenfaces (from the top left) seem to be associated with the angle of lighting on the face, and later principal vectors seem to be picking out certain features, such as eyes, noses, and lips. Let's take a look at the cumulative variance of these components to see how much of the data information the projection is preserving:
plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance');
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
We see that these 150 components account for just over 90% of the variance. That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data. To make this more concrete, we can compare the input images with the images reconstructed from these 150 components:
# Compute the components and projected faces pca = RandomizedPCA(150).fit(faces.data) components = pca.transform(faces.data) projected = pca.inverse_transform(components) # Plot the results fig, ax = plt.subplots(2, 10, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i in range(10): ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r') ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r') ax[0, 0].set_ylabel('full-dim\ninput') ax[1, 0].set_ylabel('150-dim\nreconstruction');
_____no_output_____
Apache-2.0
notebooks/machine-learning/RECOMMENDED_Principal_Component_Analysis.ipynb
dlmacedo/machine-learning-class
**Experiment for obtaining 24 Hr prediction from Dense Model in rainymotion library** Author: Divya S. Vidyadharan File use: For predicting 24 Hr precipitation images with **3 hr lead time.** Date Created: 19-03-21 Last Updated: 20-03-21 Python version: 3.8.2
import h5py import numpy as np import matplotlib import matplotlib.pyplot as plt import scipy.misc import sys import os module_path = os.path.abspath(os.path.join('/home/divya/divya/OtherNowcastings/rainymotion-master')) if module_path not in sys.path: sys.path.append(module_path) from rainymotion.models import Dense from rainymotion.metrics import * import cv2 import pandas as pd import wradlib.ipol as ipol # for interpolation from rainymotion import metrics from rainymotion import utils from scipy.ndimage import map_coordinates import timeit print(cv2.__version__) #from tvl1sindysupport import tvl1utilities -in future our own library times=['0000','0010', '0020', '0030', '0040', '0050', '0100', '0110', '0120', '0130', '0140', '0150', '0200', '0210', '0220', '0230', '0240', '0250', '0300', '0310', '0320', '0330', '0340', '0350', '0400', '0410', '0420', '0430', '0440' ,'0450', '0500', '0510', '0520', '0530', '0540', '0550', '0600', '0610', '0620', '0630', '0640', '0650', '0700', '0710', '0720', '0730', '0740', '0750', '0800', '0810', '0820', '0830', '0840', '0850', '0900', '0910', '0920', '0930', '0940', '0950', '1000', '1010', '1020', '1030', '1040', '1050', '1100', '1110', '1120', '1130', '1140', '1150', '1200', '1210', '1220', '1230', '1240', '1250', '1300', '1310', '1320', '1330', '1340', '1350', '1400', '1410', '1420', '1430', '1440', '1450', '1500', '1510', '1520', '1530', '1540', '1550', '1600', '1610', '1620', '1630', '1640', '1650', '1700', '1710', '1720', '1730', '1740', '1750', '1800', '1810', '1820', '1830', '1840', '1850', '1900', '1910', '1920', '1930', '1940', '1950', '2000', '2010', '2020', '2030', '2040', '2050', '2100', '2110', '2120', '2130', '2140', '2150', '2200', '2210', '2220', '2230', '2240', '2250', '2300', '2310', '2320', '2330', '2340', '2350'] # Common Initialization eventName = "TyphoonFaxai" eventDate ="20190908" #Latitude and Longitude of Typhoon Faxai lat1 = 32.5 lat2 = 39 long1 = 136 long2 = 143 pred_date = 20190908 #YYYYMMDD 
[height, width] = [781,561] eventNameDate = eventName + "_" + eventDate # startHr = 2 # startMin= 40 # predStartHr = 300 step = 5 #for rainymotion models # For radar images inputFolder = "./ForExperiments/Exp1/RadarImages/HeavyRainfall/For300/" # outputFolder= "./ForExperiments/Exp1/Results/" # print(inputFolder) fileType='.bin' timeStep = 10 # for Japan Radar Data modelName = "Dense" # startHr = 7# the first hr among for the three input images # startMin = 30 # # noOfImages = 3 stepRainyMotion = 5 # 5 minutes # outputFilePath = outputFolder+modelName+'_' # outputFilePath = outputFilePath + eventNameDate # print(outputFilePath) ##recentFramePath## recentFrameFolder = str(pred_date)+"_set_24Hr_bin" #20190908_set_24Hr_bin recentFramePath = "/home/divya/divya/OneFullDayData_7TestCases_WNIMar5/%s"%recentFrameFolder print ("\n Recent frame path ",recentFramePath) inputFolder = recentFramePath print("\n Input folder is ",inputFolder) ##Output path where predicted images for visual comparison are saved.## outputimgpath = "/home/divya/divya/OneFullDayData_7TestCases_WNIMar5/24hroutputs/%i/%s/%s"%(pred_date,modelName,"pred_images") os.makedirs(outputimgpath, exist_ok=True) print ("\n Output image path is ",outputimgpath) ##Output path where evaluation results are saved as csv files.## outputevalpath = "/home/divya/divya/OneFullDayData_7TestCases_WNIMar5/24hroutputs/%i/%s/%s"%(pred_date,modelName,"eval_results") os.makedirs(outputevalpath, exist_ok=True) print ("\n Output eval results in ",outputevalpath) savepath = outputimgpath#"Outputs/%i/%s"%(pred_date,pred_times[0]) noOfImages = 3 # Model needs 24 frames step = 5 outputFilePath = outputimgpath+'/' outputFilePath = outputFilePath + eventNameDate print(outputFilePath) hrlimit = len(times) leadsteps = 18 #6 totinputframes = 2 def gettimes24hr(pred_time): # times=np.array(times) inptimes = [] pred_times = [] index = times.index(pred_time) indexlimit = len(times) print("Leadsteps are ", leadsteps) if (index+leadsteps) < 
indexlimit: pred_times = times[index:index+leadsteps] if (index-totinputframes)>=0: inptimes = times[index-totinputframes:index] print("PredTimes:",pred_times) print("InpTimes:",inptimes) print("Get Time Success..") return inptimes, pred_times def readRadarImages(pred_time,inputpath,height,width, noOfImages,fileType): files = (os.listdir(recentFramePath)) files.sort() inputRadarImages = [] i = 0 index = times.index(pred_time) # print(index) inputframes = times[index-noOfImages:index] # print(len(inputframes)) while (i<noOfImages): inputframetime = "_"+inputframes[i] i = i +1 for fileName in files: if inputframetime in fileName: print("The input image at %s is available",inputframetime) print(fileName) if fileName.endswith(fileType): inputFileName =recentFramePath+'/'+fileName fd = open(inputFileName,'rb') #print(inputFileName) # straight to numpy data (no buffering) inputFrame = np.fromfile(fd, dtype = np.dtype('float32'), count = 2*height*width) inputFrame = np.reshape(inputFrame,(height,width)) inputFrame = inputFrame.astype('float16') #print(recentFrame.shape) inputRadarImages.append(inputFrame) #else: # print("Sorry, unable to find file.") inputRadarImages = np.stack(inputRadarImages, axis=0) print(inputRadarImages.shape) return inputRadarImages
_____no_output_____
MIT
examples/Dense24HrPredictionNew-3Hr.ipynb
DivyaSDV/pySINDy
**1.2 Dense**
def doDenseNowcasting(startpredtime, saveimages): model = Dense() model.input_data = readRadarImages(startpredtime,inputFolder,height,width, noOfImages,fileType) start = timeit.default_timer() nowcastDense = model.run() end = timeit.default_timer() denseTime = end - start print("Dense took ",end - start) print(nowcastDense.shape) # for i in range(12): # outFrameName = outputFilePath + '_'+str(predStartHr+(i*5))+'.png' # # print(outFrameName) # if saveimages: # matplotlib.image.imsave(outFrameName, nowcastDense[i]) print("Finished Dense model nowcasting!") return nowcastDense
_____no_output_____
MIT
examples/Dense24HrPredictionNew-3Hr.ipynb
DivyaSDV/pySINDy
**2. Performance Evaluation**
def getGroundTruthImages(pred_times,leadsteps,recentFramePath,height,width,fileType): files = (os.listdir(recentFramePath)) files.sort() groundTruthImages = [] i = 0 while (i<leadsteps): groundtruthtime = "_"+pred_times[i] i = i +1 for fileName in files: if groundtruthtime in fileName: print("The ground truth at %s is available",groundtruthtime) print(fileName) if fileName.endswith(fileType): inputFileName =recentFramePath+'/'+fileName fd = open(inputFileName,'rb') #print(inputFileName) # straight to numpy data (no buffering) recentFrame = np.fromfile(fd, dtype = np.dtype('float32'), count = 2*height*width) recentFrame = np.reshape(recentFrame,(height,width)) recentFrame = recentFrame.astype('float16') #print(recentFrame.shape) groundTruthImages.append(recentFrame) #else: # print("Sorry, unable to find file.") groundTruthImages = np.moveaxis(np.dstack(groundTruthImages), -1, 0) #print(groundTruthImages.shape) return groundTruthImages def evaluate(nowcasts): fileType = '.bin' # leadsteps = 6 # 6 for 1 hr prediction, 18 for 3hr prediction groundTruthPath = recentFramePath print(pred_times) groundTruthImgs = getGroundTruthImages(pred_times,leadsteps,groundTruthPath,height,width,fileType) maelist = [] farlist = [] podlist= [] csilist= [] thres =1.0 noOfPrecipitationImages = leadsteps j = 0 # using another index to skip 5min interval data from rainymotion for i in range(noOfPrecipitationImages): mae = MAE(groundTruthImgs[i],nowcasts[j]) far = FAR(groundTruthImgs[i],nowcasts[j], threshold=0.1) pod = POD(groundTruthImgs[i],nowcasts[j], threshold=0.1) csi = CSI(groundTruthImgs[i],nowcasts[j],thres) maelist.append(mae) farlist.append(far) podlist.append(pod) csilist.append(csi) j = j + 2 return csilist,maelist,farlist,podlist
_____no_output_____
MIT
examples/Dense24HrPredictionNew-3Hr.ipynb
DivyaSDV/pySINDy
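For reference, the categorical scores used in `evaluate` above (CSI, POD, FAR) can all be derived from a binary contingency table. Below is a small NumPy sketch using the standard definitions; rainymotion's own implementations may differ in details such as threshold or NaN handling:

```python
import numpy as np

def contingency_scores(obs, pred, threshold=0.1):
    # Binarize both fields at the rain/no-rain threshold.
    o = obs >= threshold
    p = pred >= threshold
    hits = np.sum(o & p)            # observed and predicted
    misses = np.sum(o & ~p)         # observed but not predicted
    false_alarms = np.sum(~o & p)   # predicted but not observed
    csi = hits / (hits + misses + false_alarms)   # critical success index
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    return csi, pod, far

obs = np.array([0.0, 0.5, 1.2, 0.0, 2.0])
pred = np.array([0.3, 0.4, 0.0, 0.0, 1.5])
print(contingency_scores(obs, pred))  # CSI=0.5, POD=2/3, FAR=1/3
```

A perfect forecast gives CSI = POD = 1 and FAR = 0, which is why the plots later track CSI and POD decreasing (and FAR increasing) with lead time.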
**2. 24 Hr Prediction**
startpredtime = '0110' #'1100' index = times.index(startpredtime) indexlimit = times.index('2250') # Since we have only 6 more ground truths available from this time print(index) print("Last prediction is at index ", indexlimit) csilist = [] maelist = [] podlist = [] farlist = [] pred_time = startpredtime while index<indexlimit:#len(times): print(times[index]) saveimages = 0 if (index==66): saveimages=1 intimes, pred_times = gettimes24hr(pred_time) nowcasts = doDenseNowcasting(pred_time,saveimages) csi,mae,far,pod = evaluate(nowcasts) csilist.append(csi) maelist.append(mae) podlist.append(pod) farlist.append(far) index = index+1 pred_time = times[index] # For debugging print(len(maelist)) print("\n\n") print(len(csilist)) print("\n\n") print(len(podlist)) print("\n\n") print(len(farlist))
78 78 78 78
MIT
examples/Dense24HrPredictionNew-3Hr.ipynb
DivyaSDV/pySINDy
**To save results in excel workbook**
import xlwt from xlwt import Workbook # Workbook is created wb = Workbook() def writeinexcelsheet(sheetname, wb, results): sheet1 = wb.add_sheet(sheetname) sheet1.write(0, 0, 'Pred.no.') sheet1.write(0, 1, 't (pred start time)') sheet1.write(0, 2, 't + 10') sheet1.write(0, 3, 't + 20') sheet1.write(0, 4, 't + 30') sheet1.write(0, 5, 't + 40') sheet1.write(0, 6, 't + 50') col = 0 rows = len(results) cols = len(results[0]) print(cols) for rowno in range(rows): sheet1.write(rowno+1,0,rowno+1) for col in range(cols): # print(rowno+1,col+1,results[rowno][col]) sheet1.write(rowno+1,col+1,results[rowno][col].astype('float64')) # sheet1.write(row, col, str(data)) # print(row,col,data) writeinexcelsheet('CSI',wb,csilist) writeinexcelsheet('MAE',wb,maelist) writeinexcelsheet('FAR',wb,farlist) writeinexcelsheet('POD',wb,podlist) excelpath = "/home/divya/divya/OneFullDayData_7TestCases_WNIMar5/24hroutputs/20190908/Dense/eval_results/" excelpath = excelpath + 'resultsDense.xls' wb.save(excelpath)
6 6 6 6
MIT
examples/Dense24HrPredictionNew-3Hr.ipynb
DivyaSDV/pySINDy
%load_ext rpy2.ipython %%writefile binom.stan data{ int X; int N; } parameters{ real<lower=0,upper=1> p; real q; } model{ X ~ binomial(N,p); p ~ beta(10,1); } %%R d <- list(X=28, N=50) %%R d %%R system("apt-get install -y libv8-dev") install.packages("V8") %%R install.packages("rstan") %%R library(rstan) %%R d <- list(X=28, N=50) stanmodel <- stan_model(file="binom.stan") %%R fit <- sampling(stanmodel, data=d)
_____no_output_____
MIT
MOOCs/MathCalture/binom.ipynb
KeisukeShimokawa/CarND-Advanced-Lane-Lines
Dynamic Typing Python is dynamically typed. This means that the type of a variable is simply the type of the object the variable name points to (references). The variable itself has no associated type.
a = "hello" type(a) a = 10 type(a) a = lambda x: x**2 a(2) type(a)
_____no_output_____
Apache-2.0
python-tuts/0-beginner/2-Variables-Memory/04 - Dynamic vs Static Typing.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
**M**odel **U**ncertainty-based Data **Augment**ation (muAugment)&nbsp;| Mariana Alves | https://supaerodatascience.github.io/deep-learning/ Preliminary work for colab **This notebook was written in google colab, so it is recommended that you run it in colab as well.** Before starting to work on the notebook, make sure you `change the Runtime type` to **GPU**, in the `Tool` drop-down menu. In colab, please first execute the following cells to retrieve the GitHub repository content.
!git clone https://github.com/Mariana-Andrade-Alves/muAugment/
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Preliminary Imports
# !pip install matplotlib # !pip install torch torchvision import torch import torchvision import numpy as np %matplotlib inline import matplotlib.pyplot as plt from torch import nn, optim import torch.nn.functional as F from torch.utils.data import DataLoader, Dataset from torchvision import datasets, transforms from torchvision.datasets import FashionMNIST device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device)
cpu
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Overview of Data Augmentation Modern machine learning models, such as deep neural networks, may have billions of parameters and, consequently, require massive labeled training datasets, which are often not available. In order to avoid the **problem of data scarcity** in such models, data augmentation has become the standard technique used in nearly every state-of-the-art model in applications such as **image** and **text classification**. > **Data augmentation refers to the technique of artificially expanding labelled training datasets by generating new data through transformation functions.** Data augmentation schemes often rely on the composition of a set of simple transformation functions (TFs) such as rotation and flip. "Label-invariant transformations." [torchvision.transforms docs](https://pytorch.org/vision/stable/transforms.html#transforms-on-pil-image-only) As was briefly discussed in the [computer vision class](https://github.com/SupaeroDataScience/deep-learning/blob/main/vision/1_hands_on.ipynb), when chosen carefully, data augmentation schemes tuned by human experts can improve model performance. However, such heuristic strategies in practice can cause large variances in end model performance, and may not produce the parameterizations and compositions needed for state-of-the-art models. In addition, they are extremely laborious. Automated Data Augmentation Schemes Instead of performing manual search, automated data augmentation approaches hold promise to search for more powerful parameterizations and compositions of transformations. The biggest difficulty with automating data augmentation is how to search over the space of transformations. This can be prohibitively expensive due to the large number of transformation functions in the search space.
> **How can we design algorithms that explore the space of transformation functions efficiently and effectively, and find augmentation strategies that can outperform human-designed heuristics?** The folklore wisdom behind data augmentation is that adding more labeled data improves generalization, i.e. the performance of the trained model on unseen test data. However, even for simpler models, **it is not well-understood how training on augmented data affects the learning process, the parameters, and the decision surface of the resulting model**. Extra information on the Adversarial AutoAugment Scheme previously discussed in class (click to expand) One of the current state-of-the-art algorithms in terms of performance is [Adversarial AutoAugment](https://openreview.net/pdf?id=ByxdUySKvS), which makes use of [GANs](https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf), already presented in a previous [class](https://github.com/SupaeroDataScience/deep-learning/tree/main/GAN), to generate new data, rather than using the traditional heuristic transformations presented above. Li et al. "Adversarial AutoAugment training framework (Zhang et al. 2019) is formulated as an adversarial min-max game." [Automating the Art of Data Augmentation.](https://hazyresearch.stanford.edu/blog/2020-02-26-data-augmentation-part2) 2020. Although proven effective, this technique is still computationally expensive. Additionally, despite its rapid progress, this technique does not allow for a theoretical understanding of the benefits of a given transformation. How should we think of the effects of applying a transformation?
Intuition for Linear Transformations Suppose we are given $n$ training data points $x_1,...,x_n \in \mathbb{R}^p$ as $X \in \mathbb{R}^{n\times p}$ with labels $Y\in \mathbb{R}^n$. > Suppose that the labels $Y$ obey the true linear model under ground-truth parameters $\beta \in \mathbb{R}^p$, $$Y = X \beta + \epsilon,$$ where $\epsilon \in \mathbb{R}^n$ denotes i.i.d. random noise with mean zero and variance $\sigma^2$. Importantly, we assume that $p>n$, hence the span of the training data does not contain the entire space of $\mathbb{R}^p$. Let's suppose we have an estimator $\hat{\beta}$ for the linear model $\beta \in \mathbb{R}^p$. The error of that given estimator is > $$e(\hat{\beta}) = \underbrace{\lVert \underset{\epsilon}{\mathbb{E}}[\hat{\beta}]-\beta\rVert^2}_{bias} + \underbrace{\underset{\epsilon}{\mathbb{E}}\lVert\hat{\beta} - \underset{\epsilon}{\mathbb{E}}[\hat{\beta}] \rVert^2}_{variance}$$ where the bias part, intuitively, measures the intrinsic error of the model after taking into account the randomness which is present in $\hat{\beta}$. Label-Invariant Transformations For a matrix $F \in \mathbb{R}^{p\times p}$, we say that $F$ is a label-invariant transformation over $\chi \subseteq \mathbb{R}^p$ for $\beta \in \mathbb{R}^p$ if $$x^\top\beta = (Fx)^\top\beta, \quad \text{ for any } x \in \chi.$$ > In simpler words, a label-invariant transformation will not alter the label $y$ of a given data point $x$. **But what is the effect of such a transformation?** Given a training data point $(x,y)$, let $(x^{aug},y^{aug})$ denote the augmented data where $y^{aug} = y$ and $x^{aug} = Fx$. **Note**: In order to be able to present the next result, let's consider adding the augmented data point $(z,y^{aug})$, where $z = P^\bot_X Fx$, meaning $z$ is not $x^{aug}$, but the projection of $x^{aug}$ onto $P^\bot_X = Id_p - P_X$, which denotes the projection operator orthogonal to $P_X$, the projection matrix onto the row space of $X$.
In such a case, $y^{aug} = y - Diag[(X^\top)^†Fx]Y$. > An intuition to understand why we chose to use the projection $P^\bot_X x^{aug}$ instead of the augmented data is to think about the idea of "adding new information". Remember: we assume that $p>n$, hence the subspace over which we can make an accurate estimation does not contain the entire space of $\mathbb{R}^p$. When we add a data point belonging to a space orthogonal to the one we know, we expand the subspace over which we can make an accurate estimation, by adding a direction corresponding to $P^\bot_X x^{aug}$. Suppose the estimator $\hat{\beta}$ used to infer labels is a ridge estimator with a penalty parameter $\lambda$, given by $$\hat{\beta}(X,Y) = (X^\top X + n \lambda Id)^{-1}X^\top Y,$$ where, just to recap, $X$ denotes the training data and $Y$ the training labels. Considering $e(\hat{\beta})$ and $e(\hat{\beta}^F)$ as the errors of the estimator before and after adding the augmented data point $(z,y^{aug})$ to $(X,Y)$, it is possible to demonstrate that $$ 0 \leq e(\hat{\beta}) - e(\hat{\beta}^F) - (2+o(1))\dfrac{\langle z,\beta \rangle^2}{\lambda n} \leq \dfrac{poly(\gamma/\lambda)}{n^2}, $$ where $poly(\gamma/\lambda)$ denotes a polynomial of $\gamma/\lambda$. This powerful result, which we will not explain in class but can be found in the [muAugment paper](https://hazyresearch.stanford.edu/blog/2020-02-26-data-augmentation-part2), shows that * **the reduction of the estimation error**, $e(\hat{\beta}) - e(\hat{\beta}^F)$, **scales with the correlation between the new signal and the true model**, $\langle z,\beta \rangle^2$. In other words, by adding $P^\bot_X Fx$, we reduce the **estimation error** of the ridge estimator at a rate proportional to $\langle z,\beta \rangle^2$. Finally, we know that the larger the correlation $\langle z,\beta \rangle^2$, the higher the loss of $(x^{aug},y^{aug})$ would be under $\hat{\beta}$.
With this information, we can extrapolate the following: * **the reduction of the estimation error**, $e(\hat{\beta}) - e(\hat{\beta}^F)$, **scales with the loss of $(x^{aug},y^{aug})$ under $\hat{\beta}$**, $l_{\hat{\beta}}(x^{aug},y^{aug})$. > **In an intuitive sense, an augmented data point with a small loss means the model has already learned how to predict that type of data well, so if trained on it further, the model will only pick up incidental, possibly spurious patterns — overfitting. Conversely, an augmented data point with a large loss means the model has not learned the general mapping between the type of data and its target yet, so we need to train more on those kinds of data points.** Additional results regarding **label-mixing transformations** were obtained in the [muAugment paper](https://hazyresearch.stanford.edu/blog/2020-02-26-data-augmentation-part2). These results will not be discussed in the class. Uncertainty-based Sampling Scheme In order to take advantage of the last result presented, the **muAugment** algorithm was developed. The algorithm is as follows: * In a first step, for each data point, **C** compositions of **L** linear transformations are randomly sampled and fed to the learning model (in this example a neural network). * In a second step, the **S** transformed samples with the highest losses are picked for training the model and a backpropagation is performed using those samples. > **The intuition behind the sampling scheme is that these transformed samples that have the largest losses should also provide the most information.** **The model learns more generalizable patterns, because the algorithm assures extra fitting on the "hard" augmentations while skipping the easy ones.** Senwu. "Uncertainty-based random Sampling Scheme for Data Augmentation. Each transformation function is randomly sampled from a pre-defined set of operations." [Dauphin](https://github.com/senwu/dauphin) 2020.
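The selection step of the two-step procedure above can be sketched in a few lines of PyTorch. This is a simplified illustration of the idea (compute the loss of each of the C augmented versions of a batch and keep the S hardest), not the reference muAugment implementation:

```python
import torch

def select_hardest(model, loss_fn, augmented_batches, target, S):
    """Keep the S augmented versions of a batch on which the current
    model incurs the largest loss (uncertainty-based selection sketch)."""
    with torch.no_grad():  # selection itself needs no gradients
        losses = torch.stack([loss_fn(model(x), target)
                              for x in augmented_batches])
    top = torch.topk(losses, S).indices
    return [augmented_batches[i] for i in top]

# Toy check with a linear model and C=4 random "augmentations".
torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
loss_fn = torch.nn.CrossEntropyLoss()
augmented = [torch.randn(5, 8) for _ in range(4)]
target = torch.randint(0, 3, (5,))
hard = select_hardest(model, loss_fn, augmented, target, S=2)
print(len(hard))  # the 2 hardest augmentations, kept for backprop
```

In the full algorithm the model would then run forward and backward passes on these selected samples only, discarding the easy augmentations.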
Comparison to Adversarial Autoaugment (click to expand)

The idea behind this sampling scheme is conceptually similar to [Adversarial Autoaugment](https://openreview.net/pdf?id=ByxdUySKvS). However, while in Adversarial Autoaugment an additional adversarial network is used to generate augmented samples with large losses, in the current case the training network itself is used to generate augmented samples.

Our goal today is to implement the **muAugment** algorithm and evaluate its performance.

The Dataset: FashionMNIST

The dataset we will use for this application is the FashionMNIST dataset. We'll download this dataset and make batching data loaders.
batch_size = 4
n_images = 10 if (batch_size > 10) else batch_size

# data must be normalized between -1 and 1
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

full_trainset = FashionMNIST(root='../data', train=True, download=True, transform=transform)

trainset, full_validset = torch.utils.data.random_split(full_trainset, (10000, 50000)) # 10000 images for the training set
validset, _ = torch.utils.data.random_split(full_validset, (1000, 49000)) # 1000 images for the validation set

trainloader = DataLoader(trainset, batch_size=64, shuffle=True, num_workers=2)
validloader = DataLoader(validset, batch_size=64, shuffle=True, num_workers=2)

testset = FashionMNIST(root='../data', train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=64, shuffle=True)
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
We can verify the normalization of our data.
images, labels = next(iter(trainloader))
images.min(), images.max()
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
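As a quick check of the arithmetic behind `Normalize((0.5,), (0.5,))` (a plain-Python sketch, not notebook code): each channel value is mapped as `(x - mean) / std`, so the `ToTensor()` range `[0, 1]` becomes `[-1, 1]`:

```python
values = [0.0, 0.5, 1.0]                      # ToTensor() output range endpoints and midpoint
normalized = [(v - 0.5) / 0.5 for v in values]  # what Normalize((0.5,), (0.5,)) computes per channel
assert normalized == [-1.0, 0.0, 1.0]
```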
Let's look at some example images from the FashionMNIST set.
# get the first batch of images and labels
labels_text = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

plt.figure(figsize=(n_images, 4))
for i in range(n_images):
    l = labels[i].numpy()
    plt.subplot(2, n_images // 2, i + 1)  # integer division: subplot counts must be ints
    plt.title('%d: %s' % (l, labels_text[l]))
    plt.imshow(images[i].numpy()[0], cmap='Greys')
    plt.axis('off')
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
The Model

As mentioned above, the advantage of the **muAugment** algorithm is that it uses the learning model to automate data augmentation. The goal is to generate data which will improve our training model.

In today's example, we wish to learn to classify images into 10 possible labels:
labels_text
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
In order to do this, the training model we will use is a convolutional neural network, presented during a [previous class](https://github.com/SupaeroDataScience/deep-learning/blob/main/deep/PyTorch%20Ignite.ipynb).
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.convlayer1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.convlayer2 = nn.Sequential(
            nn.Conv2d(32, 64, 3),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.fc1 = nn.Linear(64*6*6, 600)
        self.drop = nn.Dropout2d(0.25)
        self.fc2 = nn.Linear(600, 120)
        self.fc3 = nn.Linear(120, 10)

    def forward(self, x):
        x = self.convlayer1(x)
        x = self.convlayer2(x)
        x = x.view(-1, 64*6*6)
        x = self.fc1(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
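Why `64*6*6` in `fc1`? The spatial size of a FashionMNIST image (28x28) can be traced through the conv stack with the standard output-size formula. A small sketch (the helper name is ours, not part of the notebook):

```python
def conv_out(size, kernel, padding=0, stride=1):
    """Standard conv/pool output-size formula: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(28, 3, padding=1)   # 28: convlayer1 conv keeps the size ('same' padding)
s = s // 2                       # 14: first 2x2 max-pool
s = conv_out(s, 3)               # 12: convlayer2 conv, no padding
s = s // 2                       # 6:  second 2x2 max-pool
assert s == 6                    # matches the 64 * 6 * 6 flatten feeding fc1
```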
Training

In order to train the model, we must first create it and define the loss function and optimizer.
# creating model for original data
model_original = CNN()

# creating model for augmented data
model = CNN()

# moving models to gpu if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
model_original.to(device)
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
By using the parameter `weight_decay` in our optimizer we are applying a similar penalty parameter to the one in ridge regression.
lr = 0.001 # learning rate

# defining optimizer and loss for original model
optimizer_original = torch.optim.SGD(model_original.parameters(), lr=lr, weight_decay=0.0001, momentum=0.9)
criterion_original = nn.CrossEntropyLoss()

# defining optimizer and loss for augmented model
optimizer = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=0.0001, momentum=0.9)
criterion = nn.CrossEntropyLoss()
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
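A quick check of that claim (an illustrative sketch, not notebook code; plain SGD without momentum is used for clarity): `weight_decay=wd` adds `wd * w` to the gradient, which is exactly the gradient of an L2 (ridge-style) penalty `wd/2 * ||w||^2`:

```python
import torch

w = torch.tensor([1.0, -2.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1, weight_decay=0.01)

loss = (w ** 2).sum()        # stand-in data loss; its gradient is 2 * w
loss.backward()

w_before = w.detach().clone()
g = w.grad.detach().clone()
opt.step()

# SGD with weight_decay performs: w <- w - lr * (g + wd * w)
manual = w_before - 0.1 * (g + 0.01 * w_before)
assert torch.allclose(w.detach(), manual)
```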
In a typical training phase, each batch of images would be treated in the following loop:

```python
for epoch in range(max_epochs):
    for batch in trainloader:
        # zero the parameter gradients
        optimizer.zero_grad()
        # get inputs and labels from batch
        inputs, labels = batch
        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```

In order to perform data augmentation with pre-defined transforms, it would suffice to declare the transforms while generating the data loader, and the loop would remain unchanged. However, because we don't wish to train the model without evaluating the performance of each transform, this loop is going to change.

The Random Sampling Scheme (hands-on exercises)

As mentioned above, the goal is to implement the following algorithm:

Wu et al. "Uncertainty-based random Sampling Algorithm." [On the Generalization Effects of Linear Transformations in Data Augmentation](https://arxiv.org/pdf/2005.00695.pdf) 2020.

Today, to simplify our work, we will not use default transformations. We will also only consider label-invariant transformations.

In our implementation, let's consider the following required arguments:

* **L** (int): Number of **linear** transformations uniformly sampled for each composition
* **C** (int): Number of **compositions** placed on each image
* **S** (int): Number of **selected** compositions for each image

> **In a first exercise, let's attempt to code lines 4 and 5 of the algorithm.
Complete the function `compute_composed_data`, which takes as input a `transform_list` similar to the one presented below, the arguments `L` and `C` described above, and the images `xb` and labels `yb` of a batch, and returns 2 tensors `C_images` and `C_targets` which contain the images xb$^{\mathbf{aug}}$ and labels yb$^{\mathbf{aug}}$ of the augmented data.**

```python
transform_list = [transforms.RandomAutocontrast(p=p),
                  transforms.ColorJitter(brightness=MAGN/30),
                  transforms.ColorJitter(contrast=MAGN/30),
                  transforms.RandomInvert(p=p),
                  transforms.RandomRotation(degrees=MAGN*3),
                  transforms.RandomAdjustSharpness(0.18*MAGN+0.1, p=p),
                  transforms.RandomAffine(degrees=0, shear=MAGN/30),
                  transforms.RandomSolarize(MAGN*8, p=p),
                  transforms.RandomAffine(degrees=(0,0), translate=(MAGN/30,0), shear=(0,0)),
                  transforms.RandomAffine(degrees=(0,0), translate=(0,MAGN/30), shear=(0,0)),
                 ]
```
# the load command only works on jupyter notebook
# %load solutions/compute_composed_data.py

def compute_composed_data(transform_list, L, C, xb, yb):
    BS, N_CHANNELS, HEIGHT, WIDTH = xb.shape

    C_images = torch.zeros(C, BS, N_CHANNELS, HEIGHT, WIDTH, device=device)
    C_targets = torch.zeros(C, BS, device=device, dtype=torch.long)

    for c in range(C):
        # create a list of L linear transforms randomly sampled from the transform_list

        # create a composition of transforms from the list sampled above. Use nn.Sequential
        # instead of transforms.Compose in order to script the transformations

        # apply the composition to the original images xb

        # update tensors C_images and C_targets with the generated compositions

    return C_images, C_targets

# the cat command works on google colab
#%cat muAugment/solutions/compute_composed_data.py
cat: muAugment/solutions/compute_composed_data.py: No such file or directory
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
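One possible completion of this exercise (a sketch, not the official solution file: it assumes sampling the `L` transforms without replacement via `random.sample`, copies targets unchanged since all transforms here are label-invariant, and adds a `device` keyword that the notebook instead takes from the surrounding scope):

```python
import random

import torch
import torch.nn as nn

def compute_composed_data(transform_list, L, C, xb, yb, device="cpu"):
    BS, N_CHANNELS, HEIGHT, WIDTH = xb.shape

    C_images = torch.zeros(C, BS, N_CHANNELS, HEIGHT, WIDTH, device=device)
    C_targets = torch.zeros(C, BS, device=device, dtype=torch.long)

    for c in range(C):
        # L linear transforms randomly sampled from the list
        sampled = random.sample(transform_list, L)
        # nn.Sequential (rather than transforms.Compose) so the composition is scriptable
        composition = nn.Sequential(*sampled)
        # label-invariant transforms: images change, targets are copied as-is
        C_images[c] = composition(xb)
        C_targets[c] = yb

    return C_images, C_targets

# toy usage with identity modules, so this sketch has no torchvision dependency:
xb = torch.randn(8, 1, 28, 28)
yb = torch.randint(0, 10, (8,))
C_images, C_targets = compute_composed_data([nn.Identity() for _ in range(5)],
                                            L=3, C=4, xb=xb, yb=yb)
```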
Now that we have implemented the data augmentation part, we can attempt to code the content of the main loop of the algorithm.

**Remember**: the idea is to feed the transformed batches to the model without updating it and compare the losses obtained for each batch. Since you do not want to call `loss.backward()`, you can disable gradient calculation in your function by using the `@torch.no_grad()` decorator.

> **In a second exercise, complete the function `compute_selected_data` that takes as inputs the learning `model`, the `loss` function, the tensors `C_images` and `C_targets` and the argument `S`, and returns the selected transformed images (`S_images`) and labels (`S_targets`).**
# the load command only works on jupyter notebook
# %load solutions/compute_selected_data.py

# disable gradient calculation
def compute_selected_data(model, loss, C_images, C_targets, S):
    C, BS, N_CHANNELS, HEIGHT, WIDTH = C_images.shape

    # create a list of predictions 'pred' by applying the model to the augmented batches contained in C_images

    # create a list of losses by applying the loss function to the predictions and labels C_targets

    # convert the list to a loss tensor 'loss_tensor' through the function torch.stack

    # select the S indices 'S_idxs' of the loss_tensor with the highest value. You may use the function torch.topk

    # select the S images 'S_images' from C_images with the highest losses

    # convert the tensor 'S_images' so that it passes from shape [S, BS, N_CHANNELS, HEIGHT, WIDTH] to shape
    # [S*BS, N_CHANNELS, HEIGHT, WIDTH]. You may use the function torch.view

    # select the S labels 'S_targets' from C_targets corresponding to the highest losses

    # convert the tensor 'S_targets' so that it passes from shape [S, BS] to shape
    # [S*BS]. You may use the function torch.view

    return S_images, S_targets

# the cat command works on google colab
#%cat muAugment/solutions/compute_selected_data.py
cat: muAugment/solutions/compute_selected_data.py: No such file or directory
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
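Again, one possible completion (a sketch following the comments in the skeleton: one scalar loss per composition, then `torch.topk` over compositions; the toy model at the end is ours, for illustration only):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def compute_selected_data(model, loss, C_images, C_targets, S):
    C, BS, N_CHANNELS, HEIGHT, WIDTH = C_images.shape

    # one prediction batch per composition
    preds = [model(C_images[c]) for c in range(C)]
    # one scalar loss per composition, stacked into a tensor of shape (C,)
    loss_tensor = torch.stack([loss(preds[c], C_targets[c]) for c in range(C)])

    # indices of the S compositions with the highest losses
    S_idxs = torch.topk(loss_tensor, S).indices

    # flatten the selected compositions into a single training batch
    S_images = C_images[S_idxs].view(S * BS, N_CHANNELS, HEIGHT, WIDTH)
    S_targets = C_targets[S_idxs].view(S * BS)
    return S_images, S_targets

# toy usage with a minimal linear classifier
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
C_images = torch.randn(4, 8, 1, 28, 28)
C_targets = torch.randint(0, 10, (4, 8))
S_images, S_targets = compute_selected_data(model, criterion, C_images, C_targets, S=2)
```

In the training loop, `S_images` and `S_targets` would then take the place of `inputs` and `labels` for the backward pass.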
We have created two functions which give us the augmented data we wish to use in the training phase of our model.

Back to Training (hands-on exercise)

Let's consider the following arguments for the algorithm:
# algorithm arguments
L = 3 # number of linear transformations sampled for each composition
C = 4 # number of compositions placed on each image
S = 1 # number of selected compositions for each image
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Let's consider the following list of linear transformations, similar to the ones used in the original paper:
MAGN = 4 # (int) Magnitude of augmentation applied. Ranges from [0, 10] with 10 being the max magnitude.

# function returning the list of linear transformations
def transform_list(MAGN, p):
    return [transforms.RandomAutocontrast(p=p),
            transforms.ColorJitter(brightness=MAGN/30),
            transforms.ColorJitter(contrast=MAGN/30),
            transforms.RandomInvert(p=p),
            transforms.RandomRotation(degrees=MAGN*3),
            transforms.RandomAdjustSharpness(0.18*MAGN+0.1, p=p),
            transforms.RandomAffine(degrees=0, shear=MAGN/30),
            transforms.RandomSolarize(MAGN, p=p),
            transforms.RandomAffine(degrees=(0,0), translate=(MAGN/30,0), shear=(0,0)),
            transforms.RandomAffine(degrees=(0,0), translate=(0,MAGN/30), shear=(0,0)),
           ]
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
The following three code boxes were adapted from the tutorial on pytorch done in [class](https://github.com/SupaeroDataScience/deep-learning/blob/main/deep/Deep%20Learning.ipynb). In order to compare validation and training losses, we will calculate the validation losses and accuracy at each epoch.
def validation(model, criterion):
    correct_pred = 0
    total_pred = 0
    valid_loss = 0
    with torch.no_grad():
        for data in validloader:
            images, labels = data
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            loss = criterion(outputs, labels)
            valid_loss += loss.item()
            # calculate predictions
            predictions = []
            for i in range(outputs.shape[0]):
                ps = torch.exp(outputs[i])
                predictions.append(np.argmax(ps))
            # collect the correct predictions
            for label, prediction in zip(labels, predictions):
                if label == prediction:
                    correct_pred += 1
                total_pred += 1
    accuracy = 100 * (correct_pred / total_pred)
    return valid_loss, accuracy


def plot_train_val(train, valid, title, label1='Training', label2='Validation'):
    fig, ax1 = plt.subplots()

    color = 'tab:red'
    ax1.set_ylabel(label1, color=color)
    ax1.plot(train, color=color)

    ax2 = ax1.twinx()
    color = 'tab:blue'
    ax2.set_ylabel(label2, color=color)
    ax2.plot(valid, color=color)

    fig.tight_layout()
    plt.title(title)
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
In order to avoid overfitting, we will implement early stopping.
class EarlyStopping:
    def __init__(self, patience=5, delta=0):
        self.patience = patience
        self.counter = 0
        self.best_score = None
        self.delta = delta
        self.early_stop = False

    def step(self, val_loss):
        score = -val_loss
        if self.best_score is None:
            self.best_score = score
        elif score < self.best_score + self.delta:
            self.counter += 1
            print('EarlyStopping counter: %d / %d' % (self.counter, self.patience))
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            self.best_score = score
            self.counter = 0
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
It is time to implement the algorithm in the training loop!

> **In the final exercise, take the almost complete code of the training loop presented below (adapted from the [pytorch class](https://github.com/SupaeroDataScience/deep-learning/blob/main/deep/Deep%20Learning.ipynb)) and change it so that the algorithm is implemented.**
# the load command only works on jupyter notebook
# %load solutions/train.py

def train(model, criterion, optimizer, earlystopping=True, max_epochs=30, patience=2, augment=False):
    train_history = []
    valid_history = []
    accuracy_history = []
    estop = EarlyStopping(patience=patience)

    for epoch in range(max_epochs):
        train_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            if augment:
                # generate transform list
                p = np.random.random() # probability of each transformation occurring
                transforms = transform_list(MAGN, p)
                # get the inputs; data is a list of [inputs, labels]
                xb, yb = data
                xb = xb.to(device)
                yb = yb.to(device)
                # generate the tensors 'C_images' and 'C_targets' <---- to complete
                # generate the augmented data = [inputs, labels] <---- to complete
            else:
                # get the inputs; data is a list of [inputs, labels]
                inputs, labels = data
                inputs = inputs.to(device)
                labels = labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()

        valid_loss, accuracy = validation(model, criterion)
        train_history.append(train_loss)
        valid_history.append(valid_loss)
        accuracy_history.append(accuracy)
        print('Epoch %02d: train loss %0.5f, validation loss %0.5f, accuracy %3.1f ' % (epoch, train_loss, valid_loss, accuracy))
        estop.step(valid_loss)
        if earlystopping and estop.early_stop:
            break

    return train_history, valid_history, accuracy_history

# the cat command works on google colab
#%cat muAugment/solutions/train.py
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
We did it! Let's train our models: one without and one with augmented data.
max_epochs = 30
patience = 5 # early stopping parameter

print("\n Training for the original dataset...\n")
train_history_original, valid_history_original, accuracy_history_original = train(model_original, criterion_original, optimizer_original, max_epochs=max_epochs, patience=patience)

print("\n Training for the augmented dataset...\n")
train_history, valid_history, accuracy_history = train(model, criterion, optimizer, max_epochs=max_epochs, patience=patience, augment=True)
Training for the original dataset...

Epoch 00: train loss 53.41990, validation loss 5.85852, accuracy 86.0
Epoch 01: train loss 49.32130, validation loss 5.79052, accuracy 86.7
Epoch 02: train loss 47.43125, validation loss 5.62790, accuracy 87.0
Epoch 03: train loss 44.78354, validation loss 5.58822, accuracy 87.1
Epoch 04: train loss 43.11249, validation loss 5.10587, accuracy 88.8
Epoch 05: train loss 40.08070, validation loss 5.23783, accuracy 88.3
EarlyStopping counter: 1 / 3
Epoch 06: train loss 38.33838, validation loss 5.10087, accuracy 88.8
Epoch 07: train loss 36.10283, validation loss 4.92744, accuracy 88.7
Epoch 08: train loss 33.87066, validation loss 5.32297, accuracy 88.6
EarlyStopping counter: 1 / 3
Epoch 09: train loss 32.46742, validation loss 5.11203, accuracy 88.2
EarlyStopping counter: 2 / 3
Epoch 10: train loss 31.42488, validation loss 5.15724, accuracy 88.1
EarlyStopping counter: 3 / 3

Training for the augmented dataset...
Epoch 00: train loss 255.33405, validation loss 14.73030, accuracy 66.7
Epoch 01: train loss 175.41405, validation loss 12.71785, accuracy 70.3
Epoch 02: train loss 152.23066, validation loss 10.88950, accuracy 74.7
Epoch 03: train loss 145.02098, validation loss 10.29128, accuracy 75.0
Epoch 04: train loss 140.47682, validation loss 10.38592, accuracy 74.2
EarlyStopping counter: 1 / 3
Epoch 05: train loss 133.23461, validation loss 9.41392, accuracy 78.9
Epoch 06: train loss 128.67845, validation loss 9.13174, accuracy 77.9
Epoch 07: train loss 127.87093, validation loss 8.72783, accuracy 79.9
Epoch 08: train loss 119.60718, validation loss 9.12756, accuracy 77.1
EarlyStopping counter: 1 / 3
Epoch 09: train loss 119.84167, validation loss 8.44272, accuracy 79.7
Epoch 10: train loss 117.55080, validation loss 8.24715, accuracy 80.2
Epoch 11: train loss 112.32505, validation loss 8.38380, accuracy 81.6
EarlyStopping counter: 1 / 3
Epoch 12: train loss 106.44310, validation loss 7.59079, accuracy 83.3
Epoch 13: train loss 108.91942, validation loss 7.41854, accuracy 82.4
Epoch 14: train loss 105.01009, validation loss 7.18891, accuracy 83.5
Epoch 15: train loss 104.73178, validation loss 6.81569, accuracy 84.8
Epoch 16: train loss 105.86425, validation loss 6.89502, accuracy 83.2
EarlyStopping counter: 1 / 3
Epoch 17: train loss 102.29675, validation loss 7.02991, accuracy 84.4
EarlyStopping counter: 2 / 3
Epoch 18: train loss 102.68404, validation loss 7.42579, accuracy 83.7
EarlyStopping counter: 3 / 3
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Plotting the Training and Validation Loss

Now that we trained both models, we can compare how the training and validation losses evolve in both cases.
plot_train_val(train_history_original, valid_history_original, "Original Data")
plot_train_val(train_history, valid_history, "Augmented Data")
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Although it is not always the case, most of the time you can see that the training loss tends to decrease more slowly with the augmented data, and sometimes even increases. This is consistent with the fact that the augmented data is more difficult to predict. However, because the model trained on augmented data does not excessively fit the data points it already knows, it also suffers less from overfitting.

We can also compare accuracy between models.
plot_train_val(accuracy_history, accuracy_history_original,"Accuracy",label1='Augmented',label2='Original')
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Verifying models with Testing Dataset

Finally, let's check the results by applying our model to the test dataset.
# put model in evaluation mode
model.eval()

# moving model to cpu for inference
model.to('cpu')

# creating arrays to save predictions
y_true = []
y_pred = []
images_ = []

# disable all gradient bookkeeping
with torch.no_grad():
    for data in iter(testloader):
        images, labels = data
        outputs = model(images)
        for i in range(outputs.shape[0]):
            images_.append(images[i].unsqueeze(0))
            ps = torch.exp(outputs[i])
            y_pred.append(np.argmax(ps))
            y_true.append(labels[i].item())
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Firstly, let's examine the confusion matrix.
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

print("Confusion matrix")
cm = confusion_matrix(y_true, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels_text)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(ax=ax)
plt.show()
Confusion matrix
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
We can also plot some of the results of the test dataset.
# plotting the results
fig = plt.figure(figsize=(n_images + 5, 4))
for i in range(n_images):
    ax = fig.add_subplot(2, n_images // 2, i + 1, xticks=[], yticks=[])  # integer division for the subplot grid
    ax.imshow(images_[i].resize_(1, 28, 28).numpy().squeeze())
    ax.set_title("{} ({})".format(labels_text[y_pred[i]], labels_text[y_true[i]]),
                 color=("green" if y_pred[i] == y_true[i] else "red"))
_____no_output_____
CC0-1.0
muAugment.ipynb
Mariana-Andrade-Alves/muAugment
Adversarial Variational Optimization: PYTHIA Tuning

In this notebook Adversarial Variational Optimization (https://arxiv.org/abs/1707.07113) is applied to tuning the parameters of a simplistic detector.

**Note: this notebook takes quite a long time to execute. It is recommended to run all cells at the beginning.**

**Please, don't interrupt the notebook while sampling from PythiaMill. Otherwise it might get stuck on the next attempt to sample from it. If this happens, please restart the notebook.**
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%matplotlib inline

import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm_notebook

import numpy as np

### don't forget about others!
import keras
import tensorflow as tf

gpu_options = tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=0.2)
tf_session = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
keras.backend.tensorflow_backend.set_session(tf_session)
Using TensorFlow backend.
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
Generators

PythiaMill is a Python binding to the Pythia generator that can run in multiple threads (processes).

For more details, please visit https://github.com/maxim-borisyak/pythia-mill
import pythiamill as pm

SEED = 123
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
Note about the change of problem

The reason the detector parameters (instead of Pythia parameters) are the target for the tune is a purely technical one: on each step AVO requires samples from multiple configurations of generator + detector. However, Pythia requires about half a second to be reconfigured, which induces a tremendous overhead.

By contrast, this simplistic detector is designed to accept its parameters as function arguments (effectively neglecting any overhead).

The detector emulates a $32 \times 32$ spherical uniform grid in `pseudorapidity` ($\eta$)-`angle in transverse plane` ($\phi$) covering $(\eta, \phi) \in [0, 5] \times [0, 2 \pi]$.

The detector is parametrized by its offset along the $z$-axis relative to the beam crossing point. Zero offset means that the center of the sphere coincides with the collision point.
### ground truth offset, unknown in real-world problems.
TRUE_OFFSET = 1

options = [
    ### telling pythia to be quiet.
    'Print:quiet = on',
    'Init:showProcesses = off',
    'Init:showMultipartonInteractions = off',
    'Init:showChangedSettings = off',
    'Init:showChangedParticleData = off',
    'Next:numberCount=0',
    'Next:numberShowInfo=0',
    'Next:numberShowEvent=0',
    'Stat:showProcessLevel=off',
    'Stat:showErrors=off',

    ### setting default parameters to Monash values
    ### all options are taken from https://arxiv.org/abs/1610.08328
    "Tune:ee = 7",
    "Beams:idA = 11",
    "Beams:idB = -11",
    "Beams:eCM = 91.2",
    "WeakSingleBoson:ffbar2gmZ = on",
    "23:onMode = off",
    "23:onIfMatch = 1 -1",
    "23:onIfMatch = 2 -2",
    "23:onIfMatch = 3 -3",
    "23:onIfMatch = 4 -4",
    "23:onIfMatch = 5 -5",
]

### defining the detector
detector = pm.utils.SphericalTracker(
    ### with this option the detector measures total energy
    ### of the particles traversing each pixel.
    is_binary=False,

    ### detector covers [0, 5] pseudo-rapidity range
    max_pseudorapidity=5.0,
    pseudorapidity_steps=32,
    phi_steps=32,

    ### 1 layer with radius 10 mm.
    n_layers=1, R_min=10.0, R_max=10.0,
)

mill = pm.ParametrizedPythiaMill(
    detector, options,
    ### please, don't use a number of workers higher than 4.
    batch_size=8, n_workers=4,
    seed=SEED
)

def get_data(mill, detector_configurations, show_progress=False):
    """
    Utility function to obtain data for a particular set of configurations.

    :param mill: instance of Pythia Mill to sample from.
    :param detector_configurations: list of configurations;
        each configuration should be an array of detector parameters.
    :param show_progress: if True, shows progress via the `tqdm` package.

    :return:
        - parameters: array of shape `<number of samples> x <parameters dim>`, parameters for each sample;
        - samples: array of shape `<number of samples> x 32 x 32 x 1`, sampled events.
    """
    try:
        ### sending requests to the queue
        for args in detector_configurations:
            mill.request(*args)

        ### retrieving results
        data = [
            mill.retrieve()
            for _ in (
                (lambda x: tqdm_notebook(x, postfix='data gen', leave=False))
                if show_progress else
                (lambda x: x)
            )(range(len(detector_configurations)))
        ]

        samples = np.vstack([
            samples for params, samples in data
        ])

        params = np.vstack([
            np.array([params] * samples.shape[0], dtype='float32')
            for params, samples in data
        ])

        return params, samples.reshape(-1, 32, 32, 1)
    finally:
        while mill.n_requests > 0:
            mill.retrieve()

### Generating training samples with ground truth parameters.
### For a real-world problem these arrays would correspond to real data.
_, X_true_train = get_data(mill, detector_configurations=[(TRUE_OFFSET, )] * 2 ** 12, show_progress=True)
_, X_true_val = get_data(mill, detector_configurations=[(TRUE_OFFSET, )] * 2 ** 12, show_progress=True)

print(X_true_train.shape)
print(X_true_val.shape)
(32768, 32, 32, 1) (32768, 32, 32, 1)
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
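To make the grid geometry concrete, here is an illustrative binning function for the $(\eta, \phi)$ grid described above (our own sketch of the idea, not PythiaMill's actual implementation):

```python
import numpy as np

def eta_phi_to_pixel(eta, phi, eta_max=5.0, n_eta=32, n_phi=32):
    """Map (pseudorapidity, azimuthal angle) to cell indices on the 32x32 grid."""
    # pseudorapidity axis: uniform bins over [0, eta_max], clipped to valid indices
    i = int(np.clip(eta / eta_max * n_eta, 0, n_eta - 1))
    # azimuthal axis: wrap phi into [0, 2*pi), then bin uniformly
    j = int((phi % (2 * np.pi)) / (2 * np.pi) * n_phi)
    return i, min(j, n_phi - 1)

assert eta_phi_to_pixel(0.0, 0.0) == (0, 0)
assert eta_phi_to_pixel(4.99, 6.28) == (31, 31)
```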
Taking a look at events
n = 5
plt.subplots(nrows=n, ncols=n, figsize=(3 * n, 3 * n))

max_energy = np.max(X_true_train[:n * n])

for i in range(n):
    for j in range(n):
        k = i * n + j
        plt.subplot(n, n, k + 1)
        plt.imshow(X_true_train[k, :, :, 0], vmin=0, vmax=max_energy)

plt.show()
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
Aggregated events
plt.figure(figsize=(6, 6))
plt.imshow(np.sum(X_true_train, axis=(0, 3)), vmin=0)
plt.show()
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
Discriminator
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPool2D, Dense, Flatten, GlobalMaxPool2D
from keras.activations import softplus, sigmoid, relu

from keras.utils.vis_utils import model_to_dot
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
Building conv net
inputs = Input(shape=(32, 32, 1))

activation = lambda x: relu(x, 0.05)

net = Conv2D(8, kernel_size=(3, 3), padding='same', activation=activation)(inputs)
net = MaxPool2D(pool_size=(2, 2))(net)

net = Conv2D(12, kernel_size=(3, 3), padding='same', activation=activation)(net)
net = MaxPool2D(pool_size=(2, 2))(net)

# net = GlobalMaxPool2D()(net)

net = Conv2D(16, kernel_size=(3, 3), padding='same', activation=activation)(net)
net = MaxPool2D(pool_size=(2, 2))(net)

net = Conv2D(24, kernel_size=(3, 3), padding='same', activation=activation)(net)
net = MaxPool2D(pool_size=(2, 2))(net)

net = Flatten()(net)

predictions = Dense(1, activation=sigmoid)(net)

discriminator = Model(inputs=inputs, outputs=predictions)
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

from IPython import display
from IPython.display import SVG

SVG(model_to_dot(discriminator, show_shapes=True).create(prog='dot', format='svg'))
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
In Adversarial Variational Optimization, instead of searching for a single value of the detector parameters, a parametrized distribution over them is introduced (with parameters $\psi$):

$$\mathcal{L}(\psi) = \mathrm{JS}(X_\psi, X_\mathrm{data})$$

where:
- $X_\psi \sim \mathrm{detector}(\theta), \theta \sim P_\psi$;
- $X_\mathrm{data} \sim \mathrm{reality}$.

Note that $\mathcal{L}(\psi)$ is a variational bound on the adversarial loss:

$$\mathcal{L}(\psi) \geq \min_\theta \mathcal{L}_\mathrm{adv}(\theta) = \mathrm{JS}(X_\theta, X_\mathrm{data})$$

In this example, the detector parameters consist of a single `offset` parameter. For simplicity a normal distribution is used:

$$\mathrm{offset} \sim \mathcal{N}(\mu, \sigma)$$

In order to avoid introducing the constraint $\sigma \geq 0$, an auxiliary *free variable* $\sigma'$ is introduced (denoted as `detector_params_sigma_raw` in the code):

$$\sigma = \log(1 + \exp(\sigma'))$$

Note that if there exists a configuration of the detector perfectly matching real data, then the minimum of the variational bound is achieved when the `offset` distribution collapses into a delta function centered at the minimum of the adversarial loss.

Otherwise, a mixture of detector configurations might be a solution (unlike conventional variational optimization).
X = tf.placeholder(dtype='float32', shape=(None, 32, 32, 1))
proba = discriminator(X)[:, 0]

detector_params = tf.placeholder(dtype='float32', shape=(None, 1))

detector_params_mean = tf.Variable(
    initial_value=np.array([0.0], dtype='float32'),
    dtype='float32'
)

detector_params_sigma_raw = tf.Variable(
    initial_value=np.array([2.0], dtype='float32'),
    dtype='float32'
)

detector_params_sigma = tf.nn.softplus(detector_params_sigma_raw)

neg_log_prob = tf.reduce_sum(
    tf.log(detector_params_sigma)
) + tf.reduce_sum(
    0.5 * (detector_params - detector_params_mean[None, :]) ** 2 / detector_params_sigma[None, :] ** 2,
    axis=1
)

detector_params_loss = tf.reduce_mean(neg_log_prob * proba)

get_distribution_params = lambda: tf_session.run([detector_params_mean, detector_params_sigma])

n = tf.placeholder(dtype='int64', shape=())
params_sample = tf.random_normal(
    mean=detector_params_mean,
    stddev=detector_params_sigma,
    shape=(n, 1),
    dtype='float32'
)

distribution_opt = tf.train.AdamOptimizer(learning_rate=0.02).minimize(
    detector_params_loss,
    var_list=[detector_params_mean, detector_params_sigma_raw]
)

tf_session.run(tf.global_variables_initializer())

def train_discriminator(n_samples=2 ** 16, n_epoches=16, plot=False):
    sample_of_detector_params = tf_session.run(params_sample, {n: n_samples // 8})

    _, X_gen_train = get_data(
        mill,
        detector_configurations=sample_of_detector_params,
        show_progress=True
    )

    X_train = np.vstack([
        X_gen_train,
        X_true_train
    ])

    y_train = np.hstack([
        np.zeros(X_gen_train.shape[0]),
        np.ones(X_true_train.shape[0])
    ]).astype('float32')

    history = discriminator.fit(x=X_train, y=y_train, batch_size=32, epochs=n_epoches, verbose=0)

    if plot:
        plt.figure(figsize=(8, 4))
        plt.plot(history.history['loss'], label='train loss')
        plt.legend()
        plt.show()

def train_generator():
    sample_of_detector_params = tf_session.run(params_sample, {n: 2 ** 8})
    params_train, X_gen_train = get_data(mill, detector_configurations=sample_of_detector_params)

    tf_session.run(
        distribution_opt,
        feed_dict={
            X: X_gen_train,
            detector_params: params_train
        }
    )
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
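The surrogate loss `neg_log_prob * proba` above implements the score-function (log-derivative) gradient, $\nabla_\psi \mathbb{E}_\theta[D] = \mathbb{E}_\theta[D \, \nabla_\psi \log p_\psi(\theta)]$. The following toy sketch (ours, not notebook code: a fixed quadratic stands in for the discriminator output, and $\sigma$ is kept fixed for brevity instead of being learned through the softplus trick) shows the same estimator pulling $\mu$ toward the minimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda t: (t - 3.0) ** 2      # stand-in for the discriminator's score; minimum at 3
mu, sigma, lr = 0.0, 0.5, 0.05    # sigma fixed here; AVO also learns it

for _ in range(1000):
    theta = rng.normal(mu, sigma, size=64)
    fx = f(theta)
    # score-function gradient estimate with a mean baseline for variance reduction;
    # d/dmu log N(theta; mu, sigma) = (theta - mu) / sigma**2
    g_mu = np.mean((fx - fx.mean()) * (theta - mu) / sigma ** 2)
    mu -= lr * g_mu

assert abs(mu - 3.0) < 0.2        # mu converges near the minimizer of E[f(theta)]
```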
Pretraining

AVO makes small changes in the parameter distribution. When starting with the optimal discriminator from the previous iterations, adjusting the discriminator to these changes should require relatively few optimization steps.

However, the initial discriminator state (which is just random weights) most probably does not correspond to any optimal discriminator. Therefore, we pretrain the discriminator to ensure that only a few epochs are needed on each iteration to reach an optimal discriminator.
%%time
train_discriminator(n_samples=2**16, n_epoches=4, plot=True)
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
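As a side note, the surrogate loss above (`mean(neg_log_prob * proba)`) relies on the score-function trick: the gradient of E_{θ~N(μ,σ²)}[f(θ)] with respect to μ can be estimated as the mean of f(θ)·∂/∂μ log q(θ). A minimal NumPy sketch of the same trick on a toy objective (the objective and sample size are illustrative, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_gradient_mu(f, mu, sigma, n=200_000):
    """Estimate d/dmu E_{theta ~ N(mu, sigma^2)}[f(theta)] via the score function."""
    theta = rng.normal(mu, sigma, size=n)
    # d/dmu log N(theta | mu, sigma^2) = (theta - mu) / sigma^2
    score = (theta - mu) / sigma**2
    return np.mean(f(theta) * score)

# Toy objective f(theta) = (theta - 2)^2:
# E[f] = (mu - 2)^2 + sigma^2, so the true gradient w.r.t. mu at mu=0 is -4.
g = score_gradient_mu(lambda th: (th - 2.0) ** 2, mu=0.0, sigma=0.5)
```

The same estimator drives `distribution_opt` above, with the discriminator output playing the role of `f`.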
Variational optimization
from IPython import display

n_iterations = 256

generator_mean_history = np.ndarray(shape=(n_iterations, ))
generator_sigma_history = np.ndarray(shape=(n_iterations, ))

for i in range(n_iterations):
    train_discriminator(n_samples=2**12, n_epoches=1)
    train_generator()

    m, s = get_distribution_params()
    generator_mean_history[i] = np.float32(m[0])
    generator_sigma_history[i] = np.float32(s[0])

    display.clear_output(wait=True)
    plt.figure(figsize=(18, 9))
    plt.plot(generator_mean_history[:i + 1], color='blue', label='mean ($\\mu$)')
    plt.fill_between(
        np.arange(i + 1),
        generator_mean_history[:i + 1] - generator_sigma_history[:i + 1],
        generator_mean_history[:i + 1] + generator_sigma_history[:i + 1],
        color='blue', label='sigma ($\\sigma$)', alpha=0.2
    )
    plt.plot([0, n_iterations - 1], [TRUE_OFFSET, TRUE_OFFSET], '--', color='black', alpha=0.5, label='ground truth')
    plt.ylim([-2, 4])
    plt.legend(loc='upper left', fontsize=18)
    plt.xlabel('AVO step', fontsize=16)
    plt.ylabel('detector offset', fontsize=16)
    plt.show()
_____no_output_____
Apache-2.0
day4-Fri/Pythia-Tune-AVO.ipynb
mi-stankai/mlhep2018
1) Matplotlib Part 1
1) Functional method
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import randint

x = np.linspace(0,10,20)
x
y = randint(0,50,20)
y
y = np.sort(y)
y

plt.plot(x,y, color='m', linestyle='--', marker='*', markersize=10, lw=1.5)
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.title('X vs Y axis')
plt.show()

# multiple plots on same canvas
plt.subplot(1,2,1)
plt.plot(x,y,color='r')
plt.subplot(1,2,2)
plt.plot(x,y,color='m')
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
2) Object Oriented Method
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import randint

x = np.linspace(0,10,20)
y = randint(1, 50, 20)
y = np.sort(y)
x
y

fig = plt.figure()
axes = fig.add_axes([0.1,0.1,1,1])
axes.plot(x,y)
axes.set_xlabel('X axis')
axes.set_ylabel('Y axis')
axes.set_title('X vs Y axis')

# 2 sets of figures to 1 canvas
fig = plt.figure()
ax1 = fig.add_axes([0.1,0.1,0.8,0.8])
ax2 = fig.add_axes([0.2,0.5,0.4,0.3])
ax1.plot(x,y,color='r')
ax1.set_xlabel('X axis')
ax1.set_ylabel('Y axis')
ax1.set_title('Plot 1')
ax2.plot(x,y,color='m')
ax2.set_xlabel('X axis')
ax2.set_ylabel('Y axis')
ax2.set_title('Plot 2')
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
2) Matplotlib Part 2
1) Subplots method
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import randint

x = np.linspace(0,10,20)
y = randint(1, 50, 20)
y = np.sort(y)
x
y

fig,axes = plt.subplots()
axes.plot(x,y)

fig,axes = plt.subplots(nrows=2,ncols=3)
plt.tight_layout()
axes

fig,axes = plt.subplots(nrows=1,ncols=2)
for current_ax in axes:
    current_ax.plot(x,y)

fig,axes = plt.subplots(nrows=1,ncols=2)
axes[0].plot(x,y)
axes[1].plot(x,y)
axes[0].set_title('Plot 1')
axes[1].set_title('Plot 2')
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
2) Figure size, Aspect ratio and DPI
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import randint

x = np.linspace(0,10,20)
y = randint(1, 50, 20)
y = np.sort(y)

fig = plt.figure(figsize=(3,2),dpi=100)
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)

fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(7,2))
axes[0].plot(x,y)
axes[1].plot(x,y)
fig
fig.savefig('my_pic.png',dpi=100)

fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_title('X vs Y')

# legends
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,x**2,label='X vs X square')
ax.plot(x,x**3,label='X vs X cube')
ax.legend(loc=0)
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
3) Matplotlib Part 3
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import randint

x = np.linspace(0,10,20)
y = randint(1, 50, 20)
y = np.sort(y)

fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='g',linewidth=3,ls='--',alpha=0.8,marker='o',markersize=10,markerfacecolor='yellow')

fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='r',linewidth=3)
ax.set_xlim([0,1])
ax.set_ylim([0,10])
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
4) Different Plots
1) Scatter Plots
import matplotlib.pyplot as plt

y_views=[534,690,258,402,724,689,352]
f_views=[123,342,700,305,406,648,325]
t_views=[202,209,176,415,824,389,550]
days=[1,2,3,4,5,6,7]

plt.scatter(days,y_views,label='Youtube Views',marker='o')
plt.scatter(days,f_views,label='Facebook Views',marker='o')
plt.scatter(days,t_views,label='Twitter Views',marker='o')
plt.xlabel('Days')
plt.ylabel('Views')
plt.title('Social Media Views')
plt.grid(color='r',linestyle='--')
plt.legend()
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
2) Bar plot
plt.bar(days,y_views,label='Youtube views')
plt.bar(days,f_views,label='Facebook views')
plt.xlabel('Days')
plt.ylabel('Views')
plt.title('Social Media Views')
plt.legend()
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
3) Histogram
points=[22,55,62,45,21,22,99,34,42,4,102,110,27,48,99,84]
bins=[0,20,40,60,80,100,120]
plt.hist(points,bins)
plt.xlabel('Bins')
plt.ylabel('Frequency')
plt.title('Bins vs Frequency')
plt.show()
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
4) Pie chart
labels_1=['Facebook','Instagram','Youtube','linkedin']
views=[300,350,400,450]
explode_1=[0,0,0,0.2]
plt.pie(views,labels=labels_1,autopct='%1.1f%%',explode=explode_1,shadow=True)
plt.show()
_____no_output_____
MIT
16DataVisualizationWithMatplotlib.ipynb
MBadriNarayanan/NaturalLanguageProcessing
!git clone https://github.com/MadhabBarman/Epidemic-Control-Model.git

cd Epidemic-Control-Model/

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.integrate import odeint
from scipy.io import savemat, loadmat
import numpy.linalg as la
from matplotlib.lines import Line2D

M = 16
my_data = np.genfromtxt('data/age_structures/India-2019.csv', delimiter=',', skip_header=1)
Real_data = np.genfromtxt('data/covid-cases/case_time_series.csv', delimiter=',', skip_header=1)
aM, aF = my_data[:, 1], my_data[:, 2]
Ni=aM+aF;  Ni=Ni[0:M];  N=np.sum(Ni)

# contact matrices
my_data = pd.read_excel('data/contact_matrices_152_countries/MUestimates_home_1.xlsx', sheet_name='India',index_col=None)
CH = np.array(my_data)
my_data = pd.read_excel('data/contact_matrices_152_countries/MUestimates_work_1.xlsx', sheet_name='India',index_col=None)
CW = np.array(my_data)
my_data = pd.read_excel('data/contact_matrices_152_countries/MUestimates_school_1.xlsx', sheet_name='India',index_col=None)
CS = np.array(my_data)
my_data = pd.read_excel('data/contact_matrices_152_countries/MUestimates_other_locations_1.xlsx', sheet_name='India',index_col=None)
CO = np.array(my_data)
my_data = pd.read_excel('data/contact_matrices_152_countries/MUestimates_all_locations_1.xlsx', sheet_name='India',index_col=None)
CA = np.array(my_data)

CM = CH + CW + CS + CO

my_data_nw = np.genfromtxt('data/covid-cases/india_10april.txt', delimiter='', skip_header=6)
death_case, active_case = my_data_nw[:,4], my_data_nw[:,5]

active = Real_data[:,7]
active_new = active[34:107]
death = Real_data[:,6]
death_new = death[34:107]

#save_results_to = 'C:/Users/HP/Desktop/Lat_radon/double peak/EPS_file/'

alpha_d = 0.05                 #fractional constant
beta = 0.37                    #rate of infection
rho = 0.75                     #control parameter of H
xi = 0.29                      #recovery rate from E
alpha_1 = 0.7                  #fractional part of E-->Q
alpha_2 = 0.2                  #fractional part of E-->A
alpha_3 = 1-(alpha_1+alpha_2)  #fractional part of E-->I
phi_qh = 1/10                  #Recovery rate of Q-->H
q = 0.1                        #fractional part of Q-->H
g_as = 0.1                     #rate A-->I
d_ar = 2./7                    #Recovery rate of A
phi_sh = 1./2                  #rate I-->H
d_sr = 1./7                    #Recovery rate of I
d_hr = (1-alpha_d)/10          #Recovery rate of H
eta = alpha_d/10               #Death rate
fsa = 0.1                      #Fraction of the contact matrix Cs
fsh = 0.1                      #Fraction of the contact matrix Ch

# initial conditions
E_0 = np.zeros((M));  Q_0 = np.zeros((M))
A_0 = np.zeros((M))
I_0 = np.zeros((M));  I_0[6:13]=2;  I_0[2:6]=1
H_0 = np.zeros((M))
R_0 = np.zeros((M))
D_0 = np.zeros((M))
S_0 = Ni - (E_0+ Q_0 + A_0 + I_0 + H_0 + R_0 + D_0)

Tf = 300; Nf = 3000        #Tf -->final time from 0, Nf-->total number points
t = np.linspace(0,Tf,Nf)   #time span

#lockdown function
ld = lambda t, t_on, t_off, t_won, t_woff, pld: 1 + pld*0.5*(np.tanh((t - t_off)/t_woff) - np.tanh((t - t_on)/t_won))

#staggered lockdown
uc = lambda t:0.7-0.4*(np.tanh((t - 21)/4)) + 0.3*0.3*(1.0*np.tanh((t - 42)/4)-np.tanh((t - 93)/4))+\
     0.2+0.1*(np.tanh((t - 75)/4)) + 0.4*0.5*(np.tanh((t - 93)/4))

#LD2
#uc = lambda t:0.7-0.4*(np.tanh((t - 21)/4)) + 0.3*0.3*(1.0*np.tanh((t - 42)/4)-np.tanh((t - 93)/4))+\
#     0.2+0.1*(np.tanh((t - 75)/4)) + 0.4*0.5*(np.tanh((t - 93)/4)) +\
#ld(t,128, 153, 2, 2, 0.6-0.2) + ld(t,153,193, 2, 2, 0.8-0.2) + ld(t,193,233, 2, 2, 0.6-0.2)+ld(t,233,360, 2, 2, 0.4-0.2)-4.0

#LD3
#uc = lambda t:0.7-0.4*(np.tanh((t - 21)/4)) + 0.3*0.3*(1.0*np.tanh((t - 42)/4)-np.tanh((t - 93)/4))+\
#     0.2+0.1*(np.tanh((t - 75)/4)) + 0.4*0.5*(np.tanh((t - 93)/4)) +\
#ld(t,130, 160, 2, 2, 0.6-0.2)+ld(t,160, 230, 2, 2, 0.8-0.2) + ld(t,230, 300, 2, 2, 0.6-0.2) + ld(t,300, 420, 2, 2, 0.4-0.2) - 4.0

beta_max, k, t_m, beta_min = beta, 0.2, 49, 0.21

def beta_f(t):
    return ((beta_max-beta_min) / (1 + np.exp(-k*(-t+t_m))) + beta_min)

plt.figure(figsize=(16,5))
plt.rcParams['font.size']=26
plt.subplot(1,2,1)
plt.plot(t,beta_f(t),lw=3);
plt.title(r'$\beta(t)$')
plt.grid(True)
plt.xlim(0,100);
plt.subplot(1,2,2)
plt.plot(t, uc(t),lw=3)
plt.title('Lockdown Strategy')
plt.tight_layout()
plt.grid(True)

def cont(t):
    return CH + uc(t)*(CW + CO + CS)
    #return CM

# S=y[i], E=y[M+i], Q=y[2M+i], A=y[3M+i], I=y[4M+i], H=y[5M+i], R=y[6M+i] for i=1,2,3,...,M
dy = np.zeros(7*M)
def rhs(y, t, cont, beta_f):
    CM = cont(t)   #contact matrix
    for i in range(M):
        lmda=0
        for j in range(M):
            lmda += beta_f(t)*(CM[i,j]*y[3*M+j] + fsa*CM[i,j]*y[4*M+j] +fsh*(1.0-rho)*CM[i,j]*y[5*M+j])/Ni[j]
        dy[i] = - lmda*y[i] + (1-q)*phi_qh*y[2*M+i]                               # S susceptibles
        dy[i+M] = lmda*y[i] - xi*y[M+i]                                           # E exposed class
        dy[i+2*M] = alpha_1*xi*y[M+i] - phi_qh*y[2*M+i]                           # Q Quarantined
        dy[i+3*M] = alpha_2*xi*y[M+i] - (g_as + d_ar )*y[3*M+i]                   # A Asymptomatic infected
        dy[i+4*M] = alpha_3*xi*y[M+i] + g_as*y[3*M+i] - (phi_sh + d_sr)*y[4*M+i]  # I Symptomatic infected
        dy[i+5*M] = phi_sh*y[4*M+i] + q*phi_qh*y[2*M+i] - (d_hr + eta)*y[5*M+i]   # H Isolated
        dy[i+6*M] = d_ar*y[3*M+i] + d_sr*y[4*M+i] + d_hr*y[5*M+i]                 # R Recovered
    return dy

data = odeint(rhs, np.concatenate((S_0, E_0, Q_0, A_0, I_0, H_0, R_0)), t, args=(cont,beta_f))

tempS, tempE, tempQ, tempA, tempI, tempH, tempR = np.zeros((Nf)),\
np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf))
for i in range(M):
    tempS += data[:, 0*M + i]
    tempE += data[:, 1*M + i]
    tempQ += data[:, 2*M + i]
    tempA += data[:, 3*M + i]
    tempI += data[:, 4*M + i]
    tempH += data[:, 5*M + i]
    tempR += data[:, 6*M + i]

IC_death = N - (tempS + tempE + tempQ + tempA + tempI + tempH + tempR)
_____no_output_____
MIT
SEIRD_ControlModel.ipynb
MadhabBarman/Epidemic-Control-Model
**Simulated individuals figure**
fig = plt.figure(num=None, figsize=(28, 12), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 26})
plt.plot(t, (tempA + tempI + tempH)/N, '--', lw=6, color='g', label='Active Case', alpha=0.8)
plt.plot(t, (tempA + tempI)/N , '-', lw=7, color='k', label='$A + I$', alpha=0.8)
plt.plot(t, IC_death/N, '-.', lw=4, color='r', label='Death', alpha=0.8)
plt.plot(t, tempH/N, '-', lw=3, color='b', label='H', alpha=0.8)
plt.legend(fontsize=26, loc='best'); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel('Individuals(Normalized)');
plt.text(163.5,0.0175,'14-Aug(163Days)',rotation=90)
plt.xlim(0,300); plt.xlabel('Time(Days)')
plt.axvline(163,c='k',lw=3,ls='--');
#plt.savefig(save_results_to+'Figure10.png', format='png', dpi=200)
_____no_output_____
MIT
SEIRD_ControlModel.ipynb
MadhabBarman/Epidemic-Control-Model
**Comparison between real case data and numerical results**
fig = plt.figure(num=None, figsize=(28, 12), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 26, 'text.color':'black'})
plt.plot(t, tempA + tempI + tempH, '--', lw=4, color='g', label='Active case numerical', alpha=0.8)
plt.plot(active_new, 'o-', lw=4, color='#348ABD', ms=16, label='Active case data', alpha=0.5)
plt.plot(t, IC_death, '-.', lw=4, color='r', label='Death case numerical', alpha=0.8)
plt.plot(death_new, '-*', lw=4, color='#348ABD', ms=16, label='Death case data', alpha=0.5)
plt.xticks(np.arange(0, 200, 14),('4 Mar','18 Mar','1 Apr','15 Apr','29 Apr','13 May','27 May','10 Jun','24 Jun'));
plt.legend(fontsize=26, loc='best'); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel('Number of individuals'); plt.xlabel('Time(Dates)')
plt.ylim(0, 60000); plt.xlim(0, 98);
_____no_output_____
MIT
SEIRD_ControlModel.ipynb
MadhabBarman/Epidemic-Control-Model
**Sensitivity of hospitalization parameter $\rho$**
q = 1.0
rhos = [0.0, 0.25, 0.5, 0.75, 1.0]

fig = plt.figure(num=None, figsize=(20, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 20})

for rho in rhos:
    data = odeint(rhs, np.concatenate((S_0, E_0, Q_0, A_0, I_0, H_0, R_0)), t, args=(cont,beta_f))
    tempS, tempE, tempQ, tempA, tempI, tempH, tempR = np.zeros((Nf)),\
    np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf))
    for i in range(M):
        tempA += data[:, 3 * M + i]
        tempI += data[:, 4 * M + i]
        tempH += data[:, 5 * M + i]
    if rho==1.0:
        yy = tempA/N + tempI/N + tempH/N
        plt.plot(t,yy, lw = 2, ls='-',c='b', label=r'$\rho = $' + str(rho))
        plt.plot(t[::100],yy[::100], '>', label=None, markersize=11, c='b')
    elif rho==0.75:
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, c='orange')
    elif rho==0.5:
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, c='g')
    elif rho==0.25:
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, c='r')
    else:
        yy = tempA/N + tempI/N + tempH/N
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, ls='-', c='k')
        plt.plot(t[::100],yy[::100], '.', label=None, markersize=14, c='k')

plt.ylabel('Active Case(Normalized)');
plt.xlabel('Time (Days)');
plt.autoscale(enable=True, axis='x',tight=True)
plt.grid(True)

colors = ['k', 'r','g','orange', 'b']
marker = ['.', None, None, None, '>']
lines = [Line2D([0], [0], color=c, linewidth=3, linestyle='-',marker=r, markersize=14) for (c,r) in zip(colors,marker)]
labels = [r'$\rho=0.0$',r'$\rho=0.25$',r'$\rho=0.5$',r'$\rho=0.75$',r'$\rho=1.0$']
plt.legend(lines, labels,title=r'$q$ ='+str(q)+'(Fixed)')
#plt.savefig('rho_var1.png', format='png',dpi=200)
#plt.savefig(save_results_to+'Figure08.png', format='png',dpi=200)
_____no_output_____
MIT
SEIRD_ControlModel.ipynb
MadhabBarman/Epidemic-Control-Model
**Sensitivity of quarantine parameter $q$**
rho = 1.0
qs = [0.0, 0.25, 0.5, 0.75, 1.0]

fig = plt.figure(num=None, figsize=(20, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 20})

for q in qs:
    data = odeint(rhs, np.concatenate((S_0, E_0, Q_0, A_0, I_0, H_0, R_0)), t, args=(cont,beta_f))
    tempS, tempE, tempQ, tempA, tempI, tempH, tempR = np.zeros((Nf)),\
    np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf)), np.zeros((Nf))
    for i in range(M):
        tempA += data[:, 3 * M + i]
        tempI += data[:, 4 * M + i]
        tempH += data[:, 5 * M + i]
    if q==1.0:
        yy = tempA/N + tempI/N + tempH/N
        plt.plot(t,yy, lw = 2, ls='-',c='b')
        plt.plot(t[::100],yy[::100], '>', label=None, markersize=11, c='b')
    elif q==0.75:
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, c='orange')
    elif q==0.5:
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, c='g')
    elif q==0.25:
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, c='r')
    else:
        yy = tempA/N + tempI/N + tempH/N
        plt.plot(t,tempA/N + tempI/N + tempH/N, lw = 3, ls='-', c='k')
        plt.plot(t[::100],yy[::100], '.', label=None, markersize=14, c='k')

plt.ylabel('Active Case(Normalized)');
plt.xlabel('Time (Days)');
plt.autoscale(enable=True, axis='x',tight=True)
plt.grid(True)

colors = ['k','r','g','orange','b']
marker = ['.', None, None, None, '>']
lines = [Line2D([0], [0], color=c, linewidth=3, linestyle='-',marker=r, markersize=14) for (c,r) in zip(colors,marker)]
labels = [r'$q=0.0$',r'$q=0.25$',r'$q=0.5$',r'$q=0.75$',r'$q=1.0$']
plt.legend(lines, labels,title=r'$\rho$ ='+str(rho)+'(Fixed)')
#plt.savefig('q_var1.png', format='png',dpi=200)
#plt.savefig(save_results_to+'Figure07.png', format='png',dpi=200)
_____no_output_____
MIT
SEIRD_ControlModel.ipynb
MadhabBarman/Epidemic-Control-Model
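The two sweeps above share one pattern: fix all parameters but one, re-run `odeint`, and compare the resulting trajectories. A stripped-down sketch of that pattern on a toy logistic model (the ODE and parameter values are illustrative, not from the epidemic model):

```python
import numpy as np
from scipy.integrate import odeint

def rhs_toy(y, t, r):
    # logistic growth with carrying capacity 1
    return r * y * (1.0 - y)

t = np.linspace(0, 10, 101)
finals = {}
for r in [0.5, 1.0, 2.0]:   # parameter sweep: one odeint call per value
    sol = odeint(rhs_toy, 0.01, t, args=(r,))
    finals[r] = sol[-1, 0]
```

Faster growth saturates closer to the carrying capacity within the time window, just as varying ρ or q reshapes the active-case curves above.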
SELECT in SQLite
import sqlite3
import random
import datetime

# create the connection
conn = sqlite3.connect('dsa.db')

# cursor
c = conn.cursor()

# create the table
def create_table():
    comando_create = 'CREATE TABLE IF NOT EXISTS produtos(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, date TEXT, prod_name TEXT, valor REAL)'
    c.execute(comando_create)

# insert a fixed row
def data_insert():
    comando_insert = "INSERT INTO produtos VALUES(002, '02-05-2016', 'teclado', 130)"
    c.execute(comando_insert)
    conn.commit()

# insert a row built from variables; three columns, three placeholders
# (the auto-incremented id is omitted)
def data_insert_var():
    new_date = datetime.datetime.now()
    new_prod = 'monitor2'
    new_valor = random.randrange(50, 100)
    c.execute("INSERT INTO produtos (date, prod_name, valor) VALUES (?, ?, ?)", (new_date, new_prod, new_valor))
    conn.commit()

# read all rows
def leitura_todos_dados():
    c.execute("SELECT * FROM produtos")
    for linha in c.fetchall():
        print(linha)

# read only rows matching a condition
def leitura_registros():
    c.execute("SELECT * FROM produtos WHERE valor > 60.0")
    for linha in c.fetchall():
        print(linha)

# read a specific column
def leitura_colunas():
    c.execute("SELECT * FROM produtos")
    for linha in c.fetchall():
        print(linha[3])

leitura_todos_dados()
leitura_registros()
leitura_colunas()

# close the cursor and connection once, at the end
c.close()
conn.close()
_____no_output_____
MIT
06-Manipulando_Banco_Dados_Python/04-SQLite - Select no SQLite.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
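The same SELECT patterns can be exercised against an in-memory database, which also shows `sqlite3`'s connection context manager; a minimal sketch (the table layout is simplified from the cell above):

```python
import sqlite3

# In-memory database: the `with` block commits the transaction on success
# and rolls it back on error.
conn = sqlite3.connect(':memory:')
with conn:
    conn.execute('CREATE TABLE produtos (id INTEGER PRIMARY KEY, prod_name TEXT, valor REAL)')
    conn.executemany('INSERT INTO produtos (prod_name, valor) VALUES (?, ?)',
                     [('teclado', 130.0), ('mouse', 45.0), ('monitor', 80.0)])

# SELECT with a WHERE filter, cheapest match first
caros = conn.execute('SELECT prod_name FROM produtos WHERE valor > 60.0 ORDER BY valor').fetchall()
conn.close()
# caros -> [('monitor',), ('teclado',)]
```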
Archive data
The Wellcome archive sits in a collections management system called CALM, which follows a rough set of standards and guidelines for storing archival records called [ISAD(G)](https://en.wikipedia.org/wiki/ISAD(G)). The archive is composed of _collections_, each of which has a hierarchical set of series, sections, subjects, items and pieces sitting underneath it. In the following notebooks I'm going to explore it and try to make as much sense of it as I can programmatically.

Let's start by loading in a few useful packages and defining some nice utils.
%matplotlib inline

import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("white")
plt.rcParams["figure.figsize"] = (20, 20)

import pandas as pd
import numpy as np
import networkx as nx
from sklearn.cluster import AgglomerativeClustering
from umap import UMAP
from tqdm import tqdm_notebook as tqdm

def flatten(input_list):
    return [item for sublist in input_list for item in sublist]

def cartesian(*arrays):
    return np.array([x.reshape(-1) for x in np.meshgrid(*arrays)]).T

def clean(subject):
    return subject.strip().lower().replace("<p>", "")
_____no_output_____
MIT
notebooks/archive_exploration/notebooks/01 - subject coocurrence.ipynb
wellcomecollection/data-science
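Since this notebook is about subject co-occurrence, here is a minimal sketch of the counting step on made-up records (the subject lists are hypothetical, not from CALM):

```python
from collections import Counter
from itertools import combinations

# Hypothetical records: each archive record carries a list of subject tags.
records = [
    ["medicine", "war"],
    ["medicine", "anatomy", "war"],
    ["anatomy", "medicine"],
]

pair_counts = Counter()
for subjects in records:
    # sorted(set(...)) deduplicates tags and gives each pair a canonical order
    for a, b in combinations(sorted(set(subjects)), 2):
        pair_counts[(a, b)] += 1

# ("anatomy", "medicine") co-occurs in two of the three records
```

The `cartesian` helper above can serve a similar role when building a full co-occurrence matrix rather than a sparse counter.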
Let's load up our CALM data. The data has been exported in its entirety as a single `.json` where each line is a record.

You can download the data yourself using [this script](https://github.com/wellcometrust/platform/blob/master/misc/download_oai_harvest.py). Stick the `.json` in the neighbouring `/data` directory to run the rest of the notebook seamlessly.
df = pd.read_json("data/calm_records.json")

len(df)

df.astype(str).describe()
_____no_output_____
MIT
notebooks/archive_exploration/notebooks/01 - subject coocurrence.ipynb
wellcomecollection/data-science
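If the export really is one record per line (JSON Lines rather than a single JSON array), `pd.read_json` needs `lines=True`; a minimal sketch on a synthetic in-memory file (the field names are illustrative):

```python
from io import StringIO

import pandas as pd

# two JSON-Lines records, one per line
raw = StringIO('{"RefNo": "A/1", "Title": "Letters"}\n{"RefNo": "A/2", "Title": "Diaries"}\n')
df_sketch = pd.read_json(raw, lines=True)
```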
Exploring individual columns
At the moment I have no idea what kind of information CALM contains, so let's look at the list of column names.
list(df)
_____no_output_____
MIT
notebooks/archive_exploration/notebooks/01 - subject coocurrence.ipynb
wellcomecollection/data-science