# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import spacy
from tqdm import tqdm
# -

# !head -n10 dataset_40163_1.txt

nlp = spacy.load('en_core_web_sm')

result = []
with open('dataset_40163_1.txt') as f:
    for i, text in tqdm(enumerate(f)):
        line = []
        for ent in nlp(text).ents:
            type_ = ent.label_
            if type_ == 'ORG' or type_ == 'PERSON':
                line.append(f'{ent.start_char} {ent.end_char - ent.start_char} {type_}')
        result.append(' '.join(line))

result[0]

with open('output.txt', 'w') as f:
    for line in result:
        f.write(line)
        if line:
            f.write(' ')
        f.write('EOL\n')

# !head -n10 output.txt
entities.ipynb
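The offset format written above (start, length, label, space-joined, terminated by `EOL`) can be sketched without loading a spaCy model. `format_entity_line` is a hypothetical helper that mirrors the inner loop, taking plain `(start_char, end_char, label)` triples in place of spaCy entity spans:

```python
# Minimal sketch of the output-line format used above, assuming each
# entity is a (start_char, end_char, label) triple; format_entity_line
# is a hypothetical helper, not part of spaCy.
def format_entity_line(entities, keep=('ORG', 'PERSON')):
    parts = [
        f'{start} {end - start} {label}'
        for start, end, label in entities
        if label in keep
    ]
    return ' '.join(parts)

# Two kept entities and one filtered-out GPE span.
ents = [(0, 5, 'ORG'), (10, 14, 'GPE'), (20, 26, 'PERSON')]
print(format_entity_line(ents))  # → "0 5 ORG 20 6 PERSON"
```

A line with no ORG/PERSON entities comes out empty, which is why the writer above only appends the trailing space when `line` is non-empty.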
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.7.4 64-bit (''base'': conda)'
#     language: python
#     name: python37464bitbasecondabffb7192b95d4d4b82c65d1b674e2a7e
# ---

# ## Bonus (Optional)
#
# As you examine the data, you are overcome with a creeping suspicion that the dataset is fake. You surmise that your boss handed you spurious data in order to test the data engineering skills of a new employee. To confirm your hunch, you decide to take the following steps to generate a visualization of the data, with which you will confront your boss:
#
# 1. Import the SQL database into Pandas. (Yes, you could read the CSVs directly in Pandas, but you are, after all, trying to prove your technical mettle.) This step may require some research. Feel free to use the code below to get started. Be sure to make any necessary modifications for your username, password, host, port, and database name:
#
#    ```python
#    from sqlalchemy import create_engine
#    engine = create_engine('postgresql://localhost:5432/<your_db_name>')
#    connection = engine.connect()
#    ```
#
#    * Consult the [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/latest/core/engines.html#postgresql) for more information.
#
#    * If using a password, do not upload your password to your GitHub repository. See [https://www.youtube.com/watch?v=2uaTPmNvH0I](https://www.youtube.com/watch?v=2uaTPmNvH0I) and [https://martin-thoma.com/configuration-files-in-python/](https://martin-thoma.com/configuration-files-in-python/) for more information.
#
# 2. Create a histogram to visualize the most common salary ranges for employees.
#
# 3. Create a bar chart of average salary by title.
# +
import pandas as pd

# Dependencies
# ----------------------------------
# Imports the method used for connecting to DBs
from sqlalchemy import create_engine, Column, Integer, String, Float

# Imports the methods needed to abstract classes into tables
from sqlalchemy.ext.declarative import declarative_base
import matplotlib.pyplot as plt
from config import key
# -

engine = create_engine(f"postgresql://postgres:{key}@localhost:5432/SQL_Challenge")
connection = engine.connect()

# +
salaries = engine.execute("SELECT t.title, s.salary FROM titles AS t JOIN salaries AS s ON s.emp_no=t.emp_no")
df = pd.DataFrame(salaries, columns=["title", "salary"])
avg_salary = df.groupby("title").mean()
avg_salary.style.format("${:.2f}")
# -

avg_salary.plot.bar()
plt.title("Average Salary by Job Title")
plt.xlabel("Job Title")
plt.ylabel("Annual Salary in Dollars")
plt.ylim(35000, 60000)
plt.show()

# ## Epilogue
#
# Evidence in hand, you march into your boss's office and present the visualization. With a sly grin, your boss thanks you for your work. On your way out of the office, you hear the words, "Search your ID number." You look down at your badge to see that your employee ID number is 499942.

# +
epilog = engine.execute("SELECT emp_no, first_name, last_name FROM employees WHERE emp_no = 499942")
epilog_df = pd.DataFrame(epilog, columns=["Emp_Num", "first_name", "last_name"])
# select * from employees where emp_no = 499942
# select * from salaries where emp_no = 499942
epilog_df
# -

# ## Submission
#
# * Create an image file of your ERD.
#
# * Create a `.sql` file of your table schemata.
#
# * Create a `.sql` file of your queries.
#
# * (Optional) Create a Jupyter Notebook of the bonus analysis.
#
# * Create and upload a repository with the above files to GitHub and post a link on BootCamp Spot.

connection.close()
Bonus.ipynb
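The average-salary-by-title step above can be sketched on a toy frame, without the Postgres connection. The titles and numbers here are invented stand-ins for the `titles`/`salaries` join, not the real employees database:

```python
import pandas as pd

# Toy stand-in for the titles/salaries join; values are invented.
df = pd.DataFrame({
    "title":  ["Engineer", "Engineer", "Staff", "Staff"],
    "salary": [60000, 64000, 40000, 42000],
})

# Same groupby-mean as the notebook: one average per title.
avg_salary = df.groupby("title")["salary"].mean()
print(avg_salary.to_dict())  # → {'Engineer': 62000.0, 'Staff': 41000.0}
```

Selecting the `salary` column before `.mean()` yields a Series that plots directly with `avg_salary.plot.bar()`, exactly as in the notebook.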
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [Root]
#     language: python
#     name: Python [Root]
# ---

# # Kaggle - Bosch Production Line Performance
# ### (Handling Large Data With Limited Memory)
#
# Welcome! This Jupyter notebook will demonstrate how to work with large datasets in Python by analyzing production line data associated with the Bosch Kaggle competition (https://www.kaggle.com/c/bosch-production-line-performance).
#
# [This notebook is still a work in progress and will be updated as I improve algorithm performance.]
#
# Questions, comments, suggestions, and corrections can be sent to <EMAIL>.
#
# ## Business Challenge
# Bosch, a manufacturing company, teamed up with Kaggle to challenge teams to create a classification algorithm that predicts "internal failures along the manufacturing process using thousands of measurements and tests made for each component along the assembly line."
#
# ## Data
# Bosch provided six huge files' worth of data for the challenge (https://www.kaggle.com/c/bosch-production-line-performance/data): three sets of training data--numeric, categorical, and dates--and the equivalent sets of test data. They contain a large number of features (one of the largest sets ever hosted on Kaggle), and the uncompressed files come out to **14.3 GB**.
#
# One of the largest difficulties associated with the competition is handling this amount of data. One strategy is to move the data to Amazon Web Services and use big data tools like Spark and Hadoop. Often, however, we are forced to extract value from data given real-world constraints like less memory and processing power. In this notebook, I'll work through an alternative approach where I split and simplify the data in order to process it on my 8GB RAM laptop.
#
# Let's start by examining the training data.
# Because the files are so large, we can't follow the usual practice of using pandas to read the .CSV file into a dataframe. Instead, let's just look at a few lines.

# +
import pandas as pd

line_count = 0
extracted_lines = []
with open('train_numeric.csv') as f:
    for line in f:
        if line_count < 6:
            extracted_lines.append(line)
            line_count += 1
        else:
            break

for line in extracted_lines:
    print(line[:40], '...', line[-40:])
# -

# We see that each line in train_numeric.csv represents a component with an Id, a long list of features (many of which are blank), and a Response indicating passage or failure of QC. Further examination shows that only 0.58% of Responses are failures, or *1*.
#
# Because we already have more data than we can handle, we're going to simplify by only working with train_numeric.csv and disregard train_categorical.csv and train_date.csv. Furthermore, we need to deal with the fact that train_numeric.csv is larger than we can handle and is also highly imbalanced. To do this, we're going to pull out all of the rows with positive responses and randomly sample an equivalent number of negative rows. We'll make a new .CSV file that is 1/100th the size of the original and is now equally balanced.

# +
import random

line_count = 0
extracted_positive_lines = []
with open('train_numeric.csv') as f:
    for line in f:
        if line_count == 0:
            extracted_positive_lines.append(line)  # keep the header row
            line_count += 1
        elif line[-2] == '1':
            extracted_positive_lines.append(line)

line_count = 0
extracted_negative_lines = []
with open('train_numeric.csv') as f:
    for line in f:
        if line_count == 0:
            line_count += 1  # skip the header row this time
            continue
        if line_count > 0 and random.random() < 0.0058:
            extracted_negative_lines.append(line)

combined_extracted_lines = extracted_positive_lines + extracted_negative_lines
with open('train_numeric_short.csv', 'w') as f:
    for line in combined_extracted_lines:
        f.write(line)
# -

# Now we can move the new .CSV to a pandas dataframe and replace the empty features with *0*.
train_numeric_short_df = pd.read_csv('train_numeric_short.csv')
train_numeric_short_df.fillna(value=0, inplace=True)
train_numeric_short_df.shape

# We're now working with 13769 samples with 968 features, not including Id and Response. Let's use train_test_split from sklearn.cross_validation to split our training data, which will let us quickly evaluate and compare various classifiers.

# +
from sklearn.cross_validation import train_test_split

X = train_numeric_short_df.drop(['Response', 'Id'], axis=1)
y = train_numeric_short_df['Response']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# -

# ## Comparing Classifiers
# With our training data split into new training and test sets, we can feed it into various scikit-learn classifiers. The Kaggle competition is being judged using the Matthews correlation coefficient, so we'll use that to find the best classifier.
# * https://en.wikipedia.org/wiki/Matthews_correlation_coefficient
# * http://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html
#
# Additionally, we can use [recursive feature elimination with cross-validation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html). Our data set is high dimensional with 968 features. Removing features of low importance can reduce model complexity, overfitting, and training time.

from sklearn.metrics import matthews_corrcoef
from sklearn.feature_selection import RFECV

# We can start with a simple [logistic regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) combined with the recursive feature elimination.

# +
from sklearn.linear_model import LogisticRegression

clf = RFECV(LogisticRegression(), step=200)
clf.fit(X_train, y_train)
y_output = clf.predict(X_test)
matthews_corrcoef(y_test, y_output)
# -

# Next, let's try a [linear SVC model](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html).
# +
from sklearn.svm import LinearSVC

clf = RFECV(LinearSVC(), step=200)
clf.fit(X_train, y_train)
y_output = clf.predict(X_test)
matthews_corrcoef(y_test, y_output)
# -

# Let's try the [ExtraTreesClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html).

# +
from sklearn.ensemble import ExtraTreesClassifier

forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
clf = RFECV(forest, step=200)
clf.fit(X_train, y_train)
y_output = clf.predict(X_test)
matthews_corrcoef(y_test, y_output)
# -

# Now that we've settled on the ExtraTreesClassifier, let's retrain it using our full training set from before we split it with train_test_split.

clf.fit(X, y)

# We're ready to analyze the actual test data provided by Bosch. As with the training data, though, the 2.1 GB file is quite large for my laptop. We can split the test data into files of 100000 lines each, get predictions for each smaller file, and then stitch the predictions back together for a final submission file.
#
# Fortunately, pandas can read .CSV files in chunks, which makes it easy to split up the test data file.

test = pd.read_csv('test_numeric.csv', chunksize=100000)
file_number = 0
for chunk in test:
    path = 'test_data/short' + str(file_number) + '.csv'
    chunk.to_csv(path)
    file_number += 1

for i in range(12):
    test_numeric_short_df = pd.read_csv('test_data/short' + str(i) + '.csv').fillna(value=0)
    Ids = test_numeric_short_df.ix[:, 'Id']
    X_test_real = test_numeric_short_df.drop(['Id', 'Unnamed: 0'], axis=1)
    y_output_real = clf.predict(X_test_real)  # predict with the refit model (was `selector`, which is undefined)
    output = pd.Series(y_output_real, name='Response')
    output = pd.concat([Ids, output], axis=1)
    output.to_csv('test_output/test_output' + str(i) + '.csv', index=False)

# Now we just have to put our prediction files together into a single file.
# +
import shutil

shutil.copyfile('test_output/test_output0.csv', 'test_output/output_combined.csv')
output_combined = open('test_output/output_combined.csv', 'a')
for i in range(1, 12):
    lines = open('test_output/test_output' + str(i) + '.csv', 'r').readlines()
    for line in lines[1:]:  # skip each file's header row
        output_combined.write(line)
output_combined.close()
# -

# ## Conclusions
#
# Submitting our file to Kaggle gets us a score of 0.04623.
bosch.ipynb
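The Matthews correlation coefficient used for model selection above can be computed directly from confusion-matrix counts. A pure-Python sketch for binary labels, mirroring what `sklearn.metrics.matthews_corrcoef` returns (the `mcc` helper here is illustrative, not a library function):

```python
import math

def mcc(y_true, y_pred):
    # Confusion-matrix counts for binary labels {0, 1}.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Degenerate case (an empty row or column in the matrix): defined as 0.
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

print(round(mcc([1, 1, 0, 0], [1, 0, 0, 0]), 4))  # → 0.5774
```

Unlike plain accuracy, MCC stays near 0 for a classifier that always predicts the majority class, which is why it suits the highly imbalanced Bosch responses.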
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Module 1 Project # <NAME> # <br>Part Time # <br>Instructor: <NAME> # <br>Blog URL: https://akwon100.github.io/module1_project_ds-pt ''' Organization: We split each cell with the following labels: 1. Import 2. Webscraping 3. Making Dataframe 4. Cleaning Dataframe 5. Organizing data 6. Visualization For each label we add a small description if necessary For this project we need not use databases HOWEVER, they can be used and is often a good idea to use when handling large amounts of data. Hence at the end after all questions are anwered we show how to create a table and insert the information scraped for those who want to use databases. The scraped data will have a (*) pairing with the cells at the end Note: This is the final version of the code, hence none of the cells will be run as they have already been run and tested in a draft version ''' #Import from bs4 import BeautifulSoup import requests import urllib.parse import json # + #WEBSCRAPING: FUNCTIONS IMDB VIA API #Getting movies by title # Constants API_KEY = '<KEY> API_URL_SEARCH_MOVIE = 'https://imdb-api.com/en/API/SearchMovie/' + API_KEY API_URL_SEARCH_TITLE = 'https://imdb-api.com/en/API/Title/' + API_KEY # Logging movie_number = 0 found_id_count = 0 id_not_found = 0 movie_not_found = 0 total_added = 0 def fetchMovieByTitle(movie_title, movie_year, keys, catch): ''' Parameters: movie_title: <str> string to find movie id movie_year: <str or int> string or int to find movie id keys: <list or tuple> list of keys of dictionary of response object catch: <list or tuple> an empty list to catch results Returns: response object and appends to catch ''' encoded_movie_title = urllib.parse.quote(str(movie_title), safe='/', encoding=None, errors=None) url = API_URL_SEARCH_MOVIE + encoded_movie_title 
response = requests.request("GET", url) if response.status_code != 200: print(response.status_code, 'cannot find movie:', movie_title) # Logging global id_not_found id_not_found = id_not_found + 1 return False else: print(response.status_code, 'movie found', movie_title) data = json.loads(response.text) getMovieIdFromResults(data.get('results'), movie_title, movie_year, keys, catch) # Logging global found_id_count found_id_count = found_id_count + 1 return True def getMovieIdFromResults(api_result, movie_title, movie_year, keys, catch): ''' Parameters: api_result:<dict> response obj movie_title: <str> string to find movie id movie_year: <str or int> string or int to find movie id keys: <list or tuple> list of keys of dictionary of response object catch: <list or tuple> an empty list to catch results Returns: response object and appends to catch ''' for result in api_result: if type(result) is dict: get_title = result.get('title') get_year = result.get('description') get_id = result.get('id') if str(movie_title) in str(get_title) and str(movie_year) in str(get_year): fetchMovieDataFromId(get_id, keys, catch) return True else: print('the movie', str(movie_title), 'was not found') # Logging global movie_not_found movie_not_found = movie_not_found + 1 return True else: print('the result is not a dict') return True def fetchMovieDataFromId(movie_id, keys, catch): ''' Parameters: movie_id: <str> imdb movie id keys: <list or tuple> list of keys of dictionary of response object catch: <list or tuple> an empty list to catch results Returns: response object and appedns to catch ''' global total_added #all data is present if total_added >= 2000: return True url = API_URL_SEARCH_TITLE + movie_id response = requests.request("GET", url) if response.status_code != 200: print(response.status_code, 'cannot find movie for id:', movie_id) return False else: results = json.loads(response.text) log_results = [] result = [movie_id] + [results.get(key) for key in keys if results.get(key) 
!= ''] catch.append(tuple(result)) log_results.append(result) # Logging total_added = total_added + 1 print('appending', log_results, total_added) # + #WEBSCRAPING: Functions scraping IMDB via beautiful soup #Letting sites know we are a browser def iAmBrowser(url): headers = requests.utils.default_headers() headers.update({'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',}) page = requests.get(url, headers=headers) soup = BeautifulSoup(page.content ,'lxml') return soup #making dictionary to create dataframe with metascore, metacritic, maturity rating, imdb rating, num of imdb votes def making_list(movie_id, catch): ''' Parameters: movie_id: <str> imdb movie id catch:<list or tuple> empty list to catch result Returns: <list> catch ''' metascore = retrieveMetascore(movie_id)[0] metacritic = retrieveMetascore(movie_id)[1] mrating = retrieveMaturityRating(movie_id) imdbScore = retrieveImdbrating(movie_id)[0] imdbVotes = retrieveImdbrating(movie_id)[1] result = (movie_id, metascore, metacritic, mrating, imdbScore, imdbVotes) catch.append(result) return catch #retrieving metascore for each movie def retrieveMetascore(movie_id): ''' Parameters: movie_id: <str> string to get url for movie Returns: metascore:<str> metascore of movie and number of metacritics ''' url = 'https://www.imdb.com/title/' + movie_id + '/criticreviews?ref_=tt_ov_rt' soup = iAmBrowser(url) metascore = soup.find('span', {'itemprop': 'ratingValue'}) if metascore: metascore = metascore.text.strip() #print('appending metascore:' + metascore) else: metascore = 'None' #print('Not found') metacritics = soup.find('span', {'itemprop': 'ratingCount'}) if metacritics: metacritics = metacritics.text.strip() #print('appending metacritic:' + metacritics) else: metacritics = 'None' #print('Not found') return metascore, metacritics #retrieving maturity rating for each movie def retrieveMaturityRating(movie_id): ''' Parameters: movie_id:<str> movie id Returns: <str> 
maturity rating ''' url = 'https://www.imdb.com/title/' + movie_id + '/parentalguide?ref_=tt_stry_pg' soup = iAmBrowser(url) MaturityRating = soup.find(id='mpaa-rating') if MaturityRating: MaturityRating = MaturityRating.find_all("td")[1].string.split()[:2] #print('appending Mrating') else: MaturityRating = 'None' #print('Not found') return listToString(MaturityRating) #retrieving Imdb rating and number of votes for each movie def retrieveImdbrating(movie_id): ''' Parameters: movie_id:<str> movie id Returns: <str> imdb score and number of imdb voters ''' url = "https://www.imdb.com/title/" + movie_id + "/ratings" soup = iAmBrowser(url) numVotes = soup.find('div',{'class': 'allText'}) if numVotes: numVotes = numVotes.text.strip() #print('appending numVotes') else: numVotes = 'None' #print('Not found') rating = soup.find('div',{'class': 'allText'}) if rating: rating = rating.text.strip() #print('appending rating') else: rating = 'None' #print('Not found') return numVotes.split()[0], listToString(rating.split()[-3:]) def listToString(s): str1 = "" for item in s: str1 += ' ' + item return str1 # - #retrieving imdb movie id for each movie from top 100 of each year def top_100(url, catch): ''' Parameters: url: <url> url to request catch: <list or tuple> empty list or tuple to catch results Returns: appends movie ids to catch ''' soup = iAmBrowser(url) Movie = soup.find('div', attrs = {'class': 'lister list detail sub-list'}) Movie = Movie.find('div', attrs = {'class' : 'lister-list'}) items = Movie.find_all('div', class_= 'lister-item mode-detail') for item in items: movie_id = item.find('div', attrs={'class': 'lister-item-image ribbonize'}) movie_id = str(item) for strings in str(movie_id).split(): if 'data-tconst' in strings: movie_id = strings.split('=') movie_id = movie_id[1].split('>') movie_id = movie_id[0].strip('\"') catch.append(movie_id) #catch = list(dict.fromkeys(catch)) #return catch # + #WEBSCRAPING: functions scraping IMDB via beautiful soup #function 
which retrieves votes def retrieveVotes(movie_id): ''' Parameters: movie_id:<str> movie id Returns: <list> list of number of votes ''' url = "https://www.imdb.com/title/" + movie_id + "/ratings" soup = iAmBrowser(url) table = soup.find('table') if not table: votes_list.append((movie_id , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)) result =[] rows = table.find_all('tr') for tr in rows: votes = tr.find('div',{ 'class':'leftAligned'}).text.strip() if votes != 'Votes': votes = int(votes.replace(',', '')) result.append(votes) print('appending Votes') reverse_votes = result[::-1] return reverse_votes #function which makes the tuple of votes we've found def makeVotesTuple(movie_id, title, catch): ''' Parameters: movie_id:<str> movie id title:<str> title of movie catch:<list> empty list to catch results Returns: appends results of retrieveVotes() to list catch ''' catch.append((movie_id, title) + tuple([i for i in retrieveVotes(movie_id)])) # - ''' Procedure (A): 1. We will first scrape the movie ids of 100 movies sorted by popularity for each year for the past ten years 2. We then scrape using API: directors, genres, titles, boxoffice, year 3. We then scrape using beautiful soup: metascore, imdb-score, maturity rating 4. 
We then scrap using beautiful soup: imdb vote chart from 1-10 ''' # + #WEBSCRAPING: we are going to scrape the movie ids (see procedure(A) 1) #top 100 movies sorted by popularity for each year for past ten years url_2019 = 'https://www.imdb.com/list/ls041214362/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2018 = 'https://www.imdb.com/list/ls047677021/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2017 = 'https://www.imdb.com/list/ls023426386/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2016 = 'https://www.imdb.com/list/ls063924870/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2015 = 'https://www.imdb.com/list/ls073386152/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2014 = 'https://www.imdb.com/list/ls058177122/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2013 = 'https://www.imdb.com/list/ls053040009/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2012 = 'https://www.imdb.com/list/ls006206951/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2011 = 'https://www.imdb.com/list/ls000463584/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' #url_2010 = 'https://www.imdb.com/list/ls009654807/?sort=moviemeter,asc&st_dt=&mode=detail&page=1' top_100_2019 = [] #top_100_2018 = [] #top_100_2017 = [] #top_100_2016 = [] #top_100_2015 = [] #top_100_2014 = [] #top_100_2013 = [] #top_100_2012 = [] #top_100_2011 = [] #top_100_2010 = [] top_100(url_2019, top_100_2019) #top_100(url_2018, top_100_2018) #top_100(url_2017, top_100_2017) #top_100(url_2016, top_100_2016) #top_100(url_2015, top_100_2015) #top_100(url_2014, top_100_2014) #top_100(url_2013, top_100_2013) #top_100(url_2012, top_100_2012) #top_100(url_2011, top_100_2011) #top_100(url_2010, top_100_2010) # - #(*) see cell 1 at the end #WEBSCRAPING: we will use the combined ids obtained to retrieve data (see procedure(A) 2) #top_100_10yrs = top_100_2019+top_100_2018+top_100_2017+top_100_2016+top_100_2015+top_100_2014+top_100_2013+top_100_2012+top_100_2011+top_100_2010 top100_10yrs 
= [] keys = ['title', 'genres', 'directors', 'year', 'boxOffice'] for movie_id in list(set(top_100_2019))[:10]: fetchMovieDataFromId(movie_id, keys, top100_10yrs) # + #top100_10yrs # - #(*) see cell 2 at the end #WEBSCRAPING: we will use the combined ids obtained to retrieve data (see procedure(A) 3) top100_10yrs2 = [] for movie_id in list(set(top_100_2019))[:10]: making_list(movie_id, top100_10yrs2) # + #top100_10yrs2 # - import pandas as pd # + #(*) see cell 1 at the end #MAKING DATAFRAME: Before proceeding to procedure 4 we will make a dataframe called all.csv (see prcedure(B) 1) #separate boxoffice which is a dictionary with the rest of data CONFIG = {'MOVIE_INFO_COLUMN_NAMES' : ['ids', 'titles', 'genres', 'directors', 'year'], 'MOVIE_RATING_COLUMN_NAMES' : ['ids', 'metascore', 'metacritics', 'mrating', 'imdbvotes','imdbScore'], 'VOTES_COLUMN_NAMES' : ['ids', 'title','one', 'two', 'three','four', 'five', 'six', 'seven', 'eight', 'nine', 'ten'] } boxOffice = [x[5] for x in top100_10yrs] listToDf = [(x[0],x[1],x[2],x[3], x[4]) for x in top100_10yrs] #make each into a dataframe listToDf_df = pd.DataFrame(listToDf, columns = CONFIG.get('MOVIE_INFO_COLUMN_NAMES')) boxoffice_df = pd.DataFrame(boxOffice) #concatenate the two dataframes movie_info_df = pd.concat([listToDf_df, boxoffice_df], axis =1, join = 'inner') #make the dataframe of the retrieved ratings movie_ratings_df = pd.DataFrame(top100_10yrs2, columns = CONFIG.get('MOVIE_RATING_COLUMN_NAMES')) #merge the two dataframes on ids all_df = movie_info_df.merge(movie_ratings_df, on = 'ids') #save this dataframe as all.csv #all_df.to_csv(r'<insert path here>/all.csv', mode='w', header=True, index=False) # - all_df #WEBSCRAPING: we will use the combined ids obtained to retrieve data (see procedure(A) 4) votes_list = [] list_movie_id = all_df['ids'] list_titles = all_df['titles'] for movie_id, title in zip(list_movie_id, list_titles): makeVotesTuple(movie_id, title, votes_list) # + #votes_list # - ''' Procedure(B): 
1. We make a dataframe called all_df and save as all.csv which includes data from procedure(A)1,2,3 2. We make a dataframe called votes_df and save as votes.csv which includes data from procedure(A)4 ''' # + #MAKING DATAFRAME: making dataframe votes_df and saving as votes.csv (see procedure(B)2) votes_df = pd.DataFrame(votes_list, columns = CONFIG.get('VOTES_COLUMN_NAMES')) #save dataframe #votes_df.to_csv(r'<insert path here>/votes.csv', mode='w', header=True, index=False) # - votes_df # ## Question 1: # #### What is the 'best' genres and maturity rating pair? ''' Procedure(C): We want to clean data according to how we want to answer our first question 1. a)We organize to see if there are any genres which trend by counting genre for each year. b)Visualize using a bar graph for each year see if there are any trends in genre. c)Save image as All_graphs. d) this will determine which combination will be a good combination for genres 2. a)We calculate average profit for each genre and maturity rating pair. b)Visualize using bar graph for profit vs genre,maturity rating pair. c)Save image as Genre_profit. 3. a)We calculate average metascore and imdb score for each genre and maturity rating pair. b)Visualize using bargraph for score vs genre,maturity rating pair. c)Save image as Genre_score. 4. a)We calculate average skewness for each genre and maturty rating pair. b)Visualize using bargraph for skewness vs genre,maturity rating pair. c)Save image as Genre_skew We take the average since there are no major outliers from observation. 
''' # + #ORGANIZING DATA: function which counts genre per year (see procedure(C)1) genre_dict = {'Action':0, 'Adult':0, 'Adventure':0, 'Animation':0,'Biography':0, 'Comedy':0, 'Crime':0, 'Documentary':0, 'Drama':0, 'Family':0, 'Fantasy':0, 'Film_Noir':0, 'Game_Show':0, 'History':0, 'Horror':0, 'Musical':0, 'Music':0, 'Mystery':0,'News':0, 'Reality_TV':0, 'Romance':0, 'Sci_Fi':0, 'Short':0, 'Sport':0, 'Talk_Show':0,'Thriller':0, 'War':0, 'Western':0} def counting_genre(value): for key in genre_dict.keys(): if key in value: genre_dict[key] = genre_dict[key]+ 1 return True # - #ORGANIZING DATA: we separate into each year (see procedure(C)1) all2019_df = all_df[all_df['year']== '2019'] #all2018_df = all_df[all_df['year']== '2018'] #all2017_df = all_df[all_df['year']== '2017'] #all2016_df = all_df[all_df['year']== '2016'] #all2015_df = all_df[all_df['year']== '2015'] #all2014_df = all_df[all_df['year']== '2014'] #all2013_df = all_df[all_df['year']== '2013'] #all2012_df = all_df[all_df['year']== '2012'] #all2011_df = all_df[all_df['year']== '2011'] #all2010_df = all_df[all_df['year']== '2010'] # + #all2019_df # - #ORGANIZING DATA: counting genre for each year (see procedure(C)1) #NOTE: we will have to run this for each year separately and produce graphs separately all2019_df['genres'].apply(lambda x: counting_genre(x)) #IMPORT: for visualization import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline import numpy as np import sys from PIL import Image #VISUALIZATION: bar graph shows which is most reoccuring genre (see procedure(C)1) #NOTE: we will have to run this for each year separately Look at visualizations folder objects = genre_dict.keys() y_pos = np.arange(len(objects)) performance = genre_dict.values() plt.rcParams.update({'font.size': 13}) plt.subplots(figsize=(12,9)) plt.barh(y_pos, performance, align = 'center', alpha = 1.0) plt.yticks(y_pos, objects) plt.ylabel('Genres') plt.xlabel('Count') plt.title('Genre Popularity in 2019') 
#plt.savefig('Genre_2019.png', bbox_inches = 'tight') plt.show() # + #the next step please skip: there are ways to graph side by side via matplotlib rather than to concatenate images # + #DO NOT RUN #VISUALIZATION: concatenating all images (see procedure(C)1) #the functions below were borrowed from https://note.nkmk.me/en/python-pillow-concat-images/ im1 = Image.open('MoreData/2010_12.jpg') im2 = Image.open('MoreData/2013_15.jpg') im3 = Image.open('MoreData/2016_18.jpg') im4 = Image.open('MoreData/Genre_2019.png') im_list = [im1, im2, im3, im4] #function to concatenate multiple images horizontally def get_concat_h_multi_resize(im_list, resample=Image.BICUBIC): min_height = min(im.height for im in im_list) im_list_resize = [im.resize((int(im.width * min_height / im.height), min_height),resample=resample) for im in im_list] total_width = sum(im.width for im in im_list_resize) dst = Image.new('RGB', (total_width, min_height)) pos_x = 0 for im in im_list_resize: dst.paste(im, (pos_x, 0)) pos_x += im.width return dst #function to concatenate multiple images vertically def get_concat_v_multi_resize(im_list, resample=Image.BICUBIC): min_width = min(im.width for im in im_list) im_list_resize = [im.resize((min_width, int(im.height * min_width / im.width)),resample=resample) for im in im_list] total_height = sum(im.height for im in im_list_resize) dst = Image.new('RGB', (min_width, total_height)) pos_y = 0 for im in im_list_resize: dst.paste(im, (0, pos_y)) pos_y += im.height return dst #saving image get_concat_v_multi_resize(im_list).save('MoreData/All_graphs.jpg') # + #DATACLEANING all_df['budget'] = all_df['budget'].astype(str) all_df['budget'] = all_df['budget'].apply(lambda x: x.replace('$', '')) all_df['budget'] = all_df['budget'].apply(lambda x: x.replace(',', '')) all_df['budget'] = all_df['budget'].apply(lambda x: x.replace('(estimated)', '')) all_df['grossUSA'] = all_df['grossUSA'].astype(str) all_df['grossUSA'] = all_df['grossUSA'].apply(lambda x: x.replace('$', 
'')) all_df['grossUSA'] = all_df['grossUSA'].apply(lambda x: x.replace(',', '')) all_df['cumulativeWorldwideGross'] = all_df['cumulativeWorldwideGross'].astype(str) all_df['cumulativeWorldwideGross'] = all_df['cumulativeWorldwideGross'].apply(lambda x: x.replace('$', '')) all_df['cumulativeWorldwideGross'] = all_df['cumulativeWorldwideGross'].apply(lambda x: x.replace(',', '')) # + #DATA CLEANING def convertUSD(value): if 'EUR' in value: return float(value.replace('EUR',''))*(1.09) elif 'NOK' in value: return float(value.replace('NOK',''))*(0.096) elif 'JPY' in value: return float(value.replace('JPY',''))*(0.0093) elif 'DKK' in value: return float(value.replace('DKK',''))*(0.15) elif 'GBP' in value: return float(value.replace('GBP',''))*(1.25) elif 'INR' in value: return float(value.replace('INR',''))*(0.013) elif 'KRW' in value: return float(value.replace('KRW',''))*(0.00082) elif 'CAD' in value: return float(value.replace('CAD',''))*(0.71) elif 'AUD' in value: return float(value.replace('AUD',''))*(0.64) else: return value all_df['budget'] = all_df['budget'].apply(lambda x: convertUSD(x)) # - #DATA CLEANING all_df = all_df[all_df['budget'] != ''] #DATA CLEANING def fillWithNaN(value): if value == '': return np.nan else: return value all_df['grossUSA'] = all_df['grossUSA'].apply(lambda x: fillWithNaN(x)) all_df['cumulativeWorldwideGross'] = all_df['cumulativeWorldwideGross'].apply(lambda x: fillWithNaN(x)) all_df #DATA CLEANING all_df['grossUSA'] = all_df['grossUSA'].astype(float) all_df['cumulativeWorldwideGross'] = all_df['cumulativeWorldwideGross'].astype(float) all_df['budget'] = all_df['budget'].astype(float) #DATA CLEANING sns.scatterplot(x = 'grossUSA', y = 'cumulativeWorldwideGross', data= all_df) #there exist a correlation #DATA CLEANING all_df['cumulativeWorldwideGross'] = all_df['cumulativeWorldwideGross'].dropna() no_null_df = all_df[all_df['grossUSA'] != 0 ] no_null_df['ratio'] = no_null_df['grossUSA']/no_null_df['cumulativeWorldwideGross'] mean = 
no_null_df['ratio'].mean() #DATA CLEANING all_df['grossUSA'].fillna(all_df['cumulativeWorldwideGross']* mean, inplace = True) #ORGANIZING DATA all_df['profit'] = (0.4*(all_df['cumulativeWorldwideGross'] - all_df['grossUSA']) + 0.6*(all_df['grossUSA']))-all_df['budget'] all_df['cumulativeWorldwideGross'].fillna(0) all_df = all_df[all_df['cumulativeWorldwideGross']!= 0] # + #DATA CLEANING #check if its normal distribution: #pearson formula using median def pearsonSkew(median, mean, std): p = 3*(mean-median)/std return p dom_med = no_null_df['grossUSA'].median() dom_mean = no_null_df['grossUSA'].mean() dom_std = no_null_df['grossUSA'].std() pearsonSkew(dom_med, dom_mean, dom_std) # - #MAKING DATAFRAME: making data frame to separate genres (see procedure(C)2) df_list=[] for index, rows in all_df.iterrows(): my_list =[rows[i] for i in all_df.columns] if len(my_list[2].split(',')) > 1: for item in my_list[2].split(','): df_list.append([*my_list[:2],item,*my_list[3:]]) #MAKING DATAFRAME: making dataframe separted genres (see procedure(C)2) singleGenre_df = pd.DataFrame(df_list, columns = all_df.columns) #singleGenre_df # we drop information we dont need singleGenre_df = singleGenre_df.drop(['openingWeekendUSA'], axis=1) singleGenre_df #DO NOT RUN #CLEANING DATAFRAME: We only want movies with a maturty rating (see procedure(C)2) singleGenre_df['mrating'] = singleGenre_df['mrating'].fillna(' N o n e') singleGenre_df = singleGenre_df[singleGenre_df['mrating'] != ' N o n e'] #MAKING DATAFRAME: groupby genre and maturity rating mean (see procedure(C)2) singleGenre_df.groupby(['genres', 'mrating']).mean() # + #VISUALIZATION: Genre and Maturity-rating vs Profit (see procedure(C)2) df = singleGenre_df.groupby(['genres', 'mrating'])['profit'].mean() ax = df.plot(kind='bar', figsize=(25,13), color="indigo", fontsize=10); ax.set_alpha(0.8) ax.set_title("genre and Maturity-rating vs Profit", fontsize=22) ax.set_ylabel("genre and m-rating", fontsize=15); #saving image 
#plt.savefig('Genre_Profit.png', bbox_inches = 'tight')
plt.show()
# -

#CLEANING DATAFRAME: We want imdbScore and metascore as floats (see procedure(C)3)
singleGenre_df['imdbScore'] = singleGenre_df['imdbScore'].apply(lambda x: x[:4])
singleGenre_df['imdbScore'] = singleGenre_df['imdbScore'].astype(float)

# +
#we want imdbScore to be comparable to metascore, so we multiply by 10 so both scores are out of 100
singleGenre_df['imdbScore'] = singleGenre_df['imdbScore'].apply(lambda x: x * 10)
singleGenre_df['metascore'] = singleGenre_df['metascore'].fillna(0)
#We only look at those values which are not 0
singleGenre_df = singleGenre_df[singleGenre_df['metascore'] != 0]
# -

singleGenre_df['metascore'] = singleGenre_df['metascore'].apply(lambda x: float(x))
singleGenre_df

#MAKING DATAFRAME: groupby genre and maturity rating via mean (see procedure(C)3)
singleGenre_df.groupby(['genres', 'mrating']).mean()

# +
#VISUALIZATION: Genre and Maturity-rating vs Score (see procedure(C)3)
x = singleGenre_df[['genres', 'mrating', 'metascore', 'imdbScore']]
y = x.set_index(['genres', 'mrating'])
z = y.groupby(['genres', 'mrating']).mean()
ax = z.plot(kind='bar', figsize=(25,8), stacked=True)
ax.set_title("Genre and Maturity-rating vs Scores", fontsize=30)
ax.set_ylabel("Scores", fontsize=20)
ax.set_xlabel("Genre and Maturity-rating", fontsize=15)
#save image
plt.savefig('Genre_score.png', bbox_inches = 'tight')

# +
#ORGANIZING DATA: functions to calculate skewness for each movie (see procedure(C)4)
#calculating Bowley skewness (Yule's coefficient)
def bowleySkew(data, freq):
    '''
    Parameters:
        data:<list> list of data
        freq:<list> frequency of data
    Returns:
        <float> Bowley's skewness
    '''
    Q1 = calculateValue(25, data, freq)
    Q2 = calculateValue(50, data, freq)
    Q3 = calculateValue(75, data, freq)
    Yule_coeff = (Q3 + Q1 - 2*Q2)/(Q3 - Q1)
    return round(Yule_coeff, 4)

#calculating value num
def calculateValue(percentile, data, freq):
    '''
    Parameters:
        percentile: <int> percentile
data:<list> data freq:<list> frequency of data Returns: <float> value number ''' cum_freq = cumulativeFrequency(freq) total = cum_freq[-1] value_num = (percentile/100)*(total+1) if type(value_num) == float: value_num = round(value_num) if value_num <= cum_freq[0]: return data[0] else: for x in range(0,len(cum_freq)): if value_num >= cum_freq[x-1] and value_num <= cum_freq[x]: return data[x] #calculating cumulative freq def cumulativeFrequency(freq): ''' Parameters: freq:<list> frequency of data Returns: <list> cumulative frequency ''' cum_freq = [] cum_freq.append(freq[0]) for x in range(1, len(freq)): cum_freq.append(cum_freq[x-1] + freq[x]) return cum_freq # + #MAKING DATAFRAME: adding skewness to votes_df (see procedure(C)4) skew_list =[] votes = [1,2,3,4,5,6,7,8,9,10] for index, rows in votes_df.iterrows(): my_list =[rows.one, rows.two, rows.three, rows.four, rows.five, rows.six, rows.seven, rows.eight, rows.nine, rows.ten] skew_list.append(bowleySkew(votes, my_list)) votes_df['skewness']= skew_list # - votes_df = all_df.merge(votes_df, on = 'ids') votes_df # + #MAKING DATAFRAME: function separating into genres again (see procedure(C)4) df_list2=[] for index, rows in votes_df.iterrows(): my_list =[rows[i] for i in votes_df.columns] if len(my_list[2].split(',')) > 1: for item in my_list[2].split(','): df_list2.append([*my_list[:2],item,*my_list[3:]]) skewd_df = pd.DataFrame(df_list2, columns = votes_df.columns) # - skewd_df #CLEANING DATAFRAME: cleaning mrating same as singleGenres (see procedure(C)4) skewd_df['mrating'] = skewd_df['mrating'].fillna(' N o n e') skewd_df.mrating.unique() skewd_df = skewd_df[skewd_df['mrating'] != ' N o n e'] #VISUALIZATION: Genre Maturity rating vs skew (see procedure(C)4) df = skewd_df.groupby(['genres','mrating'])['skewness'].mean() ax = df.plot(kind='bar', figsize=(25,8), color="indigo", fontsize=10); ax.set_alpha(0.8) ax.set_title("Genre vs skewness", fontsize=30) ax.set_ylabel("Skewness", fontsize=20) ax.set_xlabel("Genre", 
fontsize=15)
plt.savefig('Genre_skew.png', bbox_inches = 'tight')
plt.show()

# ## ANSWER 1: ((Sci-Fi, Action), PG-13), ((Crime, Action), PG), ((Mystery, Action), PG)

'''
Answer: Via the visualizations we get the following information:
1. Throughout the years Action appears the most, whereas Horror and Drama seem to lose popularity.
2. We look at the top 5-6 movies from each plot: genre and maturity rating vs (1: profit, 2: skewness, 3: scores).
   If a pair appears in two or three of these plots, we take that as a successful genre and maturity-rating pair.
   What we found: (Sci-Fi, PG-13), (Crime, PG), (Mystery, PG), (Documentary, Rated R).
Note: we want to make a movie that can profit off of product placement and possible merchandising,
so we discard (Documentary, Rated R).
Since Action was a recurring genre with the highest count throughout the ten years, the pairs
((Sci-Fi, Action), PG-13), ((Crime, Action), PG) and ((Mystery, Action), PG) are most likely good combinations of genres.
'''

# ## Question2:
# ### What are the top directors for such genre/maturity-rating pairs?

'''
Procedure(D): The procedures in D differ little from the procedures in C.
1. We identify the directors of each genre/maturity-rating pair.
2. Plot directors vs scores (the procedure is similar to that of Procedure (C)1); save as director_score.png
3.
Plot directors vs profit (the procedure is similar to that of Procedure (C)2); save as director_profit.png
'''

# +
#DO NOT RUN
#MAKING DATAFRAME: See procedure(D) 1
MysteryPG_df = singleGenre_df[(singleGenre_df['genres'] == ' Mystery')&(singleGenre_df['mrating'] == ' Rated PG')]
SciFiPG13_df = singleGenre_df[(singleGenre_df['genres'] == ' Sci-Fi')&(singleGenre_df['mrating'] == ' Rated PG-13')]
CrimePG_df = singleGenre_df[(singleGenre_df['genres'] == ' Crime')&(singleGenre_df['mrating'] == ' Rated PG')]
pieces = (MysteryPG_df, SciFiPG13_df, CrimePG_df)
df_final = pd.concat(pieces, ignore_index = True)

# +
#DO NOT RUN
#VISUALIZATION: directors vs scores
x = df_final[['directors', 'metascore', 'imdbScore']]
y = x.set_index(['directors'])
ax = y.plot(kind='bar', figsize=(25,8), stacked=True)
ax.set_title("Directors vs Scores", fontsize=30)
ax.set_ylabel("Scores", fontsize=20)
ax.set_xlabel("Directors", fontsize=15)
#save image
plt.savefig('director_score.png', bbox_inches = 'tight')

# +
#DO NOT RUN
#VISUALIZATION: directors vs profit
x = df_final[['directors', 'profit']]
y = x.set_index(['directors'])
ax = y.plot(kind='bar', figsize=(25,8), color = 'green')
ax.set_title("Directors vs profit", fontsize=30)
ax.set_ylabel("profit", fontsize=20)
ax.set_xlabel("Directors", fontsize=15)
#save image
plt.savefig('director_profit.png', bbox_inches = 'tight')
# -

# ## ANSWER 2: <NAME> and <NAME> ((Sci-fi, Action), PG-13)

'''
Answer: From our findings it seems <NAME> and <NAME> are good candidates for director.
They specialize in Sci-Fi PG-13.
Note: Most movies, if not all, contain multiple genres; any movie which contains Sci-Fi as one of its
genres counts as a Sci-Fi movie. The movie which awarded <NAME> and <NAME> a successful Sci-Fi PG-13
movie was Avengers: Endgame, so one should be careful about what this genre/maturity pairing really means.
'''

# ## QUESTION 3:
# ### Is there a correlation between scores and profit?

'''
Procedure(E):
1.
Since the IMDb score, metascore and profit ranges are very different, we first normalize using z-score normalization.
2. Visualize the correlation between profit and scores using a scatter plot.
'''

singleGenre_df = singleGenre_df.drop_duplicates('ids', keep = 'first')

# +
#ORGANIZING DATA: function calculating the z-score (see Procedure (E) 1)
#normalize using z-score normalization
metascore_mean = round(singleGenre_df['metascore'].mean(), 3)
metascore_std = round(singleGenre_df['metascore'].std(), 3)
imdb_mean = round(singleGenre_df['imdbScore'].mean(), 3)
imdb_std = round(singleGenre_df['imdbScore'].std(), 3)
profit_mean = round(singleGenre_df['profit'].mean(), 3)
profit_std = round(singleGenre_df['profit'].std(), 3)

def zScoreNormalization(value, mean, std):
    z = (value - mean)/std
    return z

# +
#metascore_mean
#metascore_std
#imdb_mean
#imdb_std  # PROBLEM: check that this is not 0, which would break the z-score division
#profit_mean
#profit_std
# -

singleGenre_df

singleGenre_df['metascore'] = singleGenre_df['metascore'].apply(lambda x: zScoreNormalization(x, metascore_mean, metascore_std))
singleGenre_df['imdbScore'] = singleGenre_df['imdbScore'].apply(lambda x: zScoreNormalization(x, imdb_mean, imdb_std))
singleGenre_df['profit'] = singleGenre_df['profit'].apply(lambda x: zScoreNormalization(x, profit_mean, profit_std))

# +
#DO NOT RUN
#VISUALIZATION: scatter plot, score vs profit (see Procedure (E) 2)
fig, ax = plt.subplots()
sns.scatterplot(x = 'imdbScore', y = 'profit', data= singleGenre_df, ax=ax)
sns.scatterplot(x = 'metascore', y = 'profit', data= singleGenre_df, ax=ax)
ax.legend(labels = ['metascore', 'imdbScore'])
ax.set_xlabel("Scores", fontsize=15)
#save image
#plt.savefig('score_profit.png', bbox_inches = 'tight')

# +
#Below are some extra visualizations one can make
# -

singleGenre_df.corr()

#heatmap
sns.heatmap(singleGenre_df.corr(), cmap='magma', linecolor='white', linewidths=1)
#plt.savefig('heatmap.png', bbox_inches = 'tight')

#heat map
f, ax = plt.subplots(figsize=(10,10))
sns.set(font_scale=1.35)
cmap = sns.diverging_palette(1000, 0,
as_cmap=True) matrix = np.triu(singleGenre_df.corr()) ax.set_title('correlations', y=1.2, fontsize=16, ha='center') sns.heatmap(singleGenre_df.corr(), cmap='magma', vmax=1, center=0, square=True, linewidths=.2, ax=ax, annot=True, mask=matrix, cbar_kws={'label': 'Colorbar'}) plt.savefig('heatmap2.png', bbox_inches = 'tight') #cluster map sns.clustermap(singleGenre_df.corr(),cmap='coolwarm',standard_scale=1) #plt.savefig('clustermap.png', bbox_inches = 'tight') #regression plot: helpful when trying to approximate correlation sns.lmplot(x='imdbScore',y='profit',data=singleGenre_df) # + #We exclude showing violin plot because this example data set is so small it makes little sense #fig, ax = plt.subplots(figsize=(30,8)) #sns.violinplot(x="genres", y="profit", data=singleGenre_df,palette='rainbow', ax=ax) #plt.savefig('violinplot.png', bbox_inches = 'tight') # - #box and whisker plot: good when looking for outliers fig, ax = plt.subplots(figsize=(40,10)) plot = sns.boxplot(x="genres", y="profit", data=singleGenre_df,palette='rainbow', ax=ax) plot.set(xlabel = "genres", ylabel='profit', title='genre v. 
profit');
#plt.savefig('boxplot.png', bbox_inches = 'tight')

# +
#RIDGEPLOT
sns.set(style="white", rc={"axes.facecolor": (0, 0, 0, 0)})

# Create the data
df = singleGenre_df

# Initialize the FacetGrid object
pal = sns.cubehelix_palette(10, rot=-.25, light=.7)
g = sns.FacetGrid(df, row='genres', hue="genres", aspect=15, height=.5, palette=pal)

# Draw the densities in a few steps
g.map(sns.kdeplot, "metascore", clip_on=False, shade=True, alpha=1, lw=1.5, bw=.2)
g.map(sns.kdeplot, "metascore", clip_on=False, color="w", lw=2, bw=.2)
g.map(plt.axhline, y=0, lw=2, clip_on=False)

def label(x, color, label):
    ax = plt.gca()
    ax.text(0, .2, label, fontweight="bold", color=color,
            ha="left", va="center", transform=ax.transAxes)

g.map(label, "metascore")
g.fig.subplots_adjust(hspace=-.25)

# Remove axes details that don't play well with overlap
g.set_titles('')
g.set(yticks=[])
g.despine(bottom=True, left=True)
plt.savefig('Ridgeplot.png', bbox_inches = 'tight')
# -

# ## ANSWER 3: Not exactly

'''
Answer: There is not enough evidence to conclude that better scores imply better profit.
For example, we saw that Crime/PG movies give good profit but poor votes.
'''

# ## MYSQL

'''
We did not make use of databases, but if one wants to store the data scraped off the web, MySQL is an option.
Note: when the data being scraped is very large, it is more efficient to store it in a database and query it
from there than to keep everything in memory.
'''

# +
#IMPORT mysql.connector and connect
import mysql.connector

mydb = mysql.connector.connect(
    host = 'localhost',
    user = 'root',
    passwd = <<PASSWORD>>,
    database = 'moviedb',
    autocommit = True
)
mycursor = mydb.cursor()
#the database must exist before connecting with database='moviedb';
#run this once on a connection opened without the database argument:
#mycursor.execute('CREATE DATABASE moviedb')
# -

#Create table: movieInfo
mycursor.execute("""CREATE TABLE movieInfo (ids VARCHAR(255), title VARCHAR(255), genres VARCHAR(255), directors VARCHAR(255), year INTEGER(10))""")

#add data to table: ids, title, genres, directors, years
def insertToMovieInfo(data_list):
    '''
    Parameters:
        data_list:<list> list of tuples to append to table
    Returns:
        appends to table
    '''
    for item in data_list:
        #INSERT INTO tablename (columns) VALUES (%s ...)
        insert = "INSERT INTO movieInfo (ids, title, genres, directors, year) VALUES (%s, %s, %s, %s, %s)"
        mycursor.execute(insert, item)
    #mydb.commit()

# +
#(*) cell 1: the list of movies, top100_10yrs
#we take tuples with ids, title, genres, directors, year
boxOffice = [x[5] for x in top100_10yrs]
listToDf = [(x[0], x[1], x[2], x[3], x[4]) for x in top100_10yrs]
insertToMovieInfo(listToDf)

# +
#how to retrieve
mycursor.execute("SELECT * FROM movieInfo")
myresult = mycursor.fetchall()
for x in myresult:
    print(x)
# -

#create separate table for boxoffice
mycursor.execute("""CREATE TABLE movieBoxoffice (ids VARCHAR(255), budget VARCHAR(255), opening_gross VARCHAR(255), USAgross VARCHAR(255), total_gross VARCHAR(255))""")

#turn boxOffice into tuples
boxOffice_tuple = []
for item in boxOffice:
    ids = item[0]
    box_office = item[1]
    data = (ids, box_office.get('budget'), box_office.get('openingWeekendUSA'), box_office.get('grossUSA'), box_office.get('cumulativeWorldwideGross'))
    boxOffice_tuple.append(data)

#insert function
def insertToMovieBoxOffice(data_list):
    '''
    Parameters:
        data_list:<list> list of tuples to append to table
    Returns:
        appends to table
    '''
    for item in data_list:
        insert = "INSERT INTO movieBoxoffice (ids, budget, opening_gross, USAgross, total_gross) VALUES (%s, %s, %s, %s, %s)"
        mycursor.execute(insert, item)

#insert
insertToMovieBoxOffice(boxOffice_tuple)

#Create table for scores
mycursor.execute("""CREATE TABLE movieScores (ids VARCHAR(255), metascore VARCHAR(255), metacritic VARCHAR(255), mrating VARCHAR(255), imdbVotes VARCHAR(255), imdbScore VARCHAR(255))""")

#insert function
def insertToMovieScore(data_list):
    '''
    Parameters:
        data_list:<list> list of tuples to append to table
    Returns:
        appends to table
    '''
    for item in data_list:
        insert = """INSERT INTO movieScores (ids, metascore, metacritic, mrating, imdbVotes, imdbScore) VALUES (%s,
%s, %s, %s, %s, %s)"""
        mycursor.execute(insert, item)

#insert
insertToMovieScore(top100_10yrs2)

#Create table for votes
mycursor.execute("""CREATE TABLE movieVotes (ids VARCHAR(255), title VARCHAR(255), one VARCHAR(255), two VARCHAR(255), three VARCHAR(255), four VARCHAR(255), five VARCHAR(255), six VARCHAR(255), seven VARCHAR(255), eight VARCHAR(255), nine VARCHAR(255), ten VARCHAR(255))""")

#(*) cell three
def insertToMovieVotes(data_list):
    '''
    Parameters:
        data_list:<list> list of tuples to append to table
    Returns:
        appends to table
    '''
    for item in data_list:
        insert = """INSERT INTO movieVotes (ids, title, one, two, three, four, five, six, seven, eight, nine, ten) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"""
        mycursor.execute(insert, item)

#How to merge:
#note: one could have made a "mega" table with ALL the information, which is better because no joining is necessary
mycursor.execute("""SELECT movieInfo.ids FROM movieInfo LEFT JOIN movieBoxoffice ON movieInfo.ids = movieBoxoffice.ids""")

# +
#Creating a dataframe
#When one has a large table, this is a nice way to select subtables from it
mycursor.execute("SELECT ids, title, genres FROM movieInfo")
myresult = mycursor.fetchall()
dataframe = []
for x in myresult:
    dataframe.append(x)
df = pd.DataFrame(dataframe)
# -

#don't forget to close when you're done
mydb.close()

# ## Sqlite3

# +
import sqlite3
conn = sqlite3.connect('MovieDB.db')  # create a new database
mycursor = conn.cursor()  # The database will be saved in the location where your notebook file is saved

#create table - MovieInfo
mycursor.execute('''CREATE TABLE MovieInfo4 ([ids] text, [title] text, [genres] text, [directors] text, [year] integer)''')

#create table - boxoffice
mycursor.execute('''CREATE TABLE movieBoxoffice4 ([generated_id] INTEGER PRIMARY KEY, [ids] text, [budget] text, [opening_gross] text, [USAgross] text, [total_gross] text)''')

#Create table - scores
mycursor.execute('''CREATE TABLE movieScores ([ids] text, [metascore] text, [metacritic] text, [mrating] text, [imdbVotes] text, [imdbScore]
text)''')

#Create table - Votes
mycursor.execute('''CREATE TABLE Votes ([generated_id] INTEGER PRIMARY KEY, [ids] text, [title] text, [one] integer, [two] integer, [three] integer, [four] integer, [five] integer, [six] integer, [seven] integer, [eight] integer, [nine] integer, [ten] integer)''')

conn.commit()

# +
#add data to table: ids, title, genres, directors, year
mycursor.executemany('INSERT INTO MovieInfo4 (ids, title, genres, directors, year) VALUES (?, ?, ?, ?, ?)', listToDf)
#one can do the same to insert info into all the other tables.

# join tables
mycursor.execute('''
SELECT MovieInfo4.ids
FROM MovieInfo4
LEFT JOIN movieBoxoffice4 ON MovieInfo4.ids = movieBoxoffice4.ids
''')

#getting info
mycursor.execute('''
SELECT DISTINCT *
FROM MovieInfo4
''')

#turning into a dataframe
df = pd.DataFrame(mycursor.fetchall(), columns=['ids', 'title', 'genres', 'directors', 'year'])
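The MySQL and sqlite3 cells above follow the same create / bulk-insert / fetch pattern. Here is a minimal, runnable sqlite3 sketch of that pattern end to end, using an in-memory database and hypothetical table, column, and sample values (they are illustrations, not the project's real data):

```python
# Minimal sqlite3 sketch of the storage pattern used above: create a table,
# bulk-insert rows with executemany, then read them back. The table name,
# columns, and rows are hypothetical; ":memory:" makes it safe to re-run.
import sqlite3

rows = [
    ("tt0001", "Movie A", "Sci-Fi", "Director A", 2019),
    ("tt0002", "Movie B", "Crime", "Director B", 2018),
]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE movie_info
               (ids TEXT, title TEXT, genres TEXT, directors TEXT, year INTEGER)""")
# Parameterized inserts: one '?' per column, never string formatting
cur.executemany(
    "INSERT INTO movie_info (ids, title, genres, directors, year) VALUES (?, ?, ?, ?, ?)",
    rows,
)
conn.commit()

cur.execute("SELECT ids, title FROM movie_info ORDER BY year")
result = cur.fetchall()
conn.close()
print(result)  # [('tt0002', 'Movie B'), ('tt0001', 'Movie A')]
```

`executemany` with `?` placeholders both avoids SQL injection and is faster than executing one `INSERT` per row in a Python loop.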
Module1ProjectRunExample.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from bokeh.models import ColumnDataSource from bokeh.plotting import figure from bokeh.io import output_notebook, show, push_notebook import param import paramnb import numpy as np output_notebook() # + class SineWave(param.Parameterized): offset = param.Number(default=0.0, bounds=(-5.0,5.0)) amplitude = param.Number(default=1.0, bounds=(-5.0,5.0)) phase = param.Number(default=0.0,bounds=(0.0,2*np.pi)) frequency = param.Number(default=1.0, bounds=(0.1, 5.1)) N = param.Integer(default=200, bounds=(0,None)) def update_sinewave2(self, **kw): print(self) x = np.linspace(0, 4*np.pi, self.N) y = self.amplitude*np.sin(self.frequency*x + self.phase) + self.offset self._source.data = dict(x=x, y=y) push_notebook(handle=self._plot_handle) def __init__(self, source=None, plot_handle=None, **kw): super().__init__(**kw) self._source = source self._plot_handle = plot_handle # + source = ColumnDataSource(data=dict(x= np.linspace(0, 4*np.pi, 200), y=np.linspace(-2.5,2.5, 200))) plot = figure(plot_height=400, plot_width=400, tools="crosshair,pan,reset,save,wheel_zoom", x_range=[0, 4*np.pi], y_range=[-2.5, 2.5]) r = plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6) #plot plot_handle = show(plot, notebook_handle=True) mysine = SineWave(name="MySine") def update_sinewave(self, **kw): x = np.linspace(0, 4*np.pi, self.N) y = self.amplitude*np.sin(self.frequency*x + self.phase) + self.offset source.data = dict(x=x, y=y) push_notebook(handle=plot_handle) paramnb.Widgets(mysine, callback=update_sinewave) # -
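Setting the Bokeh and paramnb plumbing aside, the widget callback above just recomputes a parameterized sine wave and pushes the new arrays to the plot. A NumPy-only sketch of that computation (the function name and defaults mirror the widget parameters, but are otherwise an illustration):

```python
# NumPy-only sketch of the math inside the widget callback above
# (no Bokeh/paramnb needed); parameter names mirror the SineWave class.
import numpy as np

def sine_wave(offset=0.0, amplitude=1.0, phase=0.0, frequency=1.0, N=200):
    """Return the (x, y) arrays the widget callback pushes to the plot."""
    x = np.linspace(0, 4 * np.pi, N)
    y = amplitude * np.sin(frequency * x + phase) + offset
    return x, y

x, y = sine_wave(offset=1.0, amplitude=2.0)
print(y.min(), y.max())  # bounded by offset - amplitude and offset + amplitude, i.e. [-1, 3]
```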
reviews/Jupyter_Widgets/param/paramNB_Widget_UI_changes_Bokeh_Plot_without_bokeh_server.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf from tensorflow.keras import * from tensorflow.keras.layers import * model = Sequential() model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5, 10))) model.add(Bidirectional(LSTM(10))) model.add(Dense(5)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
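As a rough intuition for what `Bidirectional` does here: it runs the wrapped layer once forward and once backward over the timesteps and, by default, concatenates the two outputs, so an LSTM with 10 units produces 20 output features per step. A NumPy sketch of that merge step, with a toy stand-in recurrence (not a real LSTM, and not Keras code):

```python
# Sketch of the bidirectional merge: run a recurrent layer forward and
# backward over the sequence, then concatenate the outputs per timestep.
# toy_rnn is a hypothetical stand-in for an LSTM, used only to show shapes.
import numpy as np

def toy_rnn(seq, units=10):
    # Stand-in recurrent layer: one output vector per timestep.
    steps, feats = seq.shape
    out = np.zeros((steps, units))
    state = np.zeros(units)
    for t in range(steps):
        state = np.tanh(seq[t, :units] + 0.5 * state)  # toy recurrence
        out[t] = state
    return out

seq = np.random.rand(5, 10)          # (timesteps, features), as in the model above
forward = toy_rnn(seq)               # processes t = 0..4
backward = toy_rnn(seq[::-1])[::-1]  # processes t = 4..0, then re-aligned
merged = np.concatenate([forward, backward], axis=-1)
print(merged.shape)  # (5, 20): with return_sequences=True, 10 units become 20 features
```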
Testing/BidirectionalTest.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# #### Write a NumPy program to test whether any of the elements of a given array is non-zero

import numpy as np
x = np.array([1, 0, 0, 0])
print("Original array:")
print(x)
print("Test if any of the elements of a given array is non-zero:")
print(np.any(x))
x = np.array([0, 0, 0, 0])
print("Original array:")
print(x)
print("Test if any of the elements of a given array is non-zero:")
print(np.any(x))

# #### Write a NumPy program to create an array of 10 zeros, 10 ones, 10 fives.

import numpy as np
array = np.zeros(10)
print("An array of 10 zeros:")
print(array)
array = np.ones(10)
print("An array of 10 ones:")
print(array)
array = np.ones(10)*5
print("An array of 10 fives:")
print(array)

# #### Write a NumPy program to create an array of all the even integers from 30 to 70.

import numpy as np
array = np.arange(30, 71, 2)
print("Array of all the even integers from 30 to 70")
print(array)

# #### Write a NumPy program to generate a random number between 0 and 1.

import numpy as np
rand_num = np.random.uniform(0, 1, 1)
print("Random number between 0 and 1:")
print(rand_num)

# #### Write a NumPy program to create a vector with values from 0 to 20 and change the sign of the numbers in the range from 9 to 15.

import numpy as np
x = np.arange(20)
print("Original vector:")
print(x)
print("After changing the sign of the numbers in the range from 9 to 15:")
x[(x >= 9) & (x <= 15)] *= -1
print(x)

# #### Write a NumPy program to create a vector of length 5 filled with arbitrary integers from 0 to 10.

import numpy as np
x = np.random.randint(0, 11, 5)
print("Vector of length 5 filled with arbitrary integers from 0 to 10:")
print(x)

# #### Write a NumPy program to create a 10x10 matrix, in which the elements on the borders will be equal to 1, and inside 0.
import numpy as np
x = np.ones((10, 10))
x[1:-1, 1:-1] = 0
print(x)

# #### Write a NumPy program to add a vector to each row of a given matrix.

import numpy as np
m = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 1, 0])
print("Original vector:")
print(v)
print("Original matrix:")
print(m)
result = np.empty_like(m)
for i in range(4):
    result[i, :] = m[i, :] + v
print("\nAfter adding the vector v to each row of the matrix m:")
print(result)

# #### Write a NumPy program to convert a list into a NumPy array and then convert it back into a list.

import numpy as np
a = [[1, 2], [3, 4]]
x = np.array(a)
a2 = x.tolist()
print(a == a2)

# #### Write a NumPy program to create a 3x3 matrix with values ranging from 2 to 10.

# +
import numpy as np
x = np.arange(2, 11).reshape(3,3)
print(x)
# -

# #### Write a NumPy program to reverse an array (first element becomes last).

import numpy as np
x = np.arange(12, 38)
print("Original array:")
print(x)
print("Reverse array:")
x = x[::-1]
print(x)

# #### Write a NumPy program to create an 8x8 matrix and fill it with a checkerboard pattern.

import numpy as np
print("Checkerboard pattern:")
x = np.zeros((8,8), dtype=int)
x[1::2, ::2] = 1
x[::2, 1::2] = 1
print(x)

# #### Write a NumPy program to append values to the end of an array.

import numpy as np
x = [10, 20, 30]
print("Original array:")
print(x)
x = np.append(x, [[40, 50, 60], [70, 80, 90]])
print("After appending values to the end of the array:")
print(x)

# #### Write a NumPy program to convert the values of Fahrenheit degrees into Centigrade degrees. Fahrenheit values are stored in a NumPy array.

import numpy as np
fvalues = [0, 12, 45.21, 34, 99.91]
F = np.array(fvalues)
print("Values in Fahrenheit degrees:")
print(F)
print("Values in Centigrade degrees:")
print(5*F/9 - 5*32/9)

# #### Write a NumPy program to get the unique elements of an array.
import numpy as np
x = np.array([10, 10, 20, 20, 30, 30])
print("Original array:")
print(x)
print("Unique elements of the above array:")
print(np.unique(x))
x = np.array([[1, 1], [2, 3]])
print("Original array:")
print(x)
print("Unique elements of the above array:")
print(np.unique(x))

# #### Write a NumPy program to find the indices of the maximum and minimum values along the given axis of an array.

import numpy as np
x = np.array([1, 2, 3, 4, 5, 6])
print("Original array: ", x)
print("Index of the maximum value: ", np.argmax(x))
print("Index of the minimum value: ", np.argmin(x))

# #### Write a NumPy program to sort an array along the first and the last axis.

import numpy as np
a = np.array([[4, 6], [2, 1]])
print("Original array: ")
print(a)
print("Sort along the first axis: ")
x = np.sort(a, axis=0)
print(x)
print("Sort along the last axis: ")
y = np.sort(x, axis=1)
print(y)

# #### Write a NumPy program to create a contiguous flattened array.

import numpy as np
x = np.array([[10, 20, 30], [20, 40, 50]])
print("Original array:")
print(x)
y = np.ravel(x)
print("New flattened array:")
print(y)

# #### Write a NumPy program to interchange two axes of an array.

import numpy as np
x = np.array([[1,2,3]])
print(x)
y = np.swapaxes(x, 0, 1)
print(y)

# #### Write a NumPy program to stack two 1-D arrays as columns into a 2-D array.

import numpy as np
a = np.array((10,20,30))
b = np.array((40,50,60))
c = np.column_stack((a, b))
print(c)

# #### Write a NumPy program to concatenate two 2-dimensional arrays. Sample arrays: ([[0, 1, 3], [5, 7, 9]], [[0, 2, 4], [6, 8, 10]])

import numpy as np
a = np.array([[0, 1, 3], [5, 7, 9]])
b = np.array([[0, 2, 4], [6, 8, 10]])
c = np.concatenate((a, b), 1)
print(c)

# #### Write a NumPy program to compute the sum of all the multiples of 3 or 5 below 100.

import numpy as np
x = np.arange(1, 100)
# find multiples of 3 or 5
n = x[(x % 3 == 0) | (x % 5 == 0)]
print(n)
# print the sum of the numbers
print(n.sum())

# #### Write a NumPy program to add an extra column to a NumPy array.
import numpy as np x = np.array([[10,20,30], [40,50,60]]) y = np.array([[100], [200]]) print(np.append(x, y, axis=1)) # #### Write a NumPy program to count the frequency of unique values in numpy array. import numpy as np a = np.array( [10,10,20,10,20,20,20,30, 30,50,40,40] ) print("Original array:") print(a) unique_elements, counts_elements = np.unique(a, return_counts=True) print("Frequency of unique values of the said array:") print(np.asarray((unique_elements, counts_elements))) # #### Write a NumPy program to extract all the elements of the first row from a given (4x4) array. import numpy as np arra_data = np.arange(0,16).reshape((4, 4)) print("Original array:") print(arra_data) print("\nExtracted data: First row") print(arra_data[0]) # #### Write a NumPy program to extract first and second elements of the first and second rows from a given (4x4) array import numpy as np arra_data = np.arange(0,16).reshape((4, 4)) print("Original array:") print(arra_data) print("\nExtracted data: First and second elements of the first and second rows ") print(arra_data[0:2, 0:2]) # #### Write a NumPy program to extract first, third and fifth elements of the third and fifth rows from a given (6x6) array. import numpy as np arra_data = np.arange(0,36).reshape((6, 6)) print("Original array:") print(arra_data) print("\nExtracted data: First, third and fifth elements of the third and fifth rows") print(arra_data[2::2, ::2]) # #### Write a NumPy program to create a random 10x4 array and extract the first five rows of the array and store them into a variable. import numpy as np x = np.random.rand(10, 4) print("Original array: ") print(x) y= x[:5, :] print("First 5 rows of the above array:") print(y)
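The "add a vector to each row" exercise above uses an explicit Python loop; NumPy broadcasting gives the same result in one expression, which is both shorter and faster. A small sketch with the same matrix and vector:

```python
import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
v = np.array([1, 1, 0])

# Broadcasting: v (shape (3,)) is virtually repeated across the 4 rows of m.
result = m + v
print(result)
# [[ 2  3  3]
#  [ 5  6  6]
#  [ 8  9  9]
#  [11 12 12]]
```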
Chapter_1/Numpy_Exercise_with_solutions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.0 64-bit (''3.8.0'': pyenv)' # name: python38064bit380pyenvc7bb6ca2a3f9479fafa53265e2886bf3 # --- # + [markdown] id="NqcCN4cRTb4L" # # AI in Fact and Fiction - Summer 2021 # ## Games and Deep Reinforcement Learning # # In this lab, we will explore reinforcement learning techniques and how they can be applied in games. # # * Use [Google Colab](https://colab.research.google.com/github/AIFictionFact/Summer2021/blob/master/lab4.ipynb) to run the python code, and to complete any missing lines of code. # * You might find it helpful to save this notebook on your Google Drive. # * For some of the tasks you may find it useful to have GPU enabled via 'Runtime $\rightarrow$ Change runtime type' option. # * Please make sure to fill the required information in the **Declaration** cell. # * Once you complete the lab, please download the .ipynb file (File --> Download .ipynb). # * Then, please use the following file naming convention to rename the downloaded python file lab4_YourRCS.ipynb (make sure to replace 'YourRCS' with your RCS ID, for example 'lab4_senevo.ipynb'). # * Submit the .ipynb file in LMS. # # <p>Due Date/Time: <b>Friday, Aug 13 1.00 PM ET</b></p> # # <p>Estimated Time Needed: <b>4 hours</b></p> # # <p>Total Tasks: <b>14</b></p> # <p>Total Points: <b>50</b></p> # # <hr> # + [markdown] id="hJplFC7iu8l_" # # **Declaration** # # *Your Name* : # # *Your RCS ID* : # # *Collaborators (if any)* : # # *Online Resources consulted (if any):* # + [markdown] id="r9rhcUBcMi05" # # Simple Tree Search Algorithms for Game Play # # + [markdown] id="JlgioUEZMwdY" # ### Task 1 (4 points) # # In the following picture, suppose that you are playing 'X'. Assume that the reward for reaching a state only depends on that state, regardless of what will happen in the future. 
# Given that, how many actions can you (as the 'X' player) take at this current board configuration, and what is the average reward of these actions? Please explain your answer. (This is a written answer question.)
#
#
# <img src="https://raw.githubusercontent.com/AIFictionFact/Summer2021/main/images/tic-tac-toe.png" alt="tictactoe" width="200"/>
#
#
#

# + [markdown] id="rGm68P3RJzjV"
# _Please type your answer here. (4 points)_
#
# <font color='red'>
# Correct answer: 5 actions, average reward: 0.
# Explanation: There are 5 places in the grid where you can place your next 'X', but you will not have won or lost after any move you make, so your average reward will be 0.
#
# Note: This illustrates one of the big challenges in reinforcement learning: in our tic-tac-toe example, even though only one move avoids losing on the following turn, your reward would still be 0 for any of the moves you make. This is characteristic of a sparse reward function — it is updated for few board states.
# </font>

# + [markdown] id="R1VjG2a9QCI8"
# ## TicTacToeSolver: Simple Program to Solve Tic-Tac-Toe Board Configurations
#
# In class, we discussed algorithms such as [MiniMax](https://en.wikipedia.org/wiki/Minimax) for game play.
#
# The Minimax algorithm is a decision rule formulated for two-player zero-sum games (Tic-Tac-Toe, Chess, Go, etc.). This algorithm looks a few steps ahead and puts itself in the shoes of its opponent. It keeps playing and exploring possible subsequent states until it reaches a terminal state resulting in a draw, a win, or a loss.
#
# We will first explore the application of basic tree search algorithms such as MiniMax to the Tic-Tac-Toe board game.
#
# Our implementation of the Minimax algorithm for solving Tic-Tac-Toe works by enumerating all possible future states of the board and constructing them in the form of a tree. When the current board state is given to the algorithm (the root of the tree), it splits into `n` branches (where `n` is the number of available moves, i.e. the number of empty cells that can be played). If any of these new states is a terminal state, no further splits are performed for this state, and we get a winner!

# + id="iQO7VNItQIDm"
import copy

'''Simple tree search algorithm to solve any tic tac toe position'''

# Square definitions
X_SQUARE = 'X'
O_SQUARE = 'O'
BLANK = '_'

# Evaluation definitions
X_WINS = 'X wins!'
O_WINS = 'O wins!'
DRAW = 'Draw!'


def is_X_turn(pos):
    '''Returns true if X's turn to move, false otherwise'''
    x_count = 0
    for row in pos:
        x_count += row.count(X_SQUARE)
        x_count -= row.count(O_SQUARE)
    return x_count == 0


def is_full(pos):
    '''Returns true if every space is taken, false otherwise'''
    for row in pos:
        if BLANK in row:
            return False
    return True


def get_branches(pos, X_turn):
    '''Takes a position, and returns a list of every position that can result from a move'''
    symbol = X_SQUARE if X_turn else O_SQUARE
    branches = []
    for row in range(3):
        for square in range(3):
            if pos[row][square] == BLANK:
                branches.append(copy.deepcopy(pos))
                branches[-1][row][square] = symbol
    return branches


def get_static_eval(pos):
    '''Checks for three in a row in the current position, returns evaluation'''
    potential_wins = []
    # Three in a row
    for row in pos:
        potential_wins.append(set(row))
    # Three in a column
    for i in range(3):
        potential_wins.append(set([pos[k][i] for k in range(3)]))
    # Three in a diagonal
    potential_wins.append(set([pos[i][i] for i in range(3)]))
    potential_wins.append(set([pos[i][2 - i] for i in range(3)]))
    # Checking if any three are the same
    for trio in potential_wins:
        if trio == set([X_SQUARE]):
            return X_WINS
        elif trio == set([O_SQUARE]):
            return O_WINS
    return DRAW


def solve(pos):
    '''Returns the dynamic evaluation of any valid position'''
    # Immediately return the static evaluation if it is decisive
    static_eval = get_static_eval(pos)
    if static_eval != DRAW:
        return static_eval

    # Check for full board
    if is_full(pos):
        return DRAW

    # Checking and evaluating every path
    X_turn = is_X_turn(pos)
    branches = get_branches(pos, X_turn)
    branch_evals = [solve(branch) for branch in branches]

    # Returning the result assuming best play
    if X_turn:
        # X options from best to worst
        if X_WINS in branch_evals:
            return X_WINS
        elif DRAW in branch_evals:
            return DRAW
        else:
            return O_WINS
    else:
        # O options from best to worst
        if O_WINS in branch_evals:
            return O_WINS
        elif DRAW in branch_evals:
            return DRAW
        else:
            return X_WINS


# + [markdown] id="-xcP0_06RBS7"
# We define board positions in the following way. For example, this is one of the moves X can make. We can feed this into our simple program and see what the outcome might be.

# + id="bPpyY6HxS_MY" colab={"base_uri": "https://localhost:8080/"} outputId="e82c0301-7af2-49cf-85ab-a4291af16f13"
x_move_1 = [['X', '_', '_'],
            ['_', '_', '_'],
            ['_', '_', '_']]

print(solve(x_move_1))

# + [markdown] id="_zKbQehbUGcB"
# ### Task 2 (2 points)
#
# Define the TicTacToe board position given in Task 1, and obtain the outcome from the `TicTacToeSolver`.

# + id="m1jbqsuHUfOM" colab={"base_uri": "https://localhost:8080/"} outputId="47306114-9f37-4506-98f1-d1e747e35ad2"
# Type your code here
x_move_task1 = [['O', '_', 'X'],
                ['_', 'X', '_'],
                ['O', '_', '_']]

print(solve(x_move_task1))

# + [markdown] id="L0SOgIJzojm2"
# ### Task 3 (4 points)
#
# Come up with a board configuration with an initial `X` move and an `O` move that guarantees a definite win for `X`. It is okay if your solution is a brute-force algorithm.
#
# For example, the following configuration is a definite win for `X`.
#
# x_wins = [['X', 'O', '\_'],
# ['\_', '\_', '\_'],
# ['\_', '\_', '\_']]
#
# Your code should output such a configuration (excluding the one above) with 2 moves (`X` move followed by `O` move) that results in a definite win for `X`.
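One possible brute-force approach is sketched below. It is written as a self-contained script (it re-implements a compact version of the solver so it can run on its own); the helper names `winner`, `moves`, and `best_outcome` are illustrative and are not taken from the lab's code.

```python
import copy

X, O, B = 'X', 'O', '_'

def winner(pos):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [pos[i] for i in range(3)]
    lines += [[pos[r][c] for r in range(3)] for c in range(3)]
    lines += [[pos[i][i] for i in range(3)], [pos[i][2 - i] for i in range(3)]]
    for ln in lines:
        if ln[0] != B and ln.count(ln[0]) == 3:
            return ln[0]
    return None

def moves(pos, sym):
    """Yield every position reachable by placing sym on a blank square."""
    for r in range(3):
        for c in range(3):
            if pos[r][c] == B:
                nxt = copy.deepcopy(pos)
                nxt[r][c] = sym
                yield nxt

def best_outcome(pos, x_turn):
    """Minimax evaluation: 'X', 'O', or 'D' (draw) under best play."""
    w = winner(pos)
    if w:
        return w
    branch_evals = [best_outcome(b, not x_turn)
                    for b in moves(pos, X if x_turn else O)]
    if not branch_evals:          # board full, no winner
        return 'D'
    prefs = [X, 'D', O] if x_turn else [O, 'D', X]
    for outcome in prefs:         # each side picks its best available outcome
        if outcome in branch_evals:
            return outcome

# Try every X opening followed by every O reply; keep forced X wins.
empty = [[B] * 3 for _ in range(3)]
x_forced_wins = [after_o
                 for after_x in moves(empty, X)
                 for after_o in moves(after_x, O)
                 if best_outcome(after_o, True) == X]

# Print one qualifying configuration other than the example given above.
for row in x_forced_wins[1]:
    print(row)
```

Any search that checks each two-move position with the notebook's own `solve` would work equally well; the point is simply to filter the 72 possible (X move, O move) openings down to those whose minimax value is a forced X win.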
# + id="HFMVbU0arzST"
# Type your code here
# Any algorithm that outputs one of the three board positions below.
x_wins_2 = [['X', '_', 'O'],
            ['_', '_', '_'],
            ['_', '_', '_']]

x_wins_3 = [['X', '_', '_'],
            ['_', '_', 'O'],
            ['_', '_', '_']]

x_wins_4 = [['X', '_', '_'],
            ['_', '_', '_'],
            ['_', '_', 'O']]

# + [markdown] id="-JVYOZnor7AD"
# ### Task 4 (2 points)
#
# Inspect the `TicTacToeSolver` code above carefully. What are some optimizations that can be made to the code? Please provide at least 2 optimizations. _(This is a written answer question)_

# + [markdown] id="fv8auFxDsUZl"
# _(Type your answer here)_
#
#
# <ul>
# <li><font color='red'>Check symmetry.</font></li>
# <li><font color='red'>Store every position globally.</font></li>
# <li><font color='red'>Evaluate every branch one at a time.</font></li>
# </ul>
#

# + [markdown] id="xK6d9abEDYR4"
# # Reinforcement Learning
#
# The field of deep learning is inspired by natural intelligence, and reinforcement learning is no exception. Consider a baby learning to walk, a bird learning to fly, or an RL agent trying to land a spaceship. They all have these three things in common:
#
# 1. Trial and Error: Each agent (baby, bird, or RL agent) makes many unsuccessful attempts--learning from each failure.
# 2. Goal: The agent has a specific goal (to stand, fly, or land the spaceship).
# 3. Interaction with the environment: There is no manual, no teacher, no training sample from which it can learn. The only feedback is the feedback from the immediate environment, in terms of some reward or punishment.
#
# In reinforcement learning, an agent takes a sequence of actions in an uncertain and often complex environment with the goal of maximizing a reward function. Essentially, it is an approach for making appropriate decisions in a game-like environment that maximizes rewards and minimizes penalties. Feedback from its own actions and experience allows the agent to learn the most appropriate action by trial and error. Generally, reinforcement learning involves the following steps:
#
# 1. Observing the environment
# 2. Formulating a decision based on a certain strategy
# 3. Acting
# 4. Receiving a reward or penalty
# 5. Learning from the experiences to improve the strategy
# 6. Iterating the process until an optimal strategy is achieved
#
# There’s quite a lot that you can do with reinforcement learning – whether it’s related to video games or not. The core skills can be used across a variety of purposes, from stock trading and finance to cybersecurity and art.
#
# We will first apply reinforcement learning to teach an AI to play Tic-Tac-Toe with a human. In this game play, as you saw earlier, an agent takes actions within an environment. Based on these actions, the agent achieves different states with different rewards. For example, in Tic-Tac-Toe, your reward might be 1 if you got three-in-a-row, −1 if your opponent got three-in-a-row, and 0 otherwise. Your state space would consist of all possible board configurations.

# + [markdown] id="mSzQR4PTv24j"
# ## Exploitation vs. Exploration
#
# One of the fundamental tradeoffs in reinforcement learning is the exploitation vs. exploration tradeoff.
#
# **Exploitation** means choosing the action which maximizes our reward (which may lead to being stuck in a local optimum).
#
# **Exploration** means choosing an action regardless of the reward it provides (this helps us discover other local optima that may lead us closer to the global optimum).
#
# Going all out in either one of them is harmful; all exploitation may lead to a suboptimal agent, and all exploration would give us a not-so-intelligent agent which keeps taking random actions.
#
# A widely used strategy to tackle this problem is the **epsilon-decreasing strategy**. It works as follows:
# 1. Initialize a variable `epsilon` with a value between 0 and 1.
# 2. Now with probability = `epsilon`, we explore, and with probability = `1-epsilon`, we exploit.
# 3. We decrease the value of `epsilon` over time until it becomes zero.
#
# Using this strategy, the agent can explore better actions during the earlier stages of the training, and then it exploits the best actions in the later stages of the game.
#
# Say, if a state leads to the AI winning, it shall have a positive value (`value = 1`). If the AI loses in some state, it shall have a negative value (`value = -1`). All the rest of the states would have a neutral value (`value = 0`). These are the initialized state values.
#
# Once a game has started, our agent computes all possible actions it can take in the current state and the new states which would result from each action. The values of these states are collected from a `state_value` vector, which contains values for all possible states in the game. The agent can then choose the action that leads to the state with the highest value (exploitation) or choose a random action (exploration), depending on the epsilon value. Throughout our training, we play several games. After each move, the value of the state is updated using the following rule:
#
# $$ V(s) \leftarrow V(s) + \alpha \times (V(s^f) - V(s))$$
#
# where,
#
# * $V(s)$ = value of the current state of the game board
# * $V(s^f)$ = value of the new state of the board after the agent takes some action
# * $\alpha$ = learning rate (or the step-size parameter)
#
# Using this update rule, the states that lead to a loss get a negative state value (whose magnitude depends on the learning rate). The agent learns that being in such a state may lead to a loss down the line, so it would try to avoid landing in this state unless necessary. On the other hand, the states that lead to a win get a positive state value. The agent learns that being in such a state may lead to a win down the line, so it would be encouraged to be in this state.
#
# An implementation of this algorithm for Tic-Tac-Toe game play is available in [this repository](https://github.com/AIFictionFact/tic-tac-toe-bot).
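The update rule above can be sketched in a few lines. This is only an illustration: the state labels and the `ALPHA` value here are made up and are not taken from the repository.

```python
from collections import defaultdict

ALPHA = 0.2  # learning rate (illustrative value)

# Values default to 0; terminal states are seeded with +1 (win) or -1 (loss).
state_value = defaultdict(float)
state_value["X wins board"] = 1.0   # hypothetical terminal-state label

def update(values, s, s_next):
    # V(s) <- V(s) + alpha * (V(s_next) - V(s))
    values[s] += ALPHA * (values[s_next] - values[s])

# Back up value from the winning state to the state that preceded it:
update(state_value, "one move before win", "X wins board")
print(state_value["one move before win"])  # 0 + 0.2 * (1 - 0) = 0.2
```

Repeated over many games, these backups propagate value from terminal states toward the earlier states that lead to them.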
# Please see [HumanVsAI_RLTest.py](https://github.com/AIFictionFact/tic-tac-toe-bot/blob/master/HumanVsAI_RLTest.py).
#
# Let's see how the AI plays TicTacToe.
#
# First clone the repo.
#

# + id="jEoeqg0LxzNH"
# !rm -rf tic-tac-toe-bot
# !git clone https://github.com/AIFictionFact/tic-tac-toe-bot.git

# + [markdown] id="8clNyoaQQ17L"
# Then, let's run the `HumanVsAI_RLTest.py` file.
#
# Please note that the "board position number" corresponds to the following positions.
# <code>
# ---------------
# | 1 || 2 || 3 |
# ---------------
# | 4 || 5 || 6 |
# ---------------
# | 7 || 8 || 9 |
# ---------------
# </code>
#
#
# Use the `HumanVsAI_RLTest.py` to play a game with an RL agent.
#

# + id="G4Ta2annx2Qr"
# !python tic-tac-toe-bot/HumanVsAI_RLTest.py

# + [markdown] id="mh1QZfOcffxI"
# ### Task 5 (4 points)
#
# Use the `HumanVsAI_RLTest.py` to play several games with an RL agent. Can you win the game with the RL agent? If so, provide the winning board configuration and the output in your answer. (Please note that a "win" does not include "draw".)
# If you cannot win against the RL agent, please explain why you may not be able to. (2 points)
#
# If we had the MiniMax algorithm implemented as an agent, would you be able to win against the MiniMax agent? Please provide reasons. (2 points)
#
#
#
# + [markdown] id="Eb1GgZCWhWKb"
# _(Please type your answer here)_
#
# <font color='red'>In both cases, the human is not able to win.
# The number of board configurations is very small (~250k), which is a trivial computation for modern computers, taking only a fraction of a second.
# Therefore, the AI always wins, or the game results in a draw.
# </font>

# + [markdown] id="GD-Zgby4mvAl"
# ## Training Reinforcement Learning Algorithms
#
# The above RL agent was trained by letting two AI agents play with each other for 10,000 epochs. When the agents are training, they use the exploit-explore method discussed earlier. The following code shows how the gameplay happens when both the players are AI, where each of them helps train the other. The number of epochs has been shortened to 100.

# + id="OUovfRFciC_f"
# !python tic-tac-toe-bot/AIVsAI_RL_Train.py

# + [markdown] id="mMtWGYvzm3my"
# ### Task 6 (3 points)
#
# How does the agent decide when to explore and exploit? You may check the source code of [AIVsAI_RL_Train.py](https://github.com/AIFictionFact/tic-tac-toe-bot/blob/master/AIVsAI_RL_Train.py) for your answer. _(This is a written answer question)_

# + [markdown] id="4dxiSSPEoTqI"
# _Please type your answer here_
#
# <font color='red'>The agent decides whether to explore or exploit based on a random number compared against epsilon.
# The agent explores if the random number is less than or equal to epsilon (and then updates epsilon);
# otherwise, it exploits the best move available based on the state-action values it has already seen.
# </font>

# + [markdown] id="iJXKvHPHz-TA"
# # Introduction to the OpenAI Gym
#
# [OpenAI Gym](https://gym.openai.com/) aims to provide an easy-to-setup general-intelligence benchmark with a wide variety of different environments. The goal is to standardize how environments are defined in AI research publications so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface.
#
# Because OpenAI Gym requires a graphics display, the only (easy) way to display Gym in Google Colab is an embedded video. The presentation of OpenAI Gym game animations in Google Colab is discussed later, and we have presented two approaches for generating the videos.
#
#
# + [markdown] id="SDVfHkxLA6G1"
# Let's install the required libraries. First is the `cloudpickle` library (we need 1.6 as that is the latest version of the library that plays nicely with the `stable_baselines3` library that will be introduced later--_if you are curious, please see [this issue](https://github.com/hill-a/stable-baselines/issues/1024) for some context_).
#
# The second set of libraries is for installing `box2d`, a game engine that some of the gym environments rely on.
#
# The third library is `gym` from OpenAI.

# + id="-NKc4iUc645J"
# !pip install cloudpickle==1.6
# !pip install Box2D
# !pip install box2d-py
# !pip install gym[all]

# + [markdown] id="epTWUtW3BEst"
# Since Colab doesn’t have a display (other than the notebook's HTML output), we encounter a `NoSuchDisplayException` when calling the `gym.Env.render()` method while training a reinforcement learning model with OpenAI Gym. Therefore, we need to install additional software to make it work.
#
# First, we require the virtual X11 display [Xvfb](https://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml).

# + id="1HKFYUqqBllO"
# !apt update
# !apt-get install ffmpeg freeglut3-dev xvfb  # For visualization

# + [markdown] id="K5gj--y6BqvH"
# Additionally, to launch the X virtual frame buffer (Xvfb) from the Notebook, install [PyVirtualDisplay](https://github.com/ponty/PyVirtualDisplay).

# + id="CPedgfThByKB"
# !pip install pyvirtualdisplay

# + [markdown] id="KTjzpVh4B1Hn"
# Initialize the display.

# + id="OME3FsdaB7r1"
import pyvirtualdisplay

# + [markdown] id="mYccyyMfB8--"
# Let's use the official `gym.wrappers.Monitor` and store the display animation as a movie.

# + id="RigAFuhvCOGP"
import gym
from gym.wrappers import Monitor
import glob
import io
import base64
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay

display = Display(visible=0, size=(1400, 900))
display.start()


def wrap_env(env):
    """
    Utility function to enable video recording of a gym environment and displaying it.
    To enable video, just do "env = wrap_env(env)"
    """
    env = Monitor(env, './video', force=True)
    return env


def show_video():
    mp4list = glob.glob('video/*.mp4')
    if len(mp4list) > 0:
        mp4 = mp4list[0]
        video = io.open(mp4, 'r+b').read()
        encoded = base64.b64encode(video)
        ipythondisplay.display(HTML(data='''<video alt="test" autoplay
                    loop controls style="height: 400px;">
                    <source src="data:video/mp4;base64,{0}" type="video/mp4" />
                 </video>'''.format(encoded.decode('ascii'))))
    else:
        print("Could not find video")


# + [markdown] id="gb94jHGy6W1n"
# ### Looking at Gym Environments
#
# The centerpiece of Gym is the environment, which defines the "game" in which your reinforcement algorithm will compete. An environment does not need to be a game; however, it describes the following game-like features:
# * **Action space**: The actions we can take on the environment, at each step/episode, to alter the environment.
# * **Observation space**: The current state of the portion of the environment that we can observe. Usually, we can observe the entire environment.
#
# Before we begin to look at Gym, it is essential to understand some of the terminology used by this library.
#
# * **Agent** - The machine learning program or model that controls the actions.
# * **Step** - One round of issuing actions that affect the observation space.
# * **Episode** - A collection of steps that terminates when the agent fails to meet the environment's objective, or the episode reaches the maximum number of allowed steps.
# * **Render** - Gym can render one frame for display after each episode.
# * **Reward** - A positive reinforcement that can occur at the end of each episode, after the agent acts.
# * **Nondeterministic** - For some environments, randomness is a factor in deciding what effects actions have on reward and changes to the observation space.
#
# It is important to note that many of the gym environments specify that they are not nondeterministic even though they make use of random numbers to process actions. It is generally agreed (based on the gym GitHub issue tracker) that the nondeterministic property means the environment will still behave randomly even when given a consistent seed value. The seed method of an environment can be used by the program to seed the random number generator for the environment.
#
# The Gym library allows us to query some of these attributes from environments. The following function can be used to query gym environments.
#

# + id="_CNllTTe6h3P"
def query_environment(name):
    env = gym.make(name)
    spec = gym.spec(name)
    print(f"Action Space: {env.action_space}")
    print(f"Observation Space: {env.observation_space}")
    print(f"Max Episode Steps: {spec.max_episode_steps}")
    print(f"Nondeterministic: {spec.nondeterministic}")
    print(f"Reward Range: {env.reward_range}")
    print(f"Reward Threshold: {spec.reward_threshold}")


# + [markdown] id="F4QGAnPS6oN8"
# ### Classic Control Environments
#
# We will begin by looking at the "CartPole-v0" environment, a classic control problem.
#
# "A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright."
#
# Cartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)
#

# + id="ls5WlnEO8bk1"
query_environment("CartPole-v0")

# + [markdown] id="NqXImKGf8w18"
# The CartPole-v0 environment challenges the agent to move a cart while keeping a pole balanced. The environment has an observation space of 4 continuous numbers:
#
# * Cart Position
# * Cart Velocity
# * Pole Angle
# * Pole Velocity At Tip
#
# To achieve this goal, the agent can take the following actions:
#
# * Push cart to the left
# * Push cart to the right
#

# + [markdown] id="bP3qj-jRImmp"
# Let's call the CartPole environment and visualize it.

# + id="Yg6HW3fAIrr8"
env = wrap_env(gym.make("CartPole-v0"))

observation = env.reset()

while True:
    env.render()

    # your agent goes here
    action = env.action_space.sample()

    observation, reward, done, info = env.step(action)

    if done:
        break

env.close()
show_video()

# + [markdown] id="PpPuz27v-O6V"
# ### Task 7 (2 points)
#
# Let's consider another gym environment called "MountainCar-v0", which challenges an underpowered car to escape the valley between two mountains. Write the code to query the environment (1 point) and describe the Mountain Car environment (1 point).
#

# + id="xYBMnRFv_HZa"
# Type your code here to display the environment (1 point)
env = Monitor(gym.make('MountainCar-v0'), './', force=True)
observation = env.reset()
for _ in range(1000):
    observation, reward, done, info = env.step(env.action_space.sample())
    if done:
        env.reset()

# Get the last video in the environment
file = env.videos[-1]
video = io.open(file[0], 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data="""
<video alt="cartpole-output" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>
""".format(encoded.decode('ascii'))))

# Type your code here to query the environment (1 point)
query_environment("MountainCar-v0")

# + [markdown] id="8VxxokOmEh8d"
# ### Task 8 (6 points)
#
# What can you say about the actions of the agent (2 points), the observations (2 points), and the reward (2 points) in the MountainCar environment? _(This is a written answer question.)_
#
# Hint: It may be helpful to take a look at the online documentation and the source code of the [Mountain-Car environment](https://gym.openai.com/envs/MountainCar-v0).

# + [markdown] id="b1WpSIDcF5pI"
# _Type your answer here._
#
# <font color='red'>
# The Mountain Car is an environment where a car must climb a mountain. Because gravity is stronger than the car's engine, even with full throttle, it cannot merely accelerate up the steep slope. The vehicle is situated in a valley and must learn to utilize potential energy by driving up the opposite hill before the car can make it to the goal at the top of the rightmost hill.
#
# There are three distinct actions that can be taken: <b>accelerate forward, decelerate, or accelerate backwards</b>. The observation space contains <b>two continuous (floating point) values</b>, as evidenced by the Box object. The observation space is simply the <b>position</b> and <b>velocity</b> of the car. The car has 200 steps to escape for each episode. You would have to look at the code to know, but the mountain car <b>receives no incremental reward</b>. The <b>only reward for the car is given when it escapes the valley</b>.
# </font>

# + [markdown] id="5OdUzcP_F-sT"
# ### Atari Games
#
# Atari games can use an observation space that is either the size of the Atari screen (210x160) or the RAM of the Atari (128 bytes) to determine the state of the game. Yes, that's bytes, not kilobytes!
#
# We will first need to load the Atari ROMs into our Colab instance.

# + id="d6mi6R9iHNq8"
import urllib.request
urllib.request.urlretrieve('http://www.atarimania.com/roms/Roms.rar', 'Roms.rar')
# !pip install unrar
# !unrar x Roms.rar
# !mkdir rars
# !mv HC\ ROMS.zip rars
# !mv ROMS.zip rars
# !python -m atari_py.import_roms rars

# + [markdown] id="ow-wzHDiHko_"
# Let's query a sample game, for example Breakout.
# + id="1J-TpAPBGXNt"
query_environment("Breakout-v0")

# + [markdown] id="T3FGYl7GHzSd"
# Let's visualize the game.

# + id="pBxHJQR8IJbt"
env = wrap_env(gym.make("Breakout-v0"))

observation = env.reset()

while True:
    env.render()

    # your agent goes here
    action = env.action_space.sample()

    observation, reward, done, info = env.step(action)

    if done:
        break

env.close()
show_video()

# + [markdown] id="eqHTCGJyqoSl"
# ### Task 9 (5 points)
#
# Select a game of your choice from the list of Atari ROMs downloaded (except "Breakout-v0"), query its environment, and render it in a video. _(Please provide the code below)_
#
# Interpret the action space, the observation space, and the reward based on the parameters available in the environment as well as the video output. _(This is a written answer question)_

# + id="Lc8hSH0sq2_4"
# Type your code here to query the environment (1 point)
my_env = "Skiing-v0"  # students can use any atari environment
query_environment(my_env)

# + id="CFfOVfYYq3cL"
# Type your code here to display the environment (1 point)
env = wrap_env(gym.make(my_env))

observation = env.reset()

while True:
    env.render()

    # your agent goes here
    action = env.action_space.sample()

    observation, reward, done, info = env.step(action)

    if done:
        break

env.close()
show_video()

# + [markdown] id="i4sEZTNNtkEL"
# _Type your written answer here to interpret the environment (3 points)_

# + [markdown] id="AvYBNxCFhyJD"
# # Introduction to Q-Learning
#
# Q-Learning is a foundational technique upon which deep reinforcement learning is based. Before we explore deep reinforcement learning, it is essential to understand Q-Learning.
#
# Several components make up any Q-Learning system, and you have encountered many of these components before.
#
# * **Agent** - The agent is an entity that exists in an environment and takes actions to affect the state of the environment, in order to receive rewards.
# * **Environment** - The environment is the universe that the agent exists in. The environment is always in a specific state that is changed by the actions of the agent.
# * **Actions** - Steps that can be performed by the agent to alter the environment.
# * **Step** - A step occurs each time the agent performs an action and potentially changes the environment state.
# * **Episode** - A chain of steps that ultimately culminates in the environment entering a terminal state.
# * **Epoch** - A training iteration of the agent that contains some number of episodes.
# * **Terminal State** - A state in which further actions do not make sense. In many environments, a terminal state occurs when the agent has won, lost, or the environment exceeds the maximum number of steps.
#
# Q-Learning works by building a table that suggests an action for every possible state. Q-Learning primarily deals with discrete actions, such as pressing a joystick up or down. Out of the box, Q-Learning does not deal with continuous inputs, such as a car's accelerator that can be in a range of positions from released to fully engaged. However, researchers have come up with clever tricks to allow Q-Learning to accommodate continuous actions. Q-Learning handles continuous states by binning these numeric values into ranges. Furthermore, deep neural networks can help to solve the problems of continuous environments and action spaces. For tabular Q-Learning, the agent must bin continuous state values into a fixed, finite number of columns.
#
# Learning occurs when the algorithm runs the agent and environment through a series of episodes and updates the Q-values based on the rewards received from actions taken.
#
# The Q-values can dictate action by selecting the action column with the highest Q-value for the current environment state. The choice between choosing a random action and a Q-value driven action is governed by the epsilon ($\epsilon$) parameter, which is the probability of random action.
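The epsilon-controlled choice described above can be sketched as follows; the toy Q-table values here are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_action(q_table, state, epsilon):
    """With probability epsilon take a random action; otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(q_table.shape[-1]))  # explore
    return int(np.argmax(q_table[state]))            # exploit

# Toy Q-table: 3 discrete states x 2 actions.
q = np.array([[0.1, 0.9],
              [0.5, 0.2],
              [0.0, 0.0]])

print(choose_action(q, state=0, epsilon=0.0))  # epsilon=0: always greedy -> 1
print(choose_action(q, state=1, epsilon=0.0))  # -> 0
```

With `epsilon=1.0` every call would return a uniformly random action; decreasing epsilon over training shifts the agent from exploration toward exploitation.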
# # Each time through the training loop, the training algorithm updates the Q-values according to the following equation. # # $Q^{new}(s_{t},a_{t}) \leftarrow \underbrace{Q(s_{t},a_{t})}_{\text{old value}} + \underbrace{\alpha}_{\text{learning rate}} \cdot \overbrace{\bigg( \underbrace{\underbrace{r_{t}}_{\text{reward}} + \underbrace{\gamma}_{\text{discount factor}} \cdot \underbrace{\max_{a}Q(s_{t+1}, a)}_{\text{estimate of optimal future value}}}_{\text{new value (temporal difference target)}} - \underbrace{Q(s_{t},a_{t})}_{\text{old value}} \bigg) }^{\text{temporal difference}}$ # # There are several parameters in this equation: # * alpha ($\alpha$) - The learning rate, how much should the current step cause the Q-values to be updated. # * lambda ($\lambda$) - The discount factor is the percentage of future reward that the algorithm should consider in this update. # # This equation modifies several values: # # * $Q(s_t,a_t)$ - The Q-table. For each combination of states, what reward would the agent likely receive for performing each action? # * $s_t$ - The current state. # * $r_t$ - The last reward received. # * $a_t$ - The action that the agent will perform. # # The equation works by calculating a delta (temporal difference) that the equation should apply to the old state. This learning rate ($\alpha$) scales this delta. A learning rate of 1.0 would fully implement the temporal difference to the Q-values each iteration and would likely be very chaotic. # # There are two parts to the temporal difference: the new and old values. The new value is subtracted from the old value to provide a delta; the full amount that we would change the Q-value by if the learning rate did not scale this value. The new value is a summation of the reward received from the last action and the maximum of the Q-values from the resulting state when the client takes this action. 
It is essential to add the maximum of action Q-values for the new state because it estimates the optimal future values from proceeding with this action. # # For now, we will apply regular Q-Learning to the Mountain Car problem from OpenAI Gym. # # + [markdown] id="H_ME3GyjrIsH" # **Simple Algorithm for the Mountain Car** # # The following code shows an agent that applies full throttle to climb the hill. The cart is not strong enough. It will need to use potential energy from the mountain behind it. # + id="5eBGlVDfrT6y" mountain_car_simple_env = wrap_env(gym.make("MountainCar-v0")) mountain_car_simple_env.reset() done = False i = 0 while not done: i += 1 state, reward, done, _ = mountain_car_simple_env.step(2) mountain_car_simple_env.render() print(f"Step {i}: State={state}, Reward={reward}") mountain_car_simple_env.close() # + [markdown] id="3lQRqQbszMQt" # Let's also see the mountain car in action. # + id="ri1FNR74r5R3" show_video() # + [markdown] id="6U1W8YWJsHt6" # ### Task 10 (3 points) # # Similar to the above program, write code to program the mountain car such that it always applies force to one direction or another. Whatever direction the vehicle is currently rolling, the agent uses power in that direction. Therefore, if the car begins to climb a hill, it would be overpowered by gravity, and turns backward. However, once it starts to roll backward force is immediately applied in this new direction to gather enough potential energy to climb the hill. (2 points) # # Visualize the preprogrammed car with the above solution. 
(1 point) # + id="TlCCY6yds4U5" # Type your code to program the mountain car (2 points) import gym env = wrap_env(gym.make("MountainCar-v0")) state = env.reset() done = False i = 0 while not done: i += 1 if state[1]>0: action = 2 else: action = 0 state, reward, done, _ = env.step(action) env.render() print(f"Step {i}: State={state}, Reward={reward}") env.close() # + id="YjtBs3zss9us" # Type your code to visualize the mountain car environment (1 point) show_video() # + [markdown] id="DZACe0PpuPun" # **Q-Learning Car** # # We will now use Q-Learning to produce a car that learns to drive itself. # + id="8KtHD2couXzF" def calc_discrete_state(env, state): ''' This function converts the floating point state values into discrete values. This is often called binning. We divide the range that the state values might occupy and assign each region to a bucket. ''' discrete_state = (state - env.observation_space.low)/buckets return tuple(discrete_state.astype(np.int)) def run_game(env, q_table, render, should_update): ''' Run one game. The q_table to use is provided. We also provide a flag to indicate if the game should be rendered/animated. Finally, we also provide a flag to indicate if the q_table should be updated. ''' done = False discrete_state = calc_discrete_state(env, env.reset()) success = False while not done: # Exploit or explore if np.random.random() > epsilon: # Exploit - use q-table to take current best action # (and probably refine) action = np.argmax(q_table[discrete_state]) else: # Explore - t action = np.random.randint(0, env.action_space.n) # Run simulation step new_state, reward, done, _ = env.step(action) # Convert continuous state to discrete new_state_disc = calc_discrete_state(env, new_state) # Have we reached the goal position (have we won?)? 
        if new_state[0] >= env.unwrapped.goal_position:
            success = True

        # Update q-table
        if should_update:
            max_future_q = np.max(q_table[new_state_disc])
            current_q = q_table[discrete_state + (action,)]
            new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * \
                (reward + DISCOUNT * max_future_q)
            q_table[discrete_state + (action,)] = new_q

        discrete_state = new_state_disc

        if render:
            env.render()

    return success

# + [markdown] id="9_WxkIRGvOwd"
# ### Hyperparameters in Q-Learning
#
# Several hyperparameters are very important for Q-Learning. These parameters will likely need adjustment as you apply Q-Learning to other problems. Because of this, it is crucial to understand the role of each parameter.
#
# * **LEARNING_RATE** The rate at which previous Q-values are updated based on new episodes run during training.
# * **DISCOUNT** The amount of significance to give estimates of future rewards when added to the reward for the current action taken. A value of 0.95 would indicate a discount of 5% to the future reward estimates.
# * **EPISODES** The number of episodes to train over. Increase this for more complex problems; however, training time also increases.
# * **SHOW_EVERY** How many episodes to allow to elapse before showing an update.
# * **DISCRETE_GRID_SIZE** How many buckets to use when converting each of the continuous state variables. For example, [10, 10] indicates that the algorithm should use ten buckets for the first and second state variables.
# * **START_EPSILON_DECAYING** Epsilon is the probability that the agent will select a random action over what the Q-Table suggests. This value sets the episode at which epsilon begins to decay.
# * **END_EPSILON_DECAYING** The episode by which epsilon reaches zero, after which no random actions are permitted. For example, EPISODES//10 means only the first 1/10th of the episodes might have random actions.
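The update rule inside `run_game` blends the old Q-value with a new target built from the immediate reward and the discounted best future value. Here is a standalone numeric illustration of that single line; the Q-values and reward are made-up sample numbers for illustration only (Mountain Car's per-step reward of -1 is real, the rest are invented).

```python
# Standalone illustration of the Q-Learning update used in run_game.
LEARNING_RATE = 0.1
DISCOUNT = 0.95

current_q = -1.0      # made-up current estimate for (state, action)
reward = -1.0         # Mountain Car gives -1 per step until the goal
max_future_q = -0.5   # made-up best Q-value reachable from the next state

# Blend the old estimate with the new target (reward + discounted future):
# keep 90% of the old value, move 10% toward the target.
new_q = (1 - LEARNING_RATE) * current_q + \
    LEARNING_RATE * (reward + DISCOUNT * max_future_q)

print(round(new_q, 4))  # -1.0475
```

Because the target (-1.475) is worse than the current estimate (-1.0), the Q-value is nudged downward; repeated over many episodes, these small nudges converge toward the true action values.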
# + id="-uLgB5yjvYBr"
# Q-Learning hyperparameters
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 50000
SHOW_EVERY = 1000
DISCRETE_GRID_SIZE = [10, 10]
START_EPSILON_DECAYING = 0.5
END_EPSILON_DECAYING = EPISODES//10

# + [markdown] id="xWKhFTLzvfMm"
# We can now make the environment.
#
# Warning: this code may take some time to run (around 15-20 mins).

# + id="Q8QFmZOYvf7q"
import numpy as np

mountain_car_qlearning_env = wrap_env(gym.make("MountainCar-v0"))

epsilon = 1
epsilon_change = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)
buckets = (mountain_car_qlearning_env.observation_space.high -
           mountain_car_qlearning_env.observation_space.low) / DISCRETE_GRID_SIZE
q_table = np.random.uniform(low=-3, high=0,
                            size=(DISCRETE_GRID_SIZE + [mountain_car_qlearning_env.action_space.n]))

success = False
episode = 0
success_count = 0

# Loop through the required number of episodes
while episode < EPISODES:
    episode += 1
    done = False

    # Run the game. If we are local, display render animation at SHOW_EVERY
    # intervals.
    if episode % SHOW_EVERY == 0:
        print("Current episode:", episode, "success:", success_count,
              float(success_count)/SHOW_EVERY)
        success = run_game(mountain_car_qlearning_env, q_table, True, False)
        success_count = 0
    else:
        success = run_game(mountain_car_qlearning_env, q_table, False, True)

    # Count successes
    if success:
        success_count += 1

    # Move epsilon towards its ending value, if it still needs to move
    if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
        epsilon = max(0, epsilon - epsilon_change)

print(success)

# + [markdown] id="fSIkLVST_XZH"
# **Running and Observing the Agent**
#
# Now that the algorithm has trained the agent, we can observe it in action using the following code.

# + id="qme-fXCK_6gB"
run_game(mountain_car_qlearning_env, q_table, True, False)
show_video()

# + [markdown] id="CjYWsKi-v-58"
# ### Task 11 (2 points)
#
# Observe the success rate that gets output for the Q-learning algorithm above.
Notice that the number of successful episodes generally increases as training progresses. However, there are some earlier episodes in which a success rate of 1.0 was achieved. Could we have stopped the learning process at that point? Explain your answer. _(This is a written answer question)_

# + [markdown] id="1MBbL5LxwsLo"
# _(Please type your answer here)_
#
# <font color='red'>
# The mean reward of the trained model increased from 9.06 +/- 0.75 to 200.00 +/- 0.00.
# (the numbers could be different, but the latter number must be higher)
# </font>

# + [markdown] id="HXnTAuOav6xP"
# # Introduction to a Deep Reinforcement Library
#
# In this part of the lab, we will explore one of the cutting-edge deep-reinforcement libraries called [Stable Baselines3](https://github.com/DLR-RM/stable-baselines3). This library contains a set of reliable implementations of reinforcement learning algorithms in PyTorch.
#
# [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo) is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines3.
#
# It also provides basic scripts for training, evaluating agents, tuning hyperparameters and recording videos.
#
# Documentation is available online: [https://stable-baselines3.readthedocs.io/](https://stable-baselines3.readthedocs.io/).
#
# **Important:**
# You might want to change the Hardware accelerator in the Runtime menu to GPU.

# + [markdown] id="87WRY7w4wuZa"
# Let's first install the Stable Baselines3 package. The `[extra]` part includes optional dependencies like Tensorboard, OpenCV or atari-py to train on atari games.

# + id="5QtpClaKwxcj"
# !pip install stable-baselines3[extra]

# + [markdown] id="-AVS9kb70uFt"
# The next thing you need to import is the policy class that will be used to create the networks (for the policy/value functions).
# This step is optional as you can directly use strings in the constructor:
#
# ```PPO('MlpPolicy', env)``` instead of ```PPO(MlpPolicy, env)```
#
# Note that some algorithms like `SAC` have their own `MlpPolicy`, which is why using a string for the policy is the recommended option.
#
# We chose the [MlpPolicy](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html?highlight=MlpPolicy#stable_baselines3.dqn.MlpPolicy), which implements an [actor-critic algorithm](https://papers.nips.cc/paper/1999/file/6449f44a102fde848669bdd9eb6b76fa-Paper.pdf).
#
# **Note:** You are not expected to know all the details of the policies and algorithms available for the purpose of this lab. But please make sure you know how to apply them when developing an agent for a particular environment.

# + id="BNFQhWJm1oYg"
from stable_baselines3.ppo import MlpPolicy, PPO

# + [markdown] id="mbeJPaV03naK"
# ## Deep-RL with Control Problems
#
# For this example, we will use the CartPole environment, which you saw earlier.
#
# Stable-baselines provides a set of default [policies](https://stable-baselines.readthedocs.io/en/master/modules/policies.html) that can be used with most action spaces.
#
# We chose the [MlpPolicy](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html?highlight=MlpPolicy#stable_baselines3.dqn.MlpPolicy) because the input of CartPole is a feature vector, not images (as in the case of Atari games).
#
# The type of action to use (discrete/continuous) will be automatically deduced from the environment action space.
#
# Here we are using the [Proximal Policy Optimization (PPO)](https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html) algorithm.

# + id="mQCM9G_T3Rp8"
cartpole_env = wrap_env(gym.make("CartPole-v0"))
cartpole_env._max_episode_steps = 20
cartpole_model = PPO(MlpPolicy, cartpole_env, verbose=1)

# + [markdown] id="CYasdZXR39zE"
# We import a helper function to evaluate the agent.
# + id="t6Rr0B_j4AJ1"
from stable_baselines3.common.evaluation import evaluate_policy

# + [markdown] id="aFkdVC6T4MT9"
# Let's evaluate the untrained agent; at this point it should behave like a random agent.

# + id="8FJG3toK4NI3"
# Use a separate environment for evaluation
cartpole_eval_env = wrap_env(gym.make("CartPole-v0"))

# Random Agent, before training
mean_reward, std_reward = evaluate_policy(cartpole_model, cartpole_eval_env, n_eval_episodes=100)
print(f"Before training: mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")

# + [markdown] id="xE0DszDd4q3T"
# Train the agent and save it.

# + id="p5AZRqQw4vSA"
# Train the agent for 10000 steps
cartpole_model.learn(total_timesteps=10000)
cartpole_model.save("ppo_cartpole")

# + [markdown] id="6uxE2zX319G6"
# Since we saved the model, we can delete it from memory and reload it.

# + id="YNBVdFjycBGz"
# Since we saved the model, we can delete it from memory and reload it
del cartpole_model
cartpole_model = PPO.load("ppo_cartpole")

# + [markdown] id="xTJ65Oiv2Aza"
# Evaluate the trained agent.

# + id="TQozcy4E427F"
# Evaluate the trained agent
mean_reward, std_reward = evaluate_policy(cartpole_model, cartpole_eval_env, n_eval_episodes=100)
print(f"After training: mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")

# + [markdown] id="xtAWjf8u4__z"
# ### Task 12 (2 points)
#
# How do you know if the training went well or not? _(This is a written answer question)_

# + [markdown] id="_gLhRrQR5geU"
# _Please type your answer here_
#
# <font color='red'>As you can see, the number of successful episodes generally increases as training progresses. It is not advisable to stop the first time that we observe 100% success over 1,000 episodes. There is a randomness to most games, so it is not likely that an agent would retain its 100% success rate with a new run. Once you observe that the agent has gotten 100% for several update intervals, it might be safe to stop training.
# </font>

# + [markdown] id="_WwzkWjX6LJt"
# **Another piece of helper code to visualize the trained environment**
#
# In this implementation we do not need to use `wrap_env` as before.

# + id="17-9WSVZ6PvU"
# Set up fake display; otherwise rendering will fail
import os
os.system("Xvfb :1 -screen 0 1024x768x24 &")
os.environ['DISPLAY'] = ':1'

import base64
from pathlib import Path
from IPython import display as ipythondisplay


def render_videos(video_path='', prefix=''):
    """
    Taken from https://github.com/eleurent/highway-env

    :param video_path: (str) Path to the folder containing videos
    :param prefix: (str) Filter the videos, showing only the ones starting with this prefix
    """
    html = []
    for mp4 in Path(video_path).glob("{}*.mp4".format(prefix)):
        video_b64 = base64.b64encode(mp4.read_bytes())
        html.append('''<video alt="{}" autoplay
                    loop controls style="height: 400px;">
                    <source src="data:video/mp4;base64,{}" type="video/mp4" />
                    </video>'''.format(mp4, video_b64.decode('ascii')))
    ipythondisplay.display(ipythondisplay.HTML(data="<br>".join(html)))

# + [markdown] id="hyFQ7MCI-1Ni"
# We will record a video using the [VecVideoRecorder](https://stable-baselines.readthedocs.io/en/master/guide/vec_envs.html#vecvideorecorder) wrapper.
# + id="d0SmJkeD-2Ql" from stable_baselines3.common.vec_env import VecVideoRecorder, DummyVecEnv def record_video(env_id, model, video_length=500, prefix='', video_folder='videos/'): """ :param env_id: (str) :param model: (RL model) :param video_length: (int) :param prefix: (str) :param video_folder: (str) """ eval_env = DummyVecEnv([lambda: gym.make(env_id)]) # Start the video at step=0 and record 500 steps eval_env = VecVideoRecorder(eval_env, video_folder=video_folder, record_video_trigger=lambda step: step == 0, video_length=video_length, name_prefix=prefix) obs = eval_env.reset() for _ in range(video_length): action, _ = model.predict(obs) obs, _, _, _ = eval_env.step(action) # Close the video recorder eval_env.close() # + [markdown] id="jG_Z5yVn2WPg" # Record a video for the Cartpole env we trained earlier. # + id="_YPviuRC_B3s" record_video('CartPole-v0', cartpole_model, video_length=500, prefix='ppo-cartpole') # + [markdown] id="_GhrgssF2g8V" # Render the generated video. # + id="h0aaa_Iy_Tgn" render_videos('videos', prefix='ppo-cartpole-step-0-to-step-500') # + [markdown] id="CB1atl7MKqpB" # ### Using Pre-trained RL Agents # # You can train other models such as the MountainCar using the available algorithms in the stable-baselines3 library. However, to train these models, it might take a lot of time. # # You may clone the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) repository, install all the required libraries, and train an agent to successfully complete the MountainCar problem. However, to train this agent, it may take a day or two! # # **Note:** If you are interested, you can use multi-processing capabilities in the stable-baselines3 library to speed up the process as outlined in this [online guide](https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/master/multiprocessing_rl.ipynb). 
(Please note that this guide was written for the previous version of stable-baselines, so you may need to adjust the libraries as needed.)

# + id="-j96ym-1KxKS"
## Note: Only run this if you can wait for several hours (or days!) for the training process to complete.
# # !git clone --recursive https://github.com/DLR-RM/rl-baselines3-zoo
# # cd /content/rl-baselines3-zoo/
# # !pip install -r requirements.txt
# # !python train.py --algo ppo --env MountainCar-v0 -n 50000 -optimize --n-trials 1000 --n-jobs 2 --sampler tpe --pruner median

# + [markdown] id="UtWmPqVuOjFI"
# So, instead we will make use of the [rl-trained-agents](https://github.com/DLR-RM/rl-trained-agents), and generate a video for viewing. Here we apply the pre-trained [DQN](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) model to `MountainCar-v0` and generate a video of 10,000 timesteps.
#
# Let's first clone the repo.

# + id="0h7McVfnOuWb"
# !git clone --recursive https://github.com/DLR-RM/rl-trained-agents.git

# + [markdown] id="r83O0Ces2_7v"
# Then, load the pre-trained model, record the video, render it, and display the mean reward.

# + id="309Iqge9PWi0"
from stable_baselines3 import DQN
import gym

env_name = "MountainCar-v0"
mountaincar_dqn_model = DQN.load("rl-trained-agents/dqn/MountainCar-v0_1/MountainCar-v0.zip")

record_video(env_name, mountaincar_dqn_model, video_length=10000, prefix='dqn_mountaincar')
render_videos('videos', prefix='dqn')

mountaincar_eval_env = gym.make(env_name)

# Evaluate the trained agent
mean_reward, std_reward = evaluate_policy(mountaincar_dqn_model, mountaincar_eval_env,
                                          n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")

# + [markdown] id="e1ycf8dg7xMr"
# ### Task 13 (6 points)
#
# Use two pre-trained models for the `Pendulum-v0` environment available at [rl-trained-agents](https://github.com/DLR-RM/rl-trained-agents), and generate the videos.
# Note: You may need to adjust the video_length in the videos to an optimal value for comparison. (4 points)
#
# Based on the videos generated, what can you say about the performance of the two models? (2 points) _(This is a written answer question.)_

# + id="N6ucGuT2wBTd"
import gym
# The students should select two available models like the following.

# Type your code here for model 1 (2 points).
from stable_baselines3 import DDPG

env_name = "Pendulum-v0"
pendulum_ddpg_model = DDPG.load("rl-trained-agents/ddpg/Pendulum-v0_1/Pendulum-v0.zip")

record_video(env_name, pendulum_ddpg_model, video_length=50000, prefix='ddpg_pendulum')
render_videos('videos', prefix='ddpg')

# Type your code here for model 2 (2 points).
# Another relevant model

# + [markdown] id="SengQl9Vm2qi"
# _Type your answer to the written answer question here (2 points)_
#
# <font color='red'>This is a very subjective answer based on the models selected. There are no clear winners among the models used. As long as the students have demonstrated the use of two existing algorithms and presented an analysis of which model may be superior based on the generated videos, full points will be awarded.</font>

# + [markdown] id="1ESRA0Sxw-rj"
# ## Deep-RL with Atari Games
#
# For this example, we will use the Lunar Lander environment.
#
# "Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. Four discrete actions available: do nothing, fire left orientation engine, fire main engine, fire right orientation engine. "
#
# Lunar Lander environment: [https://gym.openai.com/envs/LunarLander-v2/](https://gym.openai.com/envs/LunarLander-v2/)
#
# ![Lunar Lander](https://cdn-images-1.medium.com/max/960/1*f4VZPKOI0PYNWiwt0la0Rg.gif)
#
# We will apply the same process as before to load a pre-trained model and view the Lunar Lander in action.
In this code snippet, we are using the [A2C](https://stable-baselines3.readthedocs.io/en/master/modules/a2c.html) algorithm.

# + id="vypvySbvoOb6"
from stable_baselines3 import A2C
import gym

env_name = "LunarLander-v2"
lunarlander_a2c_model = A2C.load("rl-trained-agents/a2c/LunarLander-v2_1/LunarLander-v2.zip")

record_video(env_name, lunarlander_a2c_model, video_length=5000, prefix='a2c_lunarlander')
render_videos('videos', prefix='a2c')

# + [markdown] id="fMLRtMK1qnso"
# # Creating Your Own Gym Environment
#
# So far you have used existing OpenAI Gym environments. Let's create a simple environment called `BasicEnv`. There are two actions in this environment: the first (action `0`) gets a reward of 1, and the second (action `1`) gets a reward of -1. If we take an action, we are in `state 1`, and depending on the action we receive the appropriate reward.

# + id="b-FsDuyGjf0N"
import numpy as np
import gym
from gym import spaces
import random


class BasicEnv(gym.Env):

    def __init__(self):
        '''There are two actions, first gets a reward of 1,
        second gets a reward of -1.
        '''
        self.action_space = gym.spaces.Discrete(2)
        self.observation_space = gym.spaces.Discrete(2)

    def step(self, action):
        # if we took an action, we were in state 1
        state = 1

        # action 0 is rewarded, action 1 is penalized
        if action == 0:
            reward = 1
        else:
            reward = -1

        # regardless of the action, game is done after a single step
        done = True

        info = {}
        return state, reward, done, info

    def reset(self):
        state = 0
        return state

    def render(self):
        pass

# + [markdown] id="byJ3u68T3fXR"
# Create an instance of the `BasicEnv` and inspect the action and the observation space.

# + id="9o7M08bVkMwR"
my_basic_env = BasicEnv()

action_space_size = my_basic_env.action_space
state_space_size = my_basic_env.observation_space

print(action_space_size)
print(state_space_size)

# + [markdown] id="zkjsCHj83nww"
# The stable_baselines3 library comes with a function that verifies if your environment is Gym-compatible.
#
# If the environment is defined properly, the function will not return anything.
That is a bit unusual, but it means that everything is OK.
#
# You can test what happens if you change any of the elements in your environment. For example, if you don’t have a reset function, you will get a `NotImplementedError`. If you don’t have a proper `self.action_space` and `self.observation_space`, for example if you defined them as a regular list instead of the special `gym.spaces` class, you will get this error: `AssertionError: The action space must inherit from gym.spaces`.

# + id="ne_YCN_Xs9w5"
from stable_baselines3.common.env_checker import check_env

print(check_env(my_basic_env))

# + [markdown] id="B7US_hgd4Ynw"
# Similar to the earlier pre-defined gym environments, we can apply an RL algorithm implemented in stable_baselines3 to define an agent and train it.

# + id="qSkkxYBkuxxg"
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3 import PPO

my_basic_model = PPO("MlpPolicy", my_basic_env, verbose=1)
my_basic_model.learn(total_timesteps=10000)

# + [markdown] id="Z1UKpJ4N4tL5"
# After the model has been trained, we can run the environment for several episodes (in this case 10), and see the output in each episode.

# + id="zu7iMfYCvGxD"
obs = my_basic_env.reset()
for i in range(10):
    action, _states = my_basic_model.predict(obs)
    obs, reward, done, info = my_basic_env.step(action)
    print(action, obs, reward, done)

# + [markdown] id="XWwh-yRh5LdG"
# As you can see, the above environment is a bit boring because, regardless of the action, the game is done in one step.

# + [markdown] id="djJfMkNA1REo"
# ### Task 14 (5 points)
#
# Develop a custom environment, titled `CustomEnv`, with 3 actions. Give a reward for each action in a probabilistic manner. For example, you may use a normal distribution based on the action provided to determine the reward from one of the three options [1, 0, -1]. (2 points)
#
# Use the environment checker to ascertain that the environment is defined correctly.
(1 point)
#
# Use any `stable_baselines3` algorithm to train a model for the `CustomEnv` for 10,000 timesteps and display the output for 100 episodes. (2 points)

# + id="a1OaUFGx7jlR"
# Type your code here
class CustomEnv(gym.Env):
    '''Almost the same as BasicEnv, with one difference: the reward
    for each action is a normal random variable
    '''
    metadata = {'render.modes': ['human']}

    def __init__(self):
        # There are three actions. The reward for an action is drawn from
        # a normal distribution whose mean and standard deviation are both
        # equal to the action index (so action 0 always yields 0).
        self.action_space = gym.spaces.Discrete(3)
        self.observation_space = gym.spaces.Discrete(3)

    def step(self, action):
        # if we took an action, we were in state 1
        state = 1

        reward = np.random.normal(loc=action, scale=action)

        # regardless of the action, game is done after a single step
        done = True

        info = {}
        return state, reward, done, info

    def reset(self):
        state = 0
        return state

    def render(self, mode='human'):
        pass

    def close(self):
        pass


my_custom_env = CustomEnv()

from stable_baselines3.common.env_checker import check_env
print(check_env(my_custom_env))

my_custom_model = PPO("MlpPolicy", my_custom_env, verbose=1)
my_custom_model.learn(total_timesteps=10000)

obs = my_custom_env.reset()
for i in range(100):
    action, _states = my_custom_model.predict(obs)
    obs, reward, done, info = my_custom_env.step(action)
    print(action, obs, reward, done)
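Task 14 can also be read literally: draw the reward from the three options [1, 0, -1] with probabilities that depend on the action. The solution above uses a normal distribution instead; as a dependency-free sketch of the literal reading, one could map each action to a distribution over those three rewards (the `ACTION_PROBS` values below are my own illustrative choices, not part of the lab).

```python
import random

# Hypothetical reward scheme: each action indexes a probability
# distribution over the three allowed rewards [1, 0, -1].
REWARDS = [1, 0, -1]
ACTION_PROBS = {
    0: [0.7, 0.2, 0.1],  # action 0 usually pays off
    1: [0.2, 0.6, 0.2],  # action 1 is usually neutral
    2: [0.1, 0.2, 0.7],  # action 2 is usually penalized
}

def sample_reward(action, rng=random):
    """Draw one reward from the distribution assigned to this action."""
    return rng.choices(REWARDS, weights=ACTION_PROBS[action], k=1)[0]

random.seed(42)
print([sample_reward(a) for a in (0, 1, 2)])
```

Dropping such a `sample_reward` call into `CustomEnv.step` in place of `np.random.normal` would keep the rewards in the required set while still making them probabilistic.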
lab4_answers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="bW1gifIe0pUt" # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://colab.research.google.com/github/OpenMined/PipelineDP/blob/main/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/OpenMined/PipelineDP/blob/main/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # </table> # + [markdown] id="3Pa1EeIdJyZn" # This is a simple example that shows how to calculate anonymized statistics using PipelineDP. The input data is a simulated dataset of visits to some restaurant during a 7 day period. Each visit is characterized by a visitor ID, the entry date, and the amount of money spent. In this colab we use Pipeline DP # Core API to calculate the count of restaurant visits per day. # # + [markdown] id="zxcPpZGuAPq8" # # Install dependencies and download data # # Run the code below to install the necessary dependencies, load and explore the input data. 
# # + id="E8yzpKYNbHTF" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="0e60ad12-094a-4e0d-9c44-d8377accc47c" cellView="form" #@markdown Install dependencies and download data import os os.chdir('/content') # !pip install pipeline-dp import sys sys.path.insert(0,'/content/PipelineDP') #Download restaurant dataset from github # !wget https://raw.githubusercontent.com/google/differential-privacy/main/examples/go/data/week_data.csv from IPython.display import clear_output clear_output() import pipeline_dp import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.read_csv('week_data.csv') df.rename(inplace=True, columns={'VisitorId' : 'user_id', 'Time entered' : 'enter_time', 'Time spent (minutes)' : 'spent_minutes', 'Money spent (euros)' : 'spent_money', 'Day' : 'day'}) rows = [index_row[1] for index_row in df.iterrows()] df.head() # + [markdown] id="hzPiLxByC5BJ" # # Run the pipeline # + id="rFj2u61qBx0r" # Set the backend to local backend. Other options (Beam or Spark) # are possible. backend = pipeline_dp.LocalBackend() # Define the total budget. budget_accountant = pipeline_dp.NaiveBudgetAccountant(total_epsilon=1, total_delta=1e-6) # Create DPEngine which will execute the logic. dp_engine = pipeline_dp.DPEngine(budget_accountant, backend) # Define privacy ID, partition key and aggregated value extractors. # The aggregated value extractor isn't used in this example. data_extractors = pipeline_dp.DataExtractors( partition_extractor=lambda row: row.day, privacy_id_extractor=lambda row: row.user_id, value_extractor=lambda row: 1) # Configure the aggregation parameters. params = pipeline_dp.AggregateParams( noise_kind=pipeline_dp.NoiseKind.LAPLACE, # This example computes only count but we can compute multiple # ... metrics at once. metrics=[pipeline_dp.Metrics.COUNT], # Limits visits contributed by a visitor. A visitor can contribute to # ... up to 3 days max_partitions_contributed=3, # ... and up to 2 visits per day. 
    max_contributions_per_partition=2,

    # Configure the output partition keys as they are publicly known.
    # The output should include all week days.
    public_partitions=list(range(1, 8)))

# Create a computational graph for the aggregation.
# All computations are lazy. dp_result is iterable, but iterating it would
# fail until budget is computed (below).
# It’s possible to call DPEngine.aggregate multiple times with different
# metrics to compute.
dp_result = dp_engine.aggregate(rows, params, data_extractors)

# Compute budget for each DP operation.
budget_accountant.compute_budgets()

# Here's where the lazy iterator initiates computations and gets transformed
# into actual results
dp_result = list(dp_result)

# + [markdown] id="hfHqnCLcDqpU"
# # Inspect the result

# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="sTkYZ0wSbo3h" outputId="80ab959d-5a2a-4901-fe10-2b99c1bd090b" cellView="form"
#@markdown ##Inspect the result
#@markdown Below you can see the DP and non-DP results.

# Compute non-DP result
non_dp_count = [0] * 7
for row in rows:
    index = row['day'] - 1
    non_dp_count[index] += 1

# Copy the DP result to a list
dp_count = [0] * 7
for count_sum_per_day in dp_result:
    index = count_sum_per_day[0] - 1
    dp_count[index] = count_sum_per_day[1][0]

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
x = np.arange(len(days))
width = 0.35

fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, non_dp_count, width, label='non-DP')
rects2 = ax.bar(x + width/2, dp_count, width, label='DP')

ax.set_ylabel('Visit count')
ax.set_title('Count visits per day')
ax.set_xticks(x)
ax.set_xticklabels(days)
ax.legend()

fig.tight_layout()

plt.show()
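To build intuition for why the DP bars above differ slightly from the non-DP ones: with `NoiseKind.LAPLACE`, each per-day count is perturbed with Laplace noise whose scale grows with how much one visitor can change the counts and shrinks with the privacy budget epsilon. The sketch below is a simplified illustration of that idea only, not PipelineDP's actual internal mechanism (in particular, real budget accounting splits epsilon across operations).

```python
import math
import random

def laplace_sample(scale, rng=random):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

# Parameters mirroring the pipeline above (sketch only).
epsilon = 1.0
max_partitions_contributed = 3       # a visitor affects up to 3 days
max_contributions_per_partition = 2  # and up to 2 visits per day

# One visitor can change the day counts by at most this much in total,
# so a simple Laplace mechanism would use scale = sensitivity / epsilon.
sensitivity = max_partitions_contributed * max_contributions_per_partition
scale = sensitivity / epsilon

true_count = 300                     # made-up count of visits for one day
noisy_count = true_count + laplace_sample(scale)
print(round(noisy_count, 1))
```

The noise has mean zero, so the DP bars track the true counts on average, and tighter contribution bounds or a larger epsilon shrink the noise.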
examples/quickstart.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Debugging strategies
#
# You will get errors in your scripts. This is not a bad thing! It's just part of the process -- the error messages will help guide you to the solution. The key is to not get discouraged.
#
# A typical development pattern: Write some code. Run it. See what errors break your script. Throw in some `print()` statements. Google around. Fix your errors. Rinse and repeat.
#
# n.b. Googling your error is _not_ "cheating" -- it's often the first step in resolving your problem. And if you get really stuck, don't be afraid to ask for help.

# ### Dissecting a Python error
#
# Let's step through an error and discuss strategies for resolving it.
#
# Run the code in the next cell.

# +
x = 10

if x > 20
    print('x is greater than 20!')
# -

# The "traceback" message shows you a couple of useful things:
#
# - What line the error is on: `line 3`
# - The class of error: `SyntaxError` (v common)
# - Exactly where the error occurred -- see where the `^` symbol is pointing?
#
# What's the problem?

# #### Googling
#
# If it's not immediately clear what's wrong, I might start by Googling the error message, the word "python" and maybe some keywords for what I was trying to do when I got the error. Something like [`"SyntaxError: invalid syntax" python if statement`](https://www.google.com/search?q=%22SyntaxError%3A+invalid+syntax%22+python+if+statement)
#
# Click through the first couple of links -- you'll become _very_ familiar with StackOverflow -- and see if you spot the problem.

# #### Read the docs
#
# If I'm still stuck, I might check out the documentation and examples for the thing I'm trying to do. [Here's the page outlining how to write an `if` statement in Python](https://docs.python.org/3/tutorial/controlflow.html).
From there, I would copy the example code, run it, compare it line by line with my code and see what's different. # # If I'm _still_ stuck, I might see if there are other keywords to search on and take another run at Google. # #### Guess and check ¯\&#95;(ツ)&#95;/¯ # # "Maybe if I changed _this thing_ ..." -- the muttered spell that punctuates many a successful debugging session. Tinker with your script, try things out, see if something works. # # If you change something and suddenly your script runs without error, great! Your next step, if you have time, is to figure out _why_ that change worked. Google around, read the docs, ask a more experienced developer. # #### Use `print()` liberally # # Especially when you're iterating over data files, the `print()` function can be a lifesaver. Print the value before you do any operations on it -- that will show you whether the value is what you expect, and point you to the line of data that's causing your script to fail. Here's an example: # + staff = [ {'name': 'Fran', 'age': 32, 'job': 'Reporter'}, {'name': 'John', 'age': ' 41', 'job': 'Managing Editor'}, {'name': 'Sue', 'age': 39, 'job': 'Executive Editor'} ] for person in staff: half_age = person['age'] / 2 print(half_age) # - # Pretend, for a moment, that we were reading in this data from a file, so it's not immediately obvious what's causing the error. I'd start by adding a print statement to dump the entire value of the `person` variable at the beginning of the loop: for person in staff: print(person) half_age = person['age'] / 2 print(half_age) # Now I've isolated the line of data causing the problem, and I can see the cause: The value for John's age is a string with a leading space, not a number. Boom. # #### Ask for help # # If you're hopelessly stuck, it's time to ask for help. You have many skilled friends in journalism who want to help you succeed -- pick a venue you're comfortable with (see below) and ask for help. 
#
# And of course feel free to contact me ([<EMAIL>](mailto:<EMAIL>)) or the rest of the training staff at IRE ([<EMAIL>](mailto:<EMAIL>)) for help.

# ### Get one thing to work at a time
#
# In general, if you're trying to get something to work for _all_ the data flowing through your script, it's a good idea to get it to work on _one thing_ first.
#
# For instance: Let's say you're processing data in a 30,000-line data file, and you want to reformat the dates from `m/d/yyyy` format to `yyyy-mm-dd` format. You've started work on a parsing function that currently looks like this:

def parse_row(row):
    age, booking_date, dob = row
    # do something to reformat the date strings via Python date objects
    return

# You need to figure out how to turn "9/7/1985" into "1985-09-07". Instead of calling that function on a "real" row of data inside the `with()` block where you're parsing your CSV file, however, start by doing something like this:

# +
from datetime import datetime

test_date = '9/7/1985'

parsed_date = datetime.strptime(test_date, '%m/%d/%Y').strftime('%Y-%m-%d')

print(parsed_date)
# -

# ... and _then_ once you've got the pattern down, add it to your parsing function.

# ### Exercises: What's the prob, Bob?
#
# For each of these Python snippets, figure out what the problems are and solve them.

print(Hello, Minneapolis!)
# + desk = {'wood': 'fir', 'color': 'black', 'height_in': 36, 'width_in': 48, 'length_in': 68} print(desk['drawer_count']) # + students = ['Kelly', 'Larry', 'José', 'Frank', 'Sarah', 'Sue'] for student in students: if student = 'Kelly': print('It's Kelly!') elif student == 'José': print("It's José!") # + import cvs with open('data/import-refusal-charge-codes.csv', r) as infile: reader = csv.reader(infile) for row in reader: print(row) # - # ### Resources # # - [ProPublica: How to ask programming questions](https://www.propublica.org/nerds/how-to-ask-programming-questions) # - [PythonJournos](https://github.com/PythonJournos/LearningPython/wiki) # - [NICAR-L](https://www.ire.org/resource-center/listservs/subscribe-nicar-l/) # - [The NewsNerdery Slack team](http://newsnerdery.org/) # - [The Lonely Coders Slack team](https://lcc-slack.herokuapp.com/) # - [Python's "Errors and Exceptions" tutorial](https://docs.python.org/3/tutorial/errors.html)
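As a coda to the date-reformatting example in the "Get one thing to work at a time" section above: once the `strptime`/`strftime` pattern works on a single test value, it can be folded into the parsing function. Here is one possible sketch (the three-field row layout comes from the stub above; the sample row itself is made up):

```python
from datetime import datetime

def parse_row(row):
    '''Unpack a row of data and reformat its date strings to yyyy-mm-dd.'''
    age, booking_date, dob = row
    # the pattern we tested on a single value, applied to each date field
    booking_date = datetime.strptime(booking_date, '%m/%d/%Y').strftime('%Y-%m-%d')
    dob = datetime.strptime(dob, '%m/%d/%Y').strftime('%Y-%m-%d')
    return [age, booking_date, dob]

print(parse_row(['33', '9/7/1985', '6/1/1952']))
# ['33', '1985-09-07', '1952-06-01']
```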
completed/16. Debugging strategies.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Gender Gap Analysis using WDI Data # <NAME> # # ## Introduction # Gender Gap is a topic that has been debated for several years. It is very interesting to study the growth or reduction of the gender gap over the years. The best way to quantify this is by exploring various areas like health, education, economy, and politics. It would also be interesting to see if the world average of these parameters follows the same trends as that of the US. # # This will be useful from a human-centered perspective because gender bias has been prevalent in our society for ages. This has given males an advantage over females over the years. But, as women are advancing in various fields, we see the gap diminishing. This project quantifies this decrease/increase in the gap and the areas we need to concentrate on to remove this gap. # # Also, by intuition, these factors may be related to each other. A study of correlations between these factors would be very interesting. # # ## Background or Related Work # My area of research is inspired by the [Global Gender Gap Report](https://en.wikipedia.org/wiki/Global_Gender_Gap_Report#:~:text=The%20report's%20Gender%20Gap%20Index,gender%20equality%20in%20a%20country.&text=Gender%20imbalances%20to%20the%20advantage%20of%20women%20do%20not%20affect%20the%20score.) 
# This report measures "gender-based gaps in access to resources and opportunities in countries" # # The 4 areas mentioned are: # # **1. Economic participation and opportunity:** # The indicators taken into consideration can be found [here](https://tcdata360.worldbank.org/indicators/a07f867b?country=BRA&indicator=28159&viz=line_chart&years=2006,2018#related-link) # The three concepts taken into consideration: # # A. Participation gap: Captures the difference between women and men in labour force participation rates. # B. Remuneration gap: Captures the difference between women and men in terms of remuneration for the same work. # C. Advancement gap: Captures the ratio of women to men among legislators, senior officials, and managers, and the ratio of women to men among technical and professional workers. # # **2. Educational attainment:** # The indicators taken into consideration can be found [here](https://tcdata360.worldbank.org/indicators/82bb9059?country=BRA&indicator=28160&viz=line_chart&years=2006,2018). It captures the gap between women's and men's current access to education through ratios of women to men in primary, secondary and tertiary-level education. # # # **3. Health and survival** # The indicators taken into consideration can be found [here](https://tcdata360.worldbank.org/indicators/e06df634?country=BRA&indicator=28163&viz=line_chart&years=2006,2018). It captures life expectancy and sex ratio. # # **4. Political empowerment** # The indicators taken into consideration can be found [here](https://tcdata360.worldbank.org/indicators/846d20f8?country=BRA&indicator=27960&viz=line_chart&years=2006,2018). It captures the gap between men and women at the highest level of political decision-making through the ratio of women to men in minister-level positions and the ratio of women to men in parliamentary positions.
# # Taking into consideration data availability, I have shortlisted the following indicators: # <table style="border: 1px solid black;"> # <tr style="border: 1px solid black;"> # <th>Indicator</th> # <th>Description</th> # </tr> # <tr> # <td colspan="2" style="border: 1px solid black;"><b>Economic participation and opportunity</b></td> # </tr> # <tr> # <td>Ratio Of Female To Male Labor Force Participation Rate (%) (Modeled ILO Estimate)</td> # <td>Labor force participation rate is the proportion of the population ages 15 and older that is economically active: all people who supply labor for the production of goods and services during a specified period. Ratio of female to male labor force participation rate is calculated by dividing female labor force participation rate by male labor force participation rate and multiplying by 100.</td> # </tr> # <tr> # <td colspan="2" style="border: 1px solid black;"><b>Educational attainment</b></td> # </tr> # <tr> # <td>Literacy Rate Female (%)</td> # <td>Percentage of female population age 7 and above who can read and write. For the purposes of census a person aged seven and above, who can both read and write with understanding in any language, is treated as literate. A person, who can only read but cannot write, is not literate.</td> # </tr> # <tr> # <td>Literacy Rate Male (%)</td> # <td>Percentage of male population age 7 and above who can read and write. For the purposes of census a person aged seven and above, who can both read and write with understanding in any language, is treated as literate.
A person, who can only read but cannot write, is not literate.</td> # </tr> # <tr> # <td>School Enrollment, PrePrimary, Female (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, PrePrimary, Male (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, Primary, Female (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, Primary, Male (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, Secondary, Female (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, Secondary, Male (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, Tertiary, Female (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td>School Enrollment, Tertiary, Male (% Gross) </td> # <td>Gross enrollment ratio is the ratio of total enrollment, regardless of age, to the population of
the age group that officially corresponds to the level of education shown.</td> # </tr> # <tr> # <td colspan="2" style="border: 1px solid black;"><b>Health and survival</b></td> # </tr> # <tr> # <td>Sex Ratio At Birth (Male Births Per Female Births)</td> # <td>Sex Ratio At Birth (Male Births Per Female Births)</td> # </tr> # <tr> # <td>Life Expectancy At Birth, Female (Years)</td> # <td>Life expectancy at birth indicates the number of years a newborn infant would live if prevailing patterns of mortality at the time of its birth were to stay the same throughout its life.</td> # </tr> # <tr> # <td>Life Expectancy At Birth, Male (Years)</td> # <td>Life expectancy at birth indicates the number of years a newborn infant would live if prevailing patterns of mortality at the time of its birth were to stay the same throughout its life.</td> # </tr> # <tr> # <td colspan="2" style="border: 1px solid black;"><b>Political empowerment</b></td> # </tr> # <tr> # <td>Proportion Of Seats Held By Women In National Parliaments (%) </td> # <td>Women in parliaments are the percentage of parliamentary seats in a single or lower chamber held by women.</td> # </tr> # </table> # # More details about the indicators can be found [here](https://data.worldbank.org/indicator/). # # It would be interesting to see how these indicators have changed over the years. I could also explore any inter-dependencies. # # It would also be interesting to see if the figures follow the same pattern at the world and US levels. # At the world level, it would be interesting to observe any correlations between pairs of indicators. # # ## Research questions or hypotheses # # I identified the indicators related to the 4 major areas mentioned in the report. # 1. Economic participation and opportunity: # 1. Labour force participation # 2. Educational attainment # 1. Literacy rate # 2. Enrolment in primary education # 3. Enrolment in secondary education # 4. Enrolment in tertiary education # 3. Health and survival # 1. 
Sex ratio at birth # 2. Life expectancy # 4. Political empowerment # 1. Proportion of seats held by women in national parliaments (%) # # The 3 analysis questions I looked into are: # 1. How has the gender gap increased/decreased in these 4 areas? # 2. Is the change in these areas the same at the global level as well as the US level? # 3. Are there correlations between any pairs of indicators at the world level? # # ## Data # # I used the World Development Indicators (WDI) dataset. I chose this dataset because it has a wide variety of indicators that can be compared to assess the status of males vs. females. It also made it easy to apply filters and change the country of interest. # # WDI has various indicators related to health, education, economy and politics for various countries of the world from 1960 onwards. # # Link: [WDI dataset](https://datacatalog.worldbank.org/dataset/world-development-indicators) # # License: [CC-BY 4.0](https://datacatalog.worldbank.org/public-licenses#cc-by) # # ## Methodology # # I used graphs to study the trend of gender gaps in the 4 areas. Graphs seemed the best place to start my analysis. For the indicators where data is present for male as well as female, I created new indicators which were a ratio of female to male or a difference of the two. Analyzing the trends of these new indicators was also useful. I also performed a correlation analysis of all-vs-all indicators. This helped unearth any inter-dependencies. # # Once I derived conclusions about the increase/decrease of gender gaps over the years at the world level, I performed the same analysis at the US level. # # Also, a correlogram would be helpful to study the relationships between the indicators analyzed in the study. # # This process will answer all my research questions.
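The derived-indicator step described in the methodology (female-to-male ratios and female-minus-male differences, followed by an all-vs-all correlation) can be sketched with a toy frame standing in for the WDI extract. The numbers below are illustrative, not real WDI values:

```python
import pandas as pd

# Toy stand-in for a filtered WDI extract (illustrative values only)
wdi = pd.DataFrame({
    'year': [2000, 2005, 2010, 2015],
    'literacy_female': [76.5, 79.0, 81.5, 83.0],
    'literacy_male': [87.0, 88.0, 89.0, 90.0],
    'enrollment_female': [91.0, 95.0, 98.0, 99.0],
    'enrollment_male': [96.0, 97.5, 99.0, 99.5],
})

# Derived indicators: female-to-male ratio and female-minus-male difference
wdi['literacy_ratio'] = wdi['literacy_female'] / wdi['literacy_male']
wdi['enrollment_gap'] = wdi['enrollment_female'] - wdi['enrollment_male']

# All-vs-all correlation of the derived indicators
print(wdi[['literacy_ratio', 'enrollment_gap']].corr())
```

The same two derived columns (ratio for literacy, difference for enrollment) mirror the transformations applied to the real indicators later in the notebook.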
# # !pip install seaborn #install seaborn #import packages import pandas as pd import matplotlib.pyplot as plt from matplotlib.offsetbox import OffsetImage, AnnotationBbox import seaborn as sns #read data df=pd.read_csv('data/WDIData.csv') #filter on world data on selected time period world_data=df.loc[df['Country Name']=='World'][['Indicator Name']+[str(i) for i in range(2000,2020)]] world_data.head() #filter on US data on selected time period us_data=df.loc[df['Country Name']=='United States'][['Indicator Name']+[str(i) for i in range(2000,2020)]] us_data.head() # ### Economic Participation And Opportunity # # #### Labour force participation # world_pg=world_data.loc[world_data['Indicator Name']=='Ratio of female to male labor force participation rate (%) (modeled ILO estimate)'][[str(i) for i in range(2000,2020)]] fig, (ax1,ax2) = plt.subplots(1,2,figsize=(15,10)) colors = ['#ffc0cb','#04d9ff'] ax1.pie([100-float(world_pg['2000']),float(world_pg['2000'])],labels=['Females','Males'],colors = colors,autopct = '%1.2f%%') ax1.set_title('2000') ax2.pie([100-float(world_pg['2019']),float(world_pg['2019'])],labels=['Females','Males'],colors = colors,autopct = '%1.2f%%') ax2.set_title('2019') fig.suptitle('Labor Force Participation (World)') fig.savefig('results/Participation Gap (World).png') us_pg=us_data.loc[us_data['Indicator Name']=='Ratio of female to male labor force participation rate (%) (modeled ILO estimate)'] fig, (ax1,ax2) = plt.subplots(1,2,figsize=(15,10)) colors = ['#ffc0cb','#04d9ff'] ax1.pie([100-float(us_pg['2000']),float(us_pg['2000'])],labels=['Females','Males'],colors = colors,autopct = '%1.2f%%') ax1.set_title('2000') ax2.pie([100-float(us_pg['2019']),float(us_pg['2019'])],labels=['Females','Males'],colors = colors,autopct = '%1.2f%%') ax2.set_title('2019') fig.suptitle('Labor Force Participation (US)') fig.savefig('results/Participation Gap (US).png') # We notice that at the world level, from 2000-2019 the ratio of women in labor force participation has
increased by 1.08%. At the US level, the ratio of women has decreased by 2.76%. # Thus, we conclude that the gender gap has increased in the US but decreased at the world level. # We observe that even at the world level, the increase is small; this is an area that should be worked upon. # # ## Educational Attainment # # ### School enrollment # + world_se_preprim_f=world_data.loc[world_data['Indicator Name']=='School enrollment, preprimary, female (% gross)'][[str(i) for i in range(2000,2020)]] world_se_preprim_f=world_se_preprim_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_preprim_m=world_data.loc[world_data['Indicator Name']=='School enrollment, preprimary, male (% gross)'][[str(i) for i in range(2000,2020)]] world_se_preprim_m=world_se_preprim_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_prim_f=world_data.loc[world_data['Indicator Name']=='School enrollment, primary, female (% gross)'][[str(i) for i in range(2000,2020)]] world_se_prim_f=world_se_prim_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_prim_m=world_data.loc[world_data['Indicator Name']=='School enrollment, primary, male (% gross)'][[str(i) for i in range(2000,2020)]] world_se_prim_m=world_se_prim_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_sec_f=world_data.loc[world_data['Indicator Name']=='School enrollment, secondary, female (% gross)'][[str(i) for i in range(2000,2020)]] world_se_sec_f=world_se_sec_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_sec_m=world_data.loc[world_data['Indicator Name']=='School enrollment, secondary,
male (% gross)'][[str(i) for i in range(2000,2020)]] world_se_sec_m=world_se_sec_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_ter_f=world_data.loc[world_data['Indicator Name']=='School enrollment, tertiary, female (% gross)'][[str(i) for i in range(2000,2020)]] world_se_ter_f=world_se_ter_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) world_se_ter_m=world_data.loc[world_data['Indicator Name']=='School enrollment, tertiary, male (% gross)'][[str(i) for i in range(2000,2020)]] world_se_ter_m=world_se_ter_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) fig, ax1 = plt.subplots(figsize=(15,5)) diff=[world_se_ter_f['value']-world_se_ter_m['value'],world_se_sec_f['value']-world_se_sec_m['value'],world_se_prim_f['value']-world_se_prim_m['value'],world_se_preprim_f['value']-world_se_preprim_m['value']] cm = plt.cm.get_cmap('RdYlBu') plt.stackplot([int(i) for i in range(2000,2020)],diff,cmap=cm) plt.xticks([int(i) for i in range(2000,2020)]) plt.legend(['Tertiary','Secondary','Primary','Preprimary']) plt.axhline(y=0, linestyle='-') ax1.set_title('School enrollment, female (% gross) - School enrollment, male (% gross) at various education levels(world)') plt.xlabel('Year') plt.ylabel('Female - Male School enrollment (% gross)') fig.savefig('results/School enrollment (World).png') us_se_preprim_f=us_data.loc[us_data['Indicator Name']=='School enrollment, preprimary, female (% gross)'][[str(i) for i in range(2000,2020)]] us_se_preprim_f=us_se_preprim_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_preprim_m=us_data.loc[us_data['Indicator Name']=='School enrollment, preprimary, male (% gross)'][[str(i) for i in range(2000,2020)]]
us_se_preprim_m=us_se_preprim_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_prim_f=us_data.loc[us_data['Indicator Name']=='School enrollment, primary, female (% gross)'][[str(i) for i in range(2000,2020)]] us_se_prim_f=us_se_prim_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_prim_m=us_data.loc[us_data['Indicator Name']=='School enrollment, primary, male (% gross)'][[str(i) for i in range(2000,2020)]] us_se_prim_m=us_se_prim_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_sec_f=us_data.loc[us_data['Indicator Name']=='School enrollment, secondary, female (% gross)'][[str(i) for i in range(2000,2020)]] us_se_sec_f=us_se_sec_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_sec_m=us_data.loc[us_data['Indicator Name']=='School enrollment, secondary, male (% gross)'][[str(i) for i in range(2000,2020)]] us_se_sec_m=us_se_sec_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_ter_f=us_data.loc[us_data['Indicator Name']=='School enrollment, tertiary, female (% gross)'][[str(i) for i in range(2000,2020)]] us_se_ter_f=us_se_ter_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) us_se_ter_m=us_data.loc[us_data['Indicator Name']=='School enrollment, tertiary, male (% gross)'][[str(i) for i in range(2000,2020)]] us_se_ter_m=us_se_ter_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'value'}).drop(columns=['del']) fig, ax1 = plt.subplots(figsize=(15,5)) 
diff=[us_se_ter_f['value']-us_se_ter_m['value'],us_se_sec_f['value']-us_se_sec_m['value'],us_se_prim_f['value']-us_se_prim_m['value'],us_se_preprim_f['value']-us_se_preprim_m['value']] cm = plt.cm.get_cmap('RdYlBu') plt.stackplot([int(i) for i in range(2000,2020)],diff,cmap=cm) plt.xticks([int(i) for i in range(2000,2020)]) plt.legend(['Tertiary','Secondary','Primary','Preprimary']) plt.axhline(y=0, linestyle='-') ax1.set_title('School enrollment, female (% gross) - School enrollment, male (% gross) at various education levels(US)') plt.xlabel('Year') plt.ylabel('Female - Male School enrollment (% gross)') fig.savefig('results/School enrollment(US).png') # - # At the world level, we start with negative figures when we subtract the school enrollments for males from those for females at various levels of education. But it is very encouraging to see that the gap is being reduced as time passes. # This change is not so apparent at the US level. # We observe school enrollment for females at the tertiary level surpasses the school enrollment for males at both the world and US levels. This is a strong indicator that women are motivated to build a career for themselves and are determined to gain knowledge. This also indicates that there is an increase in access to higher education at the world as well as the US level. # 2000 starts with a vast difference in school enrollment at every level. But it is very encouraging to see this difference has diminished for every level of education since then.
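The wide-to-long reshaping chain repeated for each indicator above (`unstack().reset_index().rename().drop()`) can be factored into a small helper. A sketch, using a toy one-row frame in place of a filtered WDI slice:

```python
import pandas as pd

def to_long(wide_row, value_name='value'):
    '''Turn a one-row frame with years as columns into a year/value frame.'''
    long_df = wide_row.unstack().reset_index(name=value_name)
    # unstack() yields a MultiIndex of (column, original row index);
    # level_0 is the year, level_1 the (useless) original row index
    return long_df.rename(columns={'level_0': 'year'}).drop(columns=['level_1'])

# Toy stand-in for e.g. world_se_prim_f before reshaping
wide = pd.DataFrame({'2000': [91.2], '2001': [92.0], '2002': [92.7]})
print(to_long(wide))
```

Each `world_se_*` / `us_se_*` block above could then be collapsed to a single `to_long(...)` call, removing a lot of repetition.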
# # # ### Literacy rate world_lit_m=world_data.loc[world_data['Indicator Name']=='Literacy rate, adult male (% of males ages 15 and above)'][[str(i) for i in range(2000,2020)]] world_lit_f=world_data.loc[world_data['Indicator Name']=='Literacy rate, adult female (% of females ages 15 and above)'][[str(i) for i in range(2000,2020)]] world_lit_m=world_lit_m.melt() world_lit_f=world_lit_f.melt() fig, ax1 = plt.subplots(figsize=(15,5)) plt.plot(world_lit_f['variable'],world_lit_f['value']/world_lit_m['value']) ax1.set_title('Ratio of female literacy percentage to male literacy percentage(world)') plt.xlabel('Year') plt.ylabel('Ratio') fig.savefig('results/Literacy rate ratio.png') # We notice that the ratio of the percentage of literate women to literate men increases from 2000-2019. This indicates a reduction in the gender gap in literacy rates. Since US data was not available, comparison to US statistics was not possible # ## Health and survival # # ### Sex ratio at birth world_sr=world_data.loc[world_data['Indicator Name']=='Sex ratio at birth (male births per female births)'][[str(i) for i in range(2006,2020)]] world_sr=world_sr.melt() us_sr=us_data.loc[us_data['Indicator Name']=='Sex ratio at birth (male births per female births)'][[str(i) for i in range(2006,2020)]] us_sr=us_sr.melt() fig, ax1 = plt.subplots(figsize=(15,5)) plt.plot(world_sr['variable'],world_sr['value'],marker="o") plt.plot(us_sr['variable'],us_sr['value'],marker="*") plt.legend(['World','US']) ax1.set_title('Sex ratio at birth (male births per female births)') plt.xlabel('Year') plt.ylabel('Sex Ratio') fig.savefig('results/Sex ratio.png') # We observe that the number of male births per female births is approaching 1, but we still have a long way to go. The world statistics have shown much more improvement than the US. 
# # ### Life expectancy at birth world_le_f=world_data.loc[world_data['Indicator Name']=='Life expectancy at birth, female (years)'][[str(i) for i in range(2000,2020)]] world_le_m=world_data.loc[world_data['Indicator Name']=='Life expectancy at birth, male (years)'][[str(i) for i in range(2000,2020)]] world_le_f=world_le_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'life expectancy'}).drop(columns=['del']) world_le_f['Gender']='Female' world_le_f['Location']='World' world_le_m=world_le_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'life expectancy'}).drop(columns=['del']) world_le_m['Gender']='Male' world_le_m['Location']='World' us_le_f=us_data.loc[us_data['Indicator Name']=='Life expectancy at birth, female (years)'][[str(i) for i in range(2000,2020)]] us_le_m=us_data.loc[us_data['Indicator Name']=='Life expectancy at birth, male (years)'][[str(i) for i in range(2000,2020)]] us_le_f=us_le_f.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'life expectancy'}).drop(columns=['del']) us_le_f['Gender']='Female' us_le_f['Location']='us' us_le_m=us_le_m.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'life expectancy'}).drop(columns=['del']) us_le_m['Gender']='Male' us_le_m['Location']='us' res=pd.concat([world_le_m,world_le_f,us_le_m,us_le_f]) # #!pip install seaborn import seaborn as sns g=sns.relplot( data=res,x="year", y="life expectancy", hue='Gender',kind="line",style='Location', markers=['o','^']) g.fig.set_size_inches(11.7, 8.27) plt.xlabel('Year') plt.ylabel('Life Expectancy (years)') g.fig.savefig('results/Life Expectancy.png') # Contrary to expectation, the life expectancy of females was greater than that of males. It felt good to see that the increase in life expectancy was approximately equal across genders. This meant women and men had equal access to health services.
# The increase in life expectancy was more prominent at the world level when compared to the US level. # ## Political empowerment # # ### Proportion Of Seats Held By Women In National Parliaments (%) # # + world_pp=world_data.loc[world_data['Indicator Name']=='Proportion of seats held by women in national parliaments (%)'][[str(i) for i in range(2000,2020)]] world_pp=world_pp.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'seat %'}).drop(columns=['del']) world_pp['location']='World' us_pp=us_data.loc[us_data['Indicator Name']=='Proportion of seats held by women in national parliaments (%)'][[str(i) for i in range(2000,2020)]] us_pp=us_pp.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'seat %'}).drop(columns=['del']) us_pp['location']='us' path_globe = "images/globe crown.png" image_globe = plt.imread(path_globe)[0:30, 0:30] path_us = "images/usa crown.png" image_us = plt.imread(path_us)[0:30, 0:30] fig, ax = plt.subplots(figsize=(15,8)) ax.plot([i for i in range(2000,2020)], world_pp['seat %'] ) ax.plot([i for i in range(2000,2020)], us_pp['seat %']) def plot_images(x, y, image, ax=None): ax = ax or plt.gca() for xi, yi in zip(x,y): im = OffsetImage(image, zoom=72/ax.figure.dpi) im.image.axes = ax ab = AnnotationBbox(im, (xi,yi), frameon=False, pad=0.0,) ax.add_artist(ab) plot_images([int(i) for i in range(2000,2020)], world_pp['seat %'] , image_globe, ax=ax) plot_images([int(i) for i in range(2000,2020)], us_pp['seat %'], image_us, ax=ax) plt.legend(['World','US']) plt.xlabel("Year") plt.ylabel("Proportion of seats held by women in national parliaments (%)") plt.xticks([int(i) for i in range(2000,2020)]) plt.show() fig.savefig('results/Proportion of seats held by women.png') # - # We notice there is a persistent and gradual increase in women's representation in national parliaments at the world and US level.
This is an indicator of the political empowerment of women and women having an equal say in society. # We notice a jump in women's representation in the US from 2017 to 2018: the House of Representatives saw 24% and the Senate saw 25% women's representation. This was a commendable record in history. # # ### Correlations cor=pd.DataFrame() cor['Year']=[int(i) for i in range(2000,2020)] cor['Participation Gap']=world_pg.unstack().reset_index(name='value').rename(columns={"level_0":'year','level_1':'del','value':'participation gap'}).drop(columns=['del','year'])['participation gap'] cor['Literacy Rate']=world_lit_f['value']/world_lit_m['value'] cor['Pre Primary Education']=world_se_preprim_m['value']-world_se_preprim_f['value'] cor['Primary Education']=world_se_prim_m['value']-world_se_prim_f['value'] cor['Secondary Education']=world_se_sec_m['value']-world_se_sec_f['value'] cor['Tertiary Education']=world_se_ter_m['value']-world_se_ter_f['value'] cor['Sex Ratio']=world_sr['value'] cor['Life Expectancy (male)']=world_le_m['life expectancy'] cor['Life Expectancy (female)']=world_le_f['life expectancy'] cor['Proportion of Seats held by Women']=world_pp['seat %'] sns_plot = sns.pairplot(cor,kind="scatter") sns_plot.savefig('results/Correlogram.png') # We notice that there are linear correlations between almost all pairs of indicators. The participation gap is an exception that has a parabola-like shape. # This is an interesting observation as this affects the usability of these indicators in various models. # # ## Findings # # **1. Economic participation and opportunity:** # The labour force participation of women has increased by about 1.08% at the world level, whereas it has decreased by about 2.76% at the US level. # # **2. Educational attainment:** # The ratio of female literacy percentage to the male literacy percentage is fast approaching 1.
# At the world level, though the gender gap in school enrollments was very apparent in 2000, as time passed the gap reduced. This reduction in the gap was not so apparent at the US level. # # **3. Health and survival:** # The sex ratio at birth is tending towards one at the US and world levels. # Life expectancy is increasing equally for both genders, at the world as well as the US level. # # **4. Political empowerment** # There has been a slow and gradual increase in the proportion of women in national parliaments. # # We also notice a linear relationship between every pair of indicators. The participation gap forms an exception, with a parabola-like relation to all the other indicators. # # # ## Discussion # # **1. Economic participation and opportunity:** # The reduction in the gender gap is insignificant or negative in this area. This means we need to increase labour participation at the world level as well as the US level. An increase in women's participation in economic activities will help countries strive towards excellence at a greater pace. # # **2. Educational attainment:** # The ratio of female literacy percentage to the male literacy percentage fast approaching 1 means the literacy rates for both genders are approaching equality. # This reduction in the school enrollment gender gap was not as apparent at the US level as at the world level. # Greater access to higher education increased enrollment of women at the tertiary level of education at the world as well as the US level. # Overall, the area of educational attainment has made tremendous progress in the reduction of the gender gap. # It is exciting to see that more and more women are getting an education. This would also lead to greater economic participation and political empowerment. It will make them aware of their surroundings and help them respond better to any situation. # # **3.
Health and survival:** # The sex ratio at birth tending towards one indicates that the numbers of male and female births are fast approaching equality. This occurs at both the US and world levels. # Equality in access to health and survival resources is the reason for the uniform increase in life expectancy across genders. # Thus the area of health and survival is making good progress in the reduction of the gender gap. # This means women are getting better healthcare facilities and resources. Also, there is an improvement in the sex ratio, indicating their social upliftment. "Health is Wealth" is an old saying, and here it implies that health contributes to the financial well-being of women as well. # # **4. Political empowerment** # There has been a slow and gradual increase in the proportion of women in national parliaments. This indicates women are being encouraged to join politics and have a voice of their own. # The jump in women's representation in the US in 2018 was a commendable benchmark in the history of the country. # Though the trend at the world level is smoother than in the US, both are progressing towards women's political empowerment. This is an ultimate indicator of the reduction of the gender gap. It indicates the social as well as mental well-being of women, and that they are supported enough to hold important political posts. # # # At the world level, all the indicators except the participation gap have an almost linear relationship with the other indicators. # The participation gap forms an exception, with a parabola-like relation to all the other indicators. This ticks off one requirement for use in linear regression analysis. # # # ### Limitations: # 1. The data was empty for many indicators at the world or US level. This resulted in a limited choice of indicators. # 2. Only 1 or 2 indicators per area are used. Analysis of a wider range of indicators may reveal other findings. # 3. This is a graphical EDA.
A more statistical study may help in deeper exploration of the correlations. # # # ## Conclusion # # All 4 areas - economic participation and opportunity, educational attainment, health and survival, political empowerment - have shown a reduction in the Gender Gap at the World level. # At the US level, Economic participation and opportunity shows an increase in the gender gap, whereas other areas show a decrease in the Gender Gap. # At the world level, all the indicators have a linear correlation with each other except for the Participation Gap, which shows a parabola-like relationship with all others. # # ## References # [1] <NAME>. “A Record Number of Women Will Be Serving in the New Congress.” Pew Research Center, 18 Dec. 2018, www.pewresearch.org/fact-tank/2018/12/18/record-number-women-in-congress. # # [2] “How to Use Custom Png Image Marker with Plot?” Stack Overflow, 23 Feb. 2010, stackoverflow.com/questions/2318288/how-to-use-custom-png-image-marker-with-plot. # # [3] https://icons.iconarchive.com/icons/dtafalonso/modern-xp/48/ModernXP-73-Globe-icon.png # # [4] https://icon-library.com/images/crown-xxl.png # # [5] https://www.hiclipart.com/free-transparent-background-png-clipart-zjdjg #
A7 Final Project Report.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # + import os os.chdir(os.path.dirname("../")) # - import deepof.data import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D from deepof.models import * from itertools import product from scipy.optimize import curve_fit from scipy.stats import linregress from sklearn.linear_model import LinearRegression from sklearn.metrics import pairwise_distances, r2_score from tqdm import tqdm # # Tuning of latent space entropy radius # # To evaluate how clusters overlap in the latent space, we compute the mean entropy of cluster assignment across all datapoints that fall within a radius of given encoded training instances. This notebook explores how the number of neighbors that fall within that radius on the latent space depends on several variables (i.e., number of clusters and encoding dimensions). 
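As a minimal, self-contained sketch of the core idea (synthetic 2-D "encodings" drawn from a Gaussian, not the GMVAE encodings used below), counting neighbors within a radius reduces to thresholding a pairwise-distance matrix:

```python
import numpy as np

# Synthetic stand-in for latent encodings; the cells below do the same
# computation on real GMVAE encodings via sklearn's pairwise_distances.
rng = np.random.default_rng(0)
encodings = rng.normal(size=(500, 2))

# Pairwise Euclidean distances via broadcasting
diffs = encodings[:, None, :] - encodings[None, :, :]
pdist = np.sqrt((diffs ** 2).sum(-1))

# Median number of samples within radius r of each point (self included);
# this grows monotonically with r, which is what the sigmoid fits exploit.
for r in (0.1, 0.5, 1.0):
    print(r, np.median((pdist < r).sum(axis=0)))
```
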
data_path = "../../Desktop/deepoftesttemp/" # Load data and tag a few test videos proj = deepof.data.project(path=data_path, arena_dims=[380]).run() rules = proj.rule_based_annotation() coords = proj.get_coords(propagate_annotations=False) list(range(2500, 15001, 2500)) # + # Load the models, and try different radii # each array is rank 3: number of training samples, encoding dimensions, and different radii x, y = np.zeros([6, 6, 100]), np.zeros([6, 6, 100]) # Iterate over the number of training samples for a, n in enumerate(tqdm(range(2500, 15001, 2500))): X_train, _, _, _ = coords.preprocess(shuffle=True, window_size=25, test_videos=0) X_train = X_train[np.random.choice(range(X_train.shape[0]), n, replace=False)] for b, d in enumerate((2, 4, 6, 8, 10, 12)): gmvaep = SEQ_2_SEQ_GMVAE(encoding=d, number_of_components=15).build( X_train.shape )[3] # Get encoder and grouper from full model cluster_means = [ layer for layer in gmvaep.layers if layer.name == "latent_distribution" ][0] cluster_assignment = [ layer for layer in gmvaep.layers if layer.name == "cluster_assignment" ][0] encoder = tf.keras.models.Model(gmvaep.layers[0].input, cluster_means.output) grouper = tf.keras.models.Model( gmvaep.layers[0].input, cluster_assignment.output ) # Use encoder and grouper to predict on the training data encoding = encoder.predict(X_train) groups = grouper.predict(X_train) pdist = pairwise_distances(encoding) for i, r in enumerate(np.linspace(0, 5, 100)): x[a][b][i], y[a][b][i] = ( np.round(r, 7), np.median(np.sum(pdist < r, axis=0)), ) # - # Select number of average neighbors to aim for N = 100 # + fig, (ax1, ax2) = plt.subplots( 1, 2, figsize=(12, 4), dpi=100, facecolor="w", edgecolor="k", sharey=True ) plt.suptitle("Samples in latent space neighborhood for a given radius") # Plot number of neighbors in radius versus number of encoded training samples for i, t in enumerate(range(2500, 15001, 2500)): ax1.plot(x[i][2], y[i][2], label="t={}".format(t)) # Plot number of neighbors in radius versus encoding 
dimensions for i, d in enumerate([2, 4, 6, 8, 10, 12]): ax2.plot(x[5][i], y[5][i], label="enc={}".format(d)) ax1.set_xlabel("radius") ax1.set_ylabel("samples in neighborhood") ax1.legend() # ax1.set_xlim(0,2) # ax1.set_ylim(0,100) ax1.axhline(N, linestyle="--", c="r", linewidth=0.5) ax2.set_xlabel("radius") ax2.set_ylabel("samples in neighborhood") ax2.axhline(N, linestyle="--", c="r", linewidth=0.5) ax2.legend() plt.show() # + # Fit sigmoid functions to the data in the second plot, and compute the radius that yields N neighbors on average for # each curve def sigmoid(x, L, x0, k, b): y = L / (1 + np.exp(-k * (x - x0))) + b return y def fit_sigmoid(x, y): p0 = [max(y), np.median(x), 1, min(y)] popt, pcov = curve_fit(sigmoid, x, y, p0, method="dogbox") return popt def retrieve_x_from_sigmoid(x, y, n): L, x0, k, b = fit_sigmoid(x, y) x_given_k = -(np.log(L / (n - b) - 1) / k) + x0 return x_given_k # + # Interpolate to get the radius that will yield n neighbors in each setting x_given_n = np.zeros([6, 6]) _x_given_n = np.zeros([6, 6]) y_given_n = np.array([list(range(2500, 15001, 2500)), [2, 4, 6, 8, 10, 12]]) for i in range(6): for j in range(6): x_given_n[i][j] = retrieve_x_from_sigmoid(x[i][j], y[i][j], 100) # + # Fit a line to the data to get an equation of how #neighbors varies with encoding dimensions # The retrieved equation will be the default radius! res1 = linregress(np.log2(y_given_n[0]), x_given_n[:, 2]) print(res1) res2 = linregress(y_given_n[1], x_given_n[5]) print(res2) # + # Compute radius for an example # Note: res3 (the hyperplane fit) is defined in a later cell; run that cell first def radius_given_n_and_dim(n, dim, coefs, inpt): return coefs[0] * np.log2(n) + coefs[1] * dim + inpt radius_given_n_and_dim(15000 * 5, 6, res3.coef_, res3.intercept_) # - # To select a good default for the radius r, we make the value depend on the variables we find relationships with, such as the number of dimensions in the latent space. 
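As a quick sanity check of the inversion used in `retrieve_x_from_sigmoid` (with made-up sigmoid parameters chosen only for illustration), plugging the retrieved radius back into the sigmoid should return the target neighbor count:

```python
import numpy as np

# Hypothetical sigmoid parameters (L, x0, k, b), for illustration only
L, x0, k, b = 200.0, 2.0, 3.0, 5.0

def sigmoid(x):
    return L / (1 + np.exp(-k * (x - x0))) + b

# Same algebra as retrieve_x_from_sigmoid: solve sigmoid(x) == n for x
def radius_for_n(n):
    return x0 - np.log(L / (n - b) - 1) / k

r = radius_for_n(100.0)
print(np.isclose(sigmoid(r), 100.0))  # True
```
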
# + fig, (ax1, ax2) = plt.subplots( 1, 2, figsize=(12, 5), dpi=100, facecolor="w", edgecolor="k", sharey=True ) ax1.scatter(np.log2(y_given_n[0]), x_given_n[:, 2]) ax1.plot( np.log2(y_given_n[0]), res1.intercept + res1.slope * np.log2(y_given_n[0]), "r", label="y={}*x+{}".format(np.round(res1.slope, 2), np.round(res1.intercept, 2)), ) ax1.set_ylabel("radius to reach {} samples in neighborhood".format(N)) ax1.set_xlabel("number of encoded examples") ax2.scatter(y_given_n[1], x_given_n[5]) ax2.plot( y_given_n[1], res2.intercept + res2.slope * y_given_n[1], "r", label="y={}*x+{}".format(np.round(res2.slope, 2), np.round(res2.intercept, 2)), ) ax2.set_ylabel("radius to reach {} samples in neighborhood".format(N)) ax2.set_xlabel("number of dimensions") plt.suptitle( "Relationship between radius to reach {} average neighbors \n \ before training and neighborhood crowdedness".format( N ) ) ax1.legend() ax2.legend() plt.ylim(0) plt.show() # + # Fit a hyperplane to both features res3 = LinearRegression() X = np.array([list(i) for i in product(np.log2(y_given_n[0]), y_given_n[1])]) res3.fit(X, x_given_n.flatten(order="C")) print( "log2(samples) coef: {}\n\ dimension coef: {}".format( *np.round(res3.coef_, 25) ) ) print("intercept:", np.round(res3.intercept_, 25)) print() print("r2_score:", np.round(r2_score(x_given_n.flatten(), res3.predict(X)), 5)) # + # %matplotlib inline # Let's represent how both variables evolve in a 3D space fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(111, projection="3d") # Get combinations of predictors prod = np.array([list(i) for i in product(y_given_n[0], y_given_n[1])]) n, d = prod[:, 0], prod[:, 1] ax.scatter3D( np.log2(n), d, x_given_n, c="red", label="z={}*x + {}*y + {}".format( *np.round(res3.coef_, 5), np.round(res3.intercept_, 5) ), ) x1, x2 = np.meshgrid(X[:, 0], X[:, 1]) ax.plot_surface( x1, x2, (res3.coef_[0] * x1 + res3.coef_[1] * x2 + res3.intercept_), cmap=cm.coolwarm, linewidth=1, antialiased=True, ) ax.set_xlabel("number 
of samples") ax.set_ylabel("number of dimensions") ax.set_zlabel("radius to reach {} samples in neighborhood".format(N)) ax.legend() plt.show()
supplementary_notebooks/set_default_entropy_radius.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Intro. to Snorkel: Extracting Spouse Relations from the News # ## Part II: Generating _and modeling_ noisy training labels # # In this part of the tutorial, we will write **labeling functions** which express various heuristics, patterns, and [_weak supervision_](http://hazyresearch.github.io/snorkel/blog/weak_supervision.html) strategies to label our data. # # In most real-world settings, hand-labeled training data is prohibitively expensive and slow to collect. A common scenario, though, is to have access to tons of _unlabeled_ training data, and have some idea of how to label it programmatically. For example: # # * We may be able to think of text patterns that would indicate two people mentioned in a sentence are married, such as seeing the word "spouse" between the mentions. # * We may have access to an external _knowledge base (KB)_ that lists some known pairs of married people, and can use these to heuristically label some subset of our data. # # Our labeling functions will capture these types of strategies. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will _model_ their accuracies with a generative model, which Snorkel will help us easily apply. # # This will ultimately produce a single set of **noise-aware training labels**, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our [NIPS 2016 paper](https://arxiv.org/abs/1605.07723). # + # %load_ext autoreload # %autoreload 2 # %matplotlib inline import os # TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE # Note that this is necessary for parallel execution amongst other things... 
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro' import numpy as np from snorkel import SnorkelSession session = SnorkelSession() # - # We repeat our definition of the `Spouse` `Candidate` subclass from Parts II and III. # + from snorkel.models import candidate_subclass Spouse = candidate_subclass('Spouse', ['person1', 'person2']) # - # ### Using a labeled _development set_ # # In our setting here, we will use the phrase "development set" to refer to a _small_ set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the _test set_, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions. # # In our case, we already loaded existing labels for a development set (`split` 1), so we can load them again now: # + from snorkel.annotations import load_gold_labels L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1) # - # # Creating and Modeling a Noisy Training Set # # Our biggest step in the data programming pipeline is the creation - _and modeling_ - of a noisy training set. We'll approach this in three main steps: # # 1. **Creating labeling functions (LFs):** This is where most of our development time would actually go into if this were a real application. Labeling functions encode our heuristics and weak supervision signals to generate (noisy) labels for our training candidates. # # 2. **Applying the LFs:** Here, we actually use them to label our candidates! # # 3. **Training a generative model of our training set:** Here we learn a model over our LFs, learning their respective accuracies automatically. This will allow us to combine them into a single, higher-quality label set. # # We'll also add some detail on how to go about _developing labeling functions_ and then _debugging our model_ of them to improve performance. # # ## 1. 
Creating Labeling Functions # # In Snorkel, our primary interface through which we provide training signal to the end extraction model we are training is by writing **labeling functions (LFs)** (as opposed to hand-labeling massive training sets). We'll go through some examples for our spouse extraction task below. # # A labeling function is just a Python function that accepts a `Candidate` and returns `1` to mark the `Candidate` as true, `-1` to mark the `Candidate` as false, and `0` to abstain from labeling the `Candidate` (note that the non-binary classification setting is covered in the advanced tutorials!). # # In the next stages of the Snorkel pipeline, we'll train a model to learn the accuracies of the labeling functions and reweight them accordingly, and then use them to train a downstream model. It turns out that by doing this, we can get high-quality models even with lower-quality labeling functions. So they don't need to be perfect! Now on to writing some: import re from snorkel.lf_helpers import ( get_left_tokens, get_right_tokens, get_between_tokens, get_text_between, get_tagged_text, ) # ### Pattern-based LFs # These LFs express some common-sense text patterns which indicate that a person pair might be married. For example, `LF_husband_wife` looks for words in `spouses` between the person mentions, and `LF_same_last_name` checks to see if the two people have the same last name (but aren't the same whole name). 
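The 1 / -1 / 0 contract can be illustrated with a toy stand-in (a plain token list instead of a real `Candidate`; the actual LFs below use Snorkel's `lf_helpers` to get tokens from candidates):

```python
# Sketch of the labeling-function contract: 1 = true, -1 = false, 0 = abstain.
SPOUSE_WORDS = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'}
FAMILY_WORDS = {'father', 'mother', 'sister', 'brother'}

def toy_lf(tokens):
    """Label 1 if a spouse word appears, -1 if a family word does, else abstain."""
    tokens = set(tokens)
    if tokens & SPOUSE_WORDS:
        return 1
    if tokens & FAMILY_WORDS:
        return -1
    return 0

print(toy_lf(['and', 'his', 'wife']))     # 1
print(toy_lf(['and', 'her', 'brother']))  # -1
print(toy_lf(['met', 'with']))            # 0
```
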
# + spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'} family = {'father', 'mother', 'sister', 'brother', 'son', 'daughter', 'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'} family = family | {f + '-in-law' for f in family} other = {'boyfriend', 'girlfriend', 'boss', 'employee', 'secretary', 'co-worker'} # Helper function to get last name def last_name(s): name_parts = s.split(' ') return name_parts[-1] if len(name_parts) > 1 else None def LF_husband_wife(c): return 1 if len(spouses.intersection(get_between_tokens(c))) > 0 else 0 def LF_husband_wife_left_window(c): if len(spouses.intersection(get_left_tokens(c[0], window=2))) > 0: return 1 elif len(spouses.intersection(get_left_tokens(c[1], window=2))) > 0: return 1 else: return 0 def LF_same_last_name(c): p1_last_name = last_name(c.person1.get_span()) p2_last_name = last_name(c.person2.get_span()) if p1_last_name and p2_last_name and p1_last_name == p2_last_name: if c.person1.get_span() != c.person2.get_span(): return 1 return 0 def LF_no_spouse_in_sentence(c): return -1 if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else 0 def LF_and_married(c): return 1 if 'and' in get_between_tokens(c) and 'married' in get_right_tokens(c) else 0 def LF_familial_relationship(c): return -1 if len(family.intersection(get_between_tokens(c))) > 0 else 0 def LF_family_left_window(c): if len(family.intersection(get_left_tokens(c[0], window=2))) > 0: return -1 elif len(family.intersection(get_left_tokens(c[1], window=2))) > 0: return -1 else: return 0 def LF_other_relationship(c): return -1 if len(other.intersection(get_between_tokens(c))) > 0 else 0 # - # ### Distant Supervision LFs # In addition to writing labeling functions that describe text pattern-based heuristics for labeling training examples, we can also write labeling functions that distantly supervise examples. 
Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these. # + import bz2 # Function to remove special characters from text def strip_special(s): return ''.join(c for c in s if ord(c) < 128) # Read in known spouse pairs and save as set of tuples with bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f: known_spouses = set( tuple(strip_special(x.decode('utf-8')).strip().split(',')) for x in f.readlines() ) # Last name pairs for known spouses last_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)]) def LF_distant_supervision(c): p1, p2 = c.person1.get_span(), c.person2.get_span() return 1 if (p1, p2) in known_spouses or (p2, p1) in known_spouses else 0 def LF_distant_supervision_last_names(c): p1, p2 = c.person1.get_span(), c.person2.get_span() p1n, p2n = last_name(p1), last_name(p2) return 1 if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else 0 # - # For later convenience we group the labeling functions into a list. LFs = [ LF_distant_supervision, LF_distant_supervision_last_names, LF_husband_wife, LF_husband_wife_left_window, LF_same_last_name, LF_no_spouse_in_sentence, LF_and_married, LF_familial_relationship, LF_family_left_window, LF_other_relationship ] # ### Developing Labeling Functions # # Above, we've written a bunch of labeling functions already, which should give you some sense about how to go about it. While writing them, we probably want to check to make sure that they at least work as intended before adding to our set. Suppose we're thinking about writing a simple LF: def LF_wife_in_sentence(c): """A simple example of a labeling function""" return 1 if 'wife' in c.get_parent().words else 0 # One simple thing we can do is quickly test it on our development set (or any other set), without saving it to the database. This is simple to do. 
For example, we can easily get every candidate that this LF labels as true: labeled = [] for c in session.query(Spouse).filter(Spouse.split == 1).all(): if LF_wife_in_sentence(c) != 0: labeled.append(c) print("Number labeled:", len(labeled)) # We can then easily put this into the Viewer as usual (try it out!): # ``` # SentenceNgramViewer(labeled, session) # ``` # # We also have a simple helper function for getting the empirical accuracy of a single LF with respect to the development set labels for example. This function also returns the evaluation buckets of the candidates (true positive, false positive, true negative, false negative): # + from gensim.parsing.preprocessing import STOPWORDS import gensim.matutils as gm from gensim.models.keyedvectors import KeyedVectors # Load pretrained model (since intermediate data is not included, the model cannot be refined with additional data) model = KeyedVectors.load_word2vec_format('../../../snorkel/glove_w2v.txt', binary=False) # C binary format wordvec_unavailable= set() def write_to_file(wordvec_unavailable): with open("wordvec_unavailable.txt","w") as f: for word in wordvec_unavailable: f.write(word+"\n") def preprocess(tokens): btw_words = [word for word in tokens if word not in STOPWORDS] btw_words = [word for word in btw_words if word.isalpha()] return btw_words def get_word_vectors(btw_words): # returns vector of embeddings of words word_vectors= [] for word in btw_words: try: word_v = np.array(model[word]) word_v = word_v.reshape(len(word_v),1) #print(word_v.shape) word_vectors.append(model[word]) except: wordvec_unavailable.add(word) return word_vectors def get_similarity(word_vectors,target_word): # sent(list of word vecs) to word similarity similarity = 0 target_word_vector = 0 try: target_word_vector = model[target_word] except: wordvec_unavailable.add(target_word+" t") return similarity target_word_sparse = gm.any2sparse(target_word_vector,eps=1e-09) for wv in word_vectors: wv_sparse = gm.any2sparse(wv, 
eps=1e-09) similarity = max(similarity,gm.cossim(wv_sparse,target_word_sparse)) return similarity # + ##### Continuous ################ softmax_Threshold = 0.3 LF_Threshold = 0.3 import re from snorkel.lf_helpers import ( get_left_tokens, get_right_tokens, get_between_tokens, get_text_between, get_tagged_text, ) spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'} family = {'father', 'mother', 'sister', 'brother', 'son', 'daughter', 'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'} family = family | {f + '-in-law' for f in family} other = {'boyfriend', 'girlfriend', 'boss', 'employee', 'secretary', 'co-worker'} # Helper function to get last name def last_name(s): name_parts = s.split(' ') return name_parts[-1] if len(name_parts) > 1 else None def LF_husband_wife(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for sw in spouses: sc=max(sc,get_similarity(word_vectors,sw)) return (1,sc) def LF_husband_wife_left_window(c): global LF_Threshold sc_1 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[0]))) for sw in spouses: sc_1=max(sc_1,get_similarity(word_vectors,sw)) sc_2 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[1]))) for sw in spouses: sc_2=max(sc_2,get_similarity(word_vectors,sw)) return(1,max(sc_1,sc_2)) def LF_same_last_name(c): p1_last_name = last_name(c.person1.get_span()) p2_last_name = last_name(c.person2.get_span()) if p1_last_name and p2_last_name and p1_last_name == p2_last_name: if c.person1.get_span() != c.person2.get_span(): return (1,1) return (0,0) def LF_no_spouse_in_sentence(c): return (-1,0.75) if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else (0,0) def LF_and_married(c): global LF_Threshold word_vectors = get_word_vectors(preprocess(get_right_tokens(c))) sc = get_similarity(word_vectors,'married') if 'and' in get_between_tokens(c): return (1,sc) else: return (0,0) def LF_familial_relationship(c): global 
LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for fw in family: sc=max(sc,get_similarity(word_vectors,fw)) return (-1,sc) def LF_family_left_window(c): global LF_Threshold sc_1 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[0]))) for fw in family: sc_1=max(sc_1,get_similarity(word_vectors,fw)) sc_2 = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c[1]))) for fw in family: sc_2=max(sc_2,get_similarity(word_vectors,fw)) return (-1,max(sc_1,sc_2)) def LF_other_relationship(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for ow in other: sc=max(sc,get_similarity(word_vectors,ow)) return (-1,sc) def LF_other_relationship_left_window(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_left_tokens(c))) for ow in other: sc=max(sc,get_similarity(word_vectors,ow)) return (-1,sc) import bz2 # Function to remove special characters from text def strip_special(s): return ''.join(c for c in s if ord(c) < 128) # Read in known spouse pairs and save as set of tuples with bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f: known_spouses = set( tuple(strip_special(x.decode('utf-8')).strip().split(',')) for x in f.readlines() ) # Last name pairs for known spouses last_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)]) def LF_distant_supervision(c): p1, p2 = c.person1.get_span(), c.person2.get_span() return (1,1) if (p1, p2) in known_spouses or (p2, p1) in known_spouses else (0,0) def LF_distant_supervision_last_names(c): p1, p2 = c.person1.get_span(), c.person2.get_span() p1n, p2n = last_name(p1), last_name(p2) return (1,1) if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else (0,1) import numpy as np def LF_Three_Lists_Left_Window(c): global softmax_Threshold c1,s1 = LF_husband_wife_left_window(c) c2,s2 = LF_family_left_window(c) c3,s3 = LF_other_relationship_left_window(c) 
sc = np.array([s1,s2,s3]) c = [c1,c2,c3] sharp_param = 1.5 prob_sc = np.exp(sc * sharp_param - np.max(sc)) prob_sc = prob_sc / np.sum(prob_sc) #print 'Left:',s1,s2,s3,prob_sc if s1==s2 or s3==s1: return (0,0) return c[np.argmax(prob_sc)],1 def LF_Three_Lists_Between_Words(c): global softmax_Threshold c1,s1 = LF_husband_wife(c) c2,s2 = LF_familial_relationship(c) c3,s3 = LF_other_relationship(c) sc = np.array([s1,s2,s3]) c = [c1,c2,c3] sharp_param = 1.5 prob_sc = np.exp(sc * sharp_param - np.max(sc)) prob_sc = prob_sc / np.sum(prob_sc) #print 'BW:',s1,s2,s3,prob_sc if s1==s2 or s3==s1: return (0,0) return c[np.argmax(prob_sc)],1 LFs = [LF_distant_supervision, LF_distant_supervision_last_names,LF_same_last_name, LF_and_married, LF_Three_Lists_Between_Words,LF_Three_Lists_Left_Window, LF_no_spouse_in_sentence ] # + LF_Threshold = 0 import matplotlib.pyplot as plt import numpy as np from snorkel.lf_helpers import test_LF def plot_sense(acc_list,th_list,lf_name): plt.plot(th_list, acc_list) plt.xlabel('threshold') plt.ylabel('accuracy') # plt.ylim([0.0, 0.25]) # plt.xlim([0.55, 0.61]) plt.title(lf_name) plt.savefig(lf_name+'.png') plt.show() def sense_values(lf): global LF_Threshold accuracy_list = [] threshold_list = [] for i in np.arange(0,1.1,0.1): LF_Threshold = i threshold_list.append(LF_Threshold) tp, fp, tn, fn = test_LF(session, lf, split=1, annotator_name='gold') ntp = len(tp) nfp = len(fp) ntn = len(tn) nfn = len(fn) print("lf thresh:",LF_Threshold) acc= (ntp+ntn)/(ntp+nfp+ntn+nfn) print("acc:",acc) accuracy_list.append(acc) print(len(accuracy_list)) print(len(threshold_list)) return (accuracy_list,threshold_list) # - def LF_husband_wife(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for sw in spouses: sc=max(sc,get_similarity(word_vectors,sw)) if(sc >= LF_Threshold): return 1 return 0 acc_hw,th_hw = sense_values(LF_husband_wife) # print(acc_hw,th_hw) plot_sense(acc_hw,th_hw,"LF_husband_wife") def 
LF_familial_relationship(c): global LF_Threshold sc = 0 word_vectors = get_word_vectors(preprocess(get_between_tokens(c))) for fw in family: sc=max(sc,get_similarity(word_vectors,fw)) if sc>=LF_Threshold: return -1 return 0 acc_fr,th_fr = sense_values(LF_familial_relationship) print(acc_fr,th_fr) plot_sense(acc_fr,th_fr,"LF_familial_relationship") def LF_and_married(c): global LF_Threshold word_vectors = get_word_vectors(preprocess(get_right_tokens(c))) sc = get_similarity(word_vectors,'married') if sc >= LF_Threshold: return 1 return 0 acc_m,th_m = sense_values(LF_and_married) plot_sense(acc_m,th_m,"LF_and_married") from snorkel.lf_helpers import test_LF tp, fp, tn, fn = test_LF(session, LF_husband_wife, split=1, annotator_name='gold') # ## 2. Applying the Labeling Functions # # Next, we need to actually run the LFs over all of our training candidates, producing a set of `Labels` and `LabelKeys` (just the names of the LFs) in the database. We'll do this using the `LabelAnnotator` class, a UDF which we will again run with `UDFRunner`. **Note that this will delete any existing `Labels` and `LabelKeys` for this candidate set.** We start by setting up the class: from snorkel.annotations import LabelAnnotator labeler = LabelAnnotator(lfs=LFs) # Finally, we run the `labeler`. Note that we set a random seed for reproducibility, since some of the LFs involve random number generators. 
Again, this can be run in parallel, given an appropriate database like Postgres is being used: np.random.seed(1701) # %time L_train = labeler.apply(split=0) L_train # If we've already created the labels (saved in the database), we can load them in as a sparse matrix here too: # %time L_train = labeler.load_matrix(session, split=0) L_train # Note that the returned matrix is a special subclass of the `scipy.sparse.csr_matrix` class, with some special features which we demonstrate below: L_train.get_candidate(session, 0) L_train.get_key(session, 0) # We can also view statistics about the resulting label matrix. # # * **Coverage** is the fraction of candidates that the labeling function emits a non-zero label for. # * **Overlap** is the fraction of candidates that the labeling function emits a non-zero label for and that another labeling function emits a non-zero label for. # * **Conflict** is the fraction of candidates that the labeling function emits a non-zero label for and that another labeling function emits a *conflicting* non-zero label for. L_train.lf_stats(session) # ## 3. Fitting the Generative Model # Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other. # + from snorkel.learning import GenerativeModel gen_model = GenerativeModel() gen_model.train(L_train, epochs=100, decay=0.95, step_size=0.1 / L_train.shape[0], reg_param=1e-6) # - gen_model.weights.lf_accuracy # We now apply the generative model to the training candidates to get the noise-aware training label set. 
We'll refer to these as the training marginals: train_marginals = gen_model.marginals(L_train) # We'll look at the distribution of the training marginals: import matplotlib.pyplot as plt plt.hist(train_marginals, bins=20) plt.show() # We can view the learned accuracy parameters, and other statistics about the LFs learned by the generative model: gen_model.learned_lf_stats() # ### Using the Model to Iterate on Labeling Functions # # Now that we have learned the generative model, we can stop here and use this to potentially debug and/or improve our labeling function set. First, we apply the LFs to our development set: L_dev = labeler.apply_existing(split=1) # And finally, we get the score of the generative model: tp, fp, tn, fn = gen_model.error_analysis(session, L_dev, L_gold_dev) # ### Interpreting Generative Model Performance # # At this point, we should be getting an F1 score of around 0.4 to 0.5 on the development set, which is pretty good! However, we should be very careful in interpreting this. Since we developed our labeling functions using this development set as a guide, and our generative model is composed of these labeling functions, we expect it to score very well here! # # In fact, it is probably somewhat _overfit_ to this set. However this is fine, since in the next tutorial, we'll train a more powerful end extraction model which will generalize beyond the development set, and which we will evaluate on a _blind_ test set (i.e. one we never looked at during development). # ### Doing Some Error Analysis # # At this point, we might want to look at some examples in one of the error buckets. For example, one of the false negatives that we did not correctly label as true mentions. To do this, we can again just use the `Viewer`: # + from snorkel.viewer import SentenceNgramViewer # NOTE: This if-then statement is only to avoid opening the viewer during automated testing of this notebook # You should ignore this! 
import os if 'CI' not in os.environ: sv = SentenceNgramViewer(fn, session) else: sv = None # - sv c = sv.get_selected() if sv else list(fp.union(fn))[0] c # We can easily see the labels that the LFs gave to this candidate using simple ORM-enabled syntax: c.labels # We can also now explore some of the additional functionalities of the `lf_stats` method for our dev set LF labels, `L_dev`: we can plug in the gold labels that we have, and the accuracies that our generative model has learned: L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy']) # Note that for labeling functions with low coverage, our learned accuracies are closer to our prior of 70% accuracy. # ### Saving our training labels # # Finally, we'll save the `training_marginals`, which are our **probabilistic training labels**, so that we can use them in the next tutorial to train our end extraction model: from snorkel.annotations import save_marginals # %time save_marginals(session, L_train, train_marginals) # Next, in Part III, we'll use these probabilistic training labels to train a deep neural network.
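One way to see why probabilistic labels are useful (a sketch of the general idea, not the Snorkel API used in Part III): the marginals can serve as soft targets in a cross-entropy loss, which reduces to the usual binary loss when a marginal is exactly 0 or 1:

```python
import numpy as np

def noise_aware_loss(p, m, eps=1e-9):
    """Cross-entropy of predicted probabilities p against probabilistic labels m."""
    return -np.mean(m * np.log(p + eps) + (1 - m) * np.log(1 - p + eps))

marginals = np.array([0.9, 0.2, 0.55])  # hypothetical training marginals
preds = np.array([0.8, 0.3, 0.5])       # hypothetical model outputs

# The loss is minimized when predictions match the marginals exactly
print(noise_aware_loss(preds, marginals) >= noise_aware_loss(marginals, marginals))  # True
```
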
intro_Z/Intro_Tutorial_2-Alpha-Sensitivity.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .sh # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Bash # language: bash # name: bash # --- # ![aga](img/AB_logo.png) # # How to create a *GitHub* repository from the command line. # # ## Step 1: Create a new local *Git* repository # # Open up your terminal and navigate to your projects folder, then run the following command to create a new project folder and navigate into it: # # `mkdir ~/Documents/hello-world` # # `cd ~/Documents/hello-world` mkdir ~/Documents/hello-world cd ~/Documents/hello-world pwd # ## Step 2: Adding a new file to our Git repository # # Create a new file in your project folder; we will call our sample file `hello.py` # # You can use the graphical interface of your operating system to create the file, or use the following terminal commands: # # **Bash (Mac/Linux) terminal:** `touch hello.py` # # **Windows Powershell:** `ni hello.py` # # You can open the `hello.py` file with your text editor, and write the following Python code which prints **Hello World!** to the console: # # `print("Hello World!")` # # Save the file changes and switch back to your terminal window. touch hello.py chmod a+x hello.py # ![](img/vi_hello_world.png) python3 hello.py # ## Step 3: Initialize a new local Git repository # # To initialize a new local Git repository we need to run the `git init` command. # # After you run that command, you should get feedback that an empty Git repository was initialized for your project. # # **This command must be run just once** git init # **Note:** Make sure to use the `git status` command frequently when working with Git. It’s a great way to check the status of your project files and the whole repository. 
git status # ## Step 4: Making our initial commit to the local repository # # Run the following commands to track your files and make the initial commit in the local repository: # # `git add .` --> If you want to add all files in the directory use `.`, but you can also indicate which file must be added by name (`git add README.md`). # # `git commit -m "Initial commit"` --> To perform the commit, add a descriptive comment. # # When that’s done, it means that we successfully prepared our new local repository to be pushed to GitHub! git add . git commit -m "Initial commit" # ## Step 5: Creating a repo from the command line # # `curl -u "GithubUser" https://api.github.com/user/repos -d '{"name":"Repo-Name"}'` # # Indicate your GitHub user and the name of the new repository. # # Please refer to the link below for more details. # # # ## Repositories # # The Repos API allows you to create, manage, and control the workflow of public and private GitHub repositories. # # https://docs.github.com/en/rest/reference/repos#create curl -u "GithubUser" https://api.github.com/user/repos -d '{"name":"Repo-Name"}' # ## Step 6: Tell Git where to send the local code when we push # # `git remote add origin https://github.com/GithubUser/test.git` # # This indicates that there is a **remote** repository and its **origin** is actually under https://github.com/GithubUser/test.git # # This is a way to hook the local environment up with the environment in the cloud: Git now knows where to send our code when we push it.
# # **This command must be run just once** git remote add origin https://github.com/GithubUser/borrar.git # ## Step 7: Pushing your local code to GitHub # # `git push -u origin master` # # Push your local code to the master branch on the remote origin. # # **Note: Do not run the instruction below in a notebook, only in a terminal** git push -u origin master # ## CONNECT WITH SSH # # https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh # # ## Creating a personal access token (PAT) # # https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token # # You should create a personal access token to use in place of a password with the command line or with the API. # ## Storing Git Credentials with Git Credential Helper # # When using git commands via the terminal, Git will sometimes need credentials from the user in order to perform operations; for example, it may need to ask for a username and password in order to access a remote repository over HTTP/HTTPS. # # *The* ***“gitcredentials”*** *module is used to* ***request these credentials from the user*** *as well as* ***store these credentials*** *to avoid inputting them repeatedly.* # # ### Check the links below for more detail: # # https://techexpertise.medium.com/storing-git-credentials-with-git-credential-helper-33d22a6b5ce7 # # https://git-scm.com/docs/git-credential-store # ### Git Credentials Helper # # By default git credentials are not cached at all. # Every connection will prompt you for your username and password. # # The Git credentials helper can be configured in one of the following modes to remember the user credentials: # # * cache # * store # * osxkeychain # * manager # # Use the command `git config credential.helper` to check the current configuration. git config credential.helper # ### Git Credentials Helper: store # # Store credentials indefinitely on disk. # # Execute the following command in a terminal to configure the git credential helper in store mode.
# # `git config --global credential.helper store`. git config --global credential.helper store
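Under the hood, the `store` helper keeps credentials in a plain-text file (by default `~/.git-credentials`), one entry per line in the form `https://user:token@host`. The sketch below fabricates such a file in a temp location and pulls the username back out of it — the username and token values are placeholders, not real credentials:

```shell
# Sketch only: write a fake credentials file in the documented one-URL-per-line
# format, then extract the stored username (all values below are placeholders).
CRED_FILE="$(mktemp)"
printf 'https://GithubUser:ghp_exampletoken@github.com\n' > "$CRED_FILE"

# The part between "https://" and the first ":" is the stored username
USER_PART=$(sed -E 's#https://([^:]+):.*#\1#' "$CRED_FILE")
echo "$USER_PART"

rm -f "$CRED_FILE"
```

Because the file is unencrypted, the `store` mode trades security for convenience — prefer `cache`, `osxkeychain`, or `manager` on shared machines.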
Creating a Github Repository.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from gs_quant.markets.baskets import Basket from gs_quant.session import Environment, GsSession # + client = 'CLIENT ID' secret = 'CLIENT SECRET' GsSession.use(Environment.PROD, client_id=client, client_secret=secret, scopes=('read_user_profile',)) # - basket = Basket.get('GSMBXXXX') # substitute input with any identifier for a basket basket.poll_status(timeout=300, step=20) # timeout/step are optional - default behavior will be to check status every 30 sec for <= 10 min
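`poll_status` above simply re-checks the report's state on a fixed cadence until it completes or the timeout elapses. A generic sketch of that timeout/step pattern — illustrative only, not gs_quant's internal implementation:

```python
import time

# Illustrative timeout/step polling loop (not gs_quant internals):
def poll(check, timeout=300, step=20, clock=time.monotonic, sleep=time.sleep):
    """Call check() every `step` seconds until it returns a truthy status
    or `timeout` seconds have elapsed."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = check()
        if status:
            return status
        sleep(step)
    raise TimeoutError("status check did not succeed within timeout")
```

With `timeout=300, step=20` this mirrors the call above: check every 20 seconds for up to 5 minutes, raising if the report never finishes.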
gs_quant/documentation/06_baskets/examples/06_basket_reports/0005_poll_status_of_most_recent_basket_create_report.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="X38L6tanrnrB" # # Pose Detection with OpenPose # # This notebook uses an open source project [CMU-Perceptual-Computing-Lab/openpose](https://github.com/CMU-Perceptual-Computing-Lab/openpose.git) to detect/track multi-person poses on a given YouTube video. # # For other deep-learning Colab notebooks, visit [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks). # # # ## Install OpenPose # + id="FOdkDhb6ga6N" import os from os.path import exists, join, basename, splitext git_repo_url = 'https://github.com/CMU-Perceptual-Computing-Lab/openpose.git' project_name = splitext(basename(git_repo_url))[0] if not exists(project_name): # see: https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/949 # install new CMake because of CUDA10 # !wget -q https://cmake.org/files/v3.13/cmake-3.13.0-Linux-x86_64.tar.gz # !tar xfz cmake-3.13.0-Linux-x86_64.tar.gz --strip-components=1 -C /usr/local # clone openpose # !git clone -q --depth 1 $git_repo_url # !sed -i 's/execute_process(COMMAND git checkout master WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\/3rdparty\/caffe)/execute_process(COMMAND git checkout f019d0dfe86f49d1140961f8c7dec22130c83154 WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\/3rdparty\/caffe)/g' openpose/CMakeLists.txt # install system dependencies # !apt-get -qq install -y libatlas-base-dev libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler libgflags-dev libgoogle-glog-dev liblmdb-dev opencl-headers ocl-icd-opencl-dev libviennacl-dev # install python dependencies # !pip install -q youtube-dl # build openpose # !cd openpose && rm -rf build || true && mkdir build && cd build && cmake ..
&& make -j`nproc` from IPython.display import YouTubeVideo # + [markdown] id="n5L3Z5YVrZ2R" # ## Detect poses on a test video # # We are going to detect poses on the following YouTube video: # + id="xIt-eyIDO6XG" colab={"base_uri": "https://localhost:8080/", "height": 321} outputId="b275c261-38ee-419f-e4be-9b7d6f069420" YOUTUBE_ID = 'RXABo9hm8B8' YouTubeVideo(YOUTUBE_ID) # + [markdown] id="Kn08K-3bp-W9" # Download the above YouTube video, cut out the first 5 seconds, and do the pose detection on those 5 seconds: # + id="oNASdyyiO65I" # !rm -rf youtube.mp4 # download the youtube video with the given ID # !youtube-dl -f 'bestvideo[ext=mp4]' --output "youtube.%(ext)s" https://www.youtube.com/watch?v=$YOUTUBE_ID # cut the first 5 seconds # !ffmpeg -y -loglevel info -i youtube.mp4 -t 5 video.mp4 # detect poses on these 5 seconds # !rm openpose.avi # !cd openpose && ./build/examples/openpose/openpose.bin --video ../video.mp4 --write_json ./output/ --display 0 --write_video ../openpose.avi # convert the result into MP4 # !ffmpeg -y -loglevel info -i openpose.avi output.mp4 # + [markdown] id="kDDkgCCSrFTv" # Finally, visualize the result: # + id="nZ3Ud9zLgOoQ" def show_local_mp4_video(file_name, width=640, height=480): import io import base64 from IPython.display import HTML video_encoded = base64.b64encode(io.open(file_name, 'rb').read()) return HTML(data='''<video width="{0}" height="{1}" alt="test" controls> <source src="data:video/mp4;base64,{2}" type="video/mp4" /> </video>'''.format(width, height, video_encoded.decode('ascii'))) show_local_mp4_video('output.mp4', width=960, height=720)
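The `show_local_mp4_video` helper works by inlining the file as a base64 data URI inside an HTML `<video>` tag, which is why no separate file server is needed. The same encoding pattern applies to any small binary payload; a stripped-down sketch:

```python
import base64

def to_data_uri(data: bytes, mime: str = "video/mp4") -> str:
    # Encode raw bytes as a base64 data URI, as the helper above does for MP4 files
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"

print(to_data_uri(b"abc"))  # → data:video/mp4;base64,YWJj
```

Note that base64 inflates the payload by roughly a third, so this approach only suits short clips; larger videos are better served from disk or a URL.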
OpenPose_test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script> # <script> # window.dataLayer = window.dataLayer || []; # function gtag(){dataLayer.push(arguments);} # gtag('js', new Date()); # # gtag('config', 'UA-59152712-8'); # </script> # # # Tutorial-IllinoisGRMHD: `driver_evaluate_MHD_rhs.C` # # ## Authors: <NAME> & <NAME> # # <font color='red'>**This module is currently under development**</font> # # ## In this tutorial module we explain the driver functions that compute the right-hand side (RHS) of the MHD equations within `IllinoisGRMHD` # # ### Required and recommended citations: # # * **(Required)** <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)). # * **(Required)** <NAME>., <NAME>., <NAME>., <NAME>. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)). # * **(Recommended)** <NAME>., <NAME>., <NAME>. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). # <a id='toc'></a> # # # Table of Contents # $$\label{toc}$$ # # This module is organized as follows # # 0. [Step 0](#src_dir): **Source directory creation** # 1. [Step 1](#introduction): **Introduction** # 1. [Step 2](#header_files): **Load up necessary ETK and `IllinoisGRMHD` header files** # 1. 
[Step 3](#driver_mhd_rhs_function): **The ` IllinoisGRMHD_driver_evaluate_MHD_rhs()` function** # 1. [Step 3.1](#eos_parameters): *Equation of state (EOS) parameters* # 1. [Step 3.2](#set_pointers_grmhd_gfs): *Set pointers to GRMHD gridfunctions* # 1. [Step 3.3](#admbase_to_bssnbase): *Convert ADM variables to BSSN variables* # 1. [Step 3.4](#pointers_metric_tmunu_gfs): *Setting up pointers to the metric and stress-energy tensor gridfunctions* # 1. [Step 3.5](#initialize_rhss): *Initialization of the RHS variables* # 1. [Step 3.6](#tau_rhs_ext_curv_and_tupmunu): *Compute extrinsic curvature terms from the RHS of $\partial_{t}\tilde\tau$ and $T^{\mu\nu}$* # 1. [Step 3.7](#computing_ftilde): *Computing ${\rm ftilde}$* # 1. [Step 3.8](#rhs_mhd_and_a_i): *The RHSs of $\rho_{\star}$, $\tilde\tau$, $\tilde{S}_{i}$, and $A_{i}$* # 1. [Step 3.8.1](#reconstructing_vx_vy_by_along_x): Reconstructing $\left\{v^{x}, v^{y}, B^{y}_{\rm stagger}\right\}$ along the $x$-direction # 1. [Step 3.8.2](#fluxes_x_dirn): Evaluating $\partial_{x}\boldsymbol{F}$ # 1. [Step 3.8.3](#reconstructing_vx_vy_by_along_y): Reconstructing $\left\{v^{x}, v^{y}, B^{y}_{\rm stagger}\right\}$ along the $y$-direction # 1. [Step 3.8.4](#fluxes_y_dirn): Evaluating $\partial_{y}\boldsymbol{F}$ # 1. [Step 3.8.5](#rhs_az_no_gauge_terms): Evaluating $\left[\partial_{t}A_{z}\right]_{\rm no\ gauge\ terms}$ # 1. [Step 3.8.6](#multiple_reconstructions): Multiple reconstructions # 1. [Step 3.8.7](#fluxes_z_dirn): Evaluating $\partial_{z}\boldsymbol{F}$ # 1. [Step 3.8.8](#rhs_ax_no_gauge_terms): Evaluating $\left[\partial_{t}A_{x}\right]_{\rm no\ gauge\ terms}$ # 1. [Step 3.8.9](#rhs_ay_no_gauge_terms): Evaluating $\left[\partial_{t}A_{y}\right]_{\rm no\ gauge\ terms}$ # 1. [Step 3.8.10](#rhs_psi6phi_and_ai_gauge_terms): Evaluating $\partial_{t}\left[\psi^{6}\Phi\right]$ and $\left[\partial_{t}A_{i}\right]_{\rm gauge\ terms}$ # 1. 
[Step 4](#driver_evaluate_MHD_rhs__h): **The `driver_evaluate_MHD_rhs.h` header file** # 1. [Step 5](#code_validation): **Code validation** # 1. [Step 5.a](#code_validation_driver_evaluate_MHD_rhs__c): *`driver_evaluate_MHD_rhs.C`* # 1. [Step 5.b](#code_validation_driver_evaluate_MHD_rhs__h): *`driver_evaluate_MHD_rhs.h`* # 1. [Step 6](#latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file** # <a id='src_dir'></a> # # # Step 0: Source directory creation \[Back to [top](#toc)\] # $$\label{src_dir}$$ # # We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet. # + # Step 0: Creation of the IllinoisGRMHD source directory # Step 0a: Add NRPy's directory to the path # https://stackoverflow.com/questions/16780014/import-file-from-parent-directory import os,sys nrpy_dir_path = os.path.join("..","..") if nrpy_dir_path not in sys.path: sys.path.append(nrpy_dir_path) # Step 0b: Load up cmdline_helper and create the directory import cmdline_helper as cmd IGM_src_dir_path = os.path.join("..","src") cmd.mkdir(IGM_src_dir_path) # Step 0c: Create the output file path outfile_path__driver_evaluate_MHD_rhs__C = os.path.join(IGM_src_dir_path,"driver_evaluate_MHD_rhs.C") outfile_path__driver_evaluate_MHD_rhs__h = os.path.join(IGM_src_dir_path,"driver_evaluate_MHD_rhs.h") # - # <a id='introduction'></a> # # # Step 1: Introduction \[Back to [top](#toc)\] # $$\label{introduction}$$ # # We will start by creating the file `driver_evaluate_MHD_rhs.C` and writing down the preamble of the file, which contains useful references and information for the user.
# # We remind the reader of the "[generalized Lorenz gauge condition](https://arxiv.org/pdf/1207.3354.pdf)", # # $$ # \nabla_{\mu}\mathcal{A^{\mu}} = \xi n_{\mu}\mathcal{A^{\mu}}\ , # $$ # # where $n_{\mu} = \left(-\alpha,0,0,0\right)$ is the unit normal vector, $\mathcal{A}_{\mu}$ is the magnetic 4-vector potential, and $\xi$ is a parameter with dimensions 1/Length, just like the $\eta$ parameter in the gamma-driving shift condition, so its value is chosen so that the CFL condition remains satisfied. # %%writefile $outfile_path__driver_evaluate_MHD_rhs__C /********************************************* * Evaluate RHS of GRMHD & induction equations * (vector potential prescription), using the * generalized Lorenz gauge condition for the * EM gauge. * * Based originally on the Illinois GRMHD code, * written by <NAME>, <NAME>, and Branson * Stephens (original version), and then developed * primarily by <NAME>, <NAME>, * and <NAME>. * * Rewritten for public release in 2013 * by <NAME> * * References: * Original unigrid GRMHD evolution prescription: * http://arxiv.org/abs/astro-ph/0503420 * Vector potential formulation in full GR: * http://arxiv.org/abs/1007.2848 * Improved EM gauge conditions for AMR grids: * http://arxiv.org/abs/1110.4633 * Generalized Lorenz gauge prescription: * http://arxiv.org/abs/1207.3354 * * Note that the Generalized Lorenz gauge strength * parameter has units of 1/M, just like the \eta * parameter in the gamma-driving shift condition, * so setting it too large will result in violation * of the CFL condition. * * This version of PPM implements the standard * Colella & Woodward PPM, though modified as in GRHydro * to have 3 ghostzones instead of 4. *********************************************/ # <a id='header_files'></a> # # # Step 2: Load up necessary ETK and `IllinoisGRMHD` header files \[Back to [top](#toc)\] # $$\label{header_files}$$ # # Here we load all necessary ETK and `IllinoisGRMHD` files, as well as some standard C++ libraries.
# + # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C #include "cctk.h" #include <cstdio> #include <cstdlib> #include <cmath> #include <sys/time.h> #include "cctk_Arguments.h" #include "cctk_Parameters.h" #include "IllinoisGRMHD_headers.h" /* Generic #define's and function prototypes */ #include "driver_evaluate_MHD_rhs.h" /* Function prototypes for this file only */ #include "IllinoisGRMHD_EoS_lowlevel_functs.C" #include "inlined_functions.C" # - # <a id='driver_mhd_rhs_function'></a> # # # Step 3: The ` IllinoisGRMHD_driver_evaluate_MHD_rhs()` function \[Back to [top](#toc)\] # $$\label{driver_mhd_rhs_function}$$ # # This is the basic function declaration. We set up basic ETK parameters and verify double precision is being used. # + # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C extern "C" void IllinoisGRMHD_driver_evaluate_MHD_rhs(CCTK_ARGUMENTS) { DECLARE_CCTK_ARGUMENTS; DECLARE_CCTK_PARAMETERS; int levelnumber = GetRefinementLevel(cctkGH); if(CCTK_Equals(verbose, "essential+iteration output")) { CCTK_VInfo(CCTK_THORNSTRING,"***** Iter. # %d, Lev: %d, Integrating to time: %e *****",cctk_iteration,levelnumber,cctk_delta_time/cctk_levfac[0]+cctk_time); } if( sizeof(CCTK_REAL) < 8 ) CCTK_VError(VERR_DEF_PARAMS,"Error: IllinoisGRMHD assumes that CCTK_REAL is a double precision number. Setting otherwise will likely cause havoc with the conserv_to_prims solver."); if(cctk_nghostzones[0]<3 || cctk_nghostzones[1]<3 || cctk_nghostzones[2]<3) { CCTK_VError(VERR_DEF_PARAMS,"ERROR. Need at least 3 ghostzones for IllinoisGRMHD evolutions."); } CCTK_REAL dX[3] = { CCTK_DELTA_SPACE(0), CCTK_DELTA_SPACE(1), CCTK_DELTA_SPACE(2) }; # - # <a id='eos_parameters'></a> # # ## Step 3.1: Equation of state (EOS) parameters \[Back to [top](#toc)\] # $$\label{eos_parameters}$$ # # Next we set up the EOS struct, which is defined in the `IllinoisGRMHD_headers.h` header file. 
We set the following parameters: # # * $\rm neos$: number of EOS (currently set to 1, which is a $\Gamma$-law EOS) # * $\rm K\_poly$: this is the constant $\kappa$ from the polytropic EOS $P = \kappa \rho_{0}^{\Gamma}$ # * $\rm rho\_poly$: $\rho_{0}$, fluid rest-mass # * $\rm P\_poly$: $P$, pressure # * $\rm gamma\_th$: $\Gamma_{\rm th}$, the constant parameter which determines the conversion efficiency of kinetic to thermal energy at shocks # * $\rm eps\_poly$: $\epsilon$, specific internal energy # * $\rm k\_poly$: $\kappa$, polytropic EOS constant # * $\rm gamma\_poly$: $\Gamma$, the $\Gamma$-law EOS constant # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C /********************************** * Piecewise Polytropic EOS Patch * * Setting up the EOS struct * **********************************/ /* * The short piece of code below takes care * of initializing the EOS parameters. * Please refer to the "inlined_functions.C" * source file for the documentation on the * function. */ eos_struct eos; initialize_EOS_struct_from_input(eos); # <a id='set_pointers_grmhd_gfs'></a> # # ## Step 3.2: Set pointers to GRMHD gridfunctions \[Back to [top](#toc)\] # $$\label{set_pointers_grmhd_gfs}$$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // in_prims,out_prims_r, and out_prims_l are arrays of pointers to the actual gridfunctions. gf_and_gz_struct in_prims[MAXNUMVARS],out_prims_r[MAXNUMVARS],out_prims_l[MAXNUMVARS]; int which_prims_to_reconstruct[MAXNUMVARS],num_prims_to_reconstruct; /* SET POINTERS TO GRMHD GRIDFUNCTIONS */ // The order here MATTERS, and must be consistent with the global variable declarations in // evaluate_MHD_rhs_headers.h (look for RHOB=0, etc.) // For example, in_prims[0] _must_ be rho_b. 
int ww=0; in_prims[ww].gf=rho_b; out_prims_r[ww].gf=rho_br; out_prims_l[ww].gf=rho_bl; ww++; in_prims[ww].gf=P; out_prims_r[ww].gf=Pr; out_prims_l[ww].gf=Pl; ww++; in_prims[ww].gf=vx; out_prims_r[ww].gf=vxr; out_prims_l[ww].gf=vxl; ww++; in_prims[ww].gf=vy; out_prims_r[ww].gf=vyr; out_prims_l[ww].gf=vyl; ww++; in_prims[ww].gf=vz; out_prims_r[ww].gf=vzr; out_prims_l[ww].gf=vzl; ww++; in_prims[ww].gf=Bx; out_prims_r[ww].gf=Bxr; out_prims_l[ww].gf=Bxl; ww++; in_prims[ww].gf=By; out_prims_r[ww].gf=Byr; out_prims_l[ww].gf=Byl; ww++; in_prims[ww].gf=Bz; out_prims_r[ww].gf=Bzr; out_prims_l[ww].gf=Bzl; ww++; in_prims[ww].gf=Bx_stagger; out_prims_r[ww].gf=Bx_staggerr; out_prims_l[ww].gf=Bx_staggerl; ww++; in_prims[ww].gf=By_stagger; out_prims_r[ww].gf=By_staggerr; out_prims_l[ww].gf=By_staggerl; ww++; in_prims[ww].gf=Bz_stagger; out_prims_r[ww].gf=Bz_staggerr; out_prims_l[ww].gf=Bz_staggerl; ww++; in_prims[ww].gf=vxr; out_prims_r[ww].gf=vxrr; out_prims_l[ww].gf=vxrl; ww++; in_prims[ww].gf=vyr; out_prims_r[ww].gf=vyrr; out_prims_l[ww].gf=vyrl; ww++; in_prims[ww].gf=vzr; out_prims_r[ww].gf=vzrr; out_prims_l[ww].gf=vzrl; ww++; in_prims[ww].gf=vxl; out_prims_r[ww].gf=vxlr; out_prims_l[ww].gf=vxll; ww++; in_prims[ww].gf=vyl; out_prims_r[ww].gf=vylr; out_prims_l[ww].gf=vyll; ww++; in_prims[ww].gf=vzl; out_prims_r[ww].gf=vzlr; out_prims_l[ww].gf=vzll; ww++; // Prims are defined AT ALL GRIDPOINTS, so we set the # of ghostzones to zero: for(int i=0;i<MAXNUMVARS;i++) for(int j=1;j<=3;j++) { in_prims[i].gz_lo[j]=0; in_prims[i].gz_hi[j]=0; } // Left/right variables are not yet defined, yet we set the # of gz's to zero by default: for(int i=0;i<MAXNUMVARS;i++) for(int j=1;j<=3;j++) { out_prims_r[i].gz_lo[j]=0; out_prims_r[i].gz_hi[j]=0; } for(int i=0;i<MAXNUMVARS;i++) for(int j=1;j<=3;j++) { out_prims_l[i].gz_lo[j]=0; out_prims_l[i].gz_hi[j]=0; } # <a id='admbase_to_bssnbase'></a> # # ## Step 3.3: Convert ADM variables to BSSN variables \[Back to [top](#toc)\] # 
$$\label{admbase_to_bssnbase}$$ # # We summarize here the algorithm of the `IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij()` function, which is explained in detail in [this tutorial module]() (<font color='red'>**Link not available yet - TODO**</font>): # # * First, $\gamma\equiv \det\left(\gamma_{ij}\right)$, where $\gamma_{ij}$ is the physical spatial metric, is evaluated. # * Then, $\phi$, the conformal factor, is computed via the relation $\phi = \frac{1}{12}\log\gamma$. # * Next, we compute $e^{-4\phi}$. # * Then, the conformal metric, $\bar{\gamma}_{ij} = e^{-4\phi}\gamma_{ij}$, is computed. # * Next, the condition $\bar\gamma = 1$ is enforced, by first computing $\gamma$ then performing $\bar\gamma_{ij}\to\left(\frac{1}{\bar\gamma}\right)^{1/3}\bar\gamma_{ij}$. # * Then, $\gamma_{ij}$ is computed from $\bar\gamma_{ij}$ *after* the condition $\bar\gamma = 1$ is enforced via the inverse relation $\gamma_{ij} = e^{4\phi}\bar\gamma_{ij}$. # * Finally, we compute the inverse conformal metric $\bar\gamma^{ij}$. # # **A note on notation:** in the C code, we have the following identifications between the quantities described above and the C variables: # # *Input and temporary variables:* # * $\gamma_{ij} := {\rm gij\_physL}$ # * $\det(\gamma_{ij}) := {\rm gijdet}$ # * $\phi := {\rm phiL}$ # * $\psi \equiv e^{\phi} := {\rm psiL}$ # * $\bar\gamma_{ij} := {\rm gtijL}$ (the "t" stands for the notation where the conformal metric is written as $\tilde\gamma_{ij}$ instead of $\bar\gamma_{ij}$. In our discussion we use the latter to keep our notation consistent with other NRPy notebooks). 
# * $\det(\bar\gamma_{ij}) := {\rm gtijdet}$ # * $\left(\frac{1}{\bar\gamma}\right)^{1/3} := {\rm gtijdet\_Fm1o3}$ # # *Output/gridfunction variables:* # * $\gamma_{ij} := {\rm gij}$ (Physical metric) # * $\bar\gamma_{ij} := {\rm gtij}$ (Conformal metric) # * $\bar\gamma^{ij} := {\rm gtupij}$ (Inverse conformal metric) # * $\phi := {\rm phi}$ # * $\psi := {\rm psi}$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // Convert ADM variables (from ADMBase) to the BSSN-based variables expected by this routine. IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij(cctkGH,cctk_lsh, gxx,gxy,gxz,gyy,gyz,gzz,alp, gtxx,gtxy,gtxz,gtyy,gtyz,gtzz, gtupxx,gtupxy,gtupxz,gtupyy,gtupyz,gtupzz, phi_bssn,psi_bssn,lapm1); # <a id='pointers_metric_tmunu_gfs'></a> # # ## Step 3.4: Setting up pointers to the metric and stress-energy tensor gridfunctions \[Back to [top](#toc)\] # $$\label{pointers_metric_tmunu_gfs}$$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C /* SET POINTERS TO METRIC GRIDFUNCTIONS */ CCTK_REAL *metric[NUMVARS_FOR_METRIC_FACEVALS]; // "metric" here is array of pointers to the actual gridfunctions. ww=0; metric[ww]=phi_bssn;ww++; metric[ww]=psi_bssn;ww++; metric[ww]=gtxx; ww++; metric[ww]=gtxy; ww++; metric[ww]=gtxz; ww++; metric[ww]=gtyy; ww++; metric[ww]=gtyz; ww++; metric[ww]=gtzz; ww++; metric[ww]=lapm1; ww++; metric[ww]=betax; ww++; metric[ww]=betay; ww++; metric[ww]=betaz; ww++; metric[ww]=gtupxx; ww++; metric[ww]=gtupyy; ww++; metric[ww]=gtupzz; ww++; /* SET POINTERS TO STRESS-ENERGY TENSOR GRIDFUNCTIONS */ CCTK_REAL *TUPmunu[10];// "TUPmunu" here is array of pointers to the actual gridfunctions. 
ww=0; TUPmunu[ww]=TUPtt; ww++; TUPmunu[ww]=TUPtx; ww++; TUPmunu[ww]=TUPty; ww++; TUPmunu[ww]=TUPtz; ww++; TUPmunu[ww]=TUPxx; ww++; TUPmunu[ww]=TUPxy; ww++; TUPmunu[ww]=TUPxz; ww++; TUPmunu[ww]=TUPyy; ww++; TUPmunu[ww]=TUPyz; ww++; TUPmunu[ww]=TUPzz; ww++; # <a id='initialize_rhss'></a> # # ## Step 3.5: Initialization of the RHS variables \[Back to [top](#toc)\] # $$\label{initialize_rhss}$$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // 1) First initialize {rho_star_rhs,tau_rhs,st_x_rhs,st_y_rhs,st_z_rhs} to zero #pragma omp parallel for for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) { int index=CCTK_GFINDEX3D(cctkGH,i,j,k); Ax_rhs[index]=0.0; Ay_rhs[index]=0.0; Az_rhs[index]=0.0; psi6phi_rhs[index]=0.0; tau_rhs[index]=0.0; rho_star_rhs[index]=0.0; st_x_rhs[index]=0.0; st_y_rhs[index]=0.0; st_z_rhs[index]=0.0; //if(i==17 && j==19 && k==26) CCTK_VInfo(CCTK_THORNSTRING,"CONSSS: %.15e %.15e %.15e %.15e %.15e | %.15e",rho_star[index],mhd_st_x[index],mhd_st_y[index],mhd_st_z[index],tau[index],P[index]); } # <a id='tau_rhs_ext_curv_and_tupmunu'></a> # # ## Step 3.6: Compute extrinsic curvature terms from the RHS of $\partial_{t}\tilde\tau$ and $T^{\mu\nu}$ \[Back to [top](#toc)\] # $$\label{tau_rhs_ext_curv_and_tupmunu}$$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // Here, we: // 1) Compute tau_rhs extrinsic curvature terms, and // 2) Compute TUPmunu. 
// This function is housed in the file: "compute_tau_rhs_extrinsic_curvature_terms_and_TUPmunu.C" printf("aaaa: "); for(int ii=0; ii<NUMVARS_FOR_METRIC_FACEVALS; ii++) printf("%e ",metric[ii][CCTK_GFINDEX3D(cctkGH,14,14,14)]); printf("\n"); compute_tau_rhs_extrinsic_curvature_terms_and_TUPmunu(cctkGH,cctk_lsh,cctk_nghostzones,dX,metric,in_prims,TUPmunu, eos, Gamma_th, gtupxy,gtupxz,gtupyz, kxx,kxy,kxz,kyy,kyz,kzz, tau_rhs); printf("yyyy: %e\n",tau_rhs[CCTK_GFINDEX3D(cctkGH,14,14,14)]); //for(int i=0;i<10;i++) { // printf("zzzz: %d %e\n",i,TUPmunu[i][CCTK_GFINDEX3D(cctkGH,14,14,14)]); //} # <a id='computing_ftilde'></a> # # ## Step 3.7: Computing ${\rm ftilde}$ \[Back to [top](#toc)\] # $$\label{computing_ftilde}$$ # # This is part of the flattening scheme of the PPM algorithm. The main reference to look at is [Colella & Woodward (1983)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf). The equations implemented can be found in Appendix A (particularly eqs. (A.1) and (A.2)), while the flattening method is introduced and discussed in section 4. More will follow when we talk about the `reconstruct_set_of_prims_PPM.C` file of `IllinoisGRMHD`. # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C int flux_dirn; flux_dirn=1; // First compute ftilde, which is used for flattening left and right face values // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" ftilde_gf_compute(cctkGH,cctk_lsh,flux_dirn, in_prims, ftilde_gf); # <a id='rhs_mhd_and_a_i'></a> # # ## Step 3.8: The RHSs of $\rho_{\star}$, $\tilde\tau$, $\tilde{S}_{i}$, and $A_{i}$ \[Back to [top](#toc)\] # $$\label{rhs_mhd_and_a_i}$$ # # This part of the code evaluates the RHSs of $\rho_{\star}$, $\tilde\tau$, and $\tilde{S}_{i}$, i.e. 
# # $$ # \partial_{t} # \begin{bmatrix} # \rho_{\star}\\ # \tilde\tau\\ # \tilde{S}_{i} # \end{bmatrix} # = # -\partial_{j} # \underbrace{\begin{bmatrix} # \rho_{\star}v^{j}\\ # \alpha^{2}\sqrt{\gamma}T^{0j} - \rho_{\star}v^{j}\\ # \alpha\sqrt{\gamma}T^{j}_{\ i} # \end{bmatrix}}_{\rm Flux\ terms} # + # \underbrace{\begin{bmatrix} # 0\\ # s\\ # \frac{1}{2}\alpha\sqrt{\gamma}T^{\alpha\beta}\partial_{i}g_{\alpha\beta} # \end{bmatrix}}_{\rm Source\ terms}\ . # $$ # # At the same time, we are also interested in evaluating the RHS of the evolution equation for $A_{i}$, namely # # $$ # \partial_{t}A_{i} = \epsilon_{ijk}v^{j}\tilde{B}^{k} - \partial_{i}\left(\alpha\Phi - \beta^{j}A_{j}\right) = \psi^{6}\epsilon_{ijk}v^{j}B^{k} - \underbrace{\partial_{i}\left(\alpha\Phi - \beta^{j}A_{j}\right)}_{\rm Gauge\ terms} # $$ # # The following summary greatly oversimplifies what the code below does, but it is enough for the user to understand the purpose of the algorithm: # # 1. Compute $\partial_{x}\boldsymbol{F}$, then $\partial_{y}\boldsymbol{F}$, and finally $\left[\partial_{t}A_{z}\right]_{\rm no\ gauge\ terms}$ # 2. Compute $\partial_{y}\boldsymbol{F}$, then $\left[\partial_{t}A_{x}\right]_{\rm no\ gauge\ terms}$ # 3. Compute $\left[\partial_{t}A_{y}\right]_{\rm no\ gauge\ terms}$ # 4. Add gauge terms to $\partial_{t}A_{i}$ # # Now, in between every step of the summary above, care must be taken to evaluate the gridfunctions at the appropriate gridpoints (see table below for the location of each variable in the computational grid). 
# # <a id='table_staggerings'></a> # # | Variable(s) | Gridpoint location in the computational grid | # |------------------------------------------------------------------|-----------------------------------------------| # | Metric terms, $\vec{P}$, $\rho_*$, $\tilde{S}_i$, $\tilde{\tau}$ | $(i,j,k)$ | # | $B^x$, $\tilde{B}^x$ | $(i+\frac{1}{2},j,k)$ | # | $B^y$, $\tilde{B}^y$ | $(i,j+\frac{1}{2},k)$ | # | $B^z$, $\tilde{B}^z$ | $(i,j,k+\frac{1}{2})$ | # | $A_x$ | $(i,j+\frac{1}{2},k+\frac{1}{2})$ | # | $A_y$ | $(i+\frac{1}{2},j,k+\frac{1}{2})$ | # | $A_z$ | $(i+\frac{1}{2},j+\frac{1}{2},k)$ | # | $\sqrt{\gamma}\Phi$ | $(i+\frac{1}{2},j+\frac{1}{2},k+\frac{1}{2})$ | # # $$\label{table_staggerings}$$ # # For example, we know that # # $$ # \left[\partial_{t}A_{z}\right]_{\rm no\ gauge\ terms} \equiv \psi^{6}\left(v^{x}B^{y} - v^{y}B^{x}\right)\ . # $$ # # But once we evaluate $v^{x}$ and $v^{y}$, we know them at the point $(i,j,k)$. Similarly, the gridfunction $B^{x}$ is known at $(i+\frac{1}{2},j,k)$, while $B^{y}$ is known at $(i,j+\frac{1}{2},k)$. This means that we are not able to immediately evaluate the equation above, since determining $A_{z}$ at $(i+\frac{1}{2},j+\frac{1}{2},k)$ requires knowing $\left\{v^{x},v^{y},B^{x},B^{y}\right\}$ at $(i+\frac{1}{2},j+\frac{1}{2},k)$ as well. To this end, we reconstruct the variables $\left\{v^{x},v^{y},B^{x},B^{y}\right\}$ using the PPM method at the desired staggered point. An analogous procedure is required in order to determine the RHS of $\partial_{t}A_{x}$ and $\partial_{t}A_{y}$. # <a id='reconstructing_vx_vy_by_along_x'></a> # # ### Step 3.8.1: Reconstructing $\left\{v^{x}, v^{y}, B^{y}_{\rm stagger}\right\}$ along the $x$-direction \[Back to [top](#toc)\] # $$\label{reconstructing_vx_vy_by_along_x}$$ # # We want to evaluate $\partial_{x}\boldsymbol{F}$.
It is important to keep in the back of our minds our intention of evaluating the RHS of $\left[\partial_{t}A_{z}\right]_{\rm no\ gauge\ terms}$ as well, since then we can reconstruct $\left\{v^{x},v^{y},B^{x},B^{y}\right\}$ cleverly, as we need them at the same gridpoint as $A_{z}$ (see the table at the end of [step 3.8](#table_staggerings)). # # We start by reconstructing $\left\{\rho_{0},P,v^{i},B^{i}, B^{y}_{\rm stagger}\right\}$ in the $x$-direction, keeping in mind that after the reconstruction we will know: # # 1. The flux variables at $\left(i-\frac{1}{2},j,k\right)$, so that we can evaluate $\partial_{x}\boldsymbol{F}_{i,j,k}=dx^{-1}\left(\boldsymbol{F}_{i+1/2,j,k}-\boldsymbol{F}_{i-1/2,j,k}\right)$ # 1. The velocities $\left\{v^{x},v^{y}\right\}$ at $\left(i-\frac{1}{2},j,k\right)$ # 1. The staggered value $B^{y}_{\rm stagger}$ at $\left(i-\frac{1}{2},j+\frac{1}{2},k\right)$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C /* There are two stories going on here: * 1) Computation of \partial_x on RHS of \partial_t {rho_star,tau,mhd_st_{x,y,z}}, * via PPM reconstruction onto (i-1/2,j,k), so that * \partial_x F = [ F(i+1/2,j,k) - F(i-1/2,j,k) ] / dx * 2) Computation of \partial_t A_i, where A_i are *staggered* gridfunctions, * where A_x is defined at (i,j+1/2,k+1/2), A_y at (i+1/2,j,k+1/2), etc. * Ai_rhs = \partial_t A_i = \epsilon_{ijk} \psi^{6} v^j B^k, * where \epsilon_{ijk} is the flat-space antisymmetric operator. * 2A) Az_rhs is defined at (i+1/2,j+1/2,k), and it depends on {Bx,By,vx,vy}, * so the trick is to reconstruct {Bx,By,vx,vy} cleverly to get to these * staggered points. For example: * 2Aa) vx and vy are at (i,j,k), and we reconstruct them to (i-1/2,j,k) below. After * this, we'll reconstruct again in the y-dir'n to get {vx,vy} at (i-1/2,j-1/2,k) * 2Ab) By_stagger is at (i,j+1/2,k), and we reconstruct below to (i-1/2,j+1/2,k).
*/ ww=0; which_prims_to_reconstruct[ww]=RHOB; ww++; which_prims_to_reconstruct[ww]=PRESSURE; ww++; which_prims_to_reconstruct[ww]=VX; ww++; which_prims_to_reconstruct[ww]=VY; ww++; which_prims_to_reconstruct[ww]=VZ; ww++; //which_prims_to_reconstruct[ww]=BX_CENTER; ww++; which_prims_to_reconstruct[ww]=BY_CENTER; ww++; which_prims_to_reconstruct[ww]=BZ_CENTER; ww++; which_prims_to_reconstruct[ww]=BY_STAGGER;ww++; num_prims_to_reconstruct=ww; // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" reconstruct_set_of_prims_PPM(cctkGH,cctk_lsh,flux_dirn,num_prims_to_reconstruct,which_prims_to_reconstruct, eos,in_prims,out_prims_r,out_prims_l,ftilde_gf,temporary); # <a id='fluxes_x_dirn'></a> # # ### Step 3.8.2: Evaluating $\partial_{x}\boldsymbol{F}$ \[Back to [top](#toc)\] # $$\label{fluxes_x_dirn}$$ # # Next we set the face values of $B^{x}$ (which are needed for the computation of MHD flux terms) by making them consistent with $B^{x}_{\rm stagger}$. # # After that, we evaluate $\partial_{x}\boldsymbol{F}$ and add it to the RHS of $\partial_{t}\left(\rho_{\star},\tilde\tau,\tilde{S}_{i}\right)$. It is important to notice that, as we mentioned, $A_{z}$ is defined at $\left(i+\frac{1}{2},j+\frac{1}{2},k\right)$, but other functions, like $v^{x}$ and $v^{y}$, are now known only at $\left(i-\frac{1}{2},j-\frac{1}{2},k\right)$. The function `add_fluxes_and_source_terms_to_hydro_rhss()` below takes care of this, and we will study the process in more detail when we look at the `add_fluxes_and_source_terms_to_hydro_rhss.C` file of `IllinoisGRMHD`. # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C //Right and left face values of BI_CENTER are used in mhdflux computation (first to compute b^a). // Instead of reconstructing, we simply set B^x face values to be consistent with BX_STAGGER. 
#pragma omp parallel for for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) { int index=CCTK_GFINDEX3D(cctkGH,i,j,k), indexim1=CCTK_GFINDEX3D(cctkGH,i-1+(i==0),j,k); /* indexim1=0 when i=0 */ out_prims_r[BX_CENTER].gf[index]=out_prims_l[BX_CENTER].gf[index]=in_prims[BX_STAGGER].gf[indexim1]; } // Then add fluxes to RHS for hydro variables {rho_b,P,vx,vy,vz}: // This function is housed in the file: "add_fluxes_and_source_terms_to_hydro_rhss.C" add_fluxes_and_source_terms_to_hydro_rhss(flux_dirn,cctkGH,cctk_lsh,cctk_nghostzones,dX, metric,in_prims,TUPmunu, num_prims_to_reconstruct,out_prims_r,out_prims_l,eos, cmax_x,cmin_x, rho_star_flux,tau_flux,st_x_flux,st_y_flux,st_z_flux, rho_star_rhs,tau_rhs,st_x_rhs,st_y_rhs,st_z_rhs); # <a id='reconstructing_vx_vy_by_along_y'></a> # # ### Step 3.8.3: Reconstructing $\left\{v^{x}, v^{y}, B^{y}_{\rm stagger}\right\}$ along the $y$-direction \[Back to [top](#toc)\] # $$\label{reconstructing_vx_vy_by_along_y}$$ # # We want to evaluate $\partial_{y}\boldsymbol{F}$. At this point we must remember that $v^{x}$ and $v^{y}$ have already been reconstructed along the $x$-direction and are now known at $\left(i-\frac{1}{2},j,k\right)$. Our goal is to reconstruct these quantities at $\left(i+\frac{1}{2},j+\frac{1}{2},k\right)$. # # We then reconstruct $\left\{\rho_{0},P,v^{i},B^{i}, B^{i}_{\rm stagger}\right\}$ in the $y$-direction, keeping in mind that after the reconstruction we will know: # # 1. The flux variables at $\left(i,j-\frac{1}{2},k\right)$, so that we can evaluate $\partial_{y}\boldsymbol{F}_{i,j,k}=dy^{-1}\left(\boldsymbol{F}_{i,j+1/2,k}-\boldsymbol{F}_{i,j-1/2,k}\right)$ # 1. The velocities $\left\{v^{x},v^{y}\right\}$ at $\left(i-\frac{1}{2},j-\frac{1}{2},k\right)$ # 1. The staggered value $B^{x}_{\rm stagger}$ at $\left(i+\frac{1}{2},j-\frac{1}{2},k\right)$ # 1. The staggered value $B^{y}_{\rm stagger}$ at $\left(i-\frac{1}{2},j+\frac{1}{2},k\right)$ # 1.
The staggered value $B^{z}_{\rm stagger}$ at $\left(i,j-\frac{1}{2},k+\frac{1}{2}\right)$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // Note that we have already reconstructed vx and vy along the x-direction, // at (i-1/2,j,k). That result is stored in v{x,y}{r,l}. Bx_stagger data // are defined at (i+1/2,j,k). // Next goal: reconstruct Bx, vx and vy at (i+1/2,j+1/2,k). flux_dirn=2; // First compute ftilde, which is used for flattening left and right face values // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" ftilde_gf_compute(cctkGH,cctk_lsh,flux_dirn, in_prims, ftilde_gf); // in_prims[{VXR,VXL,VYR,VYL}].gz_{lo,hi} ghostzones are set to all zeros, which // is incorrect. We fix this below. // [Note that this is a cheap operation, copying only 8 integers and a pointer.] in_prims[VXR]=out_prims_r[VX]; in_prims[VXL]=out_prims_l[VX]; in_prims[VYR]=out_prims_r[VY]; in_prims[VYL]=out_prims_l[VY]; /* There are two stories going on here: * 1) Computation of \partial_y on RHS of \partial_t {rho_star,tau,mhd_st_{x,y,z}}, * via PPM reconstruction onto (i,j-1/2,k), so that * \partial_y F = [ F(i,j+1/2,k) - F(i,j-1/2,k) ] / dy * 2) Computation of \partial_t A_i, where A_i are *staggered* gridfunctions, * where A_x is defined at (i,j+1/2,k+1/2), A_y at (i+1/2,j,k+1/2), etc. * Ai_rhs = \partial_t A_i = \epsilon_{ijk} \psi^{6} v^j B^k, * where \epsilon_{ijk} is the flat-space antisymmetric operator. * 2A) Az_rhs is defined at (i+1/2,j+1/2,k), and it depends on {Bx,By,vx,vy}, * so the trick is to reconstruct {Bx,By,vx,vy} cleverly to get to these * staggered points. For example: * 2Aa) VXR = [right-face of vx reconstructed along x-direction above] is at (i-1/2,j,k), * and we reconstruct it to (i-1/2,j-1/2,k) below. Similarly for {VXL,VYR,VYL} * 2Ab) Bx_stagger is at (i+1/2,j,k), and we reconstruct to (i+1/2,j-1/2,k) below * 2Ac) By_stagger is at (i-1/2,j+1/2,k) already for Az_rhs, from the previous step. 
* 2B) Ax_rhs is defined at (i,j+1/2,k+1/2), and it depends on {By,Bz,vy,vz}. * Again the trick is to reconstruct these onto these staggered points. * 2Ba) Bz_stagger is at (i,j,k+1/2), and we reconstruct to (i,j-1/2,k+1/2) below */ ww=0; // NOTE! The order of variable reconstruction is important here, // as we don't want to overwrite {vxr,vxl,vyr,vyl}! which_prims_to_reconstruct[ww]=VXR; ww++; which_prims_to_reconstruct[ww]=VYR; ww++; which_prims_to_reconstruct[ww]=VXL; ww++; which_prims_to_reconstruct[ww]=VYL; ww++; num_prims_to_reconstruct=ww; // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" reconstruct_set_of_prims_PPM(cctkGH,cctk_lsh,flux_dirn,num_prims_to_reconstruct,which_prims_to_reconstruct, eos,in_prims,out_prims_r,out_prims_l,ftilde_gf,temporary); ww=0; // Reconstruct other primitives last! which_prims_to_reconstruct[ww]=RHOB; ww++; which_prims_to_reconstruct[ww]=PRESSURE; ww++; which_prims_to_reconstruct[ww]=VX; ww++; which_prims_to_reconstruct[ww]=VY; ww++; which_prims_to_reconstruct[ww]=VZ; ww++; which_prims_to_reconstruct[ww]=BX_CENTER; ww++; //which_prims_to_reconstruct[ww]=BY_CENTER; ww++; which_prims_to_reconstruct[ww]=BZ_CENTER; ww++; which_prims_to_reconstruct[ww]=BX_STAGGER;ww++; which_prims_to_reconstruct[ww]=BZ_STAGGER;ww++; num_prims_to_reconstruct=ww; // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" reconstruct_set_of_prims_PPM(cctkGH,cctk_lsh,flux_dirn,num_prims_to_reconstruct,which_prims_to_reconstruct, eos,in_prims,out_prims_r,out_prims_l,ftilde_gf,temporary); # <a id='fluxes_y_dirn'></a> # # ### Step 3.8.4: Evaluating $\partial_{y}\boldsymbol{F}$ \[Back to [top](#toc)\] # $$\label{fluxes_y_dirn}$$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C //Right and left face values of BI_CENTER are used in mhdflux computation (first to compute b^a). // Instead of reconstructing, we simply set B^y face values to be consistent with BY_STAGGER. 
#pragma omp parallel for for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) { int index=CCTK_GFINDEX3D(cctkGH,i,j,k), indexjm1=CCTK_GFINDEX3D(cctkGH,i,j-1+(j==0),k); /* indexjm1=0 when j=0 */ out_prims_r[BY_CENTER].gf[index]=out_prims_l[BY_CENTER].gf[index]=in_prims[BY_STAGGER].gf[indexjm1]; } // Then add fluxes to RHS for hydro variables {rho_b,P,vx,vy,vz}: // This function is housed in the file: "add_fluxes_and_source_terms_to_hydro_rhss.C" add_fluxes_and_source_terms_to_hydro_rhss(flux_dirn,cctkGH,cctk_lsh,cctk_nghostzones,dX, metric,in_prims,TUPmunu, num_prims_to_reconstruct,out_prims_r,out_prims_l,eos, cmax_y,cmin_y, rho_star_flux,tau_flux,st_x_flux,st_y_flux,st_z_flux, rho_star_rhs,tau_rhs,st_x_rhs,st_y_rhs,st_z_rhs); # <a id='rhs_az_no_gauge_terms'></a> # # ### Step 3.8.5: Evaluating $\left[\partial_{t}A_{z}\right]_{\rm no\ gauge\ terms}$ \[Back to [top](#toc)\] # $$\label{rhs_az_no_gauge_terms}$$ # # As a friendly reminder, we summarize the known gridpoint location of the needed gridfunctions here: # # | Staggered and unstaggered variables | Gridpoint location at which the variable is known | # |-----------------------------------------------------|----------------------------------------------------| # | $\left(v^{x}\right)_{r,l},\left(v^{y}\right)_{r,l}$ | $(i-\frac{1}{2},j-\frac{1}{2},k)$ | # | $\left(B^{x}_{\rm stagger}\right)_{r,l}$ | $(i+\frac{1}{2},j-\frac{1}{2},k)$ | # | $\left(B^{y}_{\rm stagger}\right)_{r,l}$ | $(i-\frac{1}{2},j+\frac{1}{2},k)$ | # | $\phi$ | $(i,j,k)$ | # # We start by interpolating $\phi$ to $\left(i+\frac{1}{2},j,k\right)$, followed by a second interpolation so that $\phi$ is known at $\left(i+\frac{1}{2},j+\frac{1}{2},k\right)$. # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C /***************************************** * COMPUTING RHS OF A_z, BOOKKEEPING NOTE: * We want to compute * \partial_t A_z - [gauge terms] = \psi^{6} (v^x B^y - v^y B^x).
* A_z is defined at (i+1/2,j+1/2,k). * ========================== * Where defined | Variables * (i-1/2,j-1/2,k)| {vxrr,vxrl,vxlr,vxll,vyrr,vyrl,vylr,vyll} * (i+1/2,j-1/2,k)| {Bx_stagger_r,Bx_stagger_l} (see Table 1 in arXiv:1007.2848) * (i-1/2,j+1/2,k)| {By_stagger_r,By_stagger_l} (see Table 1 in arXiv:1007.2848) * (i,j,k) | {phi} * ========================== ******************************************/ // Interpolates to i+1/2 #define IPH(METRICm1,METRICp0,METRICp1,METRICp2) (-0.0625*((METRICm1) + (METRICp2)) + 0.5625*((METRICp0) + (METRICp1))) // Next compute phi at (i+1/2,j+1/2,k): #pragma omp parallel for for(int k=0;k<cctk_lsh[2];k++) for(int j=1;j<cctk_lsh[1]-2;j++) for(int i=1;i<cctk_lsh[0]-2;i++) { temporary[CCTK_GFINDEX3D(cctkGH,i,j,k)]= IPH(IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j-1,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j-1,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j-1,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j-1,k)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j ,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j ,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j ,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j ,k)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j+1,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+1,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j+1,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j+1,k)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j+2,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+2,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j+2,k)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j+2,k)])); } # Then we update the RHS of $\left[\partial_{t}A_{z}\right]_{\rm no\ gauge\ terms}$. Keep in mind that the function `A_i_rhs_no_gauge_terms()` takes care of determining $\left\{v^{i},B^{i},B^{i}_{\rm stagger}\right\}$ at $\left(i+\frac{1}{2},j+\frac{1}{2},k\right)$. We will look at it in more detail when we see the `A_i_rhs_no_gauge_terms.C` file from `IllinoisGRMHD`. 
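# As an aside, the `IPH` macro above is the cubic Lagrange interpolant evaluated at the midpoint of its four-point stencil, with weights $\left(-\frac{1}{16},\frac{9}{16},\frac{9}{16},-\frac{1}{16}\right)$; nesting two `IPH` calls, as in the loop above, carries $\phi$ to the corner $\left(i+\frac{1}{2},j+\frac{1}{2},k\right)$. As a quick sanity check, the following standalone Python sketch (not part of the generated C code) verifies that the stencil reproduces a cubic polynomial exactly at the half-point:

```python
# Python analogue of the IPH macro: interpolate cell-centered data
# {f[i-1], f[i], f[i+1], f[i+2]} to the face at i+1/2.
def iph(fm1, fp0, fp1, fp2):
    return -0.0625*(fm1 + fp2) + 0.5625*(fp0 + fp1)

# The weights are the cubic Lagrange basis functions evaluated at x=1/2
# on the nodes {-1,0,1,2}, so the result is exact for any cubic.
f = lambda x: x**3 - 2.0*x**2 + 3.0*x - 1.0
face = iph(f(-1.0), f(0.0), f(1.0), f(2.0))
print(face, f(0.5))  # both print 0.125
```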
# %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C int A_directionz=3; A_i_rhs_no_gauge_terms(A_directionz,cctkGH,cctk_lsh,cctk_nghostzones,out_prims_r,out_prims_l,temporary,cmax_x,cmin_x,cmax_y,cmin_y, Az_rhs); # <a id='multiple_reconstructions'></a> # # ### Step 3.8.6: Multiple reconstructions \[Back to [top](#toc)\] # $$\label{multiple_reconstructions}$$ # # We now reconstruct along the $z$-direction (`flux_dirn=3`), keeping the following in mind: # # 1. $\left\{\rho_{0},P,v^{i}, B^{i}\right\}$ are reconstructed from $\left(i,j,k\right)$ to $\left(i,j,k-\frac{1}{2}\right)$ # 1. $\left[\partial_{t}A_{x}\right]_{\rm no\ gauge\ terms}$ is defined at $\left(i,j+\frac{1}{2},k+\frac{1}{2}\right)$ # 1. $\left(v^{y}\right)_{r,l}$ and $\left(v^{z}\right)_{r,l}$ are at $\left(i,j-\frac{1}{2},k\right)$, so we reconstruct them to $\left(i,j-\frac{1}{2},k-\frac{1}{2}\right)$ # 1. $\left(B^{z}_{\rm stagger}\right)_{r,l}$ is already known at $\left(i,j-\frac{1}{2},k+\frac{1}{2}\right)$ # 1. $B^{y}_{\rm stagger}$ is at $\left(i,j+\frac{1}{2},k\right)$, so we reconstruct it to $\left(i,j+\frac{1}{2},k-\frac{1}{2}\right)$ # 1. $\left[\partial_{t}A_{y}\right]_{\rm no\ gauge\ terms}$ is defined at $\left(i+\frac{1}{2},j,k+\frac{1}{2}\right)$ # 1. $v^{x}$ and $v^{z}$ are reconstructed to $\left(i,j,k-\frac{1}{2}\right)$. We'll reconstruct them to $\left(i-\frac{1}{2},j,k-\frac{1}{2}\right)$ later # 1. $B^{z}_{\rm stagger}$ is already known at $\left(i,j,k+\frac{1}{2}\right)$. We'll reconstruct it to $\left(i-\frac{1}{2},j,k+\frac{1}{2}\right)$ later # 1. $B^{x}_{\rm stagger}$ is at $\left(i+\frac{1}{2},j,k\right)$, so we reconstruct it to $\left(i+\frac{1}{2},j,k-\frac{1}{2}\right)$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // in_prims[{VYR,VYL,VZR,VZL}].gz_{lo,hi} ghostzones are not correct, so we fix // this below. // [Note that this is a cheap operation, copying only 8 integers and a pointer.]
in_prims[VYR]=out_prims_r[VY]; in_prims[VYL]=out_prims_l[VY]; in_prims[VZR]=out_prims_r[VZ]; in_prims[VZL]=out_prims_l[VZ]; flux_dirn=3; // First compute ftilde, which is used for flattening left and right face values // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" ftilde_gf_compute(cctkGH,cctk_lsh,flux_dirn, in_prims, ftilde_gf); /* There are two stories going on here: * 1) Single reconstruction to (i,j,k-1/2) for {rho,P,vx,vy,vz,Bx,By,Bz} to compute * z-dir'n advection terms in \partial_t {rho_star,tau,mhd_st_{x,y,z}} at (i,j,k) * 2) Multiple reconstructions for *staggered* gridfunctions A_i: * Ai_rhs = \partial_t A_i = \epsilon_{ijk} \psi^{6} v^j B^k, * where \epsilon_{ijk} is the flat-space antisymmetric operator. * 2A) Ax_rhs is defined at (i,j+1/2,k+1/2), depends on v{y,z} and B{y,z} * 2Aa) v{y,z}{r,l} are at (i,j-1/2,k), so we reconstruct here to (i,j-1/2,k-1/2) * 2Ab) Bz_stagger{r,l} are at (i,j-1/2,k+1/2) already. * 2Ac) By_stagger is at (i,j+1/2,k), and below we reconstruct its value at (i,j+1/2,k-1/2) * 2B) Ay_rhs is defined at (i+1/2,j,k+1/2), depends on v{z,x} and B{z,x}. * 2Ba) v{x,z} are reconstructed to (i,j,k-1/2). Later we'll reconstruct again to (i-1/2,j,k-1/2). * 2Bb) Bz_stagger is at (i,j,k+1/2). Later we will reconstruct to (i-1/2,j,k+1/2). * 2Bc) Bx_stagger is at (i+1/2,j,k), and below we reconstruct its value at (i+1/2,j,k-1/2) */ ww=0; // NOTE! The order of variable reconstruction is important here, // as we don't want to overwrite {vxr,vxl,vyr,vyl}! which_prims_to_reconstruct[ww]=VYR; ww++; which_prims_to_reconstruct[ww]=VZR; ww++; which_prims_to_reconstruct[ww]=VYL; ww++; which_prims_to_reconstruct[ww]=VZL; ww++; num_prims_to_reconstruct=ww; // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" reconstruct_set_of_prims_PPM(cctkGH,cctk_lsh,flux_dirn,num_prims_to_reconstruct,which_prims_to_reconstruct, eos,in_prims,out_prims_r,out_prims_l,ftilde_gf,temporary); // Reconstruct other primitives last! 
ww=0; which_prims_to_reconstruct[ww]=RHOB; ww++; which_prims_to_reconstruct[ww]=PRESSURE; ww++; which_prims_to_reconstruct[ww]=VX; ww++; which_prims_to_reconstruct[ww]=VY; ww++; which_prims_to_reconstruct[ww]=VZ; ww++; which_prims_to_reconstruct[ww]=BX_CENTER; ww++; which_prims_to_reconstruct[ww]=BY_CENTER; ww++; //which_prims_to_reconstruct[ww]=BZ_CENTER; ww++; which_prims_to_reconstruct[ww]=BX_STAGGER; ww++; which_prims_to_reconstruct[ww]=BY_STAGGER; ww++; num_prims_to_reconstruct=ww; // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" reconstruct_set_of_prims_PPM(cctkGH,cctk_lsh,flux_dirn,num_prims_to_reconstruct,which_prims_to_reconstruct, eos,in_prims,out_prims_r,out_prims_l,ftilde_gf,temporary); //Right and left face values of BI_CENTER are used in mhdflux computation (first to compute b^a). // Instead of reconstructing, we simply set B^z face values to be consistent with BZ_STAGGER. #pragma omp parallel for for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) { int index=CCTK_GFINDEX3D(cctkGH,i,j,k), indexkm1=CCTK_GFINDEX3D(cctkGH,i,j,k-1+(k==0)); /* indexkm1=0 when k=0 */ out_prims_r[BZ_CENTER].gf[index]=out_prims_l[BZ_CENTER].gf[index]=in_prims[BZ_STAGGER].gf[indexkm1]; } # <a id='fluxes_z_dirn'></a> # # ### Step 3.8.7: Evaluating $\partial_{z}\boldsymbol{F}$ \[Back to [top](#toc)\] # $$\label{fluxes_z_dirn}$$ # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // Then add fluxes to RHS for hydro variables {rho_b,P,vx,vy,vz}: // This function is housed in the file: "add_fluxes_and_source_terms_to_hydro_rhss.C" add_fluxes_and_source_terms_to_hydro_rhss(flux_dirn,cctkGH,cctk_lsh,cctk_nghostzones,dX, metric,in_prims,TUPmunu, num_prims_to_reconstruct,out_prims_r,out_prims_l,eos, cmax_z,cmin_z, rho_star_flux,tau_flux,st_x_flux,st_y_flux,st_z_flux, rho_star_rhs,tau_rhs,st_x_rhs,st_y_rhs,st_z_rhs); // in_prims[{VYR,VYL,VZR,VZL}].gz_{lo,hi} ghostzones are not set correcty. // We fix this below. 
// [Note that this is a cheap operation, copying only 8 integers and a pointer.] in_prims[VXR]=out_prims_r[VX]; in_prims[VZR]=out_prims_r[VZ]; in_prims[VXL]=out_prims_l[VX]; in_prims[VZL]=out_prims_l[VZ]; // FIXME: lines above seem to be inconsistent with lines below.... Possible bug, not major enough to affect evolutions though. in_prims[VZR].gz_lo[1]=in_prims[VZR].gz_hi[1]=0; in_prims[VXR].gz_lo[1]=in_prims[VXR].gz_hi[1]=0; in_prims[VZL].gz_lo[1]=in_prims[VZL].gz_hi[1]=0; in_prims[VXL].gz_lo[1]=in_prims[VXL].gz_hi[1]=0; # <a id='rhs_ax_no_gauge_terms'></a> # # ### Step 3.8.8: Evaluating $\left[\partial_{t}A_{x}\right]_{\rm no\ gauge\ terms}$ \[Back to [top](#toc)\] # $$\label{rhs_ax_no_gauge_terms}$$ # # As a friendly reminder, we summarize the known gridpoint location of the needed gridfunctions here: # # | Staggered and unstaggered variables | Gridpoint location at which the variable is known | # |-----------------------------------------------------|----------------------------------------------------| # | $\left(v^{y}\right)_{r,l},\left(v^{z}\right)_{r,l}$ | $(i,j-\frac{1}{2},k-\frac{1}{2})$ | # | $\left(B^{y}_{\rm stagger}\right)_{r,l}$ | $(i,j+\frac{1}{2},k-\frac{1}{2})$ | # | $\left(B^{z}_{\rm stagger}\right)_{r,l}$ | $(i,j-\frac{1}{2},k+\frac{1}{2})$ | # | $\phi$ | $(i,j,k)$ | # # We start by interpolating $\phi$ to $\left(i,j+\frac{1}{2},k+\frac{1}{2}\right)$. # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C /***************************************** * COMPUTING RHS OF A_x, BOOKKEEPING NOTE: * We want to compute * \partial_t A_x - [gauge terms] = \psi^{6} (v^y B^z - v^z B^y). * A_x is defined at (i,j+1/2,k+1/2). 
* ========================== * Where defined | Variables * (i,j-1/2,k-1/2)| {vyrr,vyrl,vylr,vyll,vzrr,vzrl,vzlr,vzll} * (i,j+1/2,k-1/2)| {By_stagger_r,By_stagger_l} (see Table 1 in arXiv:1007.2848) * (i,j-1/2,k+1/2)| {Bz_stagger_r,Bz_stagger_l} (see Table 1 in arXiv:1007.2848) * (i,j,k) | {phi} * ========================== ******************************************/ // Next compute phi at (i,j+1/2,k+1/2): #pragma omp parallel for for(int k=1;k<cctk_lsh[2]-2;k++) for(int j=1;j<cctk_lsh[1]-2;j++) for(int i=0;i<cctk_lsh[0];i++) { temporary[CCTK_GFINDEX3D(cctkGH,i,j,k)]= IPH(IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j-1,k-1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k-1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+1,k-1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+2,k-1)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j-1,k )],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k )],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+1,k )],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+2,k )]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j-1,k+1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k+1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+1,k+1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+2,k+1)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j-1,k+2)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k+2)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+1,k+2)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j+2,k+2)])); } # Then we update the RHS of $\left[\partial_{t}A_{x}\right]_{\rm no\ gauge\ terms}$. Keep in mind that the function `A_i_rhs_no_gauge_terms()` takes care of determining $\left\{v^{i},B^{i},B^{i}_{\rm stagger}\right\}$ at $\left(i,j+\frac{1}{2},k+\frac{1}{2}\right)$. We will look at it in more detail when we see the `A_i_rhs_no_gauge_terms.C` file from `IllinoisGRMHD`. 
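# Stripped of the staggering, reconstruction, and upwinding details that `A_i_rhs_no_gauge_terms()` handles, the non-gauge piece of the induction equation is a flat-space cross product scaled by $\psi^{6}$: $\partial_{t}A_{i} = \epsilon_{ijk}\psi^{6}v^{j}B^{k}$. The following pointwise Python sketch (a hypothetical helper, assuming $v^{i}$ and $B^{i}$ have already been brought to a common gridpoint, which is what the PPM reconstructions above accomplish) makes this structure explicit:

```python
# Pointwise non-gauge RHS of the induction equation,
#   \partial_t A_i = eps_{ijk} psi^6 v^j B^k = psi^6 (v x B)_i,
# assuming v and B are already co-located (hypothetical helper; the real
# code first PPM-reconstructs v^i and B^i onto the staggered A_i points).
def A_rhs_no_gauge(psi, v, B):
    psi6 = psi**6
    vx, vy, vz = v
    Bx, By, Bz = B
    return (psi6*(vy*Bz - vz*By),   # [partial_t A_x]_{no gauge terms}
            psi6*(vz*Bx - vx*Bz),   # [partial_t A_y]_{no gauge terms}
            psi6*(vx*By - vy*Bx))   # [partial_t A_z]_{no gauge terms}

psi, v, B = 1.1, (0.2, -0.3, 0.05), (0.4, 0.1, -0.2)
rhs = A_rhs_no_gauge(psi, v, B)
# The z-component reproduces psi^6 (v^x B^y - v^y B^x):
assert rhs[2] == psi**6 * (v[0]*B[1] - v[1]*B[0])
```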
# %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C int A_directionx=1; A_i_rhs_no_gauge_terms(A_directionx,cctkGH,cctk_lsh,cctk_nghostzones,out_prims_r,out_prims_l,temporary,cmax_y,cmin_y,cmax_z,cmin_z, Ax_rhs); # <a id='rhs_ay_no_gauge_terms'></a> # # ### Step 3.8.9: Evaluating $\left[\partial_{t}A_{y}\right]_{\rm no\ gauge\ terms}$ \[Back to [top](#toc)\] # $$\label{rhs_ay_no_gauge_terms}$$ # # As a friendly reminder, we summarize the known gridpoint location of the needed gridfunctions here: # # | Staggered and unstaggered variables | Gridpoint location at which the variable is known | # |-----------------------------------------------------|----------------------------------------------------| # | $\left(v^{x}\right)_{r,l},\left(v^{z}\right)_{r,l}$ | $(i-\frac{1}{2},j,k-\frac{1}{2})$ | # | $\left(B^{x}_{\rm stagger}\right)_{r,l}$ | $(i+\frac{1}{2},j,k-\frac{1}{2})$ | # | $\left(B^{z}_{\rm stagger}\right)_{r,l}$ | $(i-\frac{1}{2},j,k+\frac{1}{2})$ | # | $\phi$ | $(i,j,k)$ | # # We start by interpolating $\phi$ to $\left(i+\frac{1}{2},j,k+\frac{1}{2}\right)$. # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // We reprise flux_dirn=1 to finish up computations of Ai_rhs's! flux_dirn=1; // First compute ftilde, which is used for flattening left and right face values // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" ftilde_gf_compute(cctkGH,cctk_lsh,flux_dirn, in_prims, ftilde_gf); ww=0; // NOTE! The order of variable reconstruction is important here, // as we don't want to overwrite {vxr,vxl,vyr,vyl}! 
which_prims_to_reconstruct[ww]=VXR; ww++; which_prims_to_reconstruct[ww]=VZR; ww++; which_prims_to_reconstruct[ww]=VXL; ww++; which_prims_to_reconstruct[ww]=VZL; ww++; which_prims_to_reconstruct[ww]=BZ_STAGGER;ww++; num_prims_to_reconstruct=ww; // This function is housed in the file: "reconstruct_set_of_prims_PPM.C" reconstruct_set_of_prims_PPM(cctkGH,cctk_lsh,flux_dirn,num_prims_to_reconstruct,which_prims_to_reconstruct, eos,in_prims,out_prims_r,out_prims_l,ftilde_gf,temporary); /***************************************** * COMPUTING RHS OF A_y, BOOKKEEPING NOTE: * We want to compute * \partial_t A_y - [gauge terms] = \psi^{6} (v^z B^x - v^x B^z). * A_y is defined at (i+1/2,j,k+1/2). * ========================== * Where defined | Variables * (i-1/2,j,k-1/2)| {vxrr,vxrl,vxlr,vxll,vzrr,vzrl,vzlr,vzll} * (i+1/2,j,k-1/2)| {Bx_stagger_r,Bx_stagger_l} (see Table 1 in arXiv:1007.2848) * (i-1/2,j,k+1/2)| {Bz_stagger_r,Bz_stagger_l} (see Table 1 in arXiv:1007.2848) * (i,j,k) | {phi} * ========================== ******************************************/ // Next compute phi at (i+1/2,j,k+1/2): #pragma omp parallel for for(int k=1;k<cctk_lsh[2]-2;k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=1;i<cctk_lsh[0]-2;i++) { temporary[CCTK_GFINDEX3D(cctkGH,i,j,k)]= IPH(IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j,k-1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k-1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j,k-1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j,k-1)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j,k )],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k )],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j,k )],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j,k )]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j,k+1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k+1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j,k+1)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j,k+1)]), IPH(phi_bssn[CCTK_GFINDEX3D(cctkGH,i-1,j,k+2)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i,j,k+2)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+1,j,k+2)],phi_bssn[CCTK_GFINDEX3D(cctkGH,i+2,j,k+2)])); } # Then we update the RHS 
of $\left[\partial_{t}A_{y}\right]_{\rm no\ gauge\ terms}$. Keep in mind that the function `A_i_rhs_no_gauge_terms()` takes care of determining $\left\{v^{i},B^{i},B^{i}_{\rm stagger}\right\}$ at $\left(i+\frac{1}{2},j,k+\frac{1}{2}\right)$. We will look at it in more detail when we see the `A_i_rhs_no_gauge_terms.C` file from `IllinoisGRMHD`. # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C int A_directiony=2; A_i_rhs_no_gauge_terms(A_directiony,cctkGH,cctk_lsh,cctk_nghostzones,out_prims_r,out_prims_l,temporary,cmax_z,cmin_z,cmax_x,cmin_x, Ay_rhs); # <a id='rhs_psi6phi_and_ai_gauge_terms'></a> # # ### Step 3.8.10: Evaluating $\partial_{t}\left[\psi^{6}\Phi\right]$ and $\left[\partial_{t}A_{i}\right]_{\rm gauge\ terms}$ \[Back to [top](#toc)\] # $$\label{rhs_psi6phi_and_ai_gauge_terms}$$ # # Finally, we compute # # $$ # \partial_{t}\left[\sqrt{\gamma}\Phi\right] = # \partial_{t}\left[\psi^{6}\Phi\right] = # -\partial_{j}\left(\alpha\sqrt{\gamma}A^{j} - \beta^{j}\left[\sqrt{\gamma}\Phi\right]\right) # -\xi\alpha\left[\sqrt{\gamma}\Phi\right]\ , # $$ # # and # # $$ # \left[\partial_{t}A_{i}\right]_{\rm gauge\ terms} = -\partial_{i}\left(\alpha\Phi - \beta^{j}A_{j}\right)\ . # $$ # # Notice that we will need $A^{i}$ to compute $\partial_{t}\left[\psi^{6}\Phi\right]$, but we only have $A_{i}$, so we need to determine $\bar\gamma^{ij}$ (${\rm gtupij}$). # + # %%writefile -a $outfile_path__driver_evaluate_MHD_rhs__C // Next compute psi6phi_rhs, and add gauge terms to A_i_rhs terms! // Note that in the following function, we don't bother with reconstruction, instead interpolating. // We need A^i, but only have A_i. So we add gtupij to the list of input variables. 
CCTK_REAL *interp_vars[MAXNUMINTERP]; ww=0; interp_vars[ww]=betax; ww++; interp_vars[ww]=betay; ww++; interp_vars[ww]=betaz; ww++; interp_vars[ww]=gtupxx; ww++; interp_vars[ww]=gtupxy; ww++; interp_vars[ww]=gtupxz; ww++; interp_vars[ww]=gtupyy; ww++; interp_vars[ww]=gtupyz; ww++; interp_vars[ww]=gtupzz; ww++; interp_vars[ww]=psi_bssn;ww++; interp_vars[ww]=lapm1; ww++; interp_vars[ww]=Ax; ww++; interp_vars[ww]=Ay; ww++; interp_vars[ww]=Az; ww++; int max_num_interp_variables=ww; if(max_num_interp_variables>MAXNUMINTERP) {CCTK_VError(VERR_DEF_PARAMS,"Error: Didn't allocate enough space for interp_vars[]."); } // We are FINISHED with v{x,y,z}{r,l} and P{r,l} so we use these 8 gridfunctions' worth of space as temp storage. Lorenz_psi6phi_rhs__add_gauge_terms_to_A_i_rhs(cctkGH,cctk_lsh,cctk_nghostzones,dX,interp_vars,psi6phi, vxr,vyr,vzr,vxl,vyl,vzl,Pr,Pl, psi6phi_rhs,Ax_rhs,Ay_rhs,Az_rhs); return; /* // FUN DEBUGGING TOOL (trust me!): #pragma omp parallel for for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) { int index=CCTK_GFINDEX3D(cctkGH,i,j,k); //st_x_rhs[index]=0.0; //st_y_rhs[index]=0.0; //st_z_rhs[index]=0.0; //rho_star_rhs[index]=0.0; //tau_rhs[index]=0.0; psi6phi_rhs[index] = 0.0; Ax_rhs[index] = 0.0; Ay_rhs[index] = 0.0; Az_rhs[index] = 0.0; } */ } // We add #include's here instead of compiling these separately to help ensure that functions are properly inlined. // These files only include about 800 lines of code in total (~1200 lines in total), but it's arguably more // convenient to edit a 600 line file than an 1800 line file, so I'd prefer to leave this unconventional structure // alone. 
#include "reconstruct_set_of_prims_PPM.C" #include "compute_tau_rhs_extrinsic_curvature_terms_and_TUPmunu.C" #include "add_fluxes_and_source_terms_to_hydro_rhss.C" #include "mhdflux.C" #include "A_i_rhs_no_gauge_terms.C" #include "Lorenz_psi6phi_rhs__add_gauge_terms_to_A_i_rhs.C" # - # <a id='driver_evaluate_MHD_rhs__h'></a> # # # Step 4: The `driver_evaluate_MHD_rhs.h` header file \[Back to [top](#toc)\] # $$\label{driver_evaluate_MHD_rhs__h}$$ # # Now we generate the header file for the `driver_evaluate_MHD_rhs.C` file. # + # %%writefile $outfile_path__driver_evaluate_MHD_rhs__h #ifndef DRIVER_EVALUATE_MHD_RHS_H_ #define DRIVER_EVALUATE_MHD_RHS_H_ /* PRIVATE FUNCTIONS, Called within driver_evaluate_MHD_rhs.C ONLY */ static void ftilde_gf_compute(const cGH *cctkGH,const int *cctk_lsh,const int flux_dirn,gf_and_gz_struct *input,CCTK_REAL *ftilde_gf); static void reconstruct_set_of_prims_PPM(const cGH *cctkGH,const int *cctk_lsh,const int flux_dirn,const int num_prims_to_reconstruct,const int *which_prims_to_reconstruct, eos_struct &eosi,gf_and_gz_struct *in_prims,gf_and_gz_struct *out_prims_r,gf_and_gz_struct *out_prims_l, CCTK_REAL *ftilde_gf,CCTK_REAL *temporary); static void compute_tau_rhs_extrinsic_curvature_terms_and_TUPmunu (const cGH *cctkGH,const int *cctk_lsh,const int *cctk_nghostzones,CCTK_REAL *dX,CCTK_REAL **metric,gf_and_gz_struct *prims, CCTK_REAL **TUPmunu,eos_struct &eos, CCTK_REAL Gamma_th, CCTK_REAL *gupxy,CCTK_REAL *gupxz,CCTK_REAL *gupyz, CCTK_REAL *kxx,CCTK_REAL *kxy,CCTK_REAL *kxz,CCTK_REAL *kyy,CCTK_REAL *kyz,CCTK_REAL *kzz, CCTK_REAL *tau_rhs); static void A_i_rhs_no_gauge_terms(const int A_dirn, const cGH *cctkGH,const int *cctk_lsh,const int *cctk_nghostzones,gf_and_gz_struct *out_prims_r,gf_and_gz_struct *out_prims_l, CCTK_REAL *phi_interped,CCTK_REAL *cmax_1,CCTK_REAL *cmin_1,CCTK_REAL *cmax_2,CCTK_REAL *cmin_2, CCTK_REAL *A3_rhs); static void Lorenz_psi6phi_rhs__add_gauge_terms_to_A_i_rhs(const cGH *cctkGH,const int *cctk_lsh,const 
int *cctk_nghostzones,CCTK_REAL *dX,CCTK_REAL **interp_vars,CCTK_REAL *psi6phi, CCTK_REAL *shiftx_iphjphkph,CCTK_REAL *shifty_iphjphkph,CCTK_REAL *shiftz_iphjphkph, CCTK_REAL *alpha_iphjphkph,CCTK_REAL *alpha_Phi_minus_betaj_A_j_iphjphkph,CCTK_REAL *alpha_sqrtg_Ax_interp, CCTK_REAL *alpha_sqrtg_Ay_interp,CCTK_REAL *alpha_sqrtg_Az_interp, CCTK_REAL *psi6phi_rhs,CCTK_REAL *Ax_rhs,CCTK_REAL *Ay_rhs,CCTK_REAL *Az_rhs); static void add_fluxes_and_source_terms_to_hydro_rhss(const int flux_dirn,const cGH *cctkGH,const int *cctk_lsh,const int *cctk_nghostzones,CCTK_REAL *dX, CCTK_REAL **metric,gf_and_gz_struct *in_prims,CCTK_REAL **TUPmunu, int numvars_reconstructed,gf_and_gz_struct *out_prims_r,gf_and_gz_struct *out_prims_l,eos_struct &eos, CCTK_REAL *cmax,CCTK_REAL *cmin, CCTK_REAL *rho_star_flux,CCTK_REAL *tau_flux,CCTK_REAL *st_x_flux,CCTK_REAL *st_y_flux,CCTK_REAL *st_z_flux, CCTK_REAL *rho_star_rhs,CCTK_REAL *tau_rhs,CCTK_REAL *st_x_rhs,CCTK_REAL *st_y_rhs,CCTK_REAL *st_z_rhs); #include "harm_primitives_headers.h" #endif /* DRIVER_EVALUATE_MHD_RHS_H_ */ # - # <a id='code_validation'></a> # # # Step 5: Code validation \[Back to [top](#toc)\] # $$\label{code_validation}$$ # # <a id='code_validation_driver_evaluate_MHD_rhs__c'></a> # # ## Step 5.a: `driver_evaluate_MHD_rhs.C` \[Back to [top](#toc)\] # $$\label{code_validation_driver_evaluate_MHD_rhs__c}$$ # # First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook. 
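# The validation cells below shell out to the external `diff` utility. For reference, an equivalent check can be written portably in pure Python with the standard-library `difflib` module; the sketch below uses placeholder strings rather than the actual files:

```python
import difflib

def validate(original, generated, label):
    """Mimic the diff-based validation below: print PASSED if the two
    texts match line-for-line, otherwise print FAILED and the diff."""
    diff = list(difflib.unified_diff(original.splitlines(), generated.splitlines(),
                                     fromfile="original", tofile="generated", lineterm=""))
    if not diff:
        print("Validation test for "+label+": PASSED!")
    else:
        print("Validation test for "+label+": FAILED!")
        for diff_line in diff:
            print(diff_line)
    return diff

validate("x=1;\n", "x=1;\n", "example.C")  # prints: Validation test for example.C: PASSED!
```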
# + # Verify that the code generated by this tutorial module # matches the original IllinoisGRMHD source code # First set the URL and output path for the original IllinoisGRMHD source code import os import urllib original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/driver_evaluate_MHD_rhs.C" original_IGM_file_name = "driver_evaluate_MHD_rhs-original.C" original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name) # Then download the original IllinoisGRMHD source code # We try it here in a couple of ways in an attempt to keep # the code more portable try: original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8") # Write the original IllinoisGRMHD source code to file with open(original_IGM_file_path,"w") as file: file.write(original_IGM_file_code) except: try: original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8") # Write the original IllinoisGRMHD source code to file with open(original_IGM_file_path,"w") as file: file.write(original_IGM_file_code) except: # If all else fails, hope wget does the job # !wget -O $original_IGM_file_path $original_IGM_file_url # Perform validation # Validation__driver_evaluate_MHD_rhs__C = !diff $original_IGM_file_path $outfile_path__driver_evaluate_MHD_rhs__C if Validation__driver_evaluate_MHD_rhs__C == []: # If the validation passes, we do not need to store the original IGM source code file # !rm $original_IGM_file_path print("Validation test for driver_evaluate_MHD_rhs.C: PASSED!") else: # If the validation fails, we keep the original IGM source code file print("Validation test for driver_evaluate_MHD_rhs.C: FAILED!") # We also print out the difference between the code generated # in this tutorial module and the original IGM source code print("Diff:") for diff_line in Validation__driver_evaluate_MHD_rhs__C: print(diff_line) # - # <a
id='code_validation_driver_evaluate_MHD_rhs__h'></a> # # ## Step 5.b: `driver_evaluate_MHD_rhs.h` \[Back to [top](#toc)\] # $$\label{code_validation_driver_evaluate_MHD_rhs__h}$$ # # First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook. # + # Verify if the code generated by this tutorial module # matches the original IllinoisGRMHD source code # First set up the path of the original IllinoisGRMHD source code import os import urllib original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/driver_evaluate_MHD_rhs.h" original_IGM_file_name = "driver_evaluate_MHD_rhs-original.h" original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name) # Then download the original IllinoisGRMHD source code # We try it here in a couple of ways in an attempt to keep # the code more portable try: original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8") # Write down the original IllinoisGRMHD source code file with open(original_IGM_file_path,"w") as file: file.write(original_IGM_file_code) except: try: original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8") # Write down the original IllinoisGRMHD source code file with open(original_IGM_file_path,"w") as file: file.write(original_IGM_file_code) except: # If all else fails, hope wget does the job # !wget -O $original_IGM_file_path $original_IGM_file_url # Perform validation # Validation__driver_evaluate_MHD_rhs__h = !diff $original_IGM_file_path $outfile_path__driver_evaluate_MHD_rhs__h if Validation__driver_evaluate_MHD_rhs__h == []: # If the validation passes, we do not need to store the original IGM source code file # !rm $original_IGM_file_path print("Validation test for driver_evaluate_MHD_rhs.h: PASSED!") else: # If the validation fails, we keep the original IGM source code file
print("Validation test for driver_evaluate_MHD_rhs.h: FAILED!") # We also print out the difference between the code generated # in this tutorial module and the original IGM source code print("Diff:") for diff_line in Validation__driver_evaluate_MHD_rhs__h: print(diff_line) # - # <a id='latex_pdf_output'></a> # # # Step 6: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] # $$\label{latex_pdf_output}$$ # # The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename # [Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.pdf](Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means). latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx") # #!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.ipynb # #!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.tex # #!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.tex # #!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.tex # !rm -f Tut*.out Tut*.aux Tut*.log
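The validation cells above shell out to `diff`; an equivalent portable check can be written in pure Python with `difflib`. A minimal sketch (the function name is illustrative, not from the original notebook):

```python
import difflib

def diff_texts(reference, generated):
    """Return unified-diff lines between two file contents.
    An empty list means validation passed (the files match)."""
    return list(difflib.unified_diff(
        reference.splitlines(), generated.splitlines(),
        fromfile="original", tofile="generated", lineterm=""))

# Identical inputs produce no diff at all
print(diff_texts("a\nb\n", "a\nb\n"))  # []
```

This avoids depending on a `diff` binary being on the PATH, at the cost of reading both files into memory.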
IllinoisGRMHD-Trusted/doc/Tutorial-IllinoisGRMHD__driver_evaluate_MHD_rhs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Members: # - <NAME> - 200711501 # - <NAME> - 201313516 # # Exercise 06 # # TensorFlow and Keras # # # --- # + import numpy as np import pylab as pl from sklearn.datasets.samples_generator import make_moons # %matplotlib inline # Functions for plotting 2D data and decision regions def plot_data(X, y,title='Data'): y_unique = np.unique(y) colors = pl.cm.rainbow(np.linspace(0.0, 1.0, y_unique.size)) for this_y, color in zip(y_unique, colors): this_X = X[y == this_y] pl.scatter(this_X[:, 0], this_X[:, 1], c=color, alpha=0.5, edgecolor='k', label="Class %s" % this_y) pl.legend(loc="best") pl.title(title) def plot_decision_region(X, pred_fun): min_x = np.min(X[:, 0]) max_x = np.max(X[:, 0]) min_y = np.min(X[:, 1]) max_y = np.max(X[:, 1]) min_x = min_x - (max_x - min_x) * 0.05 max_x = max_x + (max_x - min_x) * 0.05 min_y = min_y - (max_y - min_y) * 0.05 max_y = max_y + (max_y - min_y) * 0.05 x_vals = np.linspace(min_x, max_x, 30) y_vals = np.linspace(min_y, max_y, 30) XX, YY = np.meshgrid(x_vals, y_vals) grid_r, grid_c = XX.shape ZZ = np.zeros((grid_r, grid_c)) for i in range(grid_r): for j in range(grid_c): ZZ[i, j] = pred_fun(XX[i, j], YY[i, j]) pl.contourf(XX, YY, ZZ, 30, cmap = pl.cm.coolwarm, vmin= 0, vmax=1) pl.colorbar() pl.xlabel("x") pl.ylabel("y") # - # ### 1. Multilayer neural network in TensorFlow # # You need to create a neural network model in TF that is able to discriminate the two classes in the following dataset: # + X, Y = make_moons(n_samples=1000, noise= 0.2, random_state=3) x_train = X[:500] x_test = X[500:] y_train = Y[:500] y_test = Y[500:] pl.figure(figsize=(8, 6)) plot_data(x_train, y_train) # - # For this you will need to create a neural network with one hidden layer.
You cannot use prebuilt models # such as those in `tf.estimator`. **Hint**: extend the logistic regression example from the TensorFlow handout. # # Your answer must contain the following: # * A visualization of the CG of the model. # * A visualization of the decision region along with the test data. # * A snapshot from TensorBoard that shows the evolution of the training and test loss. # + import tensorflow as tf from IPython.display import clear_output, Image, display, HTML # %matplotlib inline # Helper functions to inline visualization of computing graphs # Extracted from: # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = "<stripped %d bytes>"%size return strip_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:1000px;height:620px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) def variable_summaries(var,name): with tf.name_scope(name): mean = tf.reduce_mean(var) tf.summary.scalar('mean', mean) with tf.name_scope('stddev'): stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean))) 
tf.summary.scalar('stddev', stddev) tf.summary.scalar('max', tf.reduce_max(var)) tf.summary.scalar('min', tf.reduce_min(var)) tf.summary.histogram('histogram', var) # + graph = tf.Graph() n_hidden = 40 learning_rate = 1 seed = 123 with graph.as_default(): tf.set_random_seed(seed) x = tf.placeholder(tf.float32,shape=[None,2],name='Features') y_true = tf.placeholder(tf.float32,shape=[None,1],name='Class') with tf.name_scope('hidden') as scope: w1 = tf.Variable(tf.random_normal([2, n_hidden]),dtype=tf.float32,name='weights1') b1 = tf.Variable(tf.random_normal([n_hidden]),dtype=tf.float32,name='bias1') layer = tf.add(tf.matmul(x,w1),b1) layer=tf.sigmoid(layer,name='activation') variable_summaries(w1,name='Sum_w1') variable_summaries(b1,name='Sum_b1') with tf.name_scope('out') as scope: w2 = tf.Variable(tf.random_normal([n_hidden, 1]),dtype=tf.float32,name='weights2') b2 = tf.Variable(tf.random_normal([1]),dtype=tf.float32,name='bias2') y_pred = tf.add(tf.matmul(layer,w2),b2) variable_summaries(w2,name='Sum_w2') variable_summaries(b2,name='Sum_b2') with tf.name_scope('loss') as scope: loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true,logits=y_pred) loss = tf.reduce_mean(loss) variable_summaries(loss,name='Loss') with tf.name_scope('train') as scope: optimizer = tf.train.GradientDescentOptimizer(learning_rate) train = optimizer.minimize(loss) merged = tf.summary.merge_all() init = tf.global_variables_initializer() # - # ## CG model visualization show_graph(graph.as_graph_def()) import os os.chdir('/Users/germancarvajal/Desktop') # + LOG_DIR = 'logs' train_writer = tf.summary.FileWriter(LOG_DIR + '/train', graph=graph) test_writer = tf.summary.FileWriter(LOG_DIR + '/test') num_epochs = 1000 lossesTr = [] lossesTs = [] with graph.as_default(): sess = tf.Session() sess.run(init) for step in range(num_epochs): summary, train_loss, _ = sess.run([merged, loss, train] ,{x: x_train, y_true: y_train.reshape(-1,1)}) train_writer.add_summary(summary, step) summary, 
val_loss = sess.run([merged, loss] ,{x: x_test, y_true: y_test.reshape(-1,1)}) test_writer.add_summary(summary, step) if step % 10 == 0: # print(step, train_loss, val_loss) lossesTr.append(train_loss) lossesTs.append(val_loss) # - pl.figure(figsize = (10,8)) pl.plot(lossesTr, '-b',label='Train') pl.plot(lossesTs, '-r',label='Test') pl.legend(loc='upper right') pl.ylim(0, 2.0) def sigmoid(x): return 1.0/(1.0 + np.exp(-x)) # ## Decision region and test data # + with graph.as_default(): def pred_fun(x1, x2): xval = np.array([[x1, x2]]) return sigmoid(sess.run(y_pred,{x: xval})) pl.figure(figsize = (8,16/3)) plot_decision_region(X, pred_fun) plot_data(x_test, y_test,title='Test data') # - # ## TensorBoard evolution of training and test loss # # <img width=1000 src="images/TensorBoard.png" align="middle"> # ### 2. Improving the Keras text classifier # # Your goal is to improve the performance of the text classifier in the Keras handout. These are the things you need to try: # # * Different activation functions for the hidden layer (https://keras.io/activations/) # * Different optimizers (https://keras.io/optimizers/) # * Add dropout between the hidden layer and the output layer (https://keras.io/layers/core/#dropout) # * Different initializers for the dense layers (https://keras.io/initializers/) # # Try different combinations and report your findings at the end. Which configuration got the best accuracy in test? # # **Solution** # # To ensure the calibrated models are comparable with each other, all data preparation steps are performed beforehand. This way, the train and test datasets are identical across every optimizer and architecture variant trained. Also, the randomness of all processes is controlled by seeding every source of variability, so the scores can only be affected by the parameter choices being optimized.
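The seeding discipline described above can be captured in one small helper; a sketch using only the standard library and NumPy (the cells below additionally seed TensorFlow with `tf.set_random_seed`, which this sketch omits):

```python
import random
import numpy as np

def seed_everything(seed=123):
    # Pin every source of randomness used here so that repeated runs
    # of the grid search see identical splits and initial weights
    random.seed(seed)
    np.random.seed(seed)

seed_everything()
a = np.random.rand(3)
seed_everything()
b = np.random.rand(3)
print((a == b).all())  # True: re-seeding reproduces the same draws
```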
import keras import tensorflow as tf import numpy as np from keras.models import Sequential from keras.layers import Dense, Activation, Dropout from keras.datasets import reuters from keras.preprocessing.text import Tokenizer max_words = 1000 np.random.seed(123) tokenizer = Tokenizer(num_words=max_words) (x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words, test_split=0.2) print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') num_classes = np.max(y_train) + 1 print(num_classes, 'classes') x_train = tokenizer.sequences_to_matrix(x_train, mode='binary') y_train = keras.utils.to_categorical(y_train, num_classes) x_test = tokenizer.sequences_to_matrix(x_test, mode='binary') y_test = keras.utils.to_categorical(y_test, num_classes) print('x_train shape:', x_train.shape) print('y_train shape:', y_train.shape) print('x_test shape:', x_test.shape) print('y_test shape:', y_test.shape) # To improve the performance of the text classifier we explore the 4 most popular activation functions and 3 different optimizers, including stochastic gradient descent and Adam. Dropout is explored from none up to 90% of the hidden units left out, and 3 initialization strategies are tried: two stochastic ones and a constant-zero kernel. All possible combinations are trained and evaluated on the same samples to compare their performance, for a total of 144 neural networks trained and evaluated. activations=['softmax','relu','tanh','sigmoid'] optim=['sgd','rmsprop','adam'] drop=np.linspace(0,0.9,4) initi=['zeros','RandomNormal','RandomUniform'] total=len(activations)*len(optim)*len(drop)*len(initi) # The following helper function is defined to display a progress bar while training and evaluating the neural networks over this large set of possibilities.
# Print iterations progress def printProgressBar (iteration, total, prefix = '', suffix = '', decimals = 1, length = 100, fill = '█'): """ Call in a loop to create terminal progress bar @params: iteration - Required : current iteration (Int) total - Required : total iterations (Int) prefix - Optional : prefix string (Str) suffix - Optional : suffix string (Str) decimals - Optional : positive number of decimals in percent complete (Int) length - Optional : character length of bar (Int) fill - Optional : bar fill character (Str) """ percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total))) filledLength = int(length * iteration // total) bar = fill * filledLength + '-' * (length - filledLength) print('\r%s |%s| %s%% %s' % (prefix, bar, percent, suffix), end = '\r') # Print New Line on Complete if iteration == total: print() # The full set of possible parameter combinations is explored with the following four nested loops. The evaluation results for every possibility are stored in the `resultados` list for later performance analysis and selection of the best performing model.
resultados=[] n=0 for a in activations: for o in optim: for d in drop: for i in initi: n=n+1 np.random.seed(123) tf.set_random_seed(123) model = Sequential() model.add(Dense(256, input_shape=(max_words,),kernel_initializer=i)) model.add(Activation(a)) model.add(Dropout(d,seed=123)) model.add(Dense(num_classes,kernel_initializer=i)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy',optimizer=o,metrics=['accuracy']) model.fit(x_train, y_train,batch_size=32,epochs=5,verbose=0,validation_split=0.1) score = model.evaluate(x_test, y_test,verbose=0) resultados.append([a,o,d,i,score[0],score[1]]) printProgressBar(n,total,prefix = 'Progress:', suffix = 'Complete', length = 50) import pandas as pd results=pd.DataFrame(resultados,columns=['Hidden_activation','Optimizer','Dropout','Initializer','Test score','Test accuracy']) # As can be seen in the following table, the best accuracy for the text classifier was achieved with the hyperbolic tangent activation in 4 of the top 5 cases. The best optimizers were Adam and RMSprop, both improving on the SGD algorithm. The initialization strategy appears to have some influence on the results: all of the top 5 use random starting values. Most notably, the dropout values for the best 5 cases range between 0 (no dropout) and 60%, the latter appearing in 3 of the top combinations. # # In summary, the best performing combination, using the same number of epochs, batch size, layers and neurons as the original model in handout 11, was the one with the following specification: # # - Hidden layer activation function: tanh # - Optimizer: Adam # - Dropout: 60% # - Initializer: Random Uniform # # This combination led to an accuracy of 0.8, a big leap over the roughly 0.5 achieved by the original formulation. Since the two models share the same architecture, this makes clear how much the training process can alter the results of a neural network model.
results.nlargest(5,'Test accuracy')
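For readers unfamiliar with the selection step, `DataFrame.nlargest` sorts by the given column and keeps the top rows. A toy, self-contained illustration (the scores below are made up, not the measured ones):

```python
import pandas as pd

# Toy stand-in for the results grid; accuracy values are illustrative only
results = pd.DataFrame(
    [["tanh", "adam",    0.6, "RandomUniform", 0.80],
     ["relu", "sgd",     0.0, "zeros",         0.35],
     ["tanh", "rmsprop", 0.3, "RandomNormal",  0.78]],
    columns=["Hidden_activation", "Optimizer", "Dropout",
             "Initializer", "Test accuracy"])

top = results.nlargest(2, "Test accuracy")
print(top["Optimizer"].tolist())  # ['adam', 'rmsprop']
```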
exercises/E06-TensorFlow-Keras.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd from matplotlib import pyplot as plt from nltk import everygrams from nltk.corpus import stopwords as sw import math from collections import Counter import itertools import re import warnings warnings.filterwarnings('ignore') # - # Loading the datasets resumeDf = pd.read_csv('resume_data.csv') print ("Total Rows =", len(resumeDf)) print ("Available Columns =", list(resumeDf.columns)) resumeDf.head() # Preprocessing resumeDf['State'] = resumeDf['State'].str.strip() resumeDf['City'] = resumeDf['City'].str.strip() # + # Understanding judicial state distribution resumeDf_grouped_state = resumeDf.groupby('State').size() all_states = list(resumeDf_grouped_state.index) all_sizes = resumeDf_grouped_state.values selected_states, selected_sizes = [], [] for row in zip(all_states, all_sizes): if row[-1] > 1 and len(row[0]) > 2: selected_states.append(row[0]) selected_sizes.append(row[1]) plt.figure(figsize=(8, 8)) plt.barh(selected_states, selected_sizes) plt.show() # + # Understanding City distribution for the largest 10 cities MAXIMUM_STATE, MAXIMUM_CITIES = 10, 40 top_states = [item[0] for item in sorted(zip(selected_states, selected_sizes), key=lambda x: x[-1], reverse=True) if item[0] != 'NONE'][:MAXIMUM_STATE] resumeDf_popular_states = resumeDf[resumeDf['State'].isin(top_states)] resumeDf_popular_states_grouped = resumeDf_popular_states.groupby(['City', 'State']).size() multi_indexes_grouped = resumeDf_popular_states_grouped.index popular_cities, popular_city_size = [], [] for row in range(len(resumeDf_popular_states_grouped)): popular_cities.append(multi_indexes_grouped[row][0]) popular_city_size.append(resumeDf_popular_states_grouped[row]) sorted_cities = sorted(zip(popular_cities, popular_city_size), key=lambda x: x[-1], 
reverse=True) popular_cities, popular_city_size = [list(tup)[:MAXIMUM_CITIES] for tup in zip(*sorted_cities)] plt.figure(figsize=(8, 12)) plt.barh(popular_cities, popular_city_size) plt.show() # + # Understanding most popular contents in title other_corrections = {'developer': 'Developer', 'Jr.': 'Junior', 'Php': 'PHP', 'php': 'PHP', 'mca': 'MCA', 'java': 'Java', 'Full Stack': 'FullStack', '1': 'one', '2': 'two', '3': 'three', '4': 'four', '5': 'five', '6': 'six', '7': 'seven', '8': 'eight', '9': 'nine', '0': 'zero', 'Sr': 'Senior'} custom_stopwords = ['NONE', 'I', 'Pvt', 'Ltd'] stopwords = list(set(sw.words('english') + custom_stopwords)) resume_titles = [] for line in list(resumeDf['Resume_title']): temp_list = [] if str(type(line)) == "<class 'str'>" or not math.isnan(line): # Apply other corrections for correction in other_corrections.keys(): line = line.replace(correction, other_corrections.get(correction)) for word in line.split(): if word not in stopwords: filtered_wd = re.sub('[^0-9a-zA-Z]+', ' ', word.strip()).strip() if len(filtered_wd) == 0 or filtered_wd in stopwords: continue temp_list.append(filtered_wd.lower()) if len(temp_list) > 0: resume_titles.append(temp_list) resume_titles_flattened = list(itertools.chain(*resume_titles)) # Futher flatten resume titles new_resume_titles_flattened = [] for row in resume_titles_flattened: if len(row.split()) == 1: new_resume_titles_flattened.append(row.lower()) else: for item in row.split(): filtered_wd = re.sub('[^0-9a-zA-Z]+', ' ', item.strip()).strip() if len(filtered_wd) == 0 or filtered_wd in stopwords: continue for correction in other_corrections.keys(): if filtered_wd == correction: filtered_wd = filtered_wd.replace(correction, other_corrections.get(correction)) break new_resume_titles_flattened.append(item.lower()) # - # Creating N-grams lower_ngrams, upper_ngram = 2, 5 n_trigrams = dict(Counter(list(everygrams(new_resume_titles_flattened, lower_ngrams, upper_ngram)))) n_trigrams_tup = 
[(n_trigrams.get(key), key) for key in n_trigrams.keys()] n_trigrams_tup.sort(key=lambda item: item[0], reverse=True) print ("Total N-Gram Tuples =", len(n_trigrams_tup)) # Printing Top Tuples MAX_TUPLES = 500 n_trigrams_tup[:MAX_TUPLES] # + # Understanding most popular contents in description custom_stopwords = ['NONE', 'to', 'using', 'i', 'pvt', 'ltd', 'it', 'use', 'having', 'along', 'would', 'etc', 'gives'] stopwords = list(set(sw.words('english') + custom_stopwords)) resume_descr = [] for line in list(resumeDf['Description']): temp_list = [] if str(type(line)) == "<class 'str'>" or not math.isnan(line): # Apply other corrections for correction in other_corrections.keys(): line = line.replace(correction, other_corrections.get(correction)) for word in line.split(): if word not in stopwords: filtered_wd = re.sub('[^0-9a-zA-Z]+', ' ', word.strip()).strip() if len(filtered_wd) == 0 or filtered_wd in stopwords: continue if len(filtered_wd) > 1: temp_list.append(filtered_wd.lower()) if len(temp_list) > 0: resume_descr.append(temp_list) resume_descr_flattened = list(itertools.chain(*resume_descr)) # Futher flatten resume titles new_resume_descr_flattened = [] for row in resume_descr_flattened: if len(row.split()) == 1: new_resume_descr_flattened.append(row.lower()) else: for item in row.split(): filtered_wd = re.sub('[^0-9a-zA-Z]+', ' ', item.strip()).strip() if len(filtered_wd) == 0 or filtered_wd in stopwords: continue for correction in other_corrections.keys(): if filtered_wd == correction: filtered_wd = filtered_wd.replace(correction, other_corrections.get(correction)) break new_resume_descr_flattened.append(item.lower()) # - # Creating N-grams lower_ngrams, upper_ngram = 2, 5 n_trigrams = dict(Counter(list(everygrams(new_resume_descr_flattened, lower_ngrams, upper_ngram)))) n_trigrams_tup = [(n_trigrams.get(key), key) for key in n_trigrams.keys()] n_trigrams_tup.sort(key=lambda item: item[0], reverse=True) print ("Total N-Gram Tuples =", len(n_trigrams_tup)) # 
Printing Top Tuples MAX_TUPLES = 750 n_trigrams_tup[:MAX_TUPLES] # + # Understanding most popular contents in Additional Information custom_stopwords = ['NONE', 'to', 'using', 'i', 'pvt', 'ltd', 'it', 'use', 'having', 'along', 'would', 'etc', 'gives'] stopwords = list(set(sw.words('english') + custom_stopwords)) resume_add_info = [] for line in list(resumeDf['Additional Information']): temp_list = [] if str(type(line)) == "<class 'str'>" or not math.isnan(line): # Apply other corrections for correction in other_corrections.keys(): line = line.replace(correction, other_corrections.get(correction)) for word in line.split(): if word not in stopwords: filtered_wd = re.sub('[^0-9a-zA-Z]+', ' ', word.strip()).strip() if len(filtered_wd) == 0 or filtered_wd in stopwords: continue if len(filtered_wd) > 1: temp_list.append(filtered_wd.lower()) if len(temp_list) > 0: resume_add_info.append(temp_list) resume_add_info_flattened = list(itertools.chain(*resume_add_info)) # Futher flatten resume titles new_resume_add_info_flattened = [] for row in resume_add_info_flattened: if len(row.split()) == 1: new_resume_add_info_flattened.append(row.lower()) else: for item in row.split(): filtered_wd = re.sub('[^0-9a-zA-Z]+', ' ', item.strip()).strip() if len(filtered_wd) == 0 or filtered_wd in stopwords: continue for correction in other_corrections.keys(): if filtered_wd == correction: filtered_wd = filtered_wd.replace(correction, other_corrections.get(correction)) break new_resume_add_info_flattened.append(item.lower()) # - # Creating N-grams lower_ngrams, upper_ngram = 2, 7 n_trigrams = dict(Counter(list(everygrams(new_resume_add_info_flattened, lower_ngrams, upper_ngram)))) n_trigrams_tup = [(n_trigrams.get(key), key) for key in n_trigrams.keys()] n_trigrams_tup.sort(key=lambda item: item[0], reverse=True) print ("Total N-Gram Tuples =", len(n_trigrams_tup)) # Printing Top Tuples MAX_TUPLES = 750 n_trigrams_tup[:MAX_TUPLES] # + # Fetching the Educational Details continued_for = 0 
common_key = [] degrees = [] others = [] pre_common_replacement = {'Masters of':'Master of', 'Bachelors of': 'Bachelor of', 'Master s in': 'Master of', 'Master in': 'Master of', 'Master s': 'Master', 'Bachelor s in': 'Bachelor of', 'Bachelor in': 'Bachelor of', 'Bachelor s': 'Bachelor' } common_replacement = {'BE': 'Bachelor of Engineering', 'B E': 'Bachelor of Engineering', 'MCA': 'Master of Computer Application', 'ME': 'Master of Engineering', 'M E': 'Master of Engineering', 'CSE': 'Computer Science Engineering', 'CS': 'Computer Science Engineering', 'B A': 'Bachelor of Arts', 'BA': 'Bachelor of Arts', 'BCS': 'Bachelor of Computer Science', 'B C S': 'Bachelor of Computer Science', 'M C A': 'Master of Computer Application', 'B Tech': 'Bachelor of Technology', 'B C A': 'Bachelor of Computer Application', 'BTech': 'Bachelor of Technology', 'BCA': 'Bachelor of Computer Application', 'B Sc': 'Bachelor of Science', 'B S': 'Bachelor of Science', 'BS': 'Bachelor of Science', 'BBA': 'Bachelor of Business Application', 'B B A': 'Bachelor of Business Application', 'B S C': 'Bachelor of Science', 'SSLC': 'Class X', 'S S L C': 'Class X', 'AISSE': 'Class X', 'MS': 'Master of Science', 'M S': 'Master of Science', 'PhD': 'Doctoral', 'P h D': 'Doctoral', 'M Sc': 'Master of Science', 'Bcom': 'Bachelor of Commerce', 'B com': 'Bachelor of Commerce', 'BSc': 'Bachelor of Science', 'MSc': 'Master of Science', 'MBA': 'Master of Business Administration', 'M B A': 'Master of Business Administration', 'M Tech': 'Master of Technology', 'MTech': 'Master of Technology', 'M Sc': 'Master of Science', 'S S C': 'Class X', 'SSC': 'Class X', 'HSC': 'Class XII', 'PG': 'Post Graduate', 'UG': 'Under Graduate', 'S S C': 'Class X', 'H S C': 'Class XII', 'C B S E': 'Class XII', 'CBSE': 'Class XII', 'Higher Secondary': 'Class XII', '10th': 'Class X', '12th': 'Class XII', 'BBM': 'Bachelor of Business Management', 'B B M': 'Bachelor of Business Management', 'HIGH SCHOOL': 'Class XII', 'Plus Two': 'Class 
XII', 'I C S E': 'Class X', 'Secondary School Examination': 'Class X'} common_replacement_vals = list(set([common_replacement.get(key).lower() for key in common_replacement.keys()])) unique_identifiers = {'B': ''} for row in resumeDf['Educations']: if str(type(row)) != "<class 'float'>": try: vals = eval(row) for idx in range(len(vals)): education_row = vals[idx][0] key = list(education_row.keys())[0] degree_val = education_row.get(key) degree_val = re.sub('[^0-9a-zA-Z]+', ' ', degree_val.strip()).strip() if degree_val.lower() == 'none' or len(degree_val) < 2: continue elif degree_val == 'Bachelor': degrees.append('Ordinary Bachelor') continue elif degree_val == 'Master': degrees.append('Ordinary Master') continue else: # Pre-common replacement for precommon in pre_common_replacement.keys(): if degree_val.find(precommon) != -1: degree_val = degree_val.replace(precommon, pre_common_replacement.get(precommon)) break flag=False; for common_vals in common_replacement_vals: if degree_val.lower().find(common_vals) != -1: degrees.append(common_vals) flag = True; break if not flag: for key in common_replacement.keys(): if key.lower() in degree_val.lower(): degrees.append(common_replacement.get(key).lower()) flag = True; break if not flag: if degree_val == 'Bachelor': degrees.append('Ordinary Bachelor') elif degree_val == 'Master': degrees.append('Ordinary Master') else: degrees.append(degree_val) others.append(degree_val) except: continue # - # Printing the most common degrees degree_map = dict(Counter(degrees)) degree_map_tup = [(degree_map.get(key), key) for key in degree_map.keys()] degree_map_tup.sort(key=lambda item: item[0], reverse=True) degree_map_tup # + # Capturing the skillsets all_skillsets = [] map_skillset_experiences = {} # Mapping Order = ['Less than 1 years', '1 year', '2 years', '3 '] for skills in resumeDf['Skills']: if str(type(skills)) == "<class 'float'>": continue try: for skill in eval(skills): skillset = '' if skill.find('(') != -1: skillset = 
skill[:skill.find('(')].strip().upper() start_idx, end_idx = skill.find('('), skill.find(')') exp = skill[start_idx+1:end_idx].replace("\n", "") if 'year' in exp.lower(): s_idx = exp.find('year') numeric_year = exp[:s_idx].strip() if map_skillset_experiences.get(skillset) is None: map_skillset_experiences[skillset] = {numeric_year: 1} #map_skillset_experiences[skillset][numeric_year] = 1 elif map_skillset_experiences.get(skillset).get(numeric_year) is None: map_skillset_experiences[skillset].update({numeric_year:1}) #map_skillset_experiences[skillset][numeric_year] = 1 else: #print ("I came tp 3") map_skillset_experiences[skillset][numeric_year] = map_skillset_experiences.get(skillset).get(numeric_year) + 1 else: skillset = skill.strip().upper() all_skillsets.append(skillset) except: #print (ex) continue fav_skillsets = dict(Counter(all_skillsets)) fav_skillsets_tup = [(fav_skillsets.get(key), key) for key in fav_skillsets.keys()] fav_skillsets_tup.sort(key=lambda item: item[0], reverse=True) # - # Printing the most observed skillsets fav_skillsets_tup # Printing the experience level across top-200 most observed skillsets print ("Skillset", "\t", "Experiences") for _, skill in fav_skillsets_tup[:200]: print (skill, '\t', map_skillset_experiences.get(skill))
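The ranking idiom used repeatedly above (build a `Counter`, turn it into `(count, key)` tuples, sort in reverse) has a built-in shortcut, `Counter.most_common`. A standard-library-only sketch:

```python
from collections import Counter

skills = ["python", "sql", "python", "excel", "python", "sql"]

# Equivalent to building (count, key) tuples and sorting descending
top_skills = Counter(skills).most_common(2)
print(top_skills)  # [('python', 3), ('sql', 2)]
```

`most_common(n)` returns the `n` highest-count pairs already ordered, which removes the need for the manual tuple-building and `sort(..., reverse=True)` steps.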
resume_dataset/ExploringResumeDataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import serial # !ls /dev/tty* ser = serial.Serial('/dev/tty.usbmodem14101') # open serial port print(ser.name) # check which port was really used ser.read_all() # + ser.close() # + import usb.core import usb.util # find our device dev = usb.core.find(idVendor=0xfffe, idProduct=0x0001) # was it found? if dev is None: raise ValueError('Device not found') # set the active configuration. With no arguments, the first # configuration will be the active one dev.set_configuration() # get an endpoint instance cfg = dev.get_active_configuration() intf = cfg[(0,0)] ep = usb.util.find_descriptor( intf, # match the first OUT endpoint custom_match = \ lambda e: \ usb.util.endpoint_direction(e.bEndpointAddress) == \ usb.util.ENDPOINT_OUT) # - busses = usb.busses() for bus in busses: devices = bus.devices for dev in devices: if dev != None: try: xdev = usb.core.find(idVendor=dev.idVendor, idProduct=dev.idProduct) if xdev._manufacturer is None: xdev._manufacturer = usb.util.get_string(xdev, xdev.iManufacturer) if xdev._product is None: xdev._product = usb.util.get_string(xdev, xdev.iProduct) stx = '%6d %6d: '+str(xdev._manufacturer).strip()+' = '+str(xdev._product).strip() print(stx % (dev.idVendor,dev.idProduct)) except: pass
notebooks/fpga/01-TinyFPGABX-Signal.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/jcmachicao/modpred_2/blob/main/modpred__05.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="65CXAgd99qGO" # #**5** # #### Notebook 05 # # **Predictive Modeling Course** # --- # ### **Monte Carlo for Random Walks** # * Author: <NAME> # * License: [GestioDinámica](http://www.gestiodinamica.com) 2020 # + [markdown] id="FTIPpRpw9qGR" # ### Importing Libraries # We need to import the plotting library and the random number generation library. # + id="Tzu40ve29qGS" cellView="form" #@title Imports import matplotlib.pyplot as plt import random import pandas as pd import numpy as np # + [markdown] id="ei9quAiX9qGV" # ### Building the Tools # # + [markdown] id="cyZpAcJK9qGV" # The random walks ("rutas_aleatorias") are defined on a plane, so 4 directions are used (N, S, E, W). This also sets the degree of complexity of the system. <br> # For example, if a 3-dimensional configuration had been chosen (say, for a drone), the number of directions could be as high as 6, including up and down. # For example, if n1 denotes the number of steps taken in one experiment, the path drawn looks as shown below.
# # + id="t_6m7WRJ9qGW" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 598} outputId="422a9162-fde3-4c59-fd62-c0cbda110711" #@title Random walk generation n1 = 10 # set all variables to zero lineax = [0] lineay = [0] x, y, dx, dy = 0, 0, 0, 0 # routine that generates a random walk for i in range(n1): (dx, dy) = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)]) x += dx y += dy lineax.append(x) lineay.append(y) # draw the random walk plt.figure(figsize=(10,10), facecolor='lightgray') plt.plot(lineax, lineay, 'b:', alpha=0.5, lw=5) for i in range(len(lineax)): plt.text(lineax[i] + np.random.rand()/10, lineay[i] + np.random.rand()/10, i, fontsize=14) # + [markdown] id="lqE0rb189qGZ" # ### Defining the Random Walks (Function Creation) # + id="vZ4ZOA8A9qGa" cellView="form" #@title Function definitions def rutas_aleatorias(n): # returns the coordinates after n steps x, y = 0, 0 for i in range(n): (dx, dy) = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)]) x += dx y += dy return (x, y) # + [markdown] id="SzhwkmgE9qGb" # ### Defining the Experiment Parameters # + [markdown] id="KMWD7Mvg9qGc" # The following are set: # * The **number of walks** with which the model will be tested. This is the number of experiments. The more experiments are run, the more precisely the model can be described. # * The **maximum return distance**, meaning the number of steps beyond which the walker is considered too far from the starting point. The typical framing used in publications is "returning by transport instead of returning on foot". In this particular case several distance limits are plotted so that the differences produced by the random value generation are clearly visible. # * The **number of steps** is the maximum number of steps of each experiment. The algorithm will first run experiments with 1 step, then with 2 steps.
The limit for these experiments is set with this number. # # + id="Q_PvI4Vp9qGd" numero_caminatas = 500 # the number of experiments dist_max_a_pie = [5, 10, 15] # maximum preferred distance for returning on foot numero_pasos = 120 # number of steps in a walk lm = len(dist_max_a_pie) # this allows varying the number of dist_max_a_pie experiments # + [markdown] id="0n9Sy8lp9qGf" # The walks are run with step counts ranging from 1 up to the Number of Steps. <br> # For example, if the Return Distance is 4 (the walker returns on foot if the final distance is less than 4), then for the first experiments, with walks of 1 or 2 steps in any of the 4 directions (N, S, E, W), the probability of returning on foot is always 100%, because in every case the end point is less than 4 steps away. <br> # But when the number of steps grows, the probability of ending up farther than 4 is traded off against the probability of ending up closer.
# + id="LWorMIpo9qGf" cellView="form" #@title Generating the experiment data historico = [[], [], []] # adjust the number of empty lists to match the length of the dist_max_a_pie alternatives for j in range(0,lm): historico[j] = [] # each history collects [steps, pct] pairs for largo_caminata in range(1, numero_pasos+1): regreso_a_pie = 0 # count walks that need fewer steps than the limit to return to the starting point for i in range(numero_caminatas): (x, y) = rutas_aleatorias(largo_caminata) distancia = abs(x) + abs(y) if distancia <= dist_max_a_pie[j]: regreso_a_pie += 1 porcent_reg_a_pie = (float(regreso_a_pie) / numero_caminatas)*100 #print(largo_caminata, porcent_reg_a_pie) historico[j].append([largo_caminata, porcent_reg_a_pie]) # + [markdown] id="xdrOdqS69qGh" # Generating the experiments yields an approximate plot of the forecast curves for the aggregate behavior of many passengers over the many decisions they make (in this case, the aggregate behavior of many passengers for several decision thresholds of the distance at which they start taking a bus to return to their starting point). # + id="MGtXKQTZ9qGi" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 737} outputId="5b4d9e0e-baa1-429e-ac81-8af525384f9e" #@title What is the probability of having passengers for a bus?
plt.figure(figsize=(16,12), facecolor='lightgray') for i in range(0,lm): historico[i] = pd.DataFrame(historico[i]) historico[i].columns = ['pasos','% regpie'] plt.plot(historico[i]['pasos'],historico[i]['% regpie'], label='MaxDist = '+str(dist_max_a_pie[i])+' steps') plt.legend(loc='lower left') plt.xlabel('Different Walk Lengths', fontsize=14) plt.ylabel('Probability of Returning on Foot', fontsize=14) plt.title('Probability-of-Returning-on-Foot Curves for Maximum Distance Preferences', fontsize=16) plt.grid(True) plt.ylim(0,100) plt.show() # + id="ODl4cG2RApAs" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="0a07a48c-e80d-4ee5-89d6-5108bf30e079" historico[0].tail() # + [markdown] id="x1YoJrw19qGk" # ## Conclusion # + [markdown] id="XPXGjSPr9qGl" # As the limit on the number of return steps gets larger, there is less uncertainty in the probability of returning by bus. Conversely, if the limit is very small, the probabilities become fuzzier and it could be harder to determine which formula is hidden behind the system. Even so, the trends can still be identified. The randomness is not too dispersed across the set, and it follows a pattern that allows averages to be outlined even with random numbers. # This could change if, for example, the system is complicated with an additional complexity factor, such as the walker getting distracted. # + [markdown] id="EmDmZ8wr9qGQ" # --- # Developed by GestioDinámica. <br> # Author: <NAME> <br> # This example was adapted from the following source. Original video: "A Random Walk & Monte Carlo Simulation || Python Tutorial || Learn Python Programming", Socratica. Source: https://www.youtube.com/watch?v=BfS2H1y6tzQ
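# The experiment above can be condensed into a self-contained sketch. `prob_walk_home` below is a hypothetical helper (not part of the original notebook) that estimates the probability of finishing within `max_dist` steps of the origin:

```python
import random

def random_walk(n, rng):
    # take n unit steps, each in one of the four compass directions
    x = y = 0
    for _ in range(n):
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x += dx
        y += dy
    return x, y

def prob_walk_home(n_steps, max_dist, trials=2000, seed=0):
    # Monte Carlo estimate of P(|x| + |y| <= max_dist) after n_steps, in percent
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = random_walk(n_steps, rng)
        if abs(x) + abs(y) <= max_dist:
            hits += 1
    return 100.0 * hits / trials

# with a single step the walker is always exactly 1 step away
print(prob_walk_home(1, 5))   # → 100.0
print(prob_walk_home(2, 1))   # ~25: after 2 steps, only a return to the origin is within distance 1
```

# Seeding the generator makes the Monte Carlo estimate repeatable run-to-run.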
modpred__05.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Sample script using EEGNet to classify Event-Related Potential (ERP) EEG data # from a four-class classification task, using the sample dataset provided in # the MNE [1, 2] package: # https://martinos.org/mne/stable/manual/sample_dataset.html#ch-sample-data # # The four classes used from this dataset are: # LA: Left-ear auditory stimulation # RA: Right-ear auditory stimulation # LV: Left visual field stimulation # RV: Right visual field stimulation # # # The code to process, filter and epoch the data are originally from Alexandre # Barachant's PyRiemann [3] package, released under the BSD 3-clause. A copy of # the BSD 3-clause license has been provided together with this software to # comply with software licensing requirements. # # When you first run this script, MNE will download the dataset and prompt you # to confirm the download location (defaults to ~/mne_data). Follow the prompts # to continue. The dataset size is approx. 1.5GB download. # # For comparative purposes you can also compare EEGNet performance to using # Riemannian geometric approaches with xDAWN spatial filtering [4-8] using # PyRiemann (code provided below). # # [1] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, # <NAME>, <NAME>, MNE software for processing MEG and EEG data, # NeuroImage, Volume 86, 1 February 2014, Pages 446-460, ISSN 1053-8119. # # [2] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, # <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, MEG and EEG data # analysis with MNE-Python, Frontiers in Neuroscience, Volume 7, 2013. # # [3] https://github.com/alexandrebarachant/pyRiemann. # # [4] <NAME>, <NAME> ,"A Plug&Play P300 BCI Using Information Geometry" # arXiv:1409.0107. 
# # [5] <NAME>, <NAME>, <NAME> ,"A New generation of Brain-Computer # Interface Based on Riemannian Geometry", arXiv: 1310.8115. # # [6] <NAME> and <NAME>, "Channel selection procedure using riemannian # distance for BCI applications," in 2011 5th International IEEE/EMBS # Conference on Neural Engineering (NER), 2011, 348-351. # # [7] <NAME>, <NAME>, <NAME> and <NAME>, “Multiclass # Brain-Computer Interface Classification by Riemannian Geometry,” in IEEE # Transactions on Biomedical Engineering, vol. 59, no. 4, p. 920-928, 2012. # # [8] <NAME>, <NAME>, <NAME> and <NAME>, “Classification of # covariance matrices using a Riemannian-based kernel for BCI applications“, # in NeuroComputing, vol. 112, p. 172-178, 2013. # # # Portions of this project are works of the United States Government and are not # subject to domestic copyright protection under 17 USC Sec. 105. Those # portions are released world-wide under the terms of the Creative Commons Zero # 1.0 (CC0) license. # # Other portions of this project are subject to domestic copyright protection # under 17 USC Sec. 105. Those portions are licensed under the Apache 2.0 # license. The complete text of the license governing this material is in # the file labeled LICENSE.TXT that is a part of this project's official # distribution.
import sys sys.path.append('../../arl-eegmodels/') # + tags=[] import numpy as np # mne imports import mne from mne import io from mne.datasets import sample # EEGNet-specific imports from EEGModels import EEGNet from tensorflow.keras import utils as np_utils from tensorflow.keras.callbacks import ModelCheckpoint from tensorflow.keras import backend as K # PyRiemann imports from pyriemann.estimation import XdawnCovariances from pyriemann.tangentspace import TangentSpace from pyriemann.utils.viz import plot_confusion_matrix from sklearn.pipeline import make_pipeline from sklearn.linear_model import LogisticRegression # tools for plotting confusion matrices from matplotlib import pyplot as plt # - # while the default tensorflow ordering is 'channels_last' we set it here # to be explicit in case if the user has changed the default ordering K.set_image_data_format('channels_last') # + tags=[] ##################### Process, filter and epoch the data ###################### data_path = sample.data_path() # Set parameters and read data raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0., 1 event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4) # Setup for reading the raw data raw = io.Raw(raw_fname, preload=True, verbose=False) raw.filter(2, None, method='iir') # replace baselining with high-pass events = mne.read_events(event_fname) raw.info['bads'] = ['MEG 2443'] # set bad channels picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=None, preload=True, verbose=False) labels = epochs.events[:, -1] # extract raw data. 
scale by 1000 due to scaling sensitivity in deep learning X = epochs.get_data()*1000 # format is in (trials, channels, samples) y = labels kernels, chans, samples = 1, 60, 151 # take 50/25/25 percent of the data to train/validate/test X_train = X[0:144,] Y_train = y[0:144] X_validate = X[144:216,] Y_validate = y[144:216] X_test = X[216:,] Y_test = y[216:] # + ############################# EEGNet portion ################################## # convert labels to one-hot encodings. Y_train = np_utils.to_categorical(Y_train-1) Y_validate = np_utils.to_categorical(Y_validate-1) Y_test = np_utils.to_categorical(Y_test-1) # convert data to NHWC (trials, channels, samples, kernels) format. Data # contains 60 channels and 151 time-points. Set the number of kernels to 1. X_train = X_train.reshape(X_train.shape[0], chans, samples, kernels) X_validate = X_validate.reshape(X_validate.shape[0], chans, samples, kernels) X_test = X_test.reshape(X_test.shape[0], chans, samples, kernels) print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # configure the EEGNet-8,2,16 model with kernel length of 32 samples (other # model configurations may do better, but this is a good starting point) model = EEGNet(nb_classes = 4, Chans = chans, Samples = samples, dropoutRate = 0.5, kernLength = 32, F1 = 8, D = 2, F2 = 16, dropoutType = 'Dropout') # compile the model and set the optimizers model.compile(loss='categorical_crossentropy', optimizer='adam', metrics = ['accuracy']) # count number of parameters in the model numParams = model.count_params() # set a valid path for your system to record model checkpoints checkpointer = ModelCheckpoint(filepath='/tmp/checkpoint.h5', verbose=1, save_best_only=True) # + ############################################################################### # if the classification task was imbalanced (significantly more trials in one # class versus the others) you can assign a weight to each class during # 
optimization to balance it out. This data is approximately balanced so we # don't need to do this, but it is shown here for illustration/completeness. ############################################################################### # the syntax is {class_1:weight_1, class_2:weight_2,...}. Here just setting # the weights all to be 1 class_weights = {0:1, 1:1, 2:1, 3:1} # + ################################################################################ # fit the model. Due to very small sample sizes this can get # pretty noisy run-to-run, but most runs should be comparable to xDAWN + # Riemannian geometry classification (below) ################################################################################ fittedModel = model.fit(X_train, Y_train, batch_size = 16, epochs = 10,#300, verbose = 2, validation_data=(X_validate, Y_validate), callbacks=[checkpointer], class_weight = class_weights) # load optimal weights #model.load_weights('/tmp/checkpoint.h5') # + ############################################################################### # can alternatively use the weights provided in the repo. If so it should get # you 93% accuracy. Change the WEIGHTS_PATH variable to wherever it is on your # system. ############################################################################### # WEIGHTS_PATH = /path/to/EEGNet-8-2-weights.h5 # model.load_weights(WEIGHTS_PATH) # + ############################################################################### # make prediction on test set.
############################################################################### probs = model.predict(X_test) preds = probs.argmax(axis = -1) acc = np.mean(preds == Y_test.argmax(axis=-1)) print("Classification accuracy: %f " % (acc)) # + ############################# PyRiemann Portion ############################## # code is taken from PyRiemann's ERP sample script, which is decoding in # the tangent space with a logistic regression n_components = 2 # pick some components # set up sklearn pipeline clf = make_pipeline(XdawnCovariances(n_components), TangentSpace(metric='riemann'), LogisticRegression()) preds_rg = np.zeros(len(Y_test)) # reshape back to (trials, channels, samples) X_train = X_train.reshape(X_train.shape[0], chans, samples) X_test = X_test.reshape(X_test.shape[0], chans, samples) # train a classifier with xDAWN spatial filtering + Riemannian Geometry (RG) # labels need to be back in single-column format clf.fit(X_train, Y_train.argmax(axis = -1)) preds_rg = clf.predict(X_test) # Printing the results acc2 = np.mean(preds_rg == Y_test.argmax(axis = -1)) print("Classification accuracy: %f " % (acc2)) # plot the confusion matrices for both classifiers names = ['audio left', 'audio right', 'vis left', 'vis right'] plt.figure(0) plot_confusion_matrix(preds, Y_test.argmax(axis = -1), names, title = 'EEGNet-8,2') plt.figure(1) plot_confusion_matrix(preds_rg, Y_test.argmax(axis = -1), names, title = 'xDAWN + RG')
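# The label and shape handling in the cells above is easy to get wrong. A minimal numpy-only sketch of the same two transformations, one-hot encoding 1-based labels and appending the trailing kernel axis for NHWC (toy shapes here, not the MNE data):

```python
import numpy as np

def to_one_hot(labels):
    # labels are 1-based class ids (1..K), as in the MNE event codes above
    labels = np.asarray(labels) - 1
    return np.eye(labels.max() + 1)[labels]

def to_nhwc(X):
    # (trials, channels, samples) -> (trials, channels, samples, 1)
    return X.reshape(*X.shape, 1)

y = to_one_hot([1, 3, 2, 4])
X = np.zeros((4, 60, 151))
print(y.shape, to_nhwc(X).shape)  # (4, 4) (4, 60, 151, 1)
```

# Each row of `y` has exactly one 1, matching `np_utils.to_categorical(labels - 1)`.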
examples/ERP.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def code_cesar(message, k): # Caesar cipher: uppercase the message and shift each letter k positions forward message = message.upper() words = [] for text in message.split(): shifted = '' for i in text: number = (ord(i) - 65 + k) % 26 # 65 == ord('A') shifted += chr(65 + number) words.append(shifted) return ' '.join(words) code_cesar('Veni vidi vici', 3) def code_atbash(message): # Atbash over the Russian alphabet а..я: mirror each letter, keep spaces message = message.lower() stroka = '' for i in message: if i == ' ': stroka += i else: stroka += chr(1072 + 1103 - ord(i)) # ord('а') == 1072, ord('я') == 1103 return stroka code_atbash('Привет нашему миру')
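# Decryption is just encryption with the opposite shift. A self-contained sketch (the `caesar` helper below is a hypothetical re-implementation, not the lab's `code_cesar`; unlike the lab's function it leaves non-letter characters untouched):

```python
def caesar(message, k):
    # shift A-Z by k positions (k may be negative), keep everything else as-is
    out = []
    for ch in message.upper():
        if 'A' <= ch <= 'Z':
            out.append(chr(65 + (ord(ch) - 65 + k) % 26))
        else:
            out.append(ch)
    return ''.join(out)

ct = caesar('Veni vidi vici', 3)
print(ct)              # → YHQL YLGL YLFL
print(caesar(ct, -3))  # → VENI VIDI VICI
```

# The round trip works because `(x + k - k) % 26 == x % 26` for any shift k.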
lab01/lab01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import pickle import gensim from collections import Counter, defaultdict # + import sys import os def add_sys_path(p): p = os.path.abspath(p) if p not in sys.path: sys.path.append(p) add_sys_path('../') # - from evaluation import evaluate import data_split from evaluation.evaluate import read_dataset import json from tqdm.auto import tqdm, trange from sklearn.neighbors import KDTree # # Load w2v models import my_knn from importlib import reload reload(my_knn) w2v = gensim.models.KeyedVectors.load_word2vec_format( #load_word2vec_format '../baselines/models/model.bin', binary=True, unicode_errors='ignore', ) add_pos = True n = 300 w2v_embedder = my_knn.W2VWrapper(w2v, n=n, add_pos=add_pos) w2v_embedder_pos = my_knn.W2VWrapper(w2v, n=n, add_pos=add_pos, pos_weights={'NOUN': 1.0, 'PREP': 0.1}, default_weight=0.5) w2v_embedder_pos_vb = my_knn.W2VWrapper(w2v, n=n, add_pos=add_pos, pos_weights={'VERB': 1.0, 'PREP': 0.1}, default_weight=0.5) # # Evaluation # # ## Nouns def w2v_scorer(text1, text2): return np.dot(w2v_embedder.get_text_vec(text1), w2v_embedder.get_text_vec(text2)) ** 5 public_test_verbs = pd.read_csv('../../datasets/ru/nouns_private_no_labels.tsv', header=None) public_test_verbs.columns = ['text'] full_syn_storage, full_rel_storage, full_rel_df = my_knn.prepare_storages( synsets_filename='../../datasets/ruwordnet/synsets.N.xml', relations_filename='../../datasets/ruwordnet/synset_relations.N.xml', forbidden_words=set() ) full_w2v_vecs = np.stack([w2v_embedder(t) for t in tqdm(full_syn_storage.texts_long) ]) full_w2v_tree = KDTree(full_w2v_vecs) full_w2v_vecs_pos = np.stack([w2v_embedder_pos(t) for t in tqdm(full_syn_storage.texts_long) ]) full_w2v_tree_pos = KDTree(full_w2v_vecs_pos) public_test_hypos = 
{ txt: my_knn.hypotheses_knn( txt, index=full_w2v_tree_pos, text2vec=w2v_embedder_pos, synset_storage=full_syn_storage, rel_storage=full_rel_storage, decay=3, k=100, grand_mult=0.5, neighbor_scorer=w2v_scorer, ) for txt in tqdm(public_test_verbs.text) } sub = my_knn.dict2submission(public_test_hypos, full_syn_storage.id2synset) sub sub.to_csv('../baselines/predictions/nouns_private_dale.tsv', sep='\t', encoding='utf-8', header=None, index=None) sub.head(15) # ## Verbs public_test_verbs = pd.read_csv('../../datasets/ru/verbs_private_no_labels.tsv', header=None) public_test_verbs.columns = ['text'] full_syn_storage_v, full_rel_storage_v, full_rel_df_v = my_knn.prepare_storages( synsets_filename='../../datasets/ruwordnet/synsets.V.xml', relations_filename='../../datasets/ruwordnet/synset_relations.V.xml', forbidden_words=set() ) full_w2v_vecs_pos_v = np.stack([w2v_embedder_pos_vb(t) for t in tqdm(full_syn_storage_v.texts_long) ]) full_w2v_tree_pos_v = KDTree(full_w2v_vecs_pos_v) public_test_hypos = { txt: my_knn.hypotheses_knn( txt, #index=full_ft_tree, text2vec=ft_embedder, index=full_w2v_tree_pos_v, text2vec=w2v_embedder_pos_vb, synset_storage=full_syn_storage_v, rel_storage=full_rel_storage_v, decay=3, k=100, grand_mult=0.5, neighbor_scorer=w2v_scorer, ) for txt in tqdm(public_test_verbs.text) } sub = my_knn.dict2submission(public_test_hypos, full_syn_storage_v.id2synset) sub sub.to_csv('../baselines/predictions/verbs_private_dale.tsv', sep='\t', encoding='utf-8', header=None, index=None) sub.head(15)
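# The retrieval above builds a KDTree over embedding vectors, and with normalized vectors ranking by dot product coincides with ranking by cosine similarity. A brute-force numpy sketch of the same idea, with toy vectors standing in for the w2v embeddings:

```python
import numpy as np

def cosine_knn(query, matrix, k=3):
    # normalize rows, then a dot product against the normalized query
    # ranks candidates exactly as cosine similarity would
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = m @ q
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

vecs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx, sims = cosine_knn(np.array([2.0, 0.1]), vecs, k=2)
print(idx)  # nearest is row 0, then row 2
```

# Raising the similarity to a power, as `w2v_scorer` does with `** 5`, keeps the ranking but sharpens the weight given to close neighbors.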
code/dale/dale_ru.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # New stats format in acb.com # # Playing with the new stats format in acb.com (launched in October 2019) import pandas as pd season = 2019 urls = [ 'http://www.acb.com/estadisticas-individuales/{}/temporada_id/{}/tipo_id/0'.format(x, season) for x in [ 'valoracion', 'puntos', 'rebotes', 'asistencias', 'robos', 'tapones', 'mas-menos', 'minutos', 'tiros3', 'tiros3-porciento', 'tiros2', 'tiros2-porciento', 'tiros1', 'tiros1-porciento', 'rebotes-defensivos', 'rebotes-ofensivos', 'faltas-recibidas', 'faltas-cometidas', 'mates' ] ] data = pd.concat([pd.read_html(url)[0].iloc[:, 1:] for url in urls], axis=0).drop_duplicates() data.columns = [ 'name', 'games', 'minutes', 'points', '3p_converted', '3p_attempted', '3p_percentage', '2p_converted', '2p_attempted', '2p_percentage', '1p_converted', '1p_attempted', '1p_percentage', 'offensive_rebounds', 'deffensive_rebounds', 'rebounds', 'assists', 'steals', 'turnovers', 'blocks', 'received_blocks', 'dunks', 'faults', 'received_faults', 'plus_minus', 'pir' ] data = data.set_index('name') data.describe() # ## PIR and plus-minus data[['pir', 'plus_minus']].sum(axis=1).sort_values(ascending=False).head(18) # ## Offensive players ( data[ ['points', 'offensive_rebounds', 'assists', 'received_faults', '3p_converted', '2p_converted', '1p_converted', 'plus_minus'] ].sum(axis=1) - data[ ['3p_attempted', '2p_attempted', '1p_attempted', 'turnovers', 'received_blocks' ] ].sum(axis=1) ).sort_values(ascending=False).head(18) # ## Defensive players ( data[ ['deffensive_rebounds', 'steals', 'blocks', 'plus_minus'] ].sum(axis=1) - data['faults'] ).sort_values(ascending=False).head(18) # ## Team players (data['plus_minus'] + data['minutes'] / 2 - data['pir']).sort_values(ascending=False).head(18) # ## Assists per turnover
((data['assists'] + 1) / (data['turnovers'] + 1)).sort_values(ascending=False).head(18) # ## Up in the air ( data['dunks'] + data['blocks'] - data['received_blocks'] + data['2p_converted'] - data['2p_attempted'] ).sort_values(ascending=False).head(18) # ## Greedy ( data[['3p_attempted', '2p_attempted', 'turnovers', 'received_blocks']].sum(axis=1) - data[['assists','plus_minus']].sum(axis=1) ).sort_values(ascending=False).head(18)
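# Every ranking in this notebook is a signed sum of per-player columns. That pattern can be factored into a single helper; a sketch with hypothetical numbers rather than acb.com data:

```python
import pandas as pd

def composite_rank(df, plus, minus=()):
    # sum the 'plus' columns, subtract the 'minus' columns, sort descending
    score = df[list(plus)].sum(axis=1)
    if minus:
        score = score - df[list(minus)].sum(axis=1)
    return score.sort_values(ascending=False)

data = pd.DataFrame(
    {'points': [20, 10], 'assists': [5, 9], 'turnovers': [4, 1]},
    index=['player_a', 'player_b'],
)
ranking = composite_rank(data, plus=('points', 'assists'), minus=('turnovers',))
print(ranking)  # player_a: 21, player_b: 18
```

# The "Offensive players" cell above is then `composite_rank(data, plus=(...8 columns...), minus=(...5 columns...))`.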
ACBstats/acb_stats.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Benchmarking optical flow algorithms for medical BGE # Purely differential approach # + # # !python hello_world_opencv.py --image img/peppers.png # - # ## Lucas-Kanade methodology # # + #from IPython.display import display, Math, Latex# #display(Math(r'\sqrt{a^2 + b^2}')) # - # %run -i 'sparse_OF.py' # %run -i 'LK_pyramid_OF.py' # ## Horn-Schunck methodology # %run -i 'Horn_Schunck.py' # ## Farneback methodology # %run -i 'Farneback_OF.py' # %run -i 'vid_processing.py'
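# The benchmarked scripts are not inlined in this notebook. For reference, the differential idea behind Lucas-Kanade fits in a few lines of numpy: solve Ix*u + Iy*v = -It in the least-squares sense. This is a didactic sketch (one global window, no pyramid), not the implementation in sparse_OF.py:

```python
import numpy as np

def lucas_kanade_global(I1, I2):
    # least-squares solution of Ix*u + Iy*v = -It pooled over the whole frame
    Iy, Ix = np.gradient(I1)      # np.gradient returns d/d(rows), d/d(cols)
    It = I2 - I1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# synthetic pair of frames: a smooth blob shifted one pixel along x
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 6.0 ** 2))
I2 = np.roll(I1, 1, axis=1)       # shift +1 pixel in x
u, v = lucas_kanade_global(I1, I2)
print(round(u, 2), round(v, 2))   # u close to 1, v close to 0
```

# Real implementations solve this per window (sparse LK) or add a smoothness term over the whole field (Horn-Schunck).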
sandbox/fluxo_optico.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 0.0. IMPORTS # + import inflection import math import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from IPython.core.display import HTML from IPython.display import Image # - # ## 0.1. Helper Functions # + code_folding=[0, 4, 19, 26, 31, 36] def load_csv(path): df = pd.read_csv(path, low_memory=False) return df def rename_columns(df, old_columns): snakecase = lambda x: inflection.underscore(x) cols_new = list(map(snakecase, old_columns)) print(f"Old columns: {df.columns.to_list()}") # Rename df.columns = cols_new print(f"\nNew columns: {df.columns.to_list()}") print('\n', df.columns) return df def show_dimensions(df): # use the argument, not the global df1 print(f"Number of Rows: {df.shape[0]}") print(f"Number of Columns: {df.shape[1]}") print(f"Shape: {df.shape}") return None def show_data_types(df): print(df.dtypes) return None def check_na(df): print(df.isna().sum()) return None def show_descriptive_statistical(df): # Central Tendency - mean, median ct1 = pd.DataFrame(df.apply(np.mean)).T ct2 = pd.DataFrame(df.apply(np.median)).T # Dispersion - std, min, max, range, skew, kurtosis d1 = pd.DataFrame(df.apply(np.std)).T d2 = pd.DataFrame(df.apply(min)).T d3 = pd.DataFrame(df.apply(max)).T d4 = pd.DataFrame(df.apply(lambda x: x.max() - x.min())).T d5 = pd.DataFrame(df.apply(lambda x: x.skew())).T d6 = pd.DataFrame(df.apply(lambda x: x.kurtosis())).T m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index() m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis'] print(m) def jupyter_settings(): # %matplotlib inline # %pylab inline plt.style.use( 'ggplot') plt.rcParams['figure.figsize'] = [24, 9] plt.rcParams['font.size'] = 24 display( HTML( '<style>.container { width:100% !important;
}</style>') ) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.set_option( 'display.expand_frame_repr', False ) sns.set() # - jupyter_settings() # + [markdown] heading_collapsed=true # ## 0.2. Path Definition # + hidden=true # path home_path = 'C:\\Users\\sindolfo\\rossmann-stores-sales\\' raw_data_path = 'data\\raw\\' interim_data_path = 'data\\interim\\' # + [markdown] heading_collapsed=true # ## 0.3. Loading Data # + hidden=true ## Historical data including Sales df_sales_raw = load_csv(home_path+raw_data_path+'train.csv') ## Supplemental information about the stores df_store_raw = load_csv(home_path+raw_data_path+'store.csv') # Merge df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store') # - # # 1.0. DATA DESCRIPTION df1 = df_raw.copy() df1.to_csv(home_path+interim_data_path+'df1.csv') # ### Data fields # # Most of the fields are self-explanatory. The following are descriptions for those that aren't. # # - **Id** - an Id that represents a (Store, Date) duple within the test set # - **Store** - a unique Id for each store # - **Sales** - the turnover for any given day (this is what you are predicting) # - **Customers** - the number of customers on a given day # - **Open** - an indicator for whether the store was open: 0 = closed, 1 = open # - **StateHoliday** - indicates a state holiday. Normally # all stores, with few exceptions, are closed on state holidays. Note that all schools are closed on public holidays and weekends. 
a = public # holiday, b = Easter holiday, c = Christmas, 0 = None # - **SchoolHoliday** - indicates if the (Store, Date) was affected by the closure of public schools # - **StoreType** - differentiates between 4 different store models: a, b, c, d # - **Assortment** - describes an assortment level: a = basic, b = extra, c = extended # - **CompetitionDistance** - distance in meters to the nearest competitor store # - **CompetitionOpenSince[Month/Year]** - gives the approximate year and month of the time the nearest competitor was opened # - **Promo** - indicates whether a store is running a promo on that day # - **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating # - **Promo2Since[Year/Week]** - describes the year and calendar week when the store started participating in Promo2 # - **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. # E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store # + [markdown] heading_collapsed=true # ## 1.1. Rename Columns # + hidden=true cols_old = [ 'Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval' ] df1 = rename_columns(df1, cols_old) # + [markdown] heading_collapsed=true # ## 1.2. Data Dimensions # + hidden=true show_dimensions(df1) # + [markdown] heading_collapsed=true # ## 1.3. Data Types # + hidden=true show_data_types(df1) ## Date is an object type. This is wrong. Other changes are made in the section "Type Changes". df1['date'] = pd.to_datetime(df1['date']) # + [markdown] heading_collapsed=true # ## 1.4.
Check NA # + hidden=true check_na(df1) ## Columns with NA vales ## competition_distance 2642 ## competition_open_since_month 323348 ## competition_open_since_year 323348 ## promo2_since_week 508031 ## promo2_since_year 508031 ## promo_interval 508031 # + [markdown] heading_collapsed=true # ## 1.5. Fillout NA # + hidden=true # competition_distance: distance in meters to the nearest competitor store # # Assumption: if there is a row that is NA in this column, # it is because there is no close competitor. # The way I used to represent this is to put # a number much larger than the maximum value # of the competition_distance variable. # # The number is 250000. df1['competition_distance'] = df1['competition_distance'].apply(lambda x : 250000 if math.isnan(x) else x) # competition_open_since_month: # gives the approximate year and month of the # time the nearest competitor was opened # # Assumption: I'm going to keep this variable because # it's important to have something that expresses # the feeling of "since it happened" or "until when". # # If it's NA I'll copy the month of sale of that line. 
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1) #competition_open_since_year # The same assumption from competition_open_since_month df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1) # promo2_since_week: # describes the year and calendar week when the store started participating in Promo2 # # The same assumption from competition_open_since_month df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1) # promo2_since_year: # describes the year and calendar week when the store started participating in Promo2 # (fill with the year of the sale date, not the week) df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1) # promo_interval month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug',9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'} df1['promo_interval'].fillna(0, inplace=True) df1['month_map'] = df1['date'].dt.month.map(month_map) df1['is_promo'] = df1[['promo_interval', 'month_map']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1) # + [markdown] heading_collapsed=true # ## 1.6. Type Changes # + hidden=true df1['competition_open_since_month'] = df1['competition_open_since_month'].astype('int64') df1['competition_open_since_year'] = df1['competition_open_since_year'].astype('int64') df1['promo2_since_week'] = df1['promo2_since_week'].astype('int64') df1['promo2_since_year'] = df1['promo2_since_year'].astype('int64') # - # ## 1.7.
Descriptive Statistical num_attributes = df1.select_dtypes(include=['int64', 'float64']) cat_attributes = df1.select_dtypes(exclude=['int64', 'float64', 'datetime64[ns]']) # ### 1.7.1 Numerical Attributes show_descriptive_statistical(num_attributes) sns.displot(df1['sales']) # ### 1.7.2 Categorical Attributes cat_attributes.apply(lambda x: x.unique().shape[0]) # + aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)] plt.subplot(1, 3, 1) sns.boxplot(x='state_holiday', y='sales', data=aux1) plt.subplot(1, 3, 2) sns.boxplot(x='store_type', y='sales', data=aux1) plt.subplot(1, 3, 3) sns.boxplot(x='assortment', y='sales', data=aux1)
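# A side note on performance: the row-wise `apply` lambdas in section 1.5 scan a million rows in Python. The same month-from-date fill can be expressed with a vectorized `fillna`; a sketch on a hypothetical two-row frame:

```python
import pandas as pd

df = pd.DataFrame({
    'date': pd.to_datetime(['2015-07-31', '2015-06-15']),
    'competition_open_since_month': [9.0, None],
})

# vectorized equivalent of the row-wise lambda: NA -> month of the sale date
df['competition_open_since_month'] = (
    df['competition_open_since_month']
    .fillna(df['date'].dt.month)
    .astype('int64')
)
print(df['competition_open_since_month'].tolist())  # → [9, 6]
```

# `fillna` with a Series aligns on the index, so each missing value is filled from its own row's date.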
notebooks/c0.1-sg-data-description.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import IPython.display from ipywidgets import interact, interactive, fixed import numpy as np import matplotlib.pyplot as plt import copy from scipy.io import wavfile from scipy.signal import butter, lfilter import scipy.ndimage # - theta=[] with open("data/myo.data") as f: for line in f: theta.append(float(line)) plt.plot(theta) # + #pip3 install pyserial import serial with serial.Serial('/dev/cu.usbmodem142301', 19200, timeout=5) as s: for line in s: print(line) # -
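`butter` and `lfilter` are imported above but not used yet. Before designing a filter, a common first look at an EMG-like signal such as `theta` is full-wave rectification plus a moving-average envelope; a minimal numpy sketch (the window length of 3 is an arbitrary choice for the demo):

```python
import numpy as np

def envelope(signal, window=3):
    """Full-wave rectify, then smooth with a centered moving average."""
    rectified = np.abs(np.asarray(signal, dtype=float))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode='same')

demo = envelope([0.0, 1.0, -1.0, 1.0, -1.0, 0.0], window=3)
print(demo.shape)  # -> (6,)
```

On the real recording this would be `envelope(theta, window=...)`, with the window tuned to the sampling rate.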
jupyter/myo-sigproc.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Final Machine Learning Pipeline # # The pipeline features # # - open source classes # - in house package classes # - only uses the selected features # - we score new data # # Reproducibility: Setting the seed # # With the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**. # + # data manipulation and plotting import pandas as pd import numpy as np import matplotlib.pyplot as plt # for saving the pipeline import joblib # from Scikit-learn from sklearn.linear_model import Lasso from sklearn.metrics import mean_squared_error, r2_score from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.preprocessing import MinMaxScaler, Binarizer # from feature-engine from feature_engine.imputation import ( AddMissingIndicator, MeanMedianImputer, CategoricalImputer, ) from feature_engine.encoding import ( RareLabelEncoder, OrdinalEncoder, ) from feature_engine.transformation import LogTransformer from feature_engine.selection import DropFeatures from feature_engine.wrappers import SklearnTransformerWrapper import preprocessors as pp # + # load dataset data = pd.read_csv('train.csv') # rows and columns of the data print(data.shape) # visualise the dataset data.head() # + # Cast MSSubClass as object data['MSSubClass'] = data['MSSubClass'].astype('O') # - # # Separate dataset into train and test # # It is important to separate our data intro training and testing set. # # When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. 
This is to avoid over-fitting. # # Our feature engineering techniques will learn: # # - mean # - mode # - exponents for the yeo-johnson # - category frequency # - and category to number mappings # # from the train set. # # **Separating the data into train and test involves randomness, therefore, we need to set the seed.** # + # Let's separate into train and test set # Remember to set the seed (random_state for this sklearn function) X_train, X_test, y_train, y_test = train_test_split( data.drop(['Id', 'SalePrice'], axis=1), # predictive variables data['SalePrice'], # target test_size=0.1, # portion of dataset to allocate to test set random_state=0, # we are setting the seed here ) X_train.shape, X_test.shape # - # # Target # # We apply the logarithm y_train = np.log(y_train) y_test = np.log(y_test) # # Configuration # + # categorical variables with NA in train set CATEGORICAL_VARS_WITH_NA_FREQUENT = ['BsmtQual', 'BsmtExposure', 'BsmtFinType1', 'GarageFinish'] CATEGORICAL_VARS_WITH_NA_MISSING = ['FireplaceQu'] # numerical variables with NA in train set NUMERICAL_VARS_WITH_NA = ['LotFrontage'] TEMPORAL_VARS = ['YearRemodAdd'] REF_VAR = "YrSold" # this variable is to calculate the temporal variable, # can be dropped afterwards DROP_FEATURES = ["YrSold"] # variables to log transform NUMERICALS_LOG_VARS = ["LotFrontage", "1stFlrSF", "GrLivArea"] # variables to binarize BINARIZE_VARS = ['ScreenPorch'] # variables to map QUAL_VARS = ['ExterQual', 'BsmtQual', 'HeatingQC', 'KitchenQual', 'FireplaceQu'] EXPOSURE_VARS = ['BsmtExposure'] FINISH_VARS = ['BsmtFinType1'] GARAGE_VARS = ['GarageFinish'] FENCE_VARS = ['Fence'] # categorical variables to encode CATEGORICAL_VARS = ['MSSubClass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'RoofStyle', 'Exterior1st', 'Foundation', 'CentralAir', 'Functional', 'PavedDrive', 'SaleCondition'] # variable mappings QUAL_MAPPINGS = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0} EXPOSURE_MAPPINGS = 
{'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}

FINISH_MAPPINGS = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}

GARAGE_MAPPINGS = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}

# the selected variables
FEATURES = [
    'MSSubClass', 'MSZoning', 'LotFrontage', 'LotShape', 'LandContour',
    'LotConfig', 'Neighborhood', 'OverallQual', 'OverallCond', 'YearRemodAdd',
    'RoofStyle', 'Exterior1st', 'ExterQual', 'Foundation', 'BsmtQual',
    'BsmtExposure', 'BsmtFinType1', 'HeatingQC', 'CentralAir', '1stFlrSF',
    '2ndFlrSF', 'GrLivArea', 'BsmtFullBath', 'HalfBath', 'KitchenQual',
    'TotRmsAbvGrd', 'Functional', 'Fireplaces', 'FireplaceQu', 'GarageFinish',
    'GarageCars', 'GarageArea', 'PavedDrive', 'WoodDeckSF', 'ScreenPorch',
    'SaleCondition',
    # this one is only to calculate temporal variable:
    "YrSold",
]

# +
X_train = X_train[FEATURES]
X_test = X_test[FEATURES]

X_train.shape, X_test.shape
# -

# # Pipeline - End-to-end
#
# This pipeline has three fewer steps (they are commented out below), so it is also simpler:
#
# - the yeo-johnson transformation
# - 1 of the mappings
# - the selection procedure
#
# This makes the pipeline faster and easier to deploy.
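`pp.TemporalVariableTransformer` and `pp.Mapper` used in the pipeline below come from the in-house `preprocessors` module. As a rough sketch of what such a mapper typically looks like, here is a minimal scikit-learn-compatible version with the same `variables=` / `mappings=` call signature (an illustration under assumptions, not the actual in-house class):

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class Mapper(BaseEstimator, TransformerMixin):
    """Replace categorical labels with ordinal numbers from a fixed mapping."""

    def __init__(self, variables, mappings):
        self.variables = variables
        self.mappings = mappings

    def fit(self, X, y=None):
        # nothing is learned from data: the mapping is fixed up front
        return self

    def transform(self, X):
        X = X.copy()  # never mutate the caller's frame
        for var in self.variables:
            X[var] = X[var].map(self.mappings)
        return X

df = pd.DataFrame({'ExterQual': ['TA', 'Gd', 'Ex']})
mapper = Mapper(variables=['ExterQual'],
                mappings={'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5})
mapped = mapper.fit_transform(df)['ExterQual'].tolist()
print(mapped)  # -> [3, 4, 5]
```

Because it subclasses `BaseEstimator`/`TransformerMixin`, such a class slots directly into an sklearn `Pipeline` step.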
# + # set up the pipeline price_pipe = Pipeline([ # ===== IMPUTATION ===== # impute categorical variables with string missing ('missing_imputation', CategoricalImputer( imputation_method='missing', variables=CATEGORICAL_VARS_WITH_NA_MISSING)), ('frequent_imputation', CategoricalImputer( imputation_method='frequent', variables=CATEGORICAL_VARS_WITH_NA_FREQUENT)), # add missing indicator ('missing_indicator', AddMissingIndicator(variables=NUMERICAL_VARS_WITH_NA)), # impute numerical variables with the mean ('mean_imputation', MeanMedianImputer( imputation_method='mean', variables=NUMERICAL_VARS_WITH_NA )), # == TEMPORAL VARIABLES ==== ('elapsed_time', pp.TemporalVariableTransformer( variables=TEMPORAL_VARS, reference_variable=REF_VAR)), ('drop_features', DropFeatures(features_to_drop=[REF_VAR])), # ==== VARIABLE TRANSFORMATION ===== ('log', LogTransformer(variables=NUMERICALS_LOG_VARS)), # ('yeojohnson', YeoJohnsonTransformer(variables=NUMERICALS_YEO_VARS)), ('binarizer', SklearnTransformerWrapper( transformer=Binarizer(threshold=0), variables=BINARIZE_VARS)), # === mappers === ('mapper_qual', pp.Mapper( variables=QUAL_VARS, mappings=QUAL_MAPPINGS)), ('mapper_exposure', pp.Mapper( variables=EXPOSURE_VARS, mappings=EXPOSURE_MAPPINGS)), ('mapper_finish', pp.Mapper( variables=FINISH_VARS, mappings=FINISH_MAPPINGS)), ('mapper_garage', pp.Mapper( variables=GARAGE_VARS, mappings=GARAGE_MAPPINGS)), # ('mapper_fence', pp.Mapper( # variables=FENCE_VARS, mappings=FENCE_MAPPINGS)), # == CATEGORICAL ENCODING ('rare_label_encoder', RareLabelEncoder( tol=0.01, n_categories=1, variables=CATEGORICAL_VARS )), # encode categorical and discrete variables using the target mean ('categorical_encoder', OrdinalEncoder( encoding_method='ordered', variables=CATEGORICAL_VARS)), ('scaler', MinMaxScaler()), # ('selector', SelectFromModel(Lasso(alpha=0.001, random_state=0))), ('Lasso', Lasso(alpha=0.001, random_state=0)), ]) # - # train the pipeline price_pipe.fit(X_train, y_train) # + # 
evaluate the model: # ==================== # make predictions for train set pred = price_pipe.predict(X_train) # determine mse, rmse and r2 print('train mse: {}'.format(int( mean_squared_error(np.exp(y_train), np.exp(pred))))) print('train rmse: {}'.format(int( mean_squared_error(np.exp(y_train), np.exp(pred), squared=False)))) print('train r2: {}'.format( r2_score(np.exp(y_train), np.exp(pred)))) print() # make predictions for test set pred = price_pipe.predict(X_test) # determine mse, rmse and r2 print('test mse: {}'.format(int( mean_squared_error(np.exp(y_test), np.exp(pred))))) print('test rmse: {}'.format(int( mean_squared_error(np.exp(y_test), np.exp(pred), squared=False)))) print('test r2: {}'.format( r2_score(np.exp(y_test), np.exp(pred)))) print() print('Average house price: ', int(np.exp(y_train).median())) # - # Identical results to when we did all the engineering manually. # let's evaluate our predictions respect to the real sale price plt.scatter(y_test, price_pipe.predict(X_test)) plt.xlabel('True House Price') plt.ylabel('Predicted House Price') plt.title('Evaluation of Lasso Predictions') # + # let's evaluate the distribution of the errors: # they should be fairly normally distributed y_test.reset_index(drop=True, inplace=True) preds = pd.Series(price_pipe.predict(X_test)) errors = y_test - preds errors.hist(bins=30) plt.show() # + # now let's save the scaler joblib.dump(price_pipe, 'price_pipe.joblib') # - # # Score new data # + # load the unseen / new dataset data = pd.read_csv('test.csv') data.drop('Id', axis=1, inplace=True) data['MSSubClass'] = data['MSSubClass'].astype('O') data = data[FEATURES] print(data.shape) # + new_vars_with_na = [ var for var in FEATURES if var not in CATEGORICAL_VARS_WITH_NA_FREQUENT + CATEGORICAL_VARS_WITH_NA_MISSING + NUMERICAL_VARS_WITH_NA and data[var].isnull().sum() > 0] new_vars_with_na # - data[new_vars_with_na].head() data[new_vars_with_na].isnull().mean() # + data.dropna(subset=new_vars_with_na, inplace=True) 
print(data.shape) # - new_preds = price_pipe.predict(data) # let's plot the predicted sale prices pd.Series(np.exp(new_preds)).hist(bins=50) # # Conclusion # # Now we are ready for deployment!!!
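The pipeline above is persisted with `joblib.dump` and, in the scoring environment, reloaded with `joblib.load`. The same save / reload / predict round-trip is sketched here with the standard-library `pickle` (which joblib wraps) and a stand-in model, so it runs without the training data; `StubModel` and the file name are hypothetical:

```python
import os
import pickle

class StubModel:
    """Stand-in for the fitted price_pipe (demo only)."""
    def predict(self, X):
        return [x * 2 for x in X]

path = 'stub_pipe.pkl'

# save the fitted object to disk
with open(path, 'wb') as f:
    pickle.dump(StubModel(), f)

# later, in the scoring environment: reload and predict
with open(path, 'rb') as f:
    reloaded = pickle.load(f)

preds = reloaded.predict([1, 2, 3])
os.remove(path)
print(preds)  # -> [2, 4, 6]
```

With joblib the calls are `joblib.dump(obj, path)` and `joblib.load(path)`; joblib is preferred for sklearn objects because it handles large numpy arrays efficiently.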
section-04-research-and-development/08-final-machine-learning-pipeline.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: NLP # language: python # name: nlp # --- # # Sentiment Identification # # ## BACKGROUND # # A large multinational corporation is seeking to automatically identify the sentiment that their customer base talks # about on social media. They would like to expand this capability into multiple languages. Many 3rd party tools exist for sentiment analysis, however, they need help with under-resourced languages. # # ## GOAL # # Train a sentiment classifier (Positive, Negative, Neutral) on a corpus of the provided documents. Your goal is to # maximize accuracy. There is special interest in being able to accurately detect negative sentiment. The training data # includes documents from a wide variety of sources, not merely social media, and some of it may be inconsistently # labeled. Please describe the business outcomes in your work sample including how data limitations impact your results # and how these limitations could be addressed in a larger project. 
#
# ## DATA
# Link to data: http://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set

import pandas as pd

pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_colwidth', None)

# ## Data Exploration

# +
import emoji
import functools
import operator
import re
import os
import string

import numpy as np
import matplotlib.pyplot as plt
import nltk
import spacy
import tensorflow as tf
from tensorflow import keras

from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import plot_confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline

DATA_DIR = os.path.abspath('../data/raw')
# -

data_path = os.path.join(DATA_DIR, 'Roman Urdu Dataset.csv')
raw_df = pd.read_csv(data_path, skipinitialspace=True, names=['comment', 'sentiment', 'nan'], encoding='utf-8')
raw_df.tail()

# Print a concise summary of the DataFrame
raw_df.info()

# Check missing data
raw_df.isnull().sum()

# For each column of the dataframe, we want to know the number of unique values and what some of those values look like.
for column in raw_df.columns: unique_attribute = (raw_df[column].unique()) print('{0:20s} {1:5d}\t'.format(column, len(unique_attribute)), unique_attribute[0:10]) # ## Initial Data Preprocessing # -- Drop the NaN column # -- Replace "Neative" - > "Negative" # + cleaned_df = raw_df.copy() cleaned_df.drop('nan',axis=1,inplace=True) cleaned_df.dropna(axis=0, subset = ['comment'], inplace=True) cleaned_df.replace(to_replace='Neative', value='Negative', inplace=True) cleaned_df.dropna(subset=['sentiment'], inplace=True) cleaned_df.head(5) # - # ## Examine the class label imbalance print(f'There are total {cleaned_df.shape[0]} comments') print(f'There are {cleaned_df[cleaned_df["sentiment"] == "Positive"].shape[0]} Positive comments') print(f'There are {cleaned_df[cleaned_df["sentiment"] == "Neutral"].shape[0]} Neutral comments') print(f'There are {cleaned_df[cleaned_df["sentiment"] == "Negative"].shape[0]} Negative comments') # # Data Preprocessing: # # ### 1. Encode the Labels # # ### 2. Tokenization: # -- Applied lower case for each token # -- remove 0-9 numeric # -- remove the punctuation # -- (TO DO) Remove stop words # ### 3. 
Train, Val, Test split # # # + # Encode the output labels: # Negative -> 0 # Neutral -> 1 # Positive -> 2 le = LabelEncoder() le.fit(cleaned_df['sentiment']) cleaned_df['sentiment']= le.transform(cleaned_df['sentiment']) # + # tokenize for a single document def tokenizer(doc): """ Tokenize a single document""" tokens = [word.lower() for word in nltk.word_tokenize(doc)] tokens = [re.sub(r'[0-9]', '', word) for word in tokens] tokens = [re.sub(r'['+string.punctuation+']', '', word) for word in tokens] tokens = ' '.join(tokens) em_split_emoji = emoji.get_emoji_regexp().split(tokens) em_split_whitespace = [substr.split() for substr in em_split_emoji] em_split = functools.reduce(operator.concat, em_split_whitespace) tokens = ' '.join(em_split) return tokens cleaned_df['comment'] = cleaned_df['comment'].apply(lambda x: tokenizer(x)) # + train_df, test_df = train_test_split(cleaned_df, test_size=0.2,random_state=40) train_labels = train_df['sentiment'] test_labels = test_df['sentiment'] train_features = train_df['comment'] test_features = test_df['comment'] # - # # Gridsearch Pipeline: LogisticRegression # # ### TF-IDF # from typing import Any, List, Tuple def vectorize(train_texts: List[str], train_labels, test_texts: List[str]) -> Tuple[Any, Any]: """ Convert the document into word n-grams and vectorize it :param train_texts: of training texts :param train_labels: An array of labels from the training dataset :param test_texts: List of test texts :return: A tuple of vectorize training_text and vectorize test texts """ kwargs = { 'ngram_range': (1, 2), 'analyzer': 'word', 'min_df': MIN_DOCUMENT_FREQUENCY } # Use TfidfVectorizer to convert the raw documents to a matrix of TF-IDF features # vectorizer = TfidfVectorizer(**kwargs) X_train = vectorizer.fit_transform(train_texts) X_test = vectorizer.transform(test_texts) selector = SelectKBest(f_classif, k=min(30000, X_train.shape[1])) selector.fit(X_train, train_labels) X_train = selector.transform(X_train) X_test = 
selector.transform(X_test)

    return X_train, X_test

# +
NGRAM_RANGE = (1, 2)
TOKEN_MODE = 'word'
MIN_DOCUMENT_FREQUENCY = 2

X_train, X_test = vectorize(train_features, train_labels, test_features)

# gridsearch
lr_tfidf = Pipeline([
    ('clf', LogisticRegression(random_state=40, solver='saga'))
])

C_OPTIONS = [1, 3, 5, 7, 10]
param_grid = [
    {
        'clf__penalty': ['l1', 'l2'],
        'clf__C': C_OPTIONS
    }
]

gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid, scoring='accuracy', cv=5, verbose=2, n_jobs=-1)
# -

gs_lr_tfidf.fit(X_train, train_labels)

print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)

clf_lr = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf_lr.score(X_test, test_labels))

# +
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, confusion_matrix

y_pred_lr_tfidf = gs_lr_tfidf.predict(X_test)
y_test_exp = test_labels.to_numpy()

print('Precision Test for model: {}'.format(precision_score(y_test_exp, y_pred_lr_tfidf, average=None)))
print('Recall Test for model: {}'.format(recall_score(test_labels, y_pred_lr_tfidf, average=None)))
print('F1 Test for model: {}'.format(f1_score(test_labels, y_pred_lr_tfidf, average=None)))
print('Confusion matrix (Test):')
print(confusion_matrix(test_labels, y_pred_lr_tfidf))

# +
title_options = [("Confusion matrix, without normalization", None),
                 ("Normalized confusion matrix", 'true')]
classes_names = np.array(['Negative', 'Neutral', 'Positive'])

fig = plt.figure(figsize=(18, 9))
nrows = 1
ncols = 2
for idx, value in enumerate(title_options):
    ax = fig.add_subplot(nrows, ncols, idx + 1)
    disp = plot_confusion_matrix(clf_lr, X_test, test_labels,
                                 display_labels=classes_names,
                                 cmap=plt.cm.Blues,
                                 normalize=value[1], ax=ax)
    disp.ax_.set_title(value[0])
# -

# # Multiclass Classification -> Binary Classification
#
# Since the model can only accurately predict negative sentiment about ~50% of
time, I want to see if I can improve upon this result. One idea is to combine neutral sentiment and positive sentiment into one label and turn this analysis into a binary classification problem.
#
# Since reformulating as a binary classification problem leaves us with a class imbalance, we use SMOTE to generate synthetic data for the minority class so that both classes have equal numbers of samples in training. The test accuracy on <b>negative sentiment</b> improves from ~50% to ~70%!

from imblearn.pipeline import Pipeline

# Combine the neutral/positive labels into one label -> 1
train_labels_binary = train_labels.map(lambda x: 1 if (x == 2 or x == 1) else 0)
test_labels_binary = test_labels.map(lambda x: 1 if (x == 2 or x == 1) else 0)

train_labels_binary.value_counts()

# Class Imbalance Issues
print(f'There are {train_labels_binary.value_counts()[0]} comments that can be classified as negative sentiment')
print(f'There are {train_labels_binary.value_counts()[1]} comments that can be classified as non-negative sentiment')

# +
# tfidf = TfidfVectorizer(strip_accents=None,
#                         lowercase=False,
#                         preprocessor=None)

# C must be strictly positive for LogisticRegression
param_grid = [{'clf__penalty': ['l1', 'l2'],
               'clf__C': [0.1, 1, 3, 5, 7, 10]},
              ]

lr_tfidf = Pipeline([
    ('smote', SMOTE(sampling_strategy=1.0, random_state=5, k_neighbors=10)),
    ('clf', LogisticRegression(random_state=1, solver='saga'))
])

gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid, scoring='accuracy', cv=5, verbose=2, n_jobs=-1)
# -
y_pred_lr_tfidf, average=None))) print('Recall Test for model: {}' .format(recall_score(test_labels_binary, y_pred_lr_tfidf, average=None))) print('F1 Test for model: {}' .format(f1_score(test_labels_binary, y_pred_lr_tfidf, average=None))) print('ROC AUC Train: %.3f for Logistic Regression' % roc_auc_score(y_test_exp_binary, y_pred_lr_tfidf, average=None)) print('Confusion matrix (Test):') print(confusion_matrix(test_labels_binary, y_pred_lr_tfidf)) # + title_options = [("Confusion matrix, without normalization", None), ("Normalization confusion matrix", 'true')] classes_names = np.array(['Negative', 'Positive/Neutral']) fig = plt.figure(figsize=(18,9)) nrows=1 ncols=2 for idx,value in enumerate(title_options): ax = fig.add_subplot(nrows, ncols, idx+1) disp= plot_confusion_matrix(clf_lr, X_test, test_labels_binary, display_labels=classes_names, cmap=plt.cm.Blues, normalize=value[1], ax = ax) disp.ax_.set_title(value[0]) # - # # Gridsearch Pipeline: Naive Bayes # # + tfidf = TfidfVectorizer(strip_accents=None, lowercase=False, preprocessor=None) param_grid = [ { 'clf__alpha': [0.25, 0.3, 0.35, 0.4, 0.45, 0.50] }, ] nb_tfidf = Pipeline([ #('vect', tfidf), ('smote', SMOTE(sampling_strategy=1.0, random_state=5, k_neighbors=3)), ('clf', MultinomialNB()) ]) gs_nb_tfidf = GridSearchCV(nb_tfidf, param_grid, scoring='accuracy', cv=5, verbose=2, n_jobs=-1) # - gs_nb_tfidf.fit(X_train, train_labels_binary) print('Best parameter set: %s ' % gs_nb_tfidf.best_params_) print('CV Accuracy: %.3f' % gs_nb_tfidf.best_score_) clf_nb = gs_nb_tfidf.best_estimator_ print('Test Accuracy: %.3f' % clf_nb.score(X_test, test_labels_binary)) # + from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, confusion_matrix y_pred_nb_tfidf = gs_nb_tfidf.predict(X_test) y_test_exp_binary = test_labels_binary.to_numpy() print('Precision Test for model: {}' .format(precision_score(y_test_exp_binary, y_pred_nb_tfidf, average=None))) print('Recall Test for model: {}' 
.format(recall_score(test_labels_binary, y_pred_nb_tfidf, average=None))) print('F1 Test for model: {}' .format(f1_score(test_labels_binary, y_pred_nb_tfidf, average=None))) print('ROC AUC Train: %.3f for Naive Bayes' % roc_auc_score(y_test_exp_binary, y_pred_nb_tfidf, average=None)) print('Confusion matrix (Test):') print(confusion_matrix(test_labels_binary, y_pred_nb_tfidf)) # + title_options = [("Confusion matrix, without normalization", None), ("Normalization confusion matrix", 'true')] classes_names = np.array(['Negative', 'Positive/Neutral']) fig = plt.figure(figsize=(18,9)) nrows=1 ncols=2 for idx,value in enumerate(title_options): ax = fig.add_subplot(nrows, ncols, idx+1) disp= plot_confusion_matrix(clf_nb, X_test, test_labels_binary, #display_labels=sorted(test_labels_binary.unique()), cmap=plt.cm.Blues, display_labels=classes_names, normalize=value[1], ax = ax) disp.ax_.set_title(value[0]) # -
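Both grid searches above lean on `TfidfVectorizer` inside `vectorize()`. To make the weighting concrete, here is a tiny pure-Python sketch of term frequency times smoothed inverse document frequency; the formula follows scikit-learn's `smooth_idf` default, and the two tokenized Roman Urdu documents are made up:

```python
import math
from collections import Counter

# two toy, already-tokenized documents
docs = [['acha', 'kaam', 'hai'], ['bura', 'kaam', 'hai']]

def idf(term, docs):
    # sklearn's smooth_idf default: log((1 + n) / (1 + df)) + 1
    n = len(docs)
    df = sum(1 for d in docs if term in d)
    return math.log((1 + n) / (1 + df)) + 1

def tfidf(doc, docs):
    counts = Counter(doc)
    return {term: counts[term] * idf(term, docs) for term in counts}

weights = tfidf(docs[0], docs)
# 'kaam' appears in every document, so it gets the minimum idf of 1.0,
# while the rarer 'acha' is weighted higher
print(round(weights['kaam'], 3), round(weights['acha'], 3))
```

`TfidfVectorizer` additionally L2-normalizes each document vector, which this sketch omits for brevity.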
notebooks/2.0_cchen_roman_urdu_ngram_models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Imports # + import sys import numpy as np import matplotlib.pyplot as plt from sklearn import svm from sklearn.decomposition import PCA from sklearn.pipeline import make_pipeline from sklearn.preprocessing import MinMaxScaler from sklearn.externals import joblib import torch import torchvision import torchvision.transforms as transforms import pickle import pandas as pd import os sys.path.append('../../Utils') from SVC_Utils import * from data_downloaders import * # - # # Load/Process TinyImageNet # + get_tiny_imagenet('../../Datasets/') train_dir = "../../Datasets/tiny-imagenet-200/train" test_dir = "../../Datasets/tiny-imagenet-200/val" # load training set trainset = torchvision.datasets.ImageFolder(train_dir, transform=transforms.ToTensor()) trainloader = torch.utils.data.DataLoader(trainset, batch_size=int(trainset.__len__()/2), shuffle=True, num_workers=2) ftrainloader = torch.utils.data.DataLoader(trainset, batch_size=trainset.__len__(), shuffle=True, num_workers=2) # load test set testset = torchvision.datasets.ImageFolder(test_dir, transform=transforms.ToTensor()) testloader = torch.utils.data.DataLoader(testset, batch_size=testset.__len__(), shuffle=False, num_workers=2) # - traininputs, traintargets=load(trainloader) testinputs, testtargets=load(testloader) ftraininputs, ftraintargets=load(ftrainloader) # # Model Training n_components=180 C_range=np.logspace(0,1,2) gamma_range=np.logspace(-2,-1,2) clfs=hp_grid(n_components=n_components, C_range=C_range, gamma_range=gamma_range) fitted_clfs=train_grid(clfs, traininputs, traintargets) # # Model Testing/Evaluation # + #Stores training and testing accuracies in matrices (Rows: C_range, Cols: gamma_range) #train_accs=np.random.randn(len(C_range),len(gamma_range)) 
train_accs = np.zeros((len(C_range), len(gamma_range)))
test_accs = np.zeros((len(C_range), len(gamma_range)))
test_preds = []

k = 0
for i in range(len(C_range)):
    for j in range(len(gamma_range)):
        train_accs[i, j] = predict_eval(fitted_clfs[k], traininputs, traintargets, training=True)[1]
        preds, test_accs[i, j] = predict_eval(fitted_clfs[k], testinputs, testtargets)
        test_preds.append(preds)
        k += 1

# +
idx = ['C = 1', 'C = 10']
cols = ['gamma = .01', 'gamma = .1']

trainacc_df = pd.DataFrame(data=train_accs, index=idx, columns=cols)
testacc_df = pd.DataFrame(data=test_accs, index=idx, columns=cols)
# -

# training accuracy for C/gamma grid
trainacc_df.style.background_gradient(cmap='GnBu')

# test accuracy for C/gamma grid
testacc_df.style.background_gradient(cmap='GnBu')

# # Save Models

maxacc, gen = maxacc_gen(test_accs, train_accs, clfs)

fn_max_acc = 'SVMTinyImageNet_maxacc_proba.pkl'
fn_gen = 'SVMTinyImageNet_gen_proba.pkl'
save_proba(fn_max_acc, maxacc, ftraininputs, ftraintargets)
save_proba(fn_gen, gen, ftraininputs, ftraintargets)
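`hp_grid` above builds PCA+SVM pipelines with `n_components=180`, so each flattened image is first projected onto its top principal components. A minimal numpy sketch of that reduction via SVD, on random toy data (not the TinyImageNet features):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                  # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores in component space

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                # 50 toy samples, 10 features
Z = pca_reduce(X, n_components=3)
print(Z.shape)  # -> (50, 3)
```

The resulting score columns are mutually uncorrelated, which is exactly what makes the downstream SVM's job in 180 dimensions tractable compared with raw pixels.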
Classification_baselines/TinyImageNet/svmPCA_TinyImageNet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 001-Create Initialization of centroids for each dataset # Use modified kmeans++ method from sklearn package import numpy as np import os from src.util import cen_init, expand_dataset datasets = [name.split('.')[0] for name in os.listdir("datasets/") if '-' not in name] datasets # ### expand datasets expands = [10000, 100000, 1000000] data_dir = 'datasets/' for data in datasets: d = np.loadtxt(data_dir+data+'.txt') cen = np.loadtxt(data_dir+data+'-c.txt') pa = np.loadtxt(data_dir+data+'-pa.txt') for i in range(3): new_d, new_pa = expand_dataset(d, cen, pa, expands[i]) new_data_file = data_dir+data+'e'+str(i+1)+'.txt' new_cen_file = data_dir+data+'e'+str(i+1)+'-c.txt' new_pa_file = data_dir+data+'e'+str(i+1)+'-pa.txt' np.savetxt(new_data_file, new_d, fmt='%d') np.savetxt(new_cen_file, cen, fmt='%d') np.savetxt(new_pa_file, new_pa, fmt='%d') print(data+'e'+str(i+1)+' finished') # ### initialize centroids new_datasets = [name.split('.')[0] for name in os.listdir("datasets/") if '-' not in name] new_datasets random_state = np.random.RandomState(222) for name in new_datasets: data = np.loadtxt(os.path.join('datasets', name+'.txt')) gtcen = np.loadtxt(os.path.join('datasets', name+'-c.txt')) n_clusters = gtcen.shape[0] cen = cen_init(data, n_clusters, random_state) np.savetxt(os.path.join('datasets', name+'-ic.txt'), cen, delimiter=' ', fmt='%d') print(name+' finished')
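`cen_init` is an in-house helper, described above as a "modified kmeans++ method from sklearn package". For reference, here is a minimal numpy sketch of plain k-means++ seeding; the in-house version differs in its details:

```python
import numpy as np

def kmeanspp_init(data, n_clusters, rng):
    """k-means++ seeding: pick each new center with probability
    proportional to squared distance from the nearest chosen center."""
    centers = [data[rng.integers(len(data))]]
    for _ in range(n_clusters - 1):
        diffs = data[:, None, :] - np.array(centers)[None, :, :]
        d2 = np.min((diffs ** 2).sum(axis=-1), axis=1)
        centers.append(data[rng.choice(len(data), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(42)
data = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
cen = kmeanspp_init(data, 2, rng)
print(cen.shape)  # -> (2, 2)
```

Because far-away points are favored, the two seeds tend to land in different blobs, which is why k-means++ converges faster than uniform random initialization.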
001-create_initialization_centroids_and_expand_data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: seismic-interpretation
#     language: python
#     name: seismic-interpretation
# ---

# Copyright (c) Microsoft Corporation.
#
# Licensed under the MIT License.

# # Converting SEG-Y files for training or validation
#
# This notebook describes how to prepare your own SEG-Y files for training.
#
# If you don't have your own SEG-Y files, you can run the *01_segy_sample_files.ipynb* notebook to generate synthetic files.
#
# To use your own SEG-Y volumes to train models in the DeepSeismic repo, you need to bring at least one pair of ground truth and label data SEG-Y files where the files have an identical shape. The seismic data file contains typical SEG-Y post-stack data traces, and the label data file should contain an integer class label at every sample in each trace.
#
# For each SEG-Y file, run the convert_segy.py script to create an npy file. Optionally, you can normalize and/or clip the data in the SEG-Y file as it is converted to npy.
#
# Once you have a pair of ground truth and related label npy files, you can edit one of the training scripts in the repo to use these files. One example is the [dutchf3 train.py](../../experiments/interpretation/dutchf3_patch/train.py) script.

# +
from itkwidgets import view
import numpy as np
import os

SEGYFILE = './normalsegy.segy'
PREFIX = 'normalsegy'
OUTPUTDIR = 'data'
# -

# ## convert_segy.py usage

# !python ./convert_segy.py --help

# # Example run
#
# Convert the SEG-Y file to a single output npy file in the output directory, clipping (but not normalizing) the data as it is converted

# !python ./convert_segy.py --prefix {PREFIX} --input_file {SEGYFILE} --output_dir {OUTPUTDIR} --clip

# ## Post processing instructions
#
# There should now be one npy file in the output directory named normalsegy_10_100_00000.npy. The numbers relate to the anchor point
In this case, inline 10, crossline 100, and depth 0 is the origin [0,0,0] of the array. # # Rerun the convert_segy script for the related label file npydata = np.load(f"./{OUTPUTDIR}/{PREFIX}_10_100_00000.npy") view(npydata, slicing_planes=True) # ### Prepare train/test splits file # # Once the data and label segy files are converted to npy, use the `prepare_dutchf3.py` script on the resulting npy file to generate the list of patches as input to the train script. # # In the next cell is a example of how to run this script. Note that we are using the same npy (normalsegy_10_100_00000.npy) file as seismic and labels because it is only for ilustration purposes. # # Also, once you've prepared the data set, you'll find your files in the following directory tree: # # data_dir # ├── output_dir # ├── split # │&emsp; ├── section_train.txt # │&emsp; ├── section_train_val.txt # │&emsp; ├── section_val.txt # !python ../../../scripts/prepare_dutchf3.py split_train_val section --data_dir={OUTPUTDIR} --label_file={PREFIX}_10_100_00000.npy --output_dir=splits --section_stride=2 --log_config=None --split_direction=both
examples/interpretation/segyconverter/02_segy_convert_sample.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Model Comparison and Conclusion
# This notebook summarizes the work and compares the models side by side.

# ## Libraries
# Importing necessary libraries.

# + run_control={"marked": true}
import pandas as pd
import numpy as np
from tensorflow.keras.models import load_model
from sklearn.metrics import confusion_matrix
from malig_data import *
# -

# Defining file paths

# +
basic_NN_file = "../models/basic_NN.h5"
smote_NN_file = "../models/SMOTE_NN.h5"
adasyn_NN_file = "../models/ADASYN_NN.h5"
basic_CNN_file = "../models/basic_cnn.h5"
smote_CNN_file = "../models/smote_cnn.h5"
adasyn_CNN_file = "../models/adasyn_cnn.h5"
VGG19_file = "../models/vgg19.h5"
VGG19_adasyn_file = '../models/vgg19_adasyn.h5'
ResNet50_file = "../models/ResNet50.h5"
ResNet50_adasyn_file = "../models/ResNet50 Adasyn.h5"
# -

def model_loading(filepath, train_X, train_y, val_X, val_y):
    """
    Function that displays the confusion matrix of the model and returns the saved model
    filepath: where the model is saved
    train_X: train image
    train_y: train target
    val_X: validation image
    val_y: validation target
    """
    saved_model = load_model(filepath)
    results_train = saved_model.evaluate(train_X, train_y)
    print(f'Training Loss: {results_train[0]:.3} \nTraining Accuracy: {results_train[1]:.3}')
    print('----------')
    results_test = saved_model.evaluate(val_X, val_y)
    print(f'Test Loss: {results_test[0]:.3} \nTest Accuracy: {results_test[1]:.3}')
    predictions = saved_model.predict_classes(val_X)
    cm = confusion_matrix(val_y, predictions, labels=[0, 1])
    index = ["Actual Malig", "Actual Benign"]
    columns = ["Predicted Malig", "Predicted Benign"]
    df = pd.DataFrame(data=cm, index=index, columns=columns)
    display(df)
    return saved_model

# Model Comparisons

basic_NN_model = model_loading(basic_NN_file, train_img, train_y,
val_img, val_y) smote_NN_model = model_loading(smote_NN_file, smote_img, smote_labels, val_img, val_y) adasyn_NN_model = model_loading(adasyn_NN_file, adasyn_img, adasyn_labels, val_img, val_y) basic_CNN_model = model_loading(basic_CNN_file, train_images, train_y, val_images, val_y) smote_CNN_model = model_loading(smote_CNN_file, smote_images, smote_labels, val_images, val_y) adasy_CNN_model = model_loading(adasyn_CNN_file, adasyn_images, adasyn_labels, val_images, val_y) vgg19_model = model_loading(VGG19_file, train_images, train_y, val_images, val_y) vgg19_model19_adasyn_model = model_loading(VGG19_adasyn_file, adasyn_images, adasyn_labels, val_images, val_y) resNet50_model = model_loading(ResNet50_file, train_images, train_y, val_images, val_y) resNet50_adasyn_model = model_loading(ResNet50_adasyn_file, adasyn_images, adasyn_labels, val_images, val_y) # ## Conclusion # Based on above observation, the best model to work with is CNN model with adasyn data balance. It has exhibited 94.6% with further tuninng, it would be able to bring higher accuracy.
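# Accuracy alone can hide class-specific behaviour on a malignant/benign split. As a standalone sketch (hypothetical labels, not the notebook's loaded models), sensitivity and specificity can be read directly off a confusion matrix like the ones displayed above:

```python
import numpy as np

# Hypothetical labels standing in for val_y and one model's predictions
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1, 1, 1, 1])

# 2x2 confusion matrix: rows = actual class, columns = predicted class
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

tn, fp, fn, tp = cm.ravel()
sensitivity = tp / (tp + fn)   # true positive rate (recall of class 1)
specificity = tn / (tn + fp)   # true negative rate (recall of class 0)
accuracy = (tp + tn) / cm.sum()
print(cm)
print(sensitivity, specificity, accuracy)
```

Two models with the same accuracy can differ sharply on these per-class rates, which matters when one class (malignant) is the costly one to miss.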
notebooks/.ipynb_checkpoints/(6) Model Comparison and Conclusion-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # TRTR Dataset E

# import libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import os
print('Libraries imported!!')

# +
# define directory of functions and actual directory
HOME_PATH = ''  # home path of the project
FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/UTILITY'
ACTUAL_DIR = os.getcwd()

# change directory to functions directory
os.chdir(HOME_PATH + FUNCTIONS_DIR)

# import functions for utility evaluation
from utility_evaluation import DataPreProcessor
from utility_evaluation import train_evaluate_model

# change directory back to actual directory
os.chdir(ACTUAL_DIR)
print('Functions imported!!')
# -

# ## 1. Read data

# read real dataset
train_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/E_PimaIndiansDiabetes_Real_Train.csv')
categorical_columns = ['Outcome']
for col in categorical_columns:
    train_data[col] = train_data[col].astype('category')
train_data

# read test data
test_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TEST DATASETS/E_PimaIndiansDiabetes_Real_Test.csv')
for col in categorical_columns:
    test_data[col] = test_data[col].astype('category')
test_data

target = 'Outcome'
# quick look at the breakdown of class values
print('Train data')
print(train_data.shape)
print(train_data.groupby(target).size())
print('#####################################')
print('Test data')
print(test_data.shape)
print(test_data.groupby(target).size())

# ## 2. Pre-process training data

# +
target = 'Outcome'
categorical_columns = None
numerical_columns = train_data.select_dtypes(include=['int64', 'float64']).columns.tolist()
categories = None
data_preprocessor = DataPreProcessor(categorical_columns, numerical_columns, categories)
x_train = data_preprocessor.preprocess_train_data(train_data.loc[:, train_data.columns != target])
y_train = train_data.loc[:, target]
x_train.shape, y_train.shape
# -

# ## 3. Pre-process test data

x_test = data_preprocessor.preprocess_test_data(test_data.loc[:, test_data.columns != target])
y_test = test_data.loc[:, target]
x_test.shape, y_test.shape

# ## 4. Create a dataset to save the results

results = pd.DataFrame(columns=['model', 'accuracy', 'precision', 'recall', 'f1'])
results

# ## 5. Train and evaluate Random Forest Classifier

rf_results = train_evaluate_model('RF', x_train, y_train, x_test, y_test)
results = results.append(rf_results, ignore_index=True)
rf_results

# ## 6. Train and evaluate KNeighbors Classifier

knn_results = train_evaluate_model('KNN', x_train, y_train, x_test, y_test)
results = results.append(knn_results, ignore_index=True)
knn_results

# ## 7. Train and evaluate Decision Tree Classifier

dt_results = train_evaluate_model('DT', x_train, y_train, x_test, y_test)
results = results.append(dt_results, ignore_index=True)
dt_results

# ## 8. Train and evaluate Support Vector Machines Classifier

svm_results = train_evaluate_model('SVM', x_train, y_train, x_test, y_test)
results = results.append(svm_results, ignore_index=True)
svm_results

# ## 9. Train and evaluate Multilayer Perceptron Classifier

mlp_results = train_evaluate_model('MLP', x_train, y_train, x_test, y_test)
results = results.append(mlp_results, ignore_index=True)
mlp_results

# ## 10. Save results file

results.to_csv('RESULTS/models_results_real.csv', index=False)
results
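# A note on the accumulation pattern: `DataFrame.append`, used above, was deprecated and then removed in pandas 2.0. A minimal sketch of the same bookkeeping with `pd.concat`, using made-up metric rows in place of `train_evaluate_model` output:

```python
import pandas as pd

# Hypothetical metric rows standing in for train_evaluate_model() results
rf_results = {'model': 'RF', 'accuracy': 0.77, 'precision': 0.71, 'recall': 0.60, 'f1': 0.65}
knn_results = {'model': 'KNN', 'accuracy': 0.73, 'precision': 0.66, 'recall': 0.55, 'f1': 0.60}

results = pd.DataFrame(columns=['model', 'accuracy', 'precision', 'recall', 'f1'])
for row in (rf_results, knn_results):
    # pd.concat with a one-row DataFrame replaces the removed DataFrame.append
    results = pd.concat([results, pd.DataFrame([row])], ignore_index=True)

print(results)
```

The surrounding code keeps the original `append` calls, which run fine on the pandas versions this notebook was written against.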
notebooks/Dataset E - Pima Indians Diabetes/Synthetic data evaluation/Utility/TRTR Dataset E.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Dataproc - Create Cluster
#
# ## Intended Use
# A Kubeflow Pipeline component to create a cluster in Google Cloud Dataproc service.
#
# ## Run-Time Parameters:
# Name | Description
# :--- | :----------
# project_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to.
# region | Required. The Cloud Dataproc region in which to handle the request.
# name | Optional. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
# name_prefix | Optional. The prefix of the cluster name.
# initialization_actions | Optional. List of GCS URIs of executables to execute on each node after config is completed. By default, executables are run on master and all worker nodes.
# config_bucket | Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output.
# image_version | Optional. The version of software inside the cluster.
# cluster | Optional. The full [cluster config](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters#Cluster)
# wait_interval | Seconds to wait between polls of the operation. Defaults to 30s.
#
# ## Output:
# Name | Description
# :--- | :----------
# cluster_name | The cluster name of the created cluster.

# ## Sample
#
# Note: the sample code below works both in an IPython notebook and as plain Python code.
# ### Set sample parameters

# + tags=["parameters"]
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'

# Optional Parameters
EXPERIMENT_NAME = 'Dataproc - Create Cluster'
COMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataproc/create_cluster/component.yaml'
# -

# ### Install KFP SDK

# Install the SDK (uncomment the code below if the SDK is not already installed)

# +
#KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz'
#!pip3 install $KFP_PACKAGE --upgrade
# -

# ### Load component definitions

# +
import kfp.components as comp

dataproc_create_cluster_op = comp.load_component_from_url(COMPONENT_SPEC_URI)
display(dataproc_create_cluster_op)
# -

# ### Here is an illustrative pipeline that uses the component

import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
    name='Dataproc create cluster pipeline',
    description='Dataproc create cluster pipeline'
)
def dataproc_create_cluster_pipeline(
    project_id = PROJECT_ID,
    region = 'us-central1',
    name='',
    name_prefix='',
    job_name_prefix='',
    initialization_actions='',
    config_bucket='',
    image_version='',
    cluster='',
    wait_interval='30'
):
    dataproc_create_cluster_op(project_id, region, name, name_prefix, job_name_prefix,
        initialization_actions, config_bucket, image_version, cluster,
        wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))

# ### Compile the pipeline

pipeline_func = dataproc_create_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)

# ### Submit the pipeline for execution

# +
# Specify pipeline argument values
arguments = {}

# Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)

# Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
components/gcp/dataproc/create_cluster/sample.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %run "../../../common/0_notebooks_base_setup.py"

# ---
#
# <img src='../../../common/logo_DH.png' align='left' width=35%/>

# # Inferential Statistics

# ## Imports

import scipy.stats as stats
import pandas as pd
import numpy as np
import math
import seaborn as sns
import matplotlib.pyplot as plt

# ## Exercise: Tests on a proportion
#
# We were hired by a lottery company to find out the proportion of customers who buy a certain product. The firm will keep its current marketing plan if this proportion is 50% or more, but will triple its advertising spend otherwise.
#
# The dataset we are going to use is synthetic data (built by us) using the `generar` function
# https://numpy.org/doc/1.18/reference/random/generated/numpy.random.Generator.binomial.html#numpy.random.Generator.binomial

def generar(trials, p, obs):
    random_generator = np.random.default_rng()
    data = random_generator.binomial(trials, p, obs)
    result = pd.DataFrame(data, columns=['compra'])
    return result

# +
p_generacion = 0.4
trials = 1
obs = 100
data_ej3 = generar(trials, p_generacion, obs)

#sns.distplot(data_ej3)
sns.histplot(data_ej3, kde=True, stat='density', binrange=(-0.5, 1.5));
# -

# ### 1. What is the null hypothesis, and what is the alternative?

# ### 2. What is the value of the estimator of the proportion of customers who buy?

# ### 3. What are the population mean and standard deviation?

# ### 4. What distribution does the proportion of customers who buy the product follow, if we assume n is large enough?

# ### 5. Let's define a hypothesis test (test statistic and decision rule) for the hypothesis in 3.1 with a significance level of 0.05.

# ### 6. What decision do we make based on the sample?

# ### 7. What is the p-value?

# ### 8. Let's build a 95% confidence interval for the proportion of customers who buy.
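# A sketch of one way to answer questions 5-8 (assuming a one-sided z-test of H0: p >= 0.5 against H1: p < 0.5 with the normal approximation, and a Wald confidence interval; the sample is regenerated here with a fixed seed, so the numbers will differ from the plot above):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
compra = rng.binomial(1, 0.4, 100)   # synthetic 0/1 purchases, as from generar()

p0 = 0.5
n = len(compra)
p_hat = compra.mean()                # estimator of the proportion

# z statistic under H0: p = 0.5 (standard error uses p0)
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# one-sided p-value (left tail): Phi(z) via the error function
p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))

# 95% Wald confidence interval for the proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(p_hat, z, p_value, ci)
```

With a significance level of 0.05, H0 is rejected (and the advertising spend tripled) when `p_value < 0.05`, i.e. when z falls below the critical value -1.645.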
clase_16_EstadisticaInferencial2/2_checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### **Exercise 1**: Remove String Spaces

# Remove the spaces from the string, then return the resultant string.

# +
def remove_spaces(x):
    res = x.replace(' ', '')
    return res

assert remove_spaces('This is an example') == 'Thisisanexample'
assert remove_spaces(' <- lookout -> ') == '<-lookout->'
assert remove_spaces(' ') == ''
# -

# ### **Exercise 2**: A Needle in the Haystack

# Write a function that takes an array of strings, containing one "needle".
#
# After your function finds the needle, it should return a message (as a string) that says "found the needle at position " plus the index where it found the needle.

# +
def find_needle(list_of_strings):
    res = f"found the needle at position {list_of_strings.index('needle')}"
    return res

assert find_needle(['hay', 'junk', 'hay', 'hay', 'moreJunk', 'needle', 'randomJunk']) == "found the needle at position 5"
# -

# ### **Exercise 3**: Century From Year

# Return the century for the given year.
#
# **Note**: For simplicity, the first century spans from the year 0 up to and including the year 99; the rest follow a similar pattern.
#
# There are different approaches; this info can be useful for some of them:
# - `str(x)`: converts x to a string
# - `int(x)`: converts x to an integer number
# - `a % b`: returns the integer remainder of the division a/b

# +
def get_century(x):
    res = int(str(x)[:2]) + 1
    return res

assert get_century(1705) == 18
assert get_century(1900) == 20
assert get_century(1601) == 17
assert get_century(2000) == 21
# -

# What if we used the [strict definition](https://en.wikipedia.org/wiki/Century):
#
# The first century spans from the year 1 up to and including the year 100.

# +
def get_strict_century(x):
    x_str = str(x)
    first2digits = int(x_str[:2])
    if x_str[2:] == '00':
        res = first2digits
    else:
        res = first2digits + 1
    return res

assert get_strict_century(1705) == 18
assert get_strict_century(1900) == 19
assert get_strict_century(1601) == 17
assert get_strict_century(2000) == 20
# -

# ### **Exercise 4**: Mumbling

# Given a string, create the lists as in the examples.

# +
def accum(x):
    res = []
    multiplier = 1
    for i in x:
        partial = i * multiplier
        res.append(partial)
        multiplier += 1
    return res

assert accum("abcd") == ['a', 'bb', 'ccc', 'dddd']
assert accum("RqaEzty") == ['R', 'qq', 'aaa', 'EEEE', 'zzzzz', 'tttttt', 'yyyyyyy']
assert accum("cwAt") == ['c', 'ww', 'AAA', 'tttt']
# -

# What if the letter casing has to be more "proper": first letter uppercase, rest lowercase?

# +
def accum_proper(x):
    res = []
    multiplier = 1
    for i in x:
        partial = (i * multiplier).title()
        res.append(partial)
        multiplier += 1
    return res

assert accum_proper("abcd") == ['A', 'Bb', 'Ccc', 'Dddd']
assert accum_proper("RqaEzty") == ['R', 'Qq', 'Aaa', 'Eeee', 'Zzzzz', 'Tttttt', 'Yyyyyyy']
assert accum_proper("cwAt") == ['C', 'Ww', 'Aaa', 'Tttt']
# -

# ### **Exercise 5**: Complementary DNA

# In DNA strings, symbols "A" and "T" are complements of each other, as are "C" and "G". You are given one side of the DNA; you need to get the other, complementary side.

# +
def DNA_complement(x):
    complements = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}  #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    res = ''
    for i in x:
        res += complements[i]  #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    return res

assert DNA_complement("ATTGC") == "TAACG"
assert DNA_complement("GTAT") == "CATA"
# -

# ### **Exercise 6**: Decode the Morse code

# Self-explanatory, not so easy. Letters are separated by one space, words by three spaces.

# +
MORSE = {
    '.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E',
    '..-.': 'F', '--.': 'G', '....': 'H', '..': 'I', '.---': 'J',
    '-.-': 'K', '.-..': 'L', '--': 'M', '-.': 'N', '---': 'O',
    '.--.': 'P', '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
    '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X', '-.--': 'Y',
    '--..': 'Z',
}

def decode(x):
    MORSE[''] = ' '
    res = ''.join([MORSE[i] for i in x.split(' ')])
    res = res.replace('  ', ' ')
    return res

assert decode('.... . -.--   .--- ..- -.. .') == 'HEY JUDE'
assert decode('- .... .   --.- ..- .. -.-. -.-   -... .-. --- .-- -.   ..-. --- -..-   .--- ..- -- .--. ...   --- ...- . .-.   - .... .   .-.. .- --.. -.--   -.. --- --.') \
    == 'THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG'
# -

# What if we had to create an encoder instead?
#
# **HINT**: to reverse a dictionary you can do `rev_d = {v: k for k, v in d.items()}`

# +
def encoder(x):
    REVERSE_MORSE = {v: k for k, v in MORSE.items()}
    REVERSE_MORSE[' '] = ' '
    res = ' '.join([REVERSE_MORSE[i] for i in x])
    return res

assert encoder('HEY JUDE') == '.... . -.--   .--- ..- -.. .'
assert encoder('THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG') == \
    '- .... .   --.- ..- .. -.-. -.-   -... .-. --- .-- -.   ..-. --- -..-   .--- ..- -- .--. ...   --- ...- . .-.   - .... .   .-.. .- --.. -.--   -.. --- --.'
# -
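# Going back to Exercise 3: the strict-definition century can also be computed with pure arithmetic, no string slicing, via ceiling division (a sketch; `get_strict_century_arith` is a new name, not part of the exercises above):

```python
def get_strict_century_arith(year):
    # ceiling of year/100 without floats: (year + 99) // 100
    # year 100 -> century 1, year 101 -> century 2, etc.
    return (year + 99) // 100

assert get_strict_century_arith(1705) == 18
assert get_strict_century_arith(1900) == 19
assert get_strict_century_arith(1601) == 17
assert get_strict_century_arith(2000) == 20
assert get_strict_century_arith(1) == 1
```

Unlike the string-based version, this also behaves sensibly for years below 1000.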
archive/2019-20_semester1/06 - 20191127/CodeExercises_solved.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# Run CLAHE and match z-adjacent histograms
# -

from phathom.preprocess.filtering import preprocess
from phathom.utils import tifs_in_dir, make_dir
from phathom.io.conversion import tifs_to_zarr
import os
import multiprocessing

working_dir = '/media/jswaney/Drive/Justin/marmoset'
path_to_tifs = '20181206_eF9_A34_2/C1_ij'
output_path = '20181206_eF9_A34_2/C1_bm4d_clahe'
nb_workers = 12
threshold = None
kernel_size = 127

# +
paths, filenames = tifs_in_dir(os.path.join(working_dir, path_to_tifs))
output_abspath = make_dir(os.path.join(working_dir, output_path))

args_list = []
for path, filename in zip(paths, filenames):
    output_path = os.path.join(output_abspath, filename)
    args = (path, output_path, threshold, kernel_size)
    args_list.append(args)

with multiprocessing.Pool(nb_workers) as pool:
    pool.starmap(preprocess, args_list)
# -

chunks = (64, 64, 64)
nb_workers = 4
tif_dir = 'round1/syto16_clahe.tiffs'
zarr_path = 'round1/syto16.zarr'

tifs_to_zarr(os.path.join(working_dir, tif_dir),
             os.path.join(working_dir, zarr_path),
             chunks,
             nb_workers=nb_workers)

chunks = (64, 64, 64)
nb_workers = 4
tif_dir = 'round2/syto16_clahe.tiffs'
zarr_path = 'round2/syto16.zarr'

tifs_to_zarr(os.path.join(working_dir, tif_dir),
             os.path.join(working_dir, zarr_path),
             chunks,
             nb_workers=nb_workers)
notebooks/0_preprocess_images.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import credentials
credentials.credentials

import pandas as pd
import geopandas as gpd
from cityiq import CityIq
from cartoframes.auth import set_default_credentials
import time

# ## Set Credentials Here

set_default_credentials('creds_usignite.json')

myCIQ = CityIq("City")
myCIQ.fetchToken()

myCIQ.fetchMetadata("assets", "pedestrian", "eventTypes:PEDEVT", size=100000)
pedestrian_sensor_metadata = myCIQ.getAssets()
pedestrian_sensor_metadata_df = pd.DataFrame(pedestrian_sensor_metadata)
pedestrian_sensor_metadata_df.dropna(inplace=True)

# split coordinates into lat and lng
latlng = pedestrian_sensor_metadata_df["coordinates"].str.split(":", n=1, expand=True)
pedestrian_sensor_metadata_df["latitude"] = latlng[0].astype(float)
pedestrian_sensor_metadata_df["longitude"] = latlng[1].astype(float)

# ### Set hours and end_time

# Total number of hours of data to request
h = 2
# End of the request window
end_time = time.strptime('May 31 2020 10:00AM', '%b %d %Y %I:%M%p')

pedestrian_hourly_aggregates = pd.DataFrame()
Time = int(time.mktime(end_time)) * 1000

# loop through assets to fetch events for each asset
for index, row in pedestrian_sensor_metadata_df.iterrows():
    endTime = Time
    # request one hour per call; windows longer than about 12 hours overload the server
    hours = 0
    while hours < h:
        # empty list to collect events
        pedestrian_sensor_events_list = []
        hours = hours + 1
        startTime = endTime - 3600000  # startTime is 1 hour before endTime
        myCIQ.fetchEvents("assets", row.assetUid, "PEDEVT", startTime, endTime, pageSize=100000)
        assetEvents = myCIQ.getEvents()
        if assetEvents == []:
            continue
        for a in assetEvents:
            a["directionUnit"] = a["properties"]["directionUnit"]
            a["speedUnit"] = a["properties"]["speedUnit"]
            a["eventUid"] = a["properties"]["eventUid"]
            a["counter_direction_speed"] = a["measures"]["counter_direction_speed"]
            a["counter_direction_pedestrianCount"] = a["measures"]["counter_direction_pedestrianCount"]
            a["pedestrianCount"] = a["measures"]["pedestrianCount"]
            a["counter_direction"] = a["measures"]["counter_direction"]
            a["speed"] = a["measures"]["speed"]
            a["direction"] = a["measures"]["direction"]
            pedestrian_sensor_events_list.append(a)

        # with one hour of data, make a dataframe, drop nulls, and group/aggregate pedestrian counts
        pedestrian_sensor_events_df = pd.DataFrame(pedestrian_sensor_events_list)
        pedestrian_sensor_events_df.dropna(inplace=True)

        # group by asset ID to get a sum of pedestrian counts
        grouped_SD_ped_sensor_events_df = pedestrian_sensor_events_df.groupby('assetUid').agg({'pedestrianCount': ['sum'], 'counter_direction_pedestrianCount': ['sum']})
        grouped_SD_ped_sensor_events_df['pedestrianCount_sum'] = grouped_SD_ped_sensor_events_df['pedestrianCount'] + grouped_SD_ped_sensor_events_df['counter_direction_pedestrianCount']
        grouped_SD_ped_sensor_events_df = grouped_SD_ped_sensor_events_df.drop(['pedestrianCount', 'counter_direction_pedestrianCount'], axis=1)
        grouped_SD_ped_sensor_events_df['startTime'] = startTime
        grouped_SD_ped_sensor_events_df['assetUid'] = grouped_SD_ped_sensor_events_df.index
        grouped_SD_ped_sensor_events_df["latitude"] = row.latitude
        grouped_SD_ped_sensor_events_df["longitude"] = row.longitude
        pedestrian_hourly_aggregates = pd.concat([pedestrian_hourly_aggregates, grouped_SD_ped_sensor_events_df], ignore_index=True)
        endTime = startTime
    print("one iter finished!")

# +
time_list = []
for i, row in pedestrian_hourly_aggregates.iterrows():
    time_list.append(time.asctime(time.localtime(float(row['startTime'] / 1000))))

pedestrian_hourly_aggregates['time'] = time_list
pedestrian_hourly_aggregates = pedestrian_hourly_aggregates.drop('startTime', axis=1)
pedestrian_hourly_aggregates['pedestrianCount_sum'] = pedestrian_hourly_aggregates['pedestrianCount_sum'].astype(int)
# -

pedestrian_hourly_aggregates.head(30)

pedestrian_hourly_aggregates.to_csv("pedestrian_count_sample.csv")
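# The groupby/aggregation step above can be checked offline with synthetic events (column names follow the notebook; the counts are made up):

```python
import pandas as pd

# Synthetic pedestrian events for two assets within one hour
events = pd.DataFrame({
    'assetUid': ['a1', 'a1', 'a2'],
    'pedestrianCount': [10, 5, 7],
    'counter_direction_pedestrianCount': [2, 3, 1],
})

# Sum each direction per asset, then combine both directions into one total
grouped = events.groupby('assetUid').agg(
    pedestrianCount=('pedestrianCount', 'sum'),
    counter_direction=('counter_direction_pedestrianCount', 'sum'),
)
grouped['pedestrianCount_sum'] = grouped['pedestrianCount'] + grouped['counter_direction']
print(grouped['pedestrianCount_sum'].to_dict())
```

Named aggregation (used here) sidesteps the MultiIndex columns that the dict-of-lists `agg` form in the notebook produces, which is why the notebook has to drop columns afterwards.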
CityIQ_PedCount_Hourly_Agg.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

## Implement an algorithm to determine if a string has all unique characters
def name(strin):
    check = []
    c = []
    for x in strin:
        c.append(x)
    check = set(c)
    if len(c) == len(check):
        print('All Unique chars')
    else:
        print('Not Unique chars')

name('Musskan')

## Check whether a string is a palindrome (reads the same reversed)
def palin(str1):
    if str1.lower() == str1.lower()[::-1]:
        print('palindrome')
    else:
        print('none')

palin('Tenet')

## Check if two strings are permutations of each other
str1 = 'tenet'
str2 = 'neett'
c1 = []
c2 = []
for x1 in str1:
    c1.append(x1)
set1 = sorted(c1)
for x2 in str2:
    c2.append(x2)
set2 = sorted(c2)
if set1 == set2:
    print('Hell Yeah, they are!')
else:
    print('Nah, they are not')

# +
## Replace spaces with %20 (trailing spaces are stripped first)
c1 = []
new_str = "Mr <NAME> "
new_str = new_str.rstrip()
for x in new_str:
    c1.append(x)
c1 = ["%20" if x1 == ' ' else x1 for x1 in c1]
final = "".join(c1)
final

# +
## String Compression:
# Implement an algorithm to perform string compression using the count of repeated characters
# e.g. "aaaabbbcc" output: a4b3c2
# -

str4 = "aaaaabbccc"
c1, final_, c2 = ([] for i in range(3))
for x in str4:
    c1.append(x)
# iterate characters in order of first appearance (a plain set would lose the order)
set_val = dict.fromkeys(c1)
for x1 in set_val:
    final = c1.count(x1)
    c2.append(x1)
    final_.append(final)
output = list(zip(c2, final_))
print(output)
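# The compression above counts each character globally, so a string with repeated runs (e.g. "aabaa") collapses them. For order-preserving run-length encoding, `itertools.groupby` is a natural fit (a sketch; `rle` is a new helper name):

```python
from itertools import groupby

def rle(s):
    # groupby yields one (char, run) pair per consecutive run of equal characters
    return ''.join(ch + str(len(list(run))) for ch, run in groupby(s))

print(rle('aaaabbbcc'))   # a4b3c2
print(rle('aaaaabbccc'))  # a5b2c3
print(rle('aabaa'))       # a2b1a2
```

This matches the expected output in the exercise statement and handles non-adjacent repeats correctly.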
Strings Problems with Solutions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Video Pipeline Details # # This notebook goes into detail about the stages of the video pipeline in the base overlay and is written for people who want to create and integrate their own video IP. For most regular input and output use cases the high level wrappers of `HDMIIn` and `HDMIOut` should be used. # # Both the input and output pipelines in the base overlay consist of four stages, an HDMI frontend, a colorspace converter, a pixel format converter, and the video DMA. For the input the stages are arranged Frontend -> Colorspace Converter -> Pixel Format -> VDMA with the order reversed for the output side. The aim of this notebook is to give you enough information to use each stage separately and be able to modify the pipeline for your own ends. # # Before exploring the pipeline we'll import the entire pynq.lib.video module where all classes relating to the pipelines live. We'll also load the base overlay to serve as an example. 
# The following table shows the IP responsible for each stage in the base overlay; it will be referenced throughout the rest of the notebook.
#
# |Stage             | Input IP                                | Output IP                           |
# |------------------|:----------------------------------------|:------------------------------------|
# |Frontend (Timing) |`video/hdmi_in/frontend/vtc_in`          |`video/hdmi_out/frontend/vtc_out`    |
# |Frontend (Other)  |`video/hdmi_in/frontend/axi_gpio_hdmiin` |`video/hdmi_out/frontend/axi_dynclk` |
# |Colour Space      |`video/hdmi_in/color_convert`            |`video/hdmi_out/color_convert`       |
# |Pixel Format      |`video/hdmi_in/pixel_pack`               |`video/hdmi_out/pixel_unpack`        |
# |VDMA              |`video/axi_vdma`                         |`video/axi_vdma`                     |

# +
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *

base = BaseOverlay("base.bit")
# -

# ## HDMI Frontend
#
# The HDMI frontend modules wrap all of the clock and timing logic. The HDMI input frontend can be used independently from the rest of the pipeline by accessing its driver from the base overlay.

hdmiin_frontend = base.video.hdmi_in.frontend

# Creating the device will signal to the computer that a monitor is connected. Starting the frontend will attempt to detect the video mode, blocking until a lock can be achieved. Once the frontend is started the video mode will be available.

hdmiin_frontend.start()
hdmiin_frontend.mode

# The HDMI output frontend can be accessed in a similar way.

hdmiout_frontend = base.video.hdmi_out.frontend

# and the mode must be set prior to starting the output. In this case we are just going to use the same mode as the input.

hdmiout_frontend.mode = hdmiin_frontend.mode
hdmiout_frontend.start()

# Note that nothing will be displayed on the screen as no video data is currently being sent.

# ## Colorspace conversion
#
# The colorspace converter operates on each pixel independently using a 3x4 matrix to transform the pixels. The converter is programmed with a list of twelve coefficients in the following order:
#
# |     |in1 |in2 |in3 | 1  |
# |-----|----|----|----|----|
# |out1 |c1  |c2  |c3  |c10 |
# |out2 |c4  |c5  |c6  |c11 |
# |out3 |c7  |c8  |c9  |c12 |
#
# Each coefficient should be a floating point number between -2 and +2.
#
# The pixels to and from the HDMI frontends are in BGR order so a list of coefficients to convert from the input format to RGB would be:
#
#     [0, 0, 1,
#      0, 1, 0,
#      1, 0, 0,
#      0, 0, 0]
#
# reversing the order of the pixels and not adding any bias.
#
# The driver for the colorspace converters has a single property that contains the list of coefficients.

# +
colorspace_in = base.video.hdmi_in.color_convert
colorspace_out = base.video.hdmi_out.color_convert

bgr2rgb = [0, 0, 1,
           0, 1, 0,
           1, 0, 0,
           0, 0, 0]

colorspace_in.colorspace = bgr2rgb
colorspace_out.colorspace = bgr2rgb

colorspace_in.colorspace
# -

# ## Pixel format conversion
#
# The pixel format converters convert between the 24-bit signal used by the HDMI frontends and the colorspace converters to either an 8, 24, or 32 bit signal. 24-bit mode passes the input straight through, 32-bit mode pads each pixel with an additional zeroed channel, and 8-bit mode selects the first channel in the pixel. This is exposed by a single property to set or get the number of bits.

# +
pixel_in = base.video.hdmi_in.pixel_pack
pixel_out = base.video.hdmi_out.pixel_unpack

pixel_in.bits_per_pixel = 8
pixel_out.bits_per_pixel = 8

pixel_in.bits_per_pixel
# -

# ## Video DMA
#
# The final element in the pipeline is the video DMA, which transfers video frames to and from memory. The VDMA consists of two channels, one for each direction, which operate completely independently. To use a channel its mode must be set prior to `start` being called. After the DMA is started, `readframe` and `writeframe` transfer frames. Frames are only transferred once, with the call blocking if necessary. `asyncio` coroutines are available as `readframe_async` and `writeframe_async` which yield instead of blocking. A frame of the size of the output can be retrieved from the VDMA by calling `writechannel.newframe()`. This frame is not guaranteed to be initialised to blank so should be completely written before being handed back.

# +
inputmode = hdmiin_frontend.mode
framemode = VideoMode(inputmode.width, inputmode.height, 8)
vdma = base.video.axi_vdma

vdma.readchannel.mode = framemode
vdma.readchannel.start()
vdma.writechannel.mode = framemode
vdma.writechannel.start()
# -

frame = vdma.readchannel.readframe()
vdma.writechannel.writeframe(frame)

# In this case, because we are only using 8 bits per pixel, only the red channel is read and displayed.
#
# The two channels can be tied together, which will ensure that the input is always mirrored to the output.

vdma.readchannel.tie(vdma.writechannel)

# ### Frame Ownership
#
# The VDMA driver has a strict model of frame ownership. Any frames returned by `readframe` or `newframe` are owned by the user and should be destroyed by the user when no longer needed by calling `frame.freebuffer()`. Frames handed back to the VDMA with `writeframe` are no longer owned by the user and should not be touched - the data may disappear at any time.

# ## Cleaning up
#
# It is vital to stop the VDMA before reprogramming the bitstream, otherwise the memory system of the chip can be placed into an undefined state. If the monitor does not power on when starting the VDMA this is the likely cause.

vdma.readchannel.stop()
vdma.writechannel.stop()

# Copyright (C) 2020 Xilinx, Inc
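# As a footnote to the colorspace section: the 3x4 transform can be modelled in NumPy to sanity-check a coefficient list before programming the hardware. This is a pure software sketch, not the driver API; the coefficient layout (nine matrix coefficients c1-c9 followed by three biases c10-c12) is inferred from the bgr2rgb example above:

```python
import numpy as np

def apply_colorspace(frame, coeffs):
    """Software model of the 12-coefficient colorspace converter:
    out = M @ pixel + bias, with M the 3x3 block (c1..c9) and bias (c10..c12)."""
    c = np.asarray(coeffs, dtype=float)
    m, bias = c[:9].reshape(3, 3), c[9:]
    # frame is HxWx3; multiply each pixel vector by M and add the bias
    return frame.astype(float) @ m.T + bias

bgr2rgb = [0, 0, 1,
           0, 1, 0,
           1, 0, 0,
           0, 0, 0]
pixel = np.array([[[10.0, 20.0, 30.0]]])  # a single BGR pixel
print(apply_colorspace(pixel, bgr2rgb))   # channels reversed to RGB
```

Running a test frame through this model and comparing against hardware output is a quick way to catch a mis-ordered coefficient list.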
Pynq-ZU/base/notebooks/video/hdmi_video_pipeline.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Watch Me Code 2: Say My Name
#
# Same program as WMC 1, but with a for loop.

name = input("What is your name? ")
times = int(input("How many times would you like me to say your name %s? " % name))
for i in range(times):
    print(name)
content/lessons/05/Watch-Me-Code/WMC2-Say-My-Name.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# # Building models
#
# - Trying to create a PDF where the parameters are things that you care about
#
# ## Bayesian models
#
# - How to change the parameters to better capture biology
#
# ## NN
#
# - More work to figure out what it actually learned
#
# What we learn in both cases is based on how we define parameters and which ones we use

# # Models and enhancers
#
# ## Enhancers
#
# - Very long
# - Actual functional sequences are short and sparse
#
# ### Enhancers as mixtures of different kinds of sequences
#
# - Useful sequences punctuated with noise
# - Not every base should come from the same distribution then
# - How do we learn to ignore noise though? Isn't that kind of circular?

# ## Models with multiple sets of parameters
#
# - $\Phi_{0}$ and $\Phi_{1}$ are the two vectors of parameters

# ### Training the model
#
# - Calculating the likelihood of the data given the model
#
# $P(X | \Phi_{0}, \Phi_{1}) = P(X | \theta) = \prod_{i} \prod_{j} P(X_{ij} | \theta)$
#
# - The problem with the above is that we have multiple parameter sets: each base is from either the foreground or the background, and we don't know which.
# - Solve by introducing a new set of variables:
#   - $C_{ij} = 0$ if $X_{ij}$ is from the foreground, $C_{ij} = 1$ if $X_{ij}$ is from the background
# - Trying to define a process we can sample from, and assume that the same process generated the training data
# - Assuming all enhancers contain both background and foreground, we need to decide whether e.g. $C_{11}$ is foreground or background:
#   - $P(C_{ij} = 0 | \lambda) = \lambda_{1}$
#   - $P(C_{ij} = 1 | \lambda) = \lambda_{2}$
#   - $\lambda_{1} + \lambda_{2} = 1$
#
# $P(X_{ij} | C_{ij} = k, \Phi) = \Phi_{k, X_{ij}}$
#
# As long as we have the $C$ variables, the likelihood function becomes easier to write.

# Every base in the input data can be assumed independent in this model:
#
# $P(X, C | \theta) = \prod_{i} \prod_{j} P(X_{ij}, C_{ij} | \theta)$
#
# Expand over the cases $C_{ij} = 0$ and $C_{ij} = 1$:
#
# $P(X_{ij}, C_{ij} | \theta) = P(C_{ij}) P(X_{ij} | C_{ij}, \theta)$
#
# $= \prod_{l=0}^{1} \left[ P(C_{ij} = l) P(X_{ij} | C_{ij} = l, \theta) \right]^{[C_{ij} = l]}$
#
# where $[C_{ij} = l]$ is an indicator that is 1 or 0 depending on whether the base is assigned to component $l$.
#
# $\log P(X | \theta) = \left[ \sum_{C} q(C) \log \frac{P(X, C | \theta)}{q(C)} \right] + KL(q(C) \| P(C | X))$
#
# - $P(C|X)$: best guess whether each base is foreground or background, given the data
# - Expectation maximization
#   - Regular (using this one)
#   - Variational inference
#
# The $KL$ divergence term vanishes when $q(C) = P(C|X)$
#
# $P(C | X, \theta) = P(X, C | \theta) / P(X | \theta)$
#
# Each $C_{ij}$ can only be 0 or 1; the probabilities of 0 and 1 must sum to 1.
#
# - Expected value of $X$ under some distribution:
#
# $E[X] = \sum_{x'} P(X = x')\, x'$
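# The E-step quantity $P(C|X)$ above can be sketched for a toy two-component base model. The $\lambda$ and $\Phi$ values below are made up, purely to show Bayes' rule applied per base:

```python
import numpy as np

bases = 'ACGT'
lam = np.array([0.3, 0.7])          # P(C=0) foreground, P(C=1) background
phi = np.array([
    [0.4, 0.1, 0.1, 0.4],           # foreground emission probabilities over A,C,G,T
    [0.25, 0.25, 0.25, 0.25],       # background is uniform
])

def responsibilities(seq):
    """P(C_ij = k | X_ij) for each base in seq, by Bayes' rule."""
    idx = [bases.index(b) for b in seq]
    joint = lam[:, None] * phi[:, idx]   # P(C=k) * P(x | C=k), shape (2, len(seq))
    return joint / joint.sum(axis=0)     # normalize over the two components

post = responsibilities('AACG')
print(post.round(3))
```

The A's pick up a higher foreground responsibility than the C, since the toy foreground distribution favors A/T; in a real EM loop these posteriors would then reweight the M-step updates of $\lambda$ and $\Phi$.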
notes/10-5-21.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="Xl5fXSmrT8QV" import matplotlib.pyplot as plt import numpy as np from scipy import stats # + id="ex5KdQ4MQuTN" plt.rcParams["figure.figsize"] = (16,12) plt.rcParams.update({'font.size': 24}) # + [markdown] id="bv6zavSEhTYS" # # **Analytical Solutions** # # **Analytical solutions, also called closed-form solutions, are mathematical solutions in the form of math expressions.** # # - **Transparency: Because analytical solutions are presented as math expressions, they offer a clear view into how variables and interactions between variables affect the result.** # # - **Efficiency: Algorithms and models expressed with analytical solutions are often more efficient than equivalent numeric implementations.** # + [markdown] id="uD5AfaLdPcw8" # Source: [Derive closed-form analytical solutions to math and engineering problems](https://www.mathworks.com/discovery/analytical-solution.html) from MATLAB # + [markdown] id="hmWO4TqViXRr" # ## **Example of an Analytical Solution** # # **Finding the integral of the function $f(x)$ from [0,10]** # # $\int_0^{10} x \ dx$ # # $= \frac{x^2}{2} \Big|_0^{10}$ # # $= \frac{10^2}{2} - \frac{0^2}{2}$ # # $= \frac{100}{2}$ # # $=50$ # + id="tc8pXv8QUvnV" def func(x): return x # + colab={"base_uri": "https://localhost:8080/", "height": 742} id="dk-LOJR06NPw" outputId="9f803773-7cfa-4dae-b1dc-9812aba33293" plt.plot(np.arange(0,10,.1), [func(x) for x in np.arange(0,10,.1)],lw=2.5,label = '$f(x) = x$') plt.fill_between(np.arange(0,10,.1), [func(x) for x in np.arange(0,10,.1)], color='lightblue') plt.axvline(color='black') plt.axhline(color='black') plt.title('Analytical Solution') plt.grid() plt.legend(bbox_to_anchor=(.5, .5, .12, .4)); # + [markdown] id="TjM_gjOVj-Z9" # # **Numerical (Computational) Solutions** # # **Numerical methods are 
techniques for solving mathematical problems that cannot readily (or at all) be solved by analytical methods.** # # - **Numerical solutions are available only at selected (discrete) solution points, not at all points covered by the functions as in the case of analytical solution methods.** # # - **Numerical methods are trial-and-error processes. Typically, users need to # estimate an initial solution and choose the increment of the variable over which the intended solution will range.** # # - **Two disadvantages of numerical methods are that they are noisy and that they take longer to compute relative to analytic methods.** # + [markdown] id="edt4HxEJ22xm" # Source: [Numerical Solution Methods for Engineering Analysis](https://www.sjsu.edu/me/docs/hsu-Chapter%2010%20Numerical%20solution%20methods.pdf) by <NAME> # + [markdown] id="XzfgVAxX-EBH" # Reference: [Monte Carlo Integration](https://cs184.eecs.berkeley.edu/sp21/lecture/12-0/monte-carlo-integration) by <NAME> and <NAME> # + [markdown] id="u2VBnqlMqp0A" # Reference: [Monte Carlo Integration](https://cs.dartmouth.edu/wjarosz/publications/dissertation/chapter6.pdf) by <NAME> # + [markdown] id="-i0F2v99vxiu" # # **Monte Carlo Integration** # # **In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses points at which the integrand is evaluated.
This method is particularly useful for higher-dimensional integrals.** # # $\large{\langle F^N \rangle = (b-a)\frac{1}{N} \sum_{i=0}^{N-1}f(x_i) \approx \int_a^b f(x)\,dx}$ # + [markdown] id="-52xYafMKoYg" # Source: [Monte Carlo integration](https://en.wikipedia.org/wiki/Monte_Carlo_integration) from Wikipedia # + [markdown] id="emfa8BIoitSS" # Source: [Monte Carlo integration and random numbers](https://www.mv.helsinki.fi/home/rummukai/simu/random) by <NAME> # + id="1jOXHKdy6YJn" def mc_integral(func, limits = [0,1],sample_size = 1000): sample_list = [] while len(sample_list) < sample_size: sample_list.append(func(np.random.uniform(low = limits[0],high = limits[1]))) return [sum(sample_list) * ((limits[1] - limits[0])/sample_size),sample_list] # + id="QHvor4-bYF-s" integral_estimate, list_sample = mc_integral(func, limits=[0,10], sample_size = 200) # + colab={"base_uri": "https://localhost:8080/", "height": 742} id="zjTJCuZtkkIE" outputId="d8049a8d-ef45-41bc-ba2f-a84417f8aa0c" plt.plot(np.arange(0,10,.1), [func(x) for x in np.arange(0,10,.1)],lw=2.5, color = 'green') plt.bar(sorted(list_sample),np.linspace(0,10,len(list_sample)), color = 'lightgreen',width=.1,edgecolor='darkgreen',lw=.05) plt.axvline(color='black') plt.axhline(color='black') plt.title('Monte Carlo Integration Solution') plt.xticks(np.arange(-1,11,2)) plt.yticks(np.arange(-1,11,2)) plt.grid(); # + colab={"base_uri": "https://localhost:8080/"} id="pg8Dqg3zmkqE" outputId="e39ffb15-5774-46b8-90b9-66e4962af569" integral_estimate # + [markdown] id="DiI33Wvh5ikD" # # **Estimating a Hard to Solve Integral** # + [markdown] id="C2WxGpJo-fnI" # **The probability density function of a normal distribution: $\large{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}}$** # # $\large{=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}}$ # + [markdown] id="HdmRNTPSc5mE" # ## **Example:** # + [markdown] id="tN1otLopdRcJ" # **Miraculin—a protein naturally # produced in a rare tropical fruit—can
convert a sour taste # into a sweet taste. Consequently, miraculin has the potential # to be an alternative low-calorie sweetener. In Plant # Science (May, 2010), a group of Japanese environmental # engineers investigated the ability of a hybrid tomato plant # to produce miraculin. For a particular generation of the # tomato plant, the amount $Y$ of miraculin produced (measured # in micro-grams per gram of fresh weight) had a mean # of 105.3 and a standard deviation of 8.0. Assume that $Y$ is # normally distributed.** # # **Find the probability that the amount of miraculin produced for a batch of tomatoes ranges from 100 micro-grams to 110 micro-grams.** # + [markdown] id="h4EOcdWpeOKz" # **$\frac{1}{8\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-105.3}{8}\right)^2}$** # + [markdown] id="RE_cfJj-c9Zz" # Source: [Statistics for Engineering and the Sciences](https://www.routledge.com/Statistics-for-Engineering-and-the-Sciences/Mendenhall-Sincich/p/book/9781498728850) by <NAME> and <NAME> # + id="4N0Rq4klDoQO" def norm_dist(x, mu = 105.3, sigma = 8): return 1/(sigma*(2*np.pi)**.5) * np.e**((-1/2) * ((x-mu)/sigma)**2) mu,sigma = 105.3,8 # + id="wOQOREv-YHJI" x_i = np.arange(mu-(sigma*3),mu+(sigma*3),.01) y = [norm_dist(x) for x in x_i] # + colab={"base_uri": "https://localhost:8080/", "height": 742} id="KcV3mMbKD8oq" outputId="a20e2da8-4702-45cb-c34a-f86892dfd075" plt.plot(x_i,y) plt.axvline(x=105.3, color = 'purple', label = f'Mean: {mu}', linestyle='--') plt.axhline(color = 'black') plt.xlabel('Miraculin (in micro-grams)') plt.legend(); # + id="VDvFNme9brg4" integral_estimate, mc_sample = mc_integral(norm_dist, limits=[100,110], sample_size = 1000) # + colab={"base_uri": "https://localhost:8080/"} id="7MueIOsLk_vB" outputId="ec7725b4-370c-48b5-f8e0-2fc134efbc4b" integral_estimate # + [markdown] id="MFbl7n70MTm6" # ### **Checking the Monte Carlo integration estimate against z-score** # + colab={"base_uri": "https://localhost:8080/"} id="qjYI68FbME5v"
outputId="205f9163-25f1-4a44-e888-42b6173d2110" a = (100-mu)/sigma b = (110-mu)/sigma analytic_solution = stats.norm.cdf(b)-stats.norm.cdf(a) analytic_solution # + id="9EbXW9SRlY4H" mc_sample_plot = sorted(mc_integral(norm_dist, limits=[100,mu], sample_size = 50)[1]) + sorted(mc_integral(norm_dist, limits=[mu,110], sample_size = 50)[1], reverse=True) # + id="yIoWbPeAlmvy" colab={"base_uri": "https://localhost:8080/", "height": 714} outputId="163db999-7d30-463c-a1ad-a9d69a0013c6" plt.plot(x_i,y) plt.bar(np.linspace(100,110,len(mc_sample_plot)), mc_sample_plot, color = 'lightgreen',width=.1,edgecolor='darkgreen',lw=.1); # + id="x0hNRlD1-6cT" error_plot = [] for i in range(10,10010,10): mc_sample_i = mc_integral(norm_dist, limits=[100,110], sample_size = i)[0] error_plot.append(abs(analytic_solution - mc_sample_i)) # + colab={"base_uri": "https://localhost:8080/", "height": 770} id="aR4uvV_u_9tm" outputId="fd0b6dc4-08e6-4236-bc43-16fc723a47d3" plt.plot([i for i in range(10,10010,10)],error_plot) plt.title('Monte Carlo Integration Error') plt.xlabel('Number of Samples') plt.ylabel('Error'); # + colab={"base_uri": "https://localhost:8080/"} id="8sCEP3XyCIxV" outputId="8dbd8c6b-ff12-4927-b4a9-683cd06032bc" 1/len(mc_sample)**.5 # + [markdown] id="AERZV7t1bsfq" # # **References and Additional Learning** # + [markdown] id="2cUUis-5b6gY" # ## **Textbook** # - **[Statistics for Engineering and the Sciences](https://www.routledge.com/Statistics-for-Engineering-and-the-Sciences/Mendenhall-Sincich/p/book/9781498728850) by <NAME> and <NAME>** # # # ## **Websites** # # - **[Derive closed-form analytical solutions to math and engineering problems](https://www.mathworks.com/discovery/analytical-solution.html) from MATLAB** # # - **[Monte Carlo integration and random numbers](https://www.mv.helsinki.fi/home/rummukai/simu/random) by <NAME>** # # - **[Monte Carlo integration](https://en.wikipedia.org/wiki/Monte_Carlo_integration) from Wikipedia** # # - **[Monte Carlo 
Integration](https://cs184.eecs.berkeley.edu/sp21/lecture/12-0/monte-carlo-integration) by <NAME> and <NAME>** # # - **[Monte Carlo Integration](https://cs.dartmouth.edu/wjarosz/publications/dissertation/chapter6.pdf) by <NAME>** # # - **[Numerical Solution Methods for Engineering Analysis](https://www.sjsu.edu/me/docs/hsu-Chapter%2010%20Numerical%20solution%20methods.pdf) by <NAME>** # + [markdown] id="jaKbFoXQbuip" # # **Connect** # # - **Feel free to connect with Adrian on [YouTube](https://www.youtube.com/channel/UCPuDxI3xb_ryUUMfkm0jsRA), [LinkedIn](https://www.linkedin.com/in/adrian-dolinay-frm-96a289106/), [Twitter](https://twitter.com/DolinayG) and [GitHub](https://github.com/ad17171717)**
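As a closing aside (not part of the original workshop code), the sampling loop in `mc_integral` above can be vectorized with NumPy; this is a sketch of the same estimator drawn all at once:

```python
import numpy as np

def mc_integral_vectorized(func, limits=(0.0, 1.0), sample_size=1000, seed=None):
    """Monte Carlo estimate of the integral of func over [a, b].

    Vectorized equivalent of the mc_integral loop: draw every sample
    point in one call, evaluate func on the whole array, then average.
    """
    a, b = limits
    rng = np.random.default_rng(seed)
    xs = rng.uniform(a, b, size=sample_size)
    return (b - a) * np.mean(func(xs)), xs

estimate, _ = mc_integral_vectorized(lambda x: x, limits=(0, 10),
                                     sample_size=100_000, seed=0)
print(estimate)
```

With a large `sample_size` the estimate lands close to the analytical value of 50, and the error shrinks roughly as $1/\sqrt{N}$, consistent with the error plot earlier in the notebook.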
Statistics Workshops/Statistics_with_Python!_Monte_Carlo_Integration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.027978, "end_time": "2020-09-12T23:17:29.031838", "exception": false, "start_time": "2020-09-12T23:17:29.003860", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # # RadarCOVID-Report # + [markdown] papermill={"duration": 0.023103, "end_time": "2020-09-12T23:17:29.078104", "exception": false, "start_time": "2020-09-12T23:17:29.055001", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ## Data Extraction # + papermill={"duration": 1.725341, "end_time": "2020-09-12T23:17:30.826356", "exception": false, "start_time": "2020-09-12T23:17:29.101015", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] import datetime import logging import os import shutil import tempfile import textwrap import uuid import dataframe_image as dfi import matplotlib.ticker import numpy as np import pandas as pd import seaborn as sns # %matplotlib inline # + papermill={"duration": 0.031399, "end_time": "2020-09-12T23:17:30.881086", "exception": false, "start_time": "2020-09-12T23:17:30.849687", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] sns.set() matplotlib.rcParams['figure.figsize'] = (15, 6) extraction_datetime = datetime.datetime.utcnow() extraction_date = extraction_datetime.strftime("%Y-%m-%d") extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1) extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d") extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H") # + [markdown] papermill={"duration": 0.022996, "end_time": "2020-09-12T23:17:30.927019", "exception": false, "start_time": "2020-09-12T23:17:30.904023", "status": "completed"} tags=[] # ### COVID-19 Cases # + papermill={"duration": 0.773202, "end_time": 
"2020-09-12T23:17:31.723040", "exception": false, "start_time": "2020-09-12T23:17:30.949838", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] confirmed_df = pd.read_csv("https://covid19tracking.narrativa.com/csv/confirmed.csv") radar_covid_countries = {"Spain"} # radar_covid_regions = { ... } confirmed_df = confirmed_df[confirmed_df["Country_EN"].isin(radar_covid_countries)] # confirmed_df = confirmed_df[confirmed_df["Region"].isin(radar_covid_regions)] # set(confirmed_df.Region.tolist()) == radar_covid_regions # + papermill={"duration": 0.042166, "end_time": "2020-09-12T23:17:31.790366", "exception": false, "start_time": "2020-09-12T23:17:31.748200", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] confirmed_country_columns = list(filter(lambda x: x.startswith("Country_"), confirmed_df.columns)) confirmed_regional_columns = confirmed_country_columns + ["Region"] confirmed_df.drop(columns=confirmed_regional_columns, inplace=True) confirmed_df = confirmed_df.sum().to_frame() confirmed_df.tail() # + papermill={"duration": 0.043034, "end_time": "2020-09-12T23:17:31.857265", "exception": false, "start_time": "2020-09-12T23:17:31.814231", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] confirmed_df.reset_index(inplace=True) confirmed_df.columns = ["sample_date_string", "cumulative_cases"] confirmed_df.sort_values("sample_date_string", inplace=True) confirmed_df["new_cases"] = confirmed_df.cumulative_cases.diff() confirmed_df["rolling_mean_new_cases"] = confirmed_df.new_cases.rolling(7).mean() confirmed_df.tail() # + papermill={"duration": 0.042223, "end_time": "2020-09-12T23:17:31.923298", "exception": false, "start_time": "2020-09-12T23:17:31.881075", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] extraction_date_confirmed_df = \ confirmed_df[confirmed_df.sample_date_string == extraction_date] extraction_previous_date_confirmed_df = \ confirmed_df[confirmed_df.sample_date_string == extraction_previous_date].copy() if 
extraction_date_confirmed_df.empty and \ not extraction_previous_date_confirmed_df.empty: extraction_previous_date_confirmed_df["sample_date_string"] = extraction_date extraction_previous_date_confirmed_df["new_cases"] = \ extraction_previous_date_confirmed_df.rolling_mean_new_cases extraction_previous_date_confirmed_df["cumulative_cases"] = \ extraction_previous_date_confirmed_df.new_cases + \ extraction_previous_date_confirmed_df.cumulative_cases confirmed_df = confirmed_df.append(extraction_previous_date_confirmed_df) confirmed_df.tail() # + papermill={"duration": 0.22433, "end_time": "2020-09-12T23:17:32.172096", "exception": false, "start_time": "2020-09-12T23:17:31.947766", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] confirmed_df[["new_cases", "rolling_mean_new_cases"]].plot() # + [markdown] papermill={"duration": 0.026157, "end_time": "2020-09-12T23:17:32.225153", "exception": false, "start_time": "2020-09-12T23:17:32.198996", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Extract API TEKs # + papermill={"duration": 0.498996, "end_time": "2020-09-12T23:17:32.750746", "exception": false, "start_time": "2020-09-12T23:17:32.251750", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] from Modules.RadarCOVID import radar_covid exposure_keys_df = radar_covid.download_last_radar_covid_exposure_keys(days=14) exposure_keys_df[[ "sample_date_string", "source_url", "region", "key_data"]].head() # + papermill={"duration": 0.046156, "end_time": "2020-09-12T23:17:32.824070", "exception": false, "start_time": "2020-09-12T23:17:32.777914", "status": "completed"} tags=[] exposure_keys_summary_df = \ exposure_keys_df.groupby(["sample_date_string"]).key_data.nunique().to_frame() exposure_keys_summary_df.sort_index(ascending=False, inplace=True) exposure_keys_summary_df.rename(columns={"key_data": "tek_count"}, inplace=True) exposure_keys_summary_df.head() # + [markdown] papermill={"duration": 0.027207, "end_time": 
"2020-09-12T23:17:32.878530", "exception": false, "start_time": "2020-09-12T23:17:32.851323", "status": "completed"} tags=[] # ### Dump API TEKs # + papermill={"duration": 0.061835, "end_time": "2020-09-12T23:17:32.967526", "exception": false, "start_time": "2020-09-12T23:17:32.905691", "status": "completed"} tags=[] tek_list_df = exposure_keys_df[["sample_date_string", "key_data"]].copy() tek_list_df["key_data"] = tek_list_df["key_data"].apply(str) tek_list_df.rename(columns={ "sample_date_string": "sample_date", "key_data": "tek_list"}, inplace=True) tek_list_df = tek_list_df.groupby( "sample_date").tek_list.unique().reset_index() tek_list_df["extraction_date"] = extraction_date tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json( "Data/TEKs/Current/RadarCOVID-TEKs.json", lines=True, orient="records") tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json( "Data/TEKs/Daily/RadarCOVID-TEKs-" + extraction_date + ".json", lines=True, orient="records") tek_list_df.to_json( "Data/TEKs/Hourly/RadarCOVID-TEKs-" + extraction_date_with_hour + ".json", lines=True, orient="records") tek_list_df.head() # + [markdown] papermill={"duration": 0.027818, "end_time": "2020-09-12T23:17:33.023265", "exception": false, "start_time": "2020-09-12T23:17:32.995447", "status": "completed"} tags=[] # ### Load TEK Dumps # + papermill={"duration": 0.038315, "end_time": "2020-09-12T23:17:33.089175", "exception": false, "start_time": "2020-09-12T23:17:33.050860", "status": "completed"} tags=[] import glob def load_extracted_teks(mode, limit=None) -> pd.DataFrame: extracted_teks_df = pd.DataFrame() paths = list(reversed(sorted(glob.glob(f"Data/TEKs/{mode}/RadarCOVID-TEKs-*.json")))) if limit: paths = paths[:limit] for path in paths: logging.info(f"Loading TEKs from '{path}'...") iteration_extracted_teks_df = pd.read_json(path, lines=True) extracted_teks_df = extracted_teks_df.append( 
iteration_extracted_teks_df, sort=False) return extracted_teks_df # + [markdown] papermill={"duration": 0.0278, "end_time": "2020-09-12T23:17:33.144673", "exception": false, "start_time": "2020-09-12T23:17:33.116873", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Daily New TEKs # + papermill={"duration": 0.104609, "end_time": "2020-09-12T23:17:33.276902", "exception": false, "start_time": "2020-09-12T23:17:33.172293", "status": "completed"} tags=[] daily_extracted_teks_df = load_extracted_teks(mode="Daily", limit=14) daily_extracted_teks_df.head() # + papermill={"duration": 0.052132, "end_time": "2020-09-12T23:17:33.358877", "exception": false, "start_time": "2020-09-12T23:17:33.306745", "status": "completed"} tags=[] tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply( lambda x: set(sum(x, []))).reset_index() tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True) tek_list_df.head() # + papermill={"duration": 0.043579, "end_time": "2020-09-12T23:17:33.430973", "exception": false, "start_time": "2020-09-12T23:17:33.387394", "status": "completed"} tags=[] new_tek_df = tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() new_tek_df.rename(columns={ "tek_list": "new_tek_count", "extraction_date": "sample_date_string",}, inplace=True) new_tek_df.head() # + papermill={"duration": 0.050006, "end_time": "2020-09-12T23:17:33.509740", "exception": false, "start_time": "2020-09-12T23:17:33.459734", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] new_tek_devices_df = daily_extracted_teks_df.copy() new_tek_devices_df["new_sample_extraction_date"] = \ pd.to_datetime(new_tek_devices_df.sample_date) + datetime.timedelta(1) new_tek_devices_df["extraction_date"] = pd.to_datetime(new_tek_devices_df.extraction_date) new_tek_devices_df = new_tek_devices_df[ new_tek_devices_df.new_sample_extraction_date == new_tek_devices_df.extraction_date] 
new_tek_devices_df.head() # + papermill={"duration": 0.046977, "end_time": "2020-09-12T23:17:33.586222", "exception": false, "start_time": "2020-09-12T23:17:33.539245", "status": "completed"} tags=[] new_tek_devices_df.set_index("extraction_date", inplace=True) new_tek_devices_df = new_tek_devices_df.tek_list.apply(lambda x: len(set(x))).to_frame() new_tek_devices_df.reset_index(inplace=True) new_tek_devices_df.rename(columns={ "extraction_date": "sample_date_string", "tek_list": "new_tek_devices"}, inplace=True) new_tek_devices_df["sample_date_string"] = new_tek_devices_df.sample_date_string.dt.strftime("%Y-%m-%d") new_tek_devices_df.head() # + [markdown] papermill={"duration": 0.029731, "end_time": "2020-09-12T23:17:33.645668", "exception": false, "start_time": "2020-09-12T23:17:33.615937", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Hourly New TEKs # + papermill={"duration": 0.162035, "end_time": "2020-09-12T23:17:33.837732", "exception": false, "start_time": "2020-09-12T23:17:33.675697", "status": "completed"} tags=[] hourly_extracted_teks_df = load_extracted_teks(mode="Hourly", limit=24) hourly_extracted_teks_df.head() hourly_tek_list_df = hourly_extracted_teks_df.groupby("extraction_date_with_hour").tek_list.apply( lambda x: set(sum(x, []))).reset_index() hourly_tek_list_df = hourly_tek_list_df.set_index("extraction_date_with_hour").sort_index(ascending=True) hourly_new_tek_df = hourly_tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() hourly_new_tek_df.rename(columns={ "tek_list": "new_tek_count"}, inplace=True) hourly_new_tek_df.tail() # + papermill={"duration": 0.060056, "end_time": "2020-09-12T23:17:33.927949", "exception": false, "start_time": "2020-09-12T23:17:33.867893", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] hourly_new_tek_devices_df = hourly_extracted_teks_df.copy() hourly_new_tek_devices_df["new_sample_extraction_date"] = \ 
pd.to_datetime(hourly_new_tek_devices_df.sample_date) + datetime.timedelta(1) hourly_new_tek_devices_df["extraction_date"] = pd.to_datetime(hourly_new_tek_devices_df.extraction_date) hourly_new_tek_devices_df = hourly_new_tek_devices_df[ hourly_new_tek_devices_df.new_sample_extraction_date == hourly_new_tek_devices_df.extraction_date] hourly_new_tek_devices_df.set_index("extraction_date_with_hour", inplace=True) hourly_new_tek_devices_df_ = pd.DataFrame() for i, chunk_df in hourly_new_tek_devices_df.groupby("extraction_date"): chunk_df = chunk_df.copy() chunk_df.sort_index(inplace=True) chunk_tek_count_df = chunk_df.tek_list.apply(lambda x: len(set(x))) chunk_df = chunk_tek_count_df.diff().fillna(chunk_tek_count_df).to_frame() hourly_new_tek_devices_df_ = hourly_new_tek_devices_df_.append(chunk_df) hourly_new_tek_devices_df = hourly_new_tek_devices_df_ hourly_new_tek_devices_df.reset_index(inplace=True) hourly_new_tek_devices_df.rename(columns={ "tek_list": "new_tek_devices"}, inplace=True) hourly_new_tek_devices_df.tail() # + papermill={"duration": 0.049462, "end_time": "2020-09-12T23:17:34.007849", "exception": false, "start_time": "2020-09-12T23:17:33.958387", "status": "completed"} tags=[] hourly_summary_df = hourly_new_tek_df.merge( hourly_new_tek_devices_df, on=["extraction_date_with_hour"], how="outer") hourly_summary_df["datetime_utc"] = pd.to_datetime( hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H") hourly_summary_df.set_index("datetime_utc", inplace=True) hourly_summary_df.tail() # + [markdown] papermill={"duration": 0.031025, "end_time": "2020-09-12T23:17:34.069767", "exception": false, "start_time": "2020-09-12T23:17:34.038742", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Data Merge # + papermill={"duration": 0.049474, "end_time": "2020-09-12T23:17:34.150282", "exception": false, "start_time": "2020-09-12T23:17:34.100808", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] result_summary_df = 
exposure_keys_summary_df.merge(new_tek_df, on=["sample_date_string"], how="outer") result_summary_df.head() # + papermill={"duration": 0.046324, "end_time": "2020-09-12T23:17:34.227779", "exception": false, "start_time": "2020-09-12T23:17:34.181455", "status": "completed"} tags=[] result_summary_df = result_summary_df.merge(new_tek_devices_df, on=["sample_date_string"], how="outer") result_summary_df.head() # + papermill={"duration": 0.049789, "end_time": "2020-09-12T23:17:34.309750", "exception": false, "start_time": "2020-09-12T23:17:34.259961", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] result_summary_df = result_summary_df.merge(confirmed_df, on=["sample_date_string"], how="left") result_summary_df.head() # + papermill={"duration": 0.054475, "end_time": "2020-09-12T23:17:34.396293", "exception": false, "start_time": "2020-09-12T23:17:34.341818", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] result_summary_df["tek_count_per_new_case"] = \ result_summary_df.tek_count / result_summary_df.rolling_mean_new_cases result_summary_df["new_tek_count_per_new_case"] = \ result_summary_df.new_tek_count / result_summary_df.rolling_mean_new_cases result_summary_df["new_tek_devices_per_new_case"] = \ result_summary_df.new_tek_devices / result_summary_df.rolling_mean_new_cases result_summary_df["new_tek_count_per_new_tek_device"] = \ result_summary_df.new_tek_count / result_summary_df.new_tek_devices result_summary_df.head() # + papermill={"duration": 0.043168, "end_time": "2020-09-12T23:17:34.471828", "exception": false, "start_time": "2020-09-12T23:17:34.428660", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string) result_summary_df.set_index("sample_date", inplace=True) result_summary_df = result_summary_df.sort_index(ascending=False) # + [markdown] papermill={"duration": 0.032202, "end_time": "2020-09-12T23:17:34.536263", "exception": false, "start_time": 
"2020-09-12T23:17:34.504061", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ## Report Results # # + [markdown] papermill={"duration": 0.032146, "end_time": "2020-09-12T23:17:34.600880", "exception": false, "start_time": "2020-09-12T23:17:34.568734", "status": "completed"} tags=[] # ### Summary Table # + papermill={"duration": 0.055552, "end_time": "2020-09-12T23:17:34.688596", "exception": false, "start_time": "2020-09-12T23:17:34.633044", "status": "completed"} tags=[] result_summary_df_ = result_summary_df.copy() result_summary_df = result_summary_df[[ "tek_count", "new_tek_count", "new_cases", "rolling_mean_new_cases", "tek_count_per_new_case", "new_tek_count_per_new_case", "new_tek_devices", "new_tek_devices_per_new_case", "new_tek_count_per_new_tek_device"]] result_summary_df # + [markdown] papermill={"duration": 0.032811, "end_time": "2020-09-12T23:17:34.754475", "exception": false, "start_time": "2020-09-12T23:17:34.721664", "status": "completed"} tags=[] # ### Summary Plots # + papermill={"duration": 1.254186, "end_time": "2020-09-12T23:17:36.041518", "exception": false, "start_time": "2020-09-12T23:17:34.787332", "status": "completed"} tags=[] summary_ax_list = result_summary_df[[ "rolling_mean_new_cases", "tek_count", "new_tek_count", "new_tek_devices", "new_tek_count_per_new_tek_device", "new_tek_devices_per_new_case" ]].sort_index(ascending=True).plot.bar( title="Summary", rot=45, subplots=True, figsize=(15, 22)) summary_ax_list[-1].yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0)) # + [markdown] papermill={"duration": 0.035603, "end_time": "2020-09-12T23:17:36.113302", "exception": false, "start_time": "2020-09-12T23:17:36.077699", "status": "completed"} tags=[] # ### Hourly Summary Plots # + papermill={"duration": 0.54481, "end_time": "2020-09-12T23:17:36.693636", "exception": false, "start_time": "2020-09-12T23:17:36.148826", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] hourly_summary_ax_list = 
hourly_summary_df.plot.bar( title="Last 24h Summary", rot=45, subplots=True) # + [markdown] papermill={"duration": 0.037655, "end_time": "2020-09-12T23:17:36.768828", "exception": false, "start_time": "2020-09-12T23:17:36.731173", "status": "completed"} tags=[] # ### Publish Results # + papermill={"duration": 3.239973, "end_time": "2020-09-12T23:17:40.046337", "exception": false, "start_time": "2020-09-12T23:17:36.806364", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): media_path = get_temporary_image_path() dfi.export(df, media_path) return media_path summary_plots_image_path = save_temporary_plot_image(ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image(df=result_summary_df) hourly_summary_plots_image_path = save_temporary_plot_image(ax=hourly_summary_ax_list) # + [markdown] papermill={"duration": 0.037025, "end_time": "2020-09-12T23:17:40.120762", "exception": false, "start_time": "2020-09-12T23:17:40.083737", "status": "completed"} tags=[] # ### Save Results # + papermill={"duration": 0.056502, "end_time": "2020-09-12T23:17:40.214164", "exception": false, "start_time": "2020-09-12T23:17:40.157662", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv(report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html(report_resources_path_prefix + "Summary-Table.html") _ = shutil.copyfile(summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile(summary_table_image_path, report_resources_path_prefix + "Summary-Table.png") _ = 
shutil.copyfile(hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") report_daily_url_pattern = \ "https://github.com/pvieito/RadarCOVID-Report/blob/master/Notebooks/" \ "RadarCOVID-Report/{report_type}/RadarCOVID-Report-{report_date}.ipynb" report_daily_url = report_daily_url_pattern.format( report_type="Daily", report_date=extraction_date) report_hourly_url = report_daily_url_pattern.format( report_type="Hourly", report_date=extraction_date_with_hour) # + [markdown] papermill={"duration": 0.037224, "end_time": "2020-09-12T23:17:40.288609", "exception": false, "start_time": "2020-09-12T23:17:40.251385", "status": "completed"} tags=[] # ### Publish on README # + papermill={"duration": 0.053283, "end_time": "2020-09-12T23:17:40.378828", "exception": false, "start_time": "2020-09-12T23:17:40.325545", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() summary_table_html = result_summary_df.to_html() readme_contents = readme_contents.format( summary_table_html=summary_table_html, report_url_with_hour=report_hourly_url, extraction_date_with_hour=extraction_date_with_hour) with open("README.md", "w") as f: f.write(readme_contents) # + [markdown] papermill={"duration": 0.037112, "end_time": "2020-09-12T23:17:40.453080", "exception": false, "start_time": "2020-09-12T23:17:40.415968", "status": "completed"} pycharm={"name": "#%% md\n"} tags=[] # ### Publish on Twitter # + papermill={"duration": 2.234583, "end_time": "2020-09-12T23:17:42.724484", "exception": false, "start_time": "2020-09-12T23:17:40.489901", "status": "completed"} pycharm={"name": "#%%\n"} tags=[] enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule": import tweepy twitter_api_auth_keys = 
os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) hourly_summary_plots_media = api.media_upload(hourly_summary_plots_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, hourly_summary_plots_media.media_id, ] extraction_date_result_summary_df = \ result_summary_df[result_summary_df.index == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] new_teks = extraction_date_result_summary_df.new_tek_count.sum().astype(int) new_teks_last_hour = extraction_date_result_hourly_summary_df.new_tek_count.sum().astype(int) new_devices = extraction_date_result_summary_df.new_tek_devices.sum().astype(int) new_devices_last_hour = extraction_date_result_hourly_summary_df.new_tek_devices.sum().astype(int) new_tek_count_per_new_tek_device = \ extraction_date_result_summary_df.new_tek_count_per_new_tek_device.sum() new_tek_devices_per_new_case = \ extraction_date_result_summary_df.new_tek_devices_per_new_case.sum() status = textwrap.dedent(f""" Report Update – {extraction_date_with_hour} #ExposureNotification #RadarCOVID Shared Diagnoses Day Summary: - New TEKs: {new_teks} ({new_teks_last_hour:+d} last hour) - New Devices: {new_devices} ({new_devices_last_hour:+d} last hour, {new_tek_count_per_new_tek_device:.2} TEKs/device) - Usage Ratio: {new_tek_devices_per_new_case:.2%} devices/case Report Link: {report_hourly_url} """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids)
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-09-12.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # **Map SalishSea** # + # %matplotlib inline import numpy as np import xarray as xr import os from matplotlib import pyplot as plt, animation, rc from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.colors as mcolors from cartopy import crs, feature import cmocean cmap = cmocean.cm.deep # - # ## Paths # Define paths paths = { 'NEMO': '/results2/SalishSea/nowcast-green.201905/', 'coords': '/Users/jvalenti/MOAD/SSC_masks/coordinates_seagrid_SalishSea201702.nc', 'mask': '/Users/jvalenti/MOAD/SSC_masks/mesh_mask201702.nc', 'out': '/Users/jvalenti/MOAD/analysis-jose/notebooks/results/', } # ## Simulation coords = xr.open_dataset(paths['coords'], decode_times=False) mask = xr.open_dataset(paths['mask']) # + # create some data to use for the plot dt = 0.001 t = np.arange(0.0, 10.0, dt) r = np.exp(-t[:1000]/0.05) # impulse response x = np.random.randn(len(t)) s = np.convolve(x, r)[:len(x)]*dt # colored noise fig = plt.figure(figsize=(9, 4),facecolor='white') ax = fig.add_subplot(121) # the main axes is subplot(111) by default plt.plot(t, s) plt.axis([0, 1, 1.1*np.amin(s), 2*np.amax(s)]) plt.xlabel('time (s)') plt.ylabel('current (nA)') plt.title('Subplot 1: \n Gaussian colored noise') axins = ax.inset_axes([0.5, 0.5, 0.47, 0.47]) axins.hist(s, 400) #plt.title('Probability') axins.set_xticklabels('') axins.set_yticklabels('') plt.show() # + # # Make map # blevels = list(np.arange(0,450,15)) # fig, ax = plt.subplots(figsize=(38, 16), subplot_kw={'projection': crs.Mercator()}) # ax.set_extent([-125.5, -122, 48, 50.5], crs=crs.PlateCarree()) # ax.add_feature(feature.GSHHSFeature('low', facecolor='lightgray',edgecolor='lightgray'),zorder=2) # ax.add_feature(feature.RIVERS, edgecolor='k',zorder=5) # 
#ax.add_feature(feature.OCEAN,zorder=1) # im=ax.contourf(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),cmap=cmap,levels=blevels) # #plt.contour(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),colors='w',levels=blevels,linewidths=0.05) # #plt.xticks(fontsize=14) # #plt.yticks(fontsize=14) # gl = ax.gridlines( # linestyle='--', color='gray', draw_labels=True, # xlocs=range(-125, -121), ylocs=range(47, 52),zorder=5) # gl.top_labels, gl.right_labels = False, False # cbar = fig.colorbar(im, location='bottom',aspect=60,shrink=0.3,pad=0.05) # cbar.set_label('Depth [m]') # ax.text(-0.05, 0.55, 'Latitude', va='bottom', ha='center', # rotation='vertical', rotation_mode='anchor', # transform=ax.transAxes, fontsize=14,weight="bold") # ax.text(0.5, -0.05, 'Longitude', va='bottom', ha='center', # rotation='horizontal', rotation_mode='anchor', # transform=ax.transAxes, fontsize=14,weight="bold") # #axins = ax.inset_axes([0.65, 0.75, 0.5, 0.5],projection=crs.PlateCarree()) # ax.set_extent([-160, -75, 65, 25], crs=crs.PlateCarree()) # ax.add_feature(feature.GSHHSFeature('intermediate', edgecolor='k', facecolor='lightgray')) # ax.add_feature(feature.BORDERS,zorder=3) # #plt.title('Probability') # gl = axins.gridlines(crs=crs.PlateCarree(), draw_labels=True, xlocs=np.linspace(-150,-50,5), ylocs=np.linspace(55,35,3), # linewidth=2, color='gray', alpha=0.5, linestyle='--') # gl.xlabel_style = {'size': 25} # gl.ylabel_style = {'size': 25} # gl.bottom_labels, gl.left_labels = False, False # plt.show() # #plt.savefig("/Users/jvalenti/Desktop/baty.pdf") # + # Make map blevels = list(np.arange(0,450,15)) fig, ax = plt.subplots(figsize=(38, 16), subplot_kw={'projection': crs.Mercator()}) ax.set_extent([-125.5, -122, 48, 50.5], crs=crs.PlateCarree()) ax.add_feature(feature.GSHHSFeature('high', facecolor='lightgray',edgecolor='lightgray'),zorder=2) ax.add_feature(feature.RIVERS, edgecolor='k',zorder=5) 
#ax.add_feature(feature.OCEAN,zorder=1) im=ax.contourf(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),cmap=cmap,levels=blevels) #plt.contour(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),colors='w',levels=blevels,linewidths=0.05) #plt.xticks(fontsize=14) #plt.yticks(fontsize=14) gl = ax.gridlines( linestyle='--', color='gray', draw_labels=True, xlocs=range(-125, -121), ylocs=range(47, 52),zorder=5) gl.top_labels, gl.right_labels = False, False cbar = fig.colorbar(im, location='bottom',aspect=60,shrink=0.3,pad=0.05) cbar.set_label('Depth [m]') ax.text(-0.05, 0.55, 'Latitude', va='bottom', ha='center', rotation='vertical', rotation_mode='anchor', transform=ax.transAxes, fontsize=14,weight="bold") ax.text(0.5, -0.05, 'Longitude', va='bottom', ha='center', rotation='horizontal', rotation_mode='anchor', transform=ax.transAxes, fontsize=14,weight="bold") plt.savefig("/Users/jvalenti/Desktop/baty.pdf") # - states_provinces = feature.NaturalEarthFeature( category='cultural', name='admin_1_states_provinces_lines', scale='50m', facecolor='none') # Make map fig, ax = plt.subplots(figsize=(20, 16), subplot_kw={'projection': crs.Mercator()}) ax.set_extent([-160, -75, 65, 25], crs=crs.PlateCarree()) ax.add_feature(feature.GSHHSFeature('intermediate', edgecolor='k', facecolor='lightgray')) #ax.add_feature(feature.OCEAN,zorder=1) ax.add_feature(feature.BORDERS,zorder=3) gl = ax.gridlines(crs=crs.PlateCarree(), draw_labels=True, xlocs=np.linspace(-150,-50,5), ylocs=np.linspace(55,35,3), linewidth=2, color='gray', alpha=0.5, linestyle='--') gl.xlabel_style = {'size': 25} gl.ylabel_style = {'size': 25} gl.bottom_labels, gl.left_labels = False, False plt.savefig("/Users/jvalenti/Desktop/map.pdf")
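The `contourf` call above bins bathymetry into 15 m classes via `blevels`. The same binning can be checked numerically with `np.digitize` before drawing a map — a small sketch independent of the NEMO coordinate and mask files:

```python
import numpy as np

blevels = np.arange(0, 450, 15)            # the same 15 m depth bins as the map
depths = np.array([0.0, 7.0, 14.9, 15.0, 430.0])
# index of the bin each depth falls in: blevels[i-1] <= depth < blevels[i]
bin_idx = np.digitize(depths, blevels)
```

This is handy for verifying that the chosen levels actually cover the data range (here 0–435 m) so no depths fall outside the colour scale.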
notebooks/parcels/Local/salishmap.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# Feature Extraction
# Unsupervised algorithm: it detects patterns in the data and finds the
# correlations between features.
# -

# # Importing the Libraries

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# # Importing the Datasets

dataset = pd.read_csv('E:\\Edu\\Data Science and ML\\Machinelearningaz\\Datasets\\Part 9 - Dimensionality Reduction\\Section 43 - Principal Component Analysis (PCA)\\Wine.csv')

dataset.shape

dataset.head()

dataset.describe()

X = dataset.iloc[:, 0:13].values
y = dataset.iloc[:, 13].values

# # Splitting Dataset into TrainingSet and TestSet

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

print(X_train)

print(X_test)

print(y_train)

print(y_test)
# to predict the customer segment for each wine

# # Feature Scaling

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

X_train
# y_train is not scaled because it's categorical data

# # Applying PCA

from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
explained_variance = pca.explained_variance_ratio_
# Take the first two principal components, which explain the most variance, so n_components=2

explained_variance

# # Fitting Logistic Regression to Training Set

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)

# # Predicting The Test set Results

y_pred = classifier.predict(X_test)

y_pred

# # Making Confusion Matrix

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm

# Accuracy
(14+15+6)/(14+1+15+6)

# # Visualising the Training Set results

# +
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
# each colour marks one of the three wine customer-segment classes
# -

# # Visualising the Test set results

from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
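The notebook fixes `n_components = 2` up front. A common alternative, sketched here on synthetic data rather than the Wine set, is to keep the smallest number of components whose cumulative explained variance clears a threshold:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_demo = rng.randn(100, 5) @ rng.randn(5, 13)   # rank-5 data embedded in 13 dims

pca_full = PCA().fit(X_demo)                    # fit all components first
cumvar = np.cumsum(pca_full.explained_variance_ratio_)
# smallest k whose cumulative explained variance reaches 95%
k = int(np.searchsorted(cumvar, 0.95) + 1)
```

For visualisation, two components are the right call regardless; the threshold rule matters when PCA feeds a downstream model.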
6 Dimensionality Reduction/PCA (Principle Component Analysis) Logistic Regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from selenium import webdriver from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.common.by import By class TimeoutException(Exception): pass import pandas as pd ################################################### # make chrome incognito and headless to run faster options = webdriver.ChromeOptions() options.add_argument('--incognito') options.add_argument('--headless') # change the driver_path to wherever your webdriver is located driver_path = "C:/Users/<NAME>/Downloads/chromedriver_win32/chromedriver.exe" driver = webdriver.Chrome(executable_path=driver_path, options=options) # consider automating for it to scroll through yahoo singapore and skip sponsored post and select news link # use this element to differentiate out news from sponsored # --> <div class="Fz(12px) Fw(b) Tt(c) D(ib) Mb(6px) C($c-fuji-news) Mend(9px) Mt(-2px)" data-test-locator="catlabel" data-reactid="11">News</div> websites = ["https://sg.news.yahoo.com/e-scooter-rider-jailed-bedok-accident-consuming-drugs-bail-091840268.html", "https://sg.news.yahoo.com/four-drivers-fined-providing-illegal-chauffeured-services-unlicensed-vehicles-071542451.html", "https://sg.news.yahoo.com/hiv-data-leak-affected-patients-can-sue-moh-loss-data-says-gan-kim-yong-090625320.html", "https://sg.news.yahoo.com/american-behind-hiv-data-leak-pathological-liar-made-baseless-allegations-singapore-authorities-130957980.html", "https://sg.news.yahoo.com/prosecution-seeks-preventive-detention-recalcitrant-paedophile-57-sexually-assaulted-boy-122307488.html", "https://sg.news.yahoo.com/73-year-old-driver-jailed-assaulting-trying-bribe-traffic-cop-132421761.html", 
"https://sg.news.yahoo.com/cannabis-stay-illegal-proven-cannabinoid-medical-products-may-allowed-mha-moh-075514549.html", "https://sg.news.yahoo.com/jailed-accountant-embezzled-2-million-bought-two-homes-074249110.html", "https://sg.news.yahoo.com/ava-review-rules-governing-pet-boarding-businesses-sun-xueling-072603250.html", "https://sg.news.yahoo.com/record-high-visitor-arrivals-modest-increase-tourism-spending-singapore-2018-065117991.html", "https://sg.news.yahoo.com/islandwide-siren-sound-15-feb-commemorate-total-defence-day-044136885.html", "https://sg.news.yahoo.com/muhammad-jaris-goh-leads-accolades-singapore-mens-bowling-team-annual-awards-021447862.html", "https://sg.news.yahoo.com/yahoo-poll-enforcement-indiscriminate-burning-hdb-flats-230132255.html", "https://sg.news.yahoo.com/man-found-unconscious-drugs-near-crotch-acquitted-appeal-121247684.html", "https://sg.news.yahoo.com/comment-e-cigarettes-harm-reduction-not-elimination-aim-public-health-policy-111956884.html", "https://sg.news.yahoo.com/cyberattacks-cost-large-asia-pacific-healthcare-groups-average-32-million-study-100129697.html", "https://sg.news.yahoo.com/man-jailed-cheating-girlfriend-mother-nearly-180000-gamble-093652802.html", "https://sg.news.yahoo.com/hiv-data-leak-affected-patients-reportedly-feeling-suicidal-gan-kim-yong-082316256.html", "https://sg.news.yahoo.com/hiv-data-leak-agc-not-charge-brochez-osa-2016-faced-light-sentence-081539862.html", "https://sg.news.yahoo.com/hdb-launches-first-2019-sales-exercise-3739-flats-070237119.html", "https://sg.news.yahoo.com/jailed-man-posted-ex-lovers-contact-information-fake-online-sex-ad-035031208.html", "https://sg.news.yahoo.com/erp-charges-three-gantries-removed-peak-morning-hours-125315512.html", "https://sg.news.yahoo.com/singaporean-man-jailed-7-weeks-evading-ns-3-years-121420970.html", "https://sg.news.yahoo.com/proposal-decriminalise-attempted-suicide-tabled-parliament-part-penal-code-changes-115816274.html", 
"https://sg.news.yahoo.com/singaporeans-enjoy-automated-immigration-clearance-new-zealand-110947897.html", "https://sg.news.yahoo.com/singpost-received-91-complaints-misdelivered-lost-mail-2018-sim-ann-100946217.html", "https://sg.news.yahoo.com/couple-abused-maid-made-drink-tainted-water-jailed-090017797.html", "https://sg.news.yahoo.com/moe-increase-examinations-using-electronic-devices-ong-ye-kung-073158540.html", "https://sg.news.yahoo.com/parliament-new-saf-safety-measures-include-rear-view-cameras-bionix-vehicles-070546098.html", "https://sg.news.yahoo.com/nsf-liu-kais-death-bionix-driver-not-hear-commands-stop-060336184.html", "https://sg.news.yahoo.com/death-aloysius-pang-two-saf-personnel-vehicle-time-incident-similarly-qualified-045405681.html", "https://sg.news.yahoo.com/malaysian-government-vessel-greek-carrier-collide-within-singapore-waters-045603597.html", "https://sg.news.yahoo.com/burning-smell-eastern-part-singapore-not-caused-local-sources-no-haze-detected-nea-130031817.html", "https://sg.news.yahoo.com/conman-jailed-14-months-multiple-scams-involving-55000-105558452.html", "https://sg.news.yahoo.com/questions-saf-training-deaths-dominate-parliament-sitting-monday-082434673.html", "https://sg.news.yahoo.com/tea-good-heres-experts-say-064739635.html", "https://sg.news.yahoo.com/singpost-improve-service-standards-fined-100k-lapses-011717663.html", "https://sg.news.yahoo.com/yahoo-poll-feel-trump-kim-summit-held-vietnam-230034819.html"] ################################################### comments_list =[] wait_time = WebDriverWait(driver, 20) for i in websites: driver.get(i) # click View Reaction button #viewreact_button = driver.find_element_by_xpath("//button[@class='comments-title D(ib) Td(n) Bd(0) P(0) Fw(b) Fz(16px) Cur(p) C(#000)']").click() viewreact_button = wait_time.until(EC.visibility_of_element_located((By.XPATH,"//button[@class='comments-title D(ib) Td(n) Bd(0) P(0) Fw(b) Fz(16px) Cur(p) C(#000)']"))) viewreact_button.click() # keep 
clicking the Show More button to the end # tries to ignore StaleElementReferenceException and TimeOutException while True: try: more_button = wait_time.until(EC.visibility_of_element_located((By.XPATH,"//button[@class='Ff(ss) Fz(14px) Fw(b) Bdw(2px) Ta(c) Cur(p) Va(m) Bdrs(4px) O(n)! Lh(n) Bgc(#fff) C($c-fuji-blue-1-a) Bdc($c-fuji-blue-1-a) Bd C(#fff):h Bgc($c-fuji-blue-1-a):h Mt(20px) Mb(20px) Px(30px) Py(10px) showNext D(b) Mx(a) Pos(r)']"))) more_button.click() except Exception: break # find the comments comments = wait_time.until(EC.presence_of_all_elements_located((By.XPATH, "//div[@class='C($c-fuji-grey-l) Mb(2px) Fz(14px) Lh(20px) Pend(8px)']"))) for j in comments: comments_list.append(j.text) print(i + " completed") ################################################### #print(comments_list) df2 = pd.DataFrame(comments_list, columns=['Comments']) #print(df2) df2.to_csv("Scraped_Yahoo_Comments.csv", index=False) print("Scraped completed") driver.close() # -
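The "Show More" loop above clicks until the button stops appearing and a timeout breaks out. Stripped of Selenium, the pattern is just "accumulate pages until the fetcher fails", which can be sketched and exercised without a browser (`collect_all` is a made-up helper, not part of the scraper):

```python
def collect_all(fetch_more, max_pages=100):
    """Accumulate items from fetch_more() until it raises, mirroring the
    scraper's while-True/try/except loop around the Show More button."""
    items = []
    for _ in range(max_pages):   # hard cap so a misbehaving page cannot loop forever
        try:
            items.extend(fetch_more())
        except Exception:        # the scraper breaks on timeout/stale-element errors
            break
    return items

pages = iter([[1, 2], [3]])
all_items = collect_all(lambda: next(pages))   # StopIteration ends the loop
```

The explicit page cap is the one behavioural difference from the scraper's unbounded `while True`, and is generally safer.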
1. Web Scraping/Yahoo Comments Scraper/Yahoo Comments Scraper.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # export from nbdev.imports import * from nbdev.sync import * from nbdev.export import * from nbconvert.preprocessors import ExecutePreprocessor # + # default_exp test # - # # Extract tests # # > The functions that grab the cells containing tests (filtering with potential flags) and execute them # Everything that is not an exported cell is considered a test, so you should make sure your notebooks can all run smoothly (and fast) if you want to use this functionality as the CLI. You can mark some cells with special flags (like slow) to make sure they are only executed when you authorize it. Those flags should be configured in your `settings.ini` (separated by a `|` if you have several of them). You can also apply a flag to one entire notebook by putting `# all_flag` in one of its cells. # ## Detect flags # The following functions detect the cells that should be excluded from the tests (unless their special flag is passed). 
# export
_re_all_flag = re.compile("""
# Matches any line with #all_something and catches that something in a group:
^            # beginning of line (since re.MULTILINE is passed)
\s*          # any number of whitespace
\#\s*        # # then any number of whitespace
all_(\S+)    # all_ followed by a group with any non-whitespace chars
\s*          # any number of whitespace
$            # end of line (since re.MULTILINE is passed)
""", re.IGNORECASE | re.MULTILINE | re.VERBOSE)


# export
def check_all_flag(cells):
    "Check for an `# all_flag` cell and then return said flag"
    for cell in cells:
        if check_re(cell, _re_all_flag): return check_re(cell, _re_all_flag).groups()[0]


nb = read_nb("04_test.ipynb")
assert check_all_flag(nb['cells']) is None

# +
#export
class _ReTstFlags():
    def __init__(self): self._re = None

    @property
    def re(self):
        if self._re is None:
            self._re = re.compile(f"""
# Matches any line with a test flag and catches it in a group:
^            # beginning of line (since re.MULTILINE is passed)
\s*          # any number of whitespace
\#\s*        # # then any number of whitespace
({Config().get('tst_flags', '')})
\s*          # any number of whitespace
$            # end of line (since re.MULTILINE is passed)
""", re.IGNORECASE | re.MULTILINE | re.VERBOSE)
        return self._re

_re_flags = _ReTstFlags()
# -

# export
def get_cell_flags(cell):
    "Check for any special test flag in `cell`"
    if cell['cell_type'] != 'code' or len(Config().get('tst_flags',''))==0: return []
    return _re_flags.re.findall(cell['source'])


test_eq(get_cell_flags({'cell_type': 'code', 'source': "#hide\n# fastai2\n"}), ['fastai2'])
test_eq(get_cell_flags({'cell_type': 'code', 'source': "#hide\n"}), [])

# ## Testing a notebook

# export
class NoExportPreprocessor(ExecutePreprocessor):
    "An `ExecutePreprocessor` that executes cells that are not exported and don't have a flag in `flags`"
    def __init__(self, flags, **kwargs):
        self.flags = flags
        super().__init__(**kwargs)

    def preprocess_cell(self, cell, resources, index):
        if 'source' not in cell or cell['cell_type'] != "code": return cell, resources
        for f in get_cell_flags(cell):
            if f not in self.flags: return cell, resources
        res = super().preprocess_cell(cell, resources, index)
        return res


# export
def test_nb(fn, flags=None):
    "Execute tests in notebook in `fn` with `flags`"
    os.environ["IN_TEST"] = '1'
    if flags is None: flags = []
    try:
        nb = read_nb(fn)
        all_flag = check_all_flag(nb['cells'])
        if all_flag is not None and all_flag not in flags: return
        mod = find_default_export(nb['cells'])
        ep = NoExportPreprocessor(flags, timeout=600, kernel_name='python3')
        pnb = nbformat.from_dict(nb)
        ep.preprocess(pnb)
    finally:
        os.environ.pop("IN_TEST")

# ## Export -

#hide
notebook2script()
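For reference, the `_re_all_flag` pattern above condenses to the one-liner below; this standalone copy (same semantics assumed, minus the VERBOSE commentary) shows what a matching cell source looks like:

```python
import re

# Condensed copy of the `_re_all_flag` pattern defined above: a line that is
# nothing but `# all_<flag>`, with the flag name captured in group 1.
all_flag_re = re.compile(r"^\s*#\s*all_(\S+)\s*$", re.IGNORECASE | re.MULTILINE)

cell_source = "import os\n# all_slow\nprint('done')"
m = all_flag_re.search(cell_source)
flag = m.group(1) if m else None   # "slow": the whole notebook is flagged slow
```

Because the anchors are line-level (`re.MULTILINE`), `all_` appearing inside ordinary code does not count as a flag.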
nbs/04_test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/amipanes/dec/blob/master/Steel_Faults.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="XXOJtsafY6bx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="00471054-dc06-4201-a91b-a958479c89f0" # linear algebra import numpy as np # data processing import pandas as pd # data visualization import seaborn as sns # %matplotlib inline from matplotlib import pyplot as plt from matplotlib import style # Algorithms from sklearn import linear_model from sklearn import preprocessing from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC, LinearSVC from sklearn.naive_bayes import GaussianNB import numpy as np from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.ensemble import RandomForestClassifier import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import MinMaxScaler path_to_steel = '/content/steel_faults.csv' df = pd.read_csv(path_to_steel) df.head(5) # + id="DX4b9B2p7LFS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2acdb3c5-09ad-40c6-8818-b5ca5c58b5ea" path_to_steel = '/content/steel_faults.csv' df = pd.read_csv(path_to_steel) df.isnull().values.any() # + id="MurFJXp8ztNe" colab_type="code" colab={} from sklearn import datasets, linear_model from sklearn.model_selection import 
cross_validate
from sklearn.metrics import make_scorer  # sklearn.metrics.scorer was removed in modern scikit-learn
from sklearn.metrics import confusion_matrix
from sklearn.svm import LinearSVC
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
import seaborn as sns
import plotly as py
import matplotlib.pyplot as plt
import plotly.graph_objs as go
import warnings
warnings.filterwarnings("ignore")
from sklearn.linear_model import LogisticRegression as LR
from sklearn.tree import DecisionTreeClassifier as DTC
from sklearn.neighbors import KNeighborsClassifier as KNC
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

# + id="DJdPKEhIzuhj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="ca27b306-fb2c-415d-9ad8-aef17d478452"
display(df.describe(include="all"))

# + id="FIdlGigs0IQn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 741} outputId="c7e973a9-a962-47a1-9656-7985a2ddc457"
sns.set(rc={'figure.figsize':(12,10)})
corr = df.corr()
sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values)

# + id="a01qWyU70Upp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a49fb3fc-66bd-47b0-baa1-648509cafaaf"
df.hist(figsize=(20,20))
plt.show()

# + id="t_hM0cR-0hoE" colab_type="code" colab={}
X1 = df.values
y_dataframe = df[["Pastry","Z_Scratch","K_Scatch","Stains","Dirtiness","Bumps","Other_Faults"]]
features = X1[:,0:27]
x = pd.DataFrame(features)

# + id="N3OFNQSzyDOm" colab_type="code" colab={}
x = pd.DataFrame(features)

# + id="JYG3i-YQ0qLQ" colab_type="code" colab={"base_uri": "https://localhost:8080/",
"height": 255} outputId="bec2eea7-45c4-41c2-861b-8692605f9923" y_dataframe.info() # + id="hh84F-vI03C6" colab_type="code" colab={} y = [] for i in range(y_dataframe.shape[0]): if y_dataframe["Pastry"].values[i] == 1: y.append("Pastry") elif y_dataframe["Z_Scratch"].values[i] == 1: y.append("Z_Scratch") elif y_dataframe["K_Scatch"].values[i] == 1: y.append("K_Scatch") elif y_dataframe["Stains"].values[i] == 1: y.append("Stains") elif y_dataframe["Dirtiness"].values[i] == 1: y.append("Dirtiness") elif y_dataframe["Bumps"].values[i] == 1: y.append("Bumps") else: y.append("Other_Faults") # + id="2grx84CoCmtl" colab_type="code" colab={} y=np.array(y) # + id="IX4QIVUU08nd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="abfe5cbd-c0d2-4867-b37e-7528f9147176" y.shape # + id="dMXnTZ3ODoA3" colab_type="code" colab={} # output of faults from y np array import pandas as pd import numpy as np q = pd.DataFrame({"faults":y}) print (q) q.to_csv('steel_faults6.csv') # + id="RocWVDR9Nn0c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="66afe15e-a495-4ae9-b971-af943fb5c49f" print(q) # + id="wf0OH4pugY8Y" colab_type="code" colab={} # add new column to original dataset called fault_label df = pd.read_csv('steel_faults.csv') df['Fault_label'] = '' df.to_csv('steel_faults2.csv') # + id="ej9ApF-aeeRd" colab_type="code" colab={} # adds the faults-label in single column df = pd.read_csv('steel_faults5.csv') df['Fault_label'] = faultstype["faults"].values df.to_csv('steel_faults6.csv') # + id="TSjanrSdN7_i" colab_type="code" colab={} # categorise the faults import pandas as pd import numpy as np from sklearn.preprocessing import LabelEncoder # creating initial dataframe faults = ('Pastry','Z_Scratch','K_Scatch','Stains','Dirtiness','Bumps','Other_Faults') faults_df = pd.DataFrame(faults, columns=['faults']) # creating instance of labelencoder labelencoder = LabelEncoder() # converting type of columns to 
'category' faults_df['faults'] = faults_df['faults'].astype('category') # Assigning numerical values and storing in another column faults_df['Fault_types_Cat']= labelencoder.fit_transform(faults_df['faults']) faults_df faults_df.to_csv('steel_faults7.csv') # + id="gOgRrgCvT2jq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ecbe9c8b-b16c-40eb-9b57-8ab7b69381e0" # Create a dictionary using the values above # will remap the values import pandas as pd df = pd.read_csv('/content/steel_faults4.csv') dict = {'Pastry' : 4 , 'Z_Scratch' : 6, 'K_Scatch': 2, 'Stains' : 5, 'Dirtiness' : 1,'Bumps' : 0, 'Other_Faults' : 3} # Print the dictionary print(dict) # Remap the values of the dataframe df['Fault_label']= df['Fault_label'].map(dict) df.to_csv('steel_faults8.csv') # + id="lYJpIJUla-dY" colab_type="code" colab={} # dropping original hot encoding labels df = pd.read_csv('/content/steel_faults8.csv') cols = [29,30,31,32,33,34,35] df.drop(df.columns[cols],axis=1,inplace=True) df.to_csv('steel_faults9.csv') # + id="K0HbecERVHPA" colab_type="code" colab={} # moves fault_labels to front and removes the header as in steel faults 5 df = pd.read_csv('/content/steel_faults9.csv') fault = df['Fault_label'] df.drop(labels=['Fault_label'], axis=1,inplace = True) df.insert(0, 'Fault_label', fault) df.to_csv('steel_faults10.csv',header = False, index = False) # + id="KPEY3QIxdR6C" colab_type="code" colab={} # delete the unnamed columns df = pd.read_csv('/content/steel_faults10.csv') cols = [1,2,3] df.drop(df.columns[cols],axis=1,inplace=True) df.to_csv('steel_faultsa.csv') # + id="3HvZphT6vzB2" colab_type="code" colab={} #importing libraries to do some feature selection from sklearn.datasets import load_boston import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm # %matplotlib inline from sklearn.model_selection import train_test_split from sklearn.linear_model 
import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso

#Loading the dataset
#x = load_boston()
#df = pd.DataFrame(x.data, columns = x.feature_names)
#df["MEDV"] = x.target
y = y_dataframe  #Target Variable
x = pd.DataFrame(features)

#no of features: the dataset has only 27 predictors, so search 1..27
nof_list = np.arange(1, 28)
high_score = 0
#Variable to store the optimum number of features
nof = 0
score_list = []
for n in range(len(nof_list)):
    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 0)
    model = LinearRegression()
    rfe = RFE(model, n_features_to_select=nof_list[n])
    X_train_rfe = rfe.fit_transform(X_train, y_train)
    X_test_rfe = rfe.transform(X_test)
    model.fit(X_train_rfe, y_train)
    score = model.score(X_test_rfe, y_test)
    score_list.append(score)
    if score > high_score:
        high_score = score
        nof = nof_list[n]
print("Optimum number of features: %d" % nof)
print("Score with %d features: %f" % (nof, high_score))

# + id="IptH63241GEH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 622} outputId="734c751c-b4cc-4a75-b00a-4b7ce9ad5b7f"
sns.set(rc={'figure.figsize':(12,10)})
corr = y_dataframe.corr()
sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values)

# + id="dyeEw4F41SPX" colab_type="code" colab={}
fig, ax = plt.subplots(1,2,figsize=(20,8))
q['faults'].value_counts().plot.pie(ax=ax[0])  # q holds the fault labels built earlier
sns.countplot(x='faults', data=q, ax=ax[1])

# + id="XnBsTfzp1HTa" colab_type="code" colab={}
sc=StandardScaler()
X=sc.fit_transform(x)

# + id="deJm85bD1eTB" colab_type="code" colab={}
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.50, random_state = 42)

# + id="MhZJYZkw-_Wa" colab_type="code" colab={}
from numpy.random import RandomState
import pandas as pd
df = pd.read_csv('/content/steel_faultsa.csv')
train = df.sample(frac=0.5, random_state=42)
test = df.loc[~df.index.isin(train.index)]
train.to_csv('steel_faults_train.csv')
test.to_csv('steel_faults_test.csv')

# +
id="Y_8Pdalv1gyr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="aa5168c9-01f0-4bed-877e-159e7a8fb3cf"
pca = PCA(10)
pca.fit(x_test)
pca.explained_variance_ratio_

# + id="-ltnthm81lX-" colab_type="code" colab={}
pca_train = pca.transform(x_train)
pca_test = pca.transform(x_test)
# The two lines below overwrite the PCA features with the raw scaled features,
# effectively disabling PCA; remove them to train on the PCA features instead
pca_train = x_train
pca_test = x_test

# + id="ZuzL1Lr411ws" colab_type="code" colab={}
pca_score = np.zeros(6)
pca_accuracy = np.zeros(6)

# + id="_Ybxa1SC1xzr" colab_type="code" colab={}
Logistic_Regression = LR().fit(pca_train, y_train)
pca_score[0] = Logistic_Regression.score(pca_train, y_train)
predictions_LR = Logistic_Regression.predict(pca_test)
pca_accuracy[0] = accuracy_score(y_test, predictions_LR)
pca_accuracy[0]

# + id="_hXhFsCbheo_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="227f4393-d3e7-4f24-cd43-c6a7e3dee000"
from sklearn.metrics import confusion_matrix
# Bind the result to a new name so the confusion_matrix function is not shadowed
cm_LR = confusion_matrix(y_test, predictions_LR)
print(cm_LR)

# + id="Lizra-RYiVkS" colab_type="code" colab={}
print(predictions_LR)

# + id="1KqbG7pr19R4" colab_type="code" colab={}
Decision_Tree_Classifier = DTC().fit(pca_train, y_train)
pca_score[1] = Decision_Tree_Classifier.score(pca_train, y_train)
predictions_DTC = Decision_Tree_Classifier.predict(pca_test)
pca_accuracy[1] = accuracy_score(y_test, predictions_DTC)
pca_score[1]

# + id="VUfIKqTV2GwK" colab_type="code" colab={}
K_Neighbors_Classifier = KNC(8).fit(pca_train, y_train)
pca_score[3] = K_Neighbors_Classifier.score(pca_train, y_train)
predictions_KNC = K_Neighbors_Classifier.predict(pca_test)
pca_accuracy[3] = accuracy_score(y_test, predictions_KNC)

# + id="U0oCfBMkQ_Gn" colab_type="code" colab={}
# Attempting to convert the csv into libsvm format
# !python csvlibsvm.py steel_faults_train.csv libsvm.data 28 false

# + id="5R4pmrHFLk-r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="735f079f-d141-469d-f700-c5aec7e64ccf"
help(np.loadtxt)

# +
id="LqdkF-0aLqcs" colab_type="code" colab={}
df = pd.read_csv('/content/steel_faults_train.csv')
# np.loadtxt expects a file path or file object, not a DataFrame;
# read the csv directly, skipping the header row
data = np.loadtxt('/content/steel_faults_train.csv', delimiter=',', skiprows=1)

# + id="-mUvRu3cE9x9" colab_type="code" colab={}
# Confusion matrix: compare the true test labels with the model's test predictions
y_true = y_test
y_pred = predictions_KNC
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)
cm

# + id="zXsyi3vkdfZk" colab_type="code" colab={}
# Confusion Matrix
y_true = y_test
y_pred = predictions_KNC
from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)

# Accuracy
from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)

# Recall
from sklearn.metrics import recall_score
recall_score(y_true, y_pred, average=None)

# Precision
from sklearn.metrics import precision_score
precision_score(y_true, y_pred, average=None)
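To make these metrics concrete, here is a minimal pure-Python sketch of what a confusion matrix counts, using made-up labels rather than the steel-faults data (this is an illustration, not scikit-learn's implementation):

```python
from collections import Counter

def confusion_counts(y_true, y_pred, labels):
    # cell [i][j] counts samples whose true label is labels[i]
    # and whose predicted label is labels[j]
    pair_counts = Counter(zip(y_true, y_pred))
    return [[pair_counts[(t, p)] for p in labels] for t in labels]

y_true = [0, 0, 1, 1, 2, 2]  # made-up labels for illustration
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_counts(y_true, y_pred, labels=[0, 1, 2])
# The diagonal holds the correctly classified samples
n_correct = sum(cm[i][i] for i in range(3))
```

Accuracy is then `n_correct` divided by the total sample count, while per-class recall and precision are the diagonal entry divided by its row sum and column sum, respectively.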
Steel_Faults.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .jl
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Julia 1.6.0
#     language: julia
#     name: julia-1.6
# ---

# ## Problem statement
#
# Consider the following optimization problem adapted from [Christiansen et al. (2020)](http://doi.org/10.1364/OE.28.004444):
# We want to design a metallic (silver) nanoparticle to focus an incident $H_z$-polarized planewave on
# a single spot, maximizing the electric-field intensity at this focal spot. The metallic
# structure can be *any shape* of *any topology* (any connectivity, number of holes, etcetera)
# surrounding the focal spot, as long as the metal lies within an annular "design region" $\Omega_d$:
# between a minimum radius $r_s = 10$ nm (the minimum distance from the focal spot) and an outer
# radius $r_d = 100$ nm. The computational cell is of height $H$ and length $L$, and we employ a perfectly matched layer (PML) of thickness $d_{pml}$ to implement outgoing (radiation) boundary conditions for this finite domain.
#
# ![](Illustration.png)
#
# The goal is to find the arrangement of the silver material in the gray region that maximizes the |electric field|² at the center (the focal point). Every "pixel" in the gray region is effectively treated as a degree of freedom that can vary continuously between silver (shown in black below) and air (shown in white below). This is called density-based [topology optimization (TO)](https://en.wikipedia.org/wiki/Topology_optimization), and it leads to a tractable optimization problem despite the huge number of parameters. A standard "projection" technique, described below, is used to "binarize" the structure by eventually forcing the material to be either silver or air almost everywhere.
#
# ## Topology optimization
#
# We use density-based topology optimization (TO) to maximize the electric field intensity at the center. In TO, every point in the design domain is a design degree of freedom that can vary continuously between air ($p=0$) and silver ($p=1$), which we discretize into a piece-wise constant parameter space $P$ for the design parameter $p\in [0,1]$. The material's electric permittivity ε is then given by:
#
# $$
# \varepsilon(p) = \left[n_{air}+p(n_{metal}-n_{air})\right]^2,
# $$
#
# where $n_{air}=1$ and $n_{metal}$ are the refractive indices ($\sqrt{\varepsilon}$) of the air and metal, respectively. (It is tempting to simply linearly interpolate the permittivities ε, rather than the refractive indices, but this turns out to lead to artificial singularities in the case of metals, where ε can pass through zero [4].)
#
# In practice, to avoid obtaining arbitrarily fine features as the spatial resolution is increased, one needs to regularize the problem with a minimum lengthscale $r_f$ by generating a smoothed/filtered parameter function $p_f$. (Although this regularizes the problem, strictly speaking it does not impose a minimum feature size, because of the nonlinear projection step below. In practical applications, one imposes additional [manufacturing constraints](http://doi.org/10.1364/OE.431188) explicitly.) We perform the smoothing $p \to p_f$ by solving a simple "damped diffusion" PDE, also called a Helmholtz filter [5], for $p_f$ given the design variables $p$:
#
# $$
# \begin{aligned}
# -r_f^2\nabla^2p_f+p_f&=p\, ,\\
# \left. \frac{\partial p_f}{\partial \vec{n}} \right\vert_{\partial\Omega_D} & =0 .
# \end{aligned}
# $$
#
# We choose a filter radius $r_f=R_f/(2\sqrt{3})$, where $R_f=10$ nm, in order to match a published result (which uses a slightly different filtering scheme) for comparison [6].
#
# Next, we apply a smoothed threshold projection to the intermediate variable $p_f$ to obtain a "binarized" density parameter $p_t$ that tends towards values of $0$ or $1$ almost everywhere [6] as the steepness $\beta$ of the thresholding is increased:
#
# $$
# p_t = \frac{\tanh(\beta\eta)+\tanh\left[\beta(p_f-\eta)\right]}{\tanh(\beta\eta)+\tanh\left[\beta(1-\eta)\right]}.
# $$
#
# Note that as $\beta\to\infty$, this threshold procedure approaches a step function, which would make the optimization problem non-differentiable. Consequently, the standard approach is to increase $\beta$ gradually, slowly binarizing the design as the optimization progresses [6]. We will show how this is done below.
#
# Finally, we replace $p$ with the filtered and thresholded $p_t$ in the ε interpolation formula from above:
#
# $$
# \varepsilon(p_t) = \left[n_{air}+p_t(n_{metal}-n_{air})\right]^2.
# $$
#
# This is the quantity that will be used for the $1/\varepsilon(x)$ coefficient in our Helmholtz PDE.
#
# ## Coding
# The following code obtains the optimized geometry and the corresponding field; a more detailed explanation of the code can be found in this [tutorial](https://gridap.github.io/Tutorials/dev/pages/t018_TopOptEMFocus/).
# + using Gridap, Gridap.Geometry, Gridap.Fields, GridapGmsh using LinearAlgebra using NLopt using CairoMakie, GridapMakie using ChainRulesCore, Zygote import ChainRulesCore: rrule NO_FIELDS = ZeroTangent() λ = 532 # Wavelength (nm) L = 600 # Width of the numerical cell (excluding PML) (nm) h1 = 600 # Height of the air region below the source (nm) h2 = 200 # Height of the air region above the source (nm) dpml = 300 # Thickness of the PML (nm) n_metal = 0.054 + 3.429im # Silver refractive index at λ = 532 nm n_air = 1 # Air refractive index μ = 1 # Magnetic permeability k = 2*π/λ # Wavenumber (nm^-1) model = GmshDiscreteModel("RecCirGeometry.msh") order = 1 reffe = ReferenceFE(lagrangian, Float64, order) V = TestFESpace(model, reffe, dirichlet_tags = ["DirichletEdges", "DirichletNodes"], vector_type = Vector{ComplexF64}) U = V # mathematically equivalent to TrialFESpace(V,0) degree = 2 Ω = Triangulation(model) dΩ = Measure(Ω, degree) Γ_s = BoundaryTriangulation(model; tags = ["Source"]) # Source line dΓ_s = Measure(Γ_s, degree) Ω_d = Triangulation(model, tags="Design") dΩ_d = Measure(Ω_d, degree) Ω_c = Triangulation(model, tags="Center") dΩ_c = Measure(Ω_c, degree) p_reffe = ReferenceFE(lagrangian, Float64, 0) Q = TestFESpace(Ω_d, p_reffe, vector_type = Vector{Float64}) P = Q np = num_free_dofs(P) # Number of cells in design region (number of design parameters) pf_reffe = ReferenceFE(lagrangian, Float64, 1) Qf = TestFESpace(Ω_d, pf_reffe, vector_type = Vector{Float64}) Pf = Qf fem_params = (; V, U, Q, P, Qf, Pf, np, Ω, dΩ, dΩ_d, dΩ_c, dΓ_s) R = 1e-10 LHp=(L/2, h1+h2) # Start of PML for x,y > 0 LHn=(L/2, 0) # Start of PML for x,y < 0 phys_params = (; k, n_metal, n_air, μ, R, dpml, LHp, LHn) # PML coordinate streching functions function s_PML(x; phys_params) σ = -3 / 4 * log(phys_params.R) / phys_params.dpml / phys_params.n_air xf = Tuple(x) u = @. ifelse(xf > 0 , xf - phys_params.LHp, - xf - phys_params.LHn) return @. 
ifelse(u > 0, 1 + (1im * σ / phys_params.k) * (u / phys_params.dpml)^2, $(1.0+0im)) end function ds_PML(x; phys_params) σ = -3 / 4 * log(phys_params.R) / phys_params.dpml / phys_params.n_air xf = Tuple(x) u = @. ifelse(xf > 0 , xf - phys_params.LHp, - xf - phys_params.LHn) ds = @. ifelse(u > 0, (2im * σ / phys_params.k) * (1 / phys_params.dpml)^2 * u, $(0.0+0im)) return ds.*sign.(xf) end struct Λ{PT} <: Function phys_params::PT end function (Λf::Λ)(x) s_x,s_y = s_PML(x; Λf.phys_params) return VectorValue(1/s_x, 1/s_y) end # Define the derivative for the Λ factor Fields.∇(Λf::Λ) = x -> TensorValue{2, 2, ComplexF64}(-(Λf(x)[1])^2 * ds_PML(x; Λf.phys_params)[1], 0, 0, -(Λf(x)[2])^2 * ds_PML(x; Λf.phys_params)[2]) r = 5/sqrt(3) # Filter radius β = 32.0 # β∈[1,∞], threshold sharpness η = 0.5 # η∈[0,1], threshold center a_f(r, u, v) = r^2 * (∇(v) ⋅ ∇(u)) function Filter(p0; r, fem_params) ph = FEFunction(fem_params.P, p0) op = AffineFEOperator(fem_params.Pf, fem_params.Qf) do u, v ∫(a_f(r, u, v))fem_params.dΩ_d + ∫(v * u)fem_params.dΩ_d, ∫(v * ph)fem_params.dΩ_d end pfh = solve(op) return get_free_dof_values(pfh) end function Threshold(pfh; β, η) return ((tanh(β * η) + tanh(β * (pfh - η))) / (tanh(β * η) + tanh(β * (1.0 - η)))) end ξd(p, n_air, n_metal)= 1 / (n_air + (n_metal - n_air) * p)^2 - 1 / n_air^2 # in the design region a_base(u, v; phys_params) = (1 / phys_params.n_air^2) * ((∇ .* (Λ(phys_params) * v)) ⊙ (Λ(phys_params) .* ∇(u))) - (phys_params.k^2 * phys_params.μ * (v * u)) a_design(u, v, pth; phys_params) = ((p -> ξd(p, phys_params.n_air, phys_params.n_metal)) ∘ pth) * (∇(v) ⊙ ∇(u)) function MatrixA(pth; phys_params, fem_params) A_mat = assemble_matrix(fem_params.U, fem_params.V) do u, v ∫(a_base(u, v; phys_params))fem_params.dΩ + ∫(a_design(u, v, pth; phys_params))fem_params.dΩ_d end return lu(A_mat) end function MatrixOf(fem_params) x0 = VectorValue(0,300) # Position of the field to be optimized δ = 1 return assemble_matrix(fem_params.U, fem_params.V) do u, 
v ∫((x->(1/(2*π)*exp(-norm(x - x0)^2 / 2 / δ^2))) * (∇(u) ⋅ ∇(v)) )fem_params.dΩ_c end end Dptdpf(pf, β, η) = β * (1.0 - tanh(β * (pf - η))^2) / (tanh(β * η) + tanh(β * (1.0 - η))) Dξdpf(pf, n_air, n_metal, β, η)= 2 * (n_air - n_metal) / (n_air + (n_metal - n_air) * Threshold(pf; β, η))^3 * Dptdpf(pf, β, η) DAdpf(u, v, pfh; phys_params, β, η) = ((p -> Dξdpf(p, phys_params.n_air, phys_params.n_metal, β, η)) ∘ pfh) * (∇(v) ⊙ ∇(u)) function gf_pf(pf_vec; β, η, phys_params, fem_params) pfh = FEFunction(fem_params.Pf, pf_vec) pth = (pf -> Threshold(pf; β, η)) ∘ pfh A_mat = MatrixA(pth; phys_params, fem_params) b_vec = assemble_vector(v->(∫(v)fem_params.dΓ_s), fem_params.V) u_vec = A_mat \ b_vec O_mat = MatrixOf(fem_params) real(u_vec' * O_mat * u_vec) end function rrule(::typeof(gf_pf), pf_vec; β, η, phys_params, fem_params) function U_pullback(dgdg) NO_FIELDS, dgdg * Dgfdpf(pf_vec; β, η, phys_params, fem_params) end gf_pf(pf_vec; β, η, phys_params, fem_params), U_pullback end function Dgfdpf(pf_vec; β, η, phys_params, fem_params) pfh = FEFunction(fem_params.Pf, pf_vec) pth = (pf -> Threshold(pf; β, η)) ∘ pfh A_mat = MatrixA(pth; phys_params, fem_params) b_vec = assemble_vector(v->(∫(v)fem_params.dΓ_s), fem_params.V) u_vec = A_mat \ b_vec O_mat = MatrixOf(fem_params) uh = FEFunction(fem_params.U, u_vec) w_vec = A_mat' \ (O_mat * u_vec) wconjh = FEFunction(fem_params.U, conj(w_vec)) l_temp(dp) = ∫(real(-2 * DAdpf(uh, wconjh, pfh; phys_params, β, η)) * dp)fem_params.dΩ_d dgfdpf = assemble_vector(l_temp, fem_params.Pf) return dgfdpf end function pf_p0(p0; r, fem_params) pf_vec = Filter(p0; r, fem_params) pf_vec[pf_vec .< 0] .= 0 pf_vec[pf_vec .> 1] .= 1.0 pf_vec end function rrule(::typeof(pf_p0), p0; r, fem_params) function pf_pullback(dgdpf) NO_FIELDS, Dgdp(dgdpf; r, fem_params) end pf_p0(p0; r, fem_params), pf_pullback end function Dgdp(dgdpf; r, fem_params) Af = assemble_matrix(fem_params.Pf, fem_params.Qf) do u, v ∫(a_f(r, u, v))fem_params.dΩ_d + ∫(v * 
u)fem_params.dΩ_d end wvec = Af' \ dgdpf wh = FEFunction(fem_params.Pf, wvec) l_temp(dp) = ∫(wh * dp)fem_params.dΩ_d return assemble_vector(l_temp, fem_params.P) end function gf_p(p0::Vector; r, β, η, phys_params, fem_params) pf_vec = pf_p0(p0; r, fem_params) gf_pf(pf_vec; β, η, phys_params, fem_params) end function gf_p(p0::Vector, grad::Vector; r, β, η, phys_params, fem_params) if length(grad) > 0 dgdp, = Zygote.gradient(p -> gf_p(p; r, β, η, phys_params, fem_params), p0) grad[:] = dgdp end gvalue = gf_p(p0::Vector; r, β, η, phys_params, fem_params) open("gvalue.txt", "a") do io write(io, "$gvalue \n") end gvalue end function gf_p_optimize(p_init; r, β, η, TOL = 1e-4, MAX_ITER = 500, phys_params, fem_params) ##################### Optimize ################# opt = Opt(:LD_MMA, fem_params.np) opt.lower_bounds = 0 opt.upper_bounds = 1 opt.ftol_rel = TOL opt.maxeval = MAX_ITER opt.max_objective = (p0, grad) -> gf_p(p0, grad; r, β, η, phys_params, fem_params) (g_opt, p_opt, ret) = optimize(opt, p_init) @show numevals = opt.numevals # the number of function evaluations return g_opt, p_opt end p_opt = fill(0.4, fem_params.np) # Initial guess β_list = [8.0, 16.0, 32.0] g_opt = 0 TOL = 1e-8 MAX_ITER = 100 for bi = 1 : 3 β = β_list[bi] g_opt, p_temp_opt = gf_p_optimize(p_opt; r, β, η, TOL, MAX_ITER, phys_params, fem_params) global p_opt = p_temp_opt end @show g_opt p0 = p_opt pf_vec = pf_p0(p0; r, fem_params) pfh = FEFunction(fem_params.Pf, pf_vec) pth = (pf -> Threshold(pf; β, η)) ∘ pfh A_mat = MatrixA(pth; phys_params, fem_params) b_vec = assemble_vector(v->(∫(v)fem_params.dΓ_s), fem_params.V) u_vec = A_mat \ b_vec uh = FEFunction(fem_params.U, u_vec) fig, ax, plt = plot(fem_params.Ω, pth, colormap = :binary) Colorbar(fig[1,2], plt) ax.aspect = AxisAspect(1) ax.title = "Design Shape" rplot = 110 # Region for plot limits!(ax, -rplot, rplot, (h1)/2-rplot, (h1)/2+rplot) save("shape.png", fig) maxe = 30 # Maximum electric field magnitude compared to the incident plane wave 
e1 = abs2(phys_params.n_air^2)
e2 = abs2(phys_params.n_metal^2)
fig, ax, plt = plot(fem_params.Ω, 2*(sqrt∘(abs((conj(∇(uh)) ⋅ ∇(uh))/(CellField(e1,fem_params.Ω) + (e2 - e1) * pth)))), colormap = :hot, colorrange=(0, maxe))
Colorbar(fig[1,2], plt)
ax.title = "|E|"
ax.aspect = AxisAspect(1)
limits!(ax, -rplot, rplot, (h1)/2-rplot, (h1)/2+rplot)
save("Field.png", fig)
# -

# ## Results
# The optimized geometry and the corresponding field are displayed below.
#
# ![](shape.png)
# ![](Field.png)
#
# This shape is very close to the one reported by [Christiansen et al. (2020)](http://doi.org/10.1364/OE.28.004444). In particular, the optimized field enhancement is $|E/E_0|^2 = 4 \times$ `g_opt` = 700 at $\beta = 32$ (the factor of 4 comes from the amplitude of the incident plane wave). For comparison, the optimized field enhancement in [Christiansen et al. (2020)](http://doi.org/10.1364/OE.28.004444) is about 1000; the difference arises because they binarized the design parameters further. Setting $\beta=80$ and running for 100 more optimization steps, we obtain $4 \times$ `g_opt` = 980 $\approx$ 1000, in agreement with their result.

# ## Mesh generation
# The following code generates the mesh file for the computational cell using Gmsh.
# + using Gmsh import Gmsh: gmsh struct RecCirGeometry L::Float64 # Length of the normal region h1::Float64 # Height of normal region h2::Float64 # Height of the region above source rt::Float64 # Radius of the target location rd::Float64 # Radius of design domain rs::Float64 # Radius of smallest distance circle dpml::Float64 # Thickness of the PML # Characteristic length (controls the resolution, smaller the finer) l1::Float64 # Normal region l2::Float64 # Design domain end function MeshGenerator(geo_param::RecCirGeometry, meshfile_name::String) gmsh.initialize() gmsh.option.setNumber("General.Terminal", 1) gmsh.option.setNumber("Mesh.Algorithm", 6) gmsh.clear() gmsh.model.add("geometry") # Add points gmsh.model.geo.addPoint(-geo_param.L/2-geo_param.dpml, -geo_param.dpml, 0, geo_param.l1, 1) gmsh.model.geo.addPoint( geo_param.L/2+geo_param.dpml, -geo_param.dpml, 0, geo_param.l1, 2) gmsh.model.geo.addPoint( geo_param.L/2+geo_param.dpml, geo_param.h1, 0, geo_param.l1, 3) gmsh.model.geo.addPoint( geo_param.L/2+geo_param.dpml, geo_param.h1+geo_param.h2+geo_param.dpml, 0, geo_param.l1, 4) gmsh.model.geo.addPoint(-geo_param.L/2-geo_param.dpml, geo_param.h1+geo_param.h2+geo_param.dpml, 0, geo_param.l1, 5) gmsh.model.geo.addPoint(-geo_param.L/2-geo_param.dpml, geo_param.h1, 0, geo_param.l1, 6) gmsh.model.geo.addPoint(0, geo_param.h1/2, 0, geo_param.l2, 7) gmsh.model.geo.addPoint(-geo_param.rs, geo_param.h1/2, 0, geo_param.l2, 8) gmsh.model.geo.addPoint( geo_param.rs, geo_param.h1/2, 0, geo_param.l2, 9) gmsh.model.geo.addPoint(-geo_param.rd, geo_param.h1/2, 0, geo_param.l2, 10) gmsh.model.geo.addPoint( geo_param.rd, geo_param.h1/2, 0, geo_param.l2, 11) # Add lines gmsh.model.geo.addLine( 1, 2, 1) gmsh.model.geo.addLine( 2, 3, 2) gmsh.model.geo.addLine( 3, 4, 3) gmsh.model.geo.addLine( 4, 5, 4) gmsh.model.geo.addLine( 6, 5, 5) gmsh.model.geo.addLine( 1, 6, 6) gmsh.model.geo.addLine( 6, 3, 7) gmsh.model.geo.addCircleArc( 8, 7, 9, 8) gmsh.model.geo.addCircleArc( 9, 7, 8, 9) 
gmsh.model.geo.addCircleArc(10, 7,11,10) gmsh.model.geo.addCircleArc(11, 7,10,11) # Construct curve loops and surfaces gmsh.model.geo.addCurveLoop([8, 9], 1) gmsh.model.geo.addPlaneSurface([1], 1) gmsh.model.geo.addCurveLoop([10, 11], 2) gmsh.model.geo.addPlaneSurface([2, 1], 2) gmsh.model.geo.addCurveLoop([1, 2, -7,-6], 4) gmsh.model.geo.addPlaneSurface([4, 2], 4) gmsh.model.geo.addCurveLoop([3, 4,-5, 7], 5) gmsh.model.geo.addPlaneSurface([5], 5) # Physical groups gmsh.model.addPhysicalGroup(0, [1, 2, 4, 5], 1) gmsh.model.setPhysicalName(0, 1, "DirichletNodes") gmsh.model.addPhysicalGroup(1, [1, 4], 2) gmsh.model.setPhysicalName(1, 2, "DirichletEdges") gmsh.model.addPhysicalGroup(0, [8, 9, 10, 11], 3) gmsh.model.setPhysicalName(0, 3, "DesignNodes") gmsh.model.addPhysicalGroup(1, [8, 9, 10, 11], 4) gmsh.model.setPhysicalName(1, 4, "DesignEdges") gmsh.model.addPhysicalGroup(2, [2], 5) gmsh.model.setPhysicalName(2, 5, "Design") gmsh.model.addPhysicalGroup(2, [1], 6) gmsh.model.setPhysicalName(2, 6, "Center") gmsh.model.addPhysicalGroup(2, [3,4,5], 7) gmsh.model.setPhysicalName(2, 7, "Air") gmsh.model.addPhysicalGroup(1, [7], 9) gmsh.model.setPhysicalName(1, 9, "Source") gmsh.model.geo.synchronize() # Set periodic mesh on the left and right side gmsh.model.mesh.setPeriodic(1, [2], [6], [1, 0, 0, geo_param.L+2*geo_param.dpml, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]) gmsh.model.mesh.setPeriodic(1, [3], [5], [1, 0, 0, geo_param.L+2*geo_param.dpml, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]) # We can then generate a 2D mesh... gmsh.model.mesh.generate(2) # ... and save it to disk gmsh.write(meshfile_name) gmsh.finalize() end L = 600 h1 = 600 h2 = 200 rt = 150 rd = 100 rs = 10 dpml = 300 l1 = 20 l2 = 1 meshfile = "RecCirGeometry.msh" geo_param = RecCirGeometry(L, h1, h2, rt, rd, rs, dpml, l1, l2) MeshGenerator(geo_param, meshfile) # -
Focusing2D/Focusing2DGridap.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import pandas as pd import numpy as np df = pd.read_csv('Desktop/titanic/train.csv') # - df.head(10) df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1) df.info() df = df.dropna() df['Sex'].unique() df['Gender'] = df['Sex'].map({'female': 0, 'male':1}).astype(int) df['Embarked'].unique() df['Port'] = df['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int) df = df.drop(['Sex', 'Embarked'], axis=1) cols = df.columns.tolist() print(cols) cols = [cols[1]] + cols[0:1] + cols[2:] df = df[cols] df.head(10) df.info() train_data = df.values #Using Random Forest to predict data from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier(n_estimators = 100) print np.shape(train_data) model = model.fit(train_data[0:,2:],train_data[0:,0]) df_test = pd.read_csv('Desktop/titanic/test.csv') df_test.head(10) # + df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1) df_test = df_test.dropna() df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male':1}) df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3}) df_test = df_test.drop(['Sex', 'Embarked'], axis=1) test_data = df_test.values # - df_test.head(10) output = model.predict(test_data[:,1:]) result = np.c_[test_data[:,0].astype(int), output.astype(int)] df_result = pd.DataFrame(result[:,0:2], columns=['PassengerId', 'Survived']) df_result.head(10) df_result.to_csv('Desktop/titanic/RandomForestresults.csv', index=False) df_result.shape
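The random forest above combines many decision trees and predicts whichever class most trees vote for. A toy pure-Python sketch of that majority-vote idea, with made-up per-tree predictions rather than the fitted model:

```python
from collections import Counter

def majority_vote(tree_predictions):
    # tree_predictions[i][j] = class predicted by tree i for sample j
    n_samples = len(tree_predictions[0])
    return [
        Counter(tree[j] for tree in tree_predictions).most_common(1)[0][0]
        for j in range(n_samples)
    ]

trees = [
    [0, 1, 1],  # predictions of tree 1 for three passengers
    [0, 1, 0],  # tree 2
    [1, 1, 0],  # tree 3
]
forest = majority_vote(trees)  # each sample gets the majority class
```

Because each tree is trained on a random subsample of rows and features, their errors are partly independent, which is why the averaged vote tends to generalize better than any single tree.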
RandomForest.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: kaggle36 # language: python # name: kaggle36 # --- from torchvision.models.vgg import vgg13 vggNet=vgg13(pretrained=True) # + import numpy as np from skimage import io import matplotlib.pyplot as plt # %matplotlib inline # - img=io.imread('demo_3.jpg') fig=plt.figure(figsize=(7,13)) plt.imshow(img) plt.axis('off') # + import torch import torchvision.transforms as T print(img.shape) trsfm=T.ToTensor() img1=io.imread('demo_3.jpg') img2=io.imread('demo_4.jpg') img=np.array((img1,img2)) x=trsfm(img1).unsqueeze(0) print(x.shape) # y=vggNet(x) # -
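`ToTensor` converts the HxWxC image array to a CxHxW tensor, and `unsqueeze(0)` adds a leading batch dimension so the VGG network sees a batch of size 1. A toy pure-Python sketch of that shape change, with nested lists standing in for tensors (illustration only, not PyTorch internals):

```python
def shape(nested):
    # Walk down the first element of each level to read off the dimensions
    dims = []
    while isinstance(nested, list):
        dims.append(len(nested))
        nested = nested[0]
    return tuple(dims)

img = [[0.1, 0.2], [0.3, 0.4]]  # a tiny 2x2 "image"
batched = [img]                 # like img.unsqueeze(0): a batch of one image
```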
.ipynb_checkpoints/filter-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # + import numpy as np import pandas as pd import matplotlib.pyplot as plt from nose.tools import * from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import PolynomialFeatures from sklearn.preprocessing import MinMaxScaler # Write your imports here # - # # Linear and Logistic Regression Lab # ## Getting acquainted with the tools. Performing some common tasks and creating our first models # You will receive labs in this format. Edit the file to make everything work. # # You can add some cells as you wish. Some cells are read-only - you won't be able to edit them. # # **Notes:** # 1. **DO NOT** copy everything in a new file. Edit this one (.ipynb), save it and submit it. **DO NOT** rename the file. # 2. Be careful what is asked of you - all problems have checks that you need to pass in order to get the points. # 3. There are tests that you can see, as well as hidden tests. You'll have to perform well on both the visible and the hidden tests. **In this assignment only**, there are no hidden tests. This is just for your convenience. # 4. If you have used other files, upload them too. You don't need to upload any files supplied with the lab assignment. # 5. Each lab is scored on a scale from 0 to 10. You can get partial credit (e. g. 5 / 10). # ### Problem 1. Read the data (1 point) # The dataset comes from [here](https://archive.ics.uci.edu/ml/machine-learning-databases/00222/). It contains information about the marketing of a Portuguese bank. # # The data you need to read is the `bank.csv` file in the `data` folder (use ";" as the column separator). The `bank-names.txt` file contains information about the dataset. Read it and you'll get some information about what it contains. 
# # Read the dataset using `pandas` (you can use the library with the alias `pd`). Save it in the `bank_data` variable. # + deletable=false nbgrader={"checksum": "6f01f6b16d4cc0c6d70623ffabbb26a3", "grade": false, "grade_id": "cell-1d1926bb7ca098b5", "locked": false, "schema_version": 1, "solution": true} bank_data = pd.read_csv("./data/bank.csv", delimiter=";") bank_data.head() # + deletable=false editable=false nbgrader={"checksum": "04646f4e1d61554f24896f2580a9c6f6", "grade": true, "grade_id": "cell-f5eca6423dc08236", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_is_not_none(bank_data) assert_equal(bank_data.shape, (4521, 17)) # - # ### Problem 2. Separate features and labels (2 points) # Separate the explanatory variables and the output variable (it's called `y` in this case). Create two new variables. # + deletable=false nbgrader={"checksum": "4ca3bea52dd3a9545de67ec525ab76ab", "grade": false, "grade_id": "cell-37165798a822868a", "locked": false, "schema_version": 1, "solution": true} bank_features = bank_data.drop("y",axis=1) bank_output = bank_data["y"] # + deletable=false editable=false nbgrader={"checksum": "55f252f336e71ee415afaf1e5c70dada", "grade": true, "grade_id": "cell-bcdd5d7fa2460962", "locked": true, "points": 2, "schema_version": 1, "solution": false} assert_equal(bank_features.shape, (4521, 16)) assert_equal(bank_output.shape, (4521,)) # - # ### Problem 3. Convert categorical variables (2 points) # Convert all categorical variables in `bank_features` into indicator variables (dummies). Save the result in the same variable. 
(1 point) # + deletable=false nbgrader={"checksum": "eea54c44bc2385c397b31f95b4236228", "grade": false, "grade_id": "cell-e08709f9c53b50e0", "locked": false, "schema_version": 1, "solution": true} bank_features = pd.get_dummies(bank_features) # + deletable=false editable=false nbgrader={"checksum": "78d4866a669be1693501dec677182162", "grade": true, "grade_id": "cell-526e429563d680df", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_equal(bank_features.shape, (4521, 51)) # - # Convert the `bank_output` variable to an indicator variable. This can be done in many ways. Look up how in StackOverflow if you get stuck. # # The goal is to **rewrite the column** (replace the values): it should be numeric, and be equal to 1 if the original value was "yes" and 0 otherwise. (1 point) # + deletable=false nbgrader={"checksum": "d22b12e35316410cff3d988a7ba30358", "grade": false, "grade_id": "cell-78040e5a440b5171", "locked": false, "schema_version": 1, "solution": true} bank_output = bank_output.replace("yes",1) bank_output = bank_output.replace("no",0) bank_output = bank_output.astype(np.int64) # + deletable=false editable=false nbgrader={"checksum": "ad86b5c5be9567ceca42d0d6c1ccf558", "grade": true, "grade_id": "cell-280b855388c11990", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_equal(bank_output.dtype, np.int64) # - # ### Problem 4. Perform logistic regression on the original features (1 point) # Perform logistic regression. Save the model in the variable `bank_model`. # # Use all the data. This is not generally recommended but we'll think of a workaround next time. # # Pass a large number for the parameter `C = 1e6` (which is equivalent to `C = 1000000`). 
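Logistic regression models the probability of the positive class through the sigmoid function; in scikit-learn, `C` is the *inverse* regularization strength, so `C = 1e6` means almost no regularization. A minimal sketch of the sigmoid itself (standard math, not scikit-learn's implementation):

```python
import math

def sigmoid(z):
    # Maps any real score z to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

p_mid = sigmoid(0.0)   # 0.5: the decision boundary
p_high = sigmoid(6.0)  # large positive scores give probabilities near 1
```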
# + deletable=false nbgrader={"checksum": "4c2a3af88dc6e6dec25f82993e9d04c0", "grade": false, "grade_id": "cell-46045c65058e5e8b", "locked": false, "schema_version": 1, "solution": true} bank_model = LogisticRegression(C=1e6) # + deletable=false editable=false nbgrader={"checksum": "b342c65cc5749cea353896d940905921", "grade": true, "grade_id": "cell-17cefb4e8081fcdb", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_is_not_none(bank_model) assert_equal(bank_model.C, 1e6) # - # ### Problem 5. Get an estimate of the model performance (1 point) # Use `bank_model.score()` to get an accuracy score. We'll talk about what it represents later in the course. Save the resulting score in the variable `accuracy_score`. To generate the score, use all data. Once again, this is not what we do usually but it's a good start anyway. # + deletable=false nbgrader={"checksum": "d1c437ca23c62db5c52ef7dd52827f0d", "grade": false, "grade_id": "cell-c1ccd2f4394c67ee", "locked": false, "schema_version": 1, "solution": true} bank_model.fit(bank_features,bank_output) accuracy_score = bank_model.score(bank_features,bank_output) print(accuracy_score) # + deletable=false editable=false nbgrader={"checksum": "f36e16bbd9113c991a34051fb6d4e3f2", "grade": true, "grade_id": "cell-52c9269442900910", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_almost_equal(accuracy_score, 0.9042247290422473, delta = 0.001) # - # We have to make a note here. If we explore how the output classes are distributed, we can see that "class 1" is about 11.5% of all samples, i.e. very few clients actually subscribed after the call, which is expected. This means the data is **highly imbalanced**. In this case, accuracy is not a good measure of the overall model performance. We have to look at other scoring measures to get a better estimate of what's going on. # # But once again, we're just getting started. 
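To see why accuracy can mislead here, consider a hypothetical classifier that always predicts "no" on data with roughly this dataset's 11.5% positive rate (made-up counts for illustration):

```python
# 115 positives out of 1000 samples, roughly the imbalance in the bank data
y_true = [1] * 115 + [0] * 885
y_pred = [0] * 1000  # always predict the majority class

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# Accuracy looks decent (0.885) although not a single positive is found
recall_positive = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1) / 115
```

So a model can beat 88% accuracy while being useless for the business question, which is why recall, precision, or similar class-aware scores matter on imbalanced data.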
# + # There's nothing to do here, just execute the cell and view the plot and print results. # Cells like these are here only for your convenience and to help you understand the task better plt.bar([0, 1], [len(bank_output[bank_output == 0]), len(bank_output[bank_output == 1])]) plt.xticks([0, 1]) plt.xlabel("Class") plt.ylabel("Count") plt.show() print("Positive cases: {:.3f}% of all".format(bank_output.sum() / len(bank_output) * 100)) # - # ### Problem 6. More features (1 point) # The score is pretty high. But can we improve it? One way to try and improve it is to use polynomial features. As we saw, this creates all possible multiples of input features. In the real world, this corresponds to **feature interaction**. # # Create a model for quadratic features (`degree = 2`). Save it in the variable `quad_feature_transformer`. Also, set `interaction_only` to True: let's suppose we don't want to square each feature. This means that we have all single features $x_1, x_2, \dots$ and all interactions $x_1x_2, x_1x_3, \dots$ but no $x_1^2, x_2^2, \dots$ # # Using it, transform all `bank_features`. Save them in the variable `bank_features_quad`. # # Note how the number of features exploded: from 51 we get more than 1300. # + deletable=false nbgrader={"checksum": "1d9e945981589431cb60fb23f3e292a4", "grade": false, "grade_id": "cell-f4b5c98c2c3d7ef3", "locked": false, "schema_version": 1, "solution": true} quad_feature_transformer = PolynomialFeatures(degree=2,interaction_only=True) bank_features_quad = quad_feature_transformer.fit_transform(bank_features) # + deletable=false editable=false nbgrader={"checksum": "7dc305e61d9755d1fbd8fcab1157e6cd", "grade": true, "grade_id": "cell-b42599d51988eda2", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_equal(quad_feature_transformer.degree, 2) assert_equal(quad_feature_transformer.interaction_only, True) assert_equal(bank_features_quad.shape, (4521, 1327)) # - # ### Problem 7. 
Train a model on the quadratic features (1 point) # You know the drill. Fit a logistic regression model with all data in `bank_features_quad` and `bank_output`. Use `C = 1e6`. Save it in `bank_model_quad`. Score it and save the score in the variable `accuracy_score_quad`. # + deletable=false nbgrader={"checksum": "352a0967d85055d7231829c734ee88af", "grade": false, "grade_id": "cell-13ea36255860f15b", "locked": false, "schema_version": 1, "solution": true} bank_model_quad = LogisticRegression(C=1e6) bank_model_quad.fit(bank_features_quad, bank_output) accuracy_score_quad = bank_model_quad.score(bank_features_quad, bank_output) print("Accuracy: {:.3f}".format(accuracy_score_quad)) # + deletable=false editable=false nbgrader={"checksum": "7913ad0cba092aec2bcaa500fc677e96", "grade": true, "grade_id": "cell-4718eb80c10d4a16", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_is_not_none(bank_model_quad) assert_equal(bank_model_quad.C, 1e6) assert_equal(len(bank_model_quad.coef_[0]), bank_features_quad.shape[1]) # This is a simple check that the model has been trained assert_almost_equal(accuracy_score_quad, 0.8986949789869498, delta = 0.001) # - # Interesting... we have many more features but the accuracy actually dropped a little. We would observe the same behaviour if we took polynomials of degree 3: more than 20 000 features and accuracy less than 0.87. # # This is our first example of model selection. Why is the seemingly more complex model less accurate? There are two main reasons: # * As we said, the default score (accuracy) is not good for this dataset, so its values aren't too relevant. # * The number of features is alarmingly high. This leads to what we call "overfitting": our model is too complex. We can't quite catch it with this scoring scheme but we will be able to do that later. # # We can try a lot of things: test our model better, improve our scoring schemes, come up with better features, etc. 
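One cheap way to catch overfitting, sketched here on hypothetical synthetic data rather than the bank dataset: hold out part of the data and compare the score on the training part with the score on the held-out part. A flexible model can look better on the data it was fit to while doing no better (or worse) on unseen data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical data: 200 samples, 10 features, only the first one is informative
rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

train, test = np.arange(150), np.arange(150, 200)  # simple holdout split

scores = {}
for degree in (1, 2):
    Xp = PolynomialFeatures(degree=degree, interaction_only=True).fit_transform(X)
    model = LogisticRegression(C=1e6, max_iter=10000).fit(Xp[train], y[train])
    # (train accuracy, held-out accuracy)
    scores[degree] = (model.score(Xp[train], y[train]), model.score(Xp[test], y[test]))
    print(degree, Xp.shape[1], scores[degree])
```

The train/test gap typically widens as the feature count grows; we will formalize this with proper cross-validation later in the course.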
In general, we need to take care of several things: # * Are all parameters relevant? Can we discard some of them and how? # * How do we deal with imbalanced data? # * Is logistic regression the best type of model overall? Are there models that do better on this data? # * What are the best hyperparameters for the model? We chose `C = 1e6` arbitrarily. # # We'll continue to do this next time. Let's try just one more thing. # ### Problem 8. Perform normalization and compare results (1 point) # We saw very strange results. A part of the problem might be that our data isn't normalized. # # Use the `MinMaxScaler` to scale all values in `bank_features_quad`. Save them in `bank_features_quad_scaled`. This will take several seconds. # # Perform a logistic regression on the new, scaled features: `bank_features_quad_scaled` and `bank_output`. Use the same parameters to score it. # # You should observe that the score improves significantly. # + deletable=false nbgrader={"checksum": "703dd691c73f0b5a7202380746383250", "grade": false, "grade_id": "cell-972ff9771d00156b", "locked": false, "schema_version": 1, "solution": true} bank_model_quad_scaled = LogisticRegression(C=1e6) scaler = MinMaxScaler(copy=True, feature_range=(0,1)) bank_features_quad_scaled = scaler.fit_transform(bank_features_quad) bank_model_quad_scaled.fit(bank_features_quad_scaled, bank_output) accuracy_score_quad_scaled = bank_model_quad_scaled.score(bank_features_quad_scaled, bank_output) # + deletable=false editable=false nbgrader={"checksum": "01d5030b116296f7babfc308517b8f15", "grade": true, "grade_id": "cell-617300ee8ad8e106", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert_is_not_none(bank_model_quad) assert_equal(bank_model_quad.C, 1e6) assert_equal(len(bank_model_quad_scaled.coef_[0]), bank_features_quad_scaled.shape[1]) assert_almost_equal(accuracy_score_quad_scaled, 0.969033399690334, delta = 0.001) # - # Also, if you do the test, scaling the original features
(instead of the quadratic ones) doesn't improve the score much. This is partly because accuracy isn't the most informative score for this dataset. Also, our results are a great reminder that **if we have many uncorrelated features, it's almost always a good idea to rescale them**. You can read some papers online, or ask on the forums, if you're interested in why exactly this happens. # # The main takeaway from this lab is: working with `scikit-learn` is easy, but in order to get meaningful results, you need to understand what you're doing.
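The effect of `MinMaxScaler` can be seen in isolation with a tiny sketch using made-up numbers (think of features on very different scales, such as age versus account balance):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical features on wildly different scales
X = np.array([[25.0,  1500.0],
              [40.0, 90000.0],
              [60.0,   300.0]])

# Each column is mapped independently onto [0, 1]
X_scaled = MinMaxScaler().fit_transform(X)
print(X_scaled.min(axis=0))  # every column now starts at 0
print(X_scaled.max(axis=0))  # and ends at 1
```

After scaling, no single feature dominates the optimization just because its raw values happen to be large.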
Linear_and_logistic_regression/02. Linear-and-Logistic-Regression-Lab/Linear and Logistic Regression Lab.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (myenv3.6) # language: python # name: myenv3.6 # --- import numpy import scipy.io.wavfile import matplotlib.pyplot as plt import os # + # Compute MFSC features for CNN training dir_string = './data/' file_list = os.listdir(dir_string) file_list = [file for file in file_list if (file[0] == '0' or file[0] == '1')] for file in file_list: sample_rate, signal = scipy.io.wavfile.read(dir_string + file) # create shorter-term frame for signal frame_size = 0.025 # number of seconds of each frame frame_stride = 0.01 # size of stride between two frames (frame_size - frame_stride = overlap between frames) frame_length, frame_step = frame_size * sample_rate, frame_stride * sample_rate signal_length = len(signal) frame_length = int(round(frame_length)) frame_step = int(round(frame_step)) if (signal_length > frame_length): num_steps = int(numpy.ceil(float(signal_length - frame_length) / frame_step)) else: num_steps = 1 num_frames = num_steps + 1 pad_signal_length = num_steps * frame_step + frame_length # number of zeros to pad at the end of signal pad_vector = numpy.zeros((pad_signal_length - signal_length)) pad_signal = numpy.append(signal, pad_vector) indices = numpy.tile(numpy.arange(0, frame_length), (num_frames, 1)) + \ numpy.tile(numpy.arange(0, num_frames * frame_step, frame_step), (frame_length, 1)).T # indices in emphasized_signal to slice to form frames frames = pad_signal[indices.astype(numpy.int32, copy=False)] # apply hamming function for FFT frames *= numpy.hamming(frame_length) # Fourier Transform and Power Spectrum NFFT = 512 mag_frames = numpy.absolute(numpy.fft.rfft(frames, NFFT)) # Magnitude of the FFT pow_frames = ((1.0 / NFFT) * ((mag_frames) ** 2)) # Power Spectrum # apply triangular filter nfilt = 40 low_freq_mel = 0 high_freq_mel = (2595 * numpy.log10(1 + (sample_rate / 2) / 700)) 
# Convert Hz to Mel mel_points = numpy.linspace(low_freq_mel, high_freq_mel, nfilt + 2) # Equally spaced in Mel scale (incl. low&high freq) hz_points = (700 * (10**(mel_points / 2595) - 1)) # Convert Mel to Hz bin = numpy.floor((NFFT + 1) * hz_points / sample_rate) fbank = numpy.zeros((nfilt, int(numpy.floor(NFFT / 2 + 1)))) for m in range(1, nfilt + 1): f_m_minus = int(bin[m - 1]) # left f_m = int(bin[m]) # center f_m_plus = int(bin[m + 1]) # right for k in range(f_m_minus, f_m): fbank[m - 1, k] = (k - bin[m - 1]) / (bin[m] - bin[m - 1]) for k in range(f_m, f_m_plus): fbank[m - 1, k] = (bin[m + 1] - k) / (bin[m + 1] - bin[m]) filter_banks = numpy.dot(pow_frames, fbank.T) filter_banks = numpy.where(filter_banks == 0, numpy.finfo(float).eps, filter_banks) # Numerical Stability filter_banks = 20 * numpy.log10(filter_banks) # dB #filter_banks = filter_banks.T # 40 nfilt * 63 nframes filter_banks1d = filter_banks[:-1] - filter_banks[1:] filter_banks2d = filter_banks1d[:-1] - filter_banks1d[1:] #filter_banks_concat = numpy.zeros((61*3, 40)) filter_banks_concat = numpy.zeros((40,61,3)) filter_banks = filter_banks.T # 40 * 63 filter_banks1d = filter_banks1d.T # 40 * 62 filter_banks2d = filter_banks2d.T # 40 * 61 frame_limit = min(61, filter_banks2d.shape[1]) # the assignments below do not depend on a loop index, so no loop is needed filter_banks_concat[:,:frame_limit,0] = filter_banks[:,:frame_limit] filter_banks_concat[:,:frame_limit,1] = filter_banks1d[:,:frame_limit] filter_banks_concat[:,:frame_limit,2] = filter_banks2d[:,:frame_limit] filter_banks_concat = filter_banks_concat.flatten() #print(filter_banks_concat.shape) #plt.imshow(filter_banks_concat, cmap='hot', interpolation='nearest') #plt.show() ''' frame_limit = min(61, filter_banks2d.shape[0]) for i in range(frame_limit*3): ind_frame = int(i/3) ind_deriv = i % 3 if(ind_deriv == 0): filter_banks_concat[i] = filter_banks[ind_frame] if(ind_deriv == 1): filter_banks_concat[i] = filter_banks1d[ind_frame] else: filter_banks_concat[i] = filter_banks2d[ind_frame]
filter_banks_concat = filter_banks_concat.T ''' with open('./cnn_data/' + file[:-4], 'wb') as f: numpy.save(f, filter_banks_concat) # - # %matplotlib inline plt.imshow(frames, cmap='hot') # + # Compute MFCC features for direct classification from scipy.fftpack import dct dir_string = './data/' file_list = os.listdir(dir_string) file_list = [file for file in file_list if (file[0] == '0' or file[0] == '1')] for file in file_list: sample_rate, signal = scipy.io.wavfile.read(dir_string + file) # create shorter-term frame for signal frame_size = 0.025 # number of seconds of each frame frame_stride = 0.01 # size of stride between two frames (frame_size - frame_stride = overlap between frames) frame_length, frame_step = frame_size * sample_rate, frame_stride * sample_rate signal_length = len(signal) frame_length = int(round(frame_length)) frame_step = int(round(frame_step)) if (signal_length > frame_length): num_steps = int(numpy.ceil(float(signal_length - frame_length) / frame_step)) else: num_steps = 1 num_frames = num_steps + 1 pad_signal_length = num_steps * frame_step + frame_length # number of zeros to pad at the end of signal pad_vector = numpy.zeros((pad_signal_length - signal_length)) pad_signal = numpy.append(signal, pad_vector) indices = numpy.tile(numpy.arange(0, frame_length), (num_frames, 1)) + \ numpy.tile(numpy.arange(0, num_frames * frame_step, frame_step), (frame_length, 1)).T # indices in emphasized_signal to slice to form frames frames = pad_signal[indices.astype(numpy.int32, copy=False)] # apply hamming function for FFT frames *= numpy.hamming(frame_length) # Fourier Transform and Power Spectrum NFFT = 512 mag_frames = numpy.absolute(numpy.fft.rfft(frames, NFFT)) # Magnitude of the FFT pow_frames = ((1.0 / NFFT) * ((mag_frames) ** 2)) # Power Spectrum # apply triangular filter nfilt = 40 low_freq_mel = 0 high_freq_mel = (2595 * numpy.log10(1 + (sample_rate / 2) / 700)) # Convert Hz to Mel mel_points = numpy.linspace(low_freq_mel, high_freq_mel, nfilt 
+ 2) # Equally spaced in Mel scale (incl. low&high freq) hz_points = (700 * (10**(mel_points / 2595) - 1)) # Convert Mel to Hz bin = numpy.floor((NFFT + 1) * hz_points / sample_rate) fbank = numpy.zeros((nfilt, int(numpy.floor(NFFT / 2 + 1)))) for m in range(1, nfilt + 1): f_m_minus = int(bin[m - 1]) # left f_m = int(bin[m]) # center f_m_plus = int(bin[m + 1]) # right for k in range(f_m_minus, f_m): fbank[m - 1, k] = (k - bin[m - 1]) / (bin[m] - bin[m - 1]) for k in range(f_m, f_m_plus): fbank[m - 1, k] = (bin[m + 1] - k) / (bin[m + 1] - bin[m]) filter_banks = numpy.dot(pow_frames, fbank.T) filter_banks = numpy.where(filter_banks == 0, numpy.finfo(float).eps, filter_banks) # Numerical Stability filter_banks = 20 * numpy.log10(filter_banks) # dB #filter_banks = filter_banks.T # 40 nfilt * 63 nframes #filter_banks1d = filter_banks[:-1] - filter_banks[1:] #filter_banks2d = filter_banks1d[:-1] - filter_banks1d[1:] num_ceps = 12 mfcc = dct(filter_banks, type=2, axis=1, norm='ortho')[:, 1 : (num_ceps + 1)] # Keep 2-13 cep_lifter = 23 (nframes, ncoeff) = mfcc.shape n = numpy.arange(ncoeff) lift = 1 + (cep_lifter / 2) * numpy.sin(numpy.pi * n / cep_lifter) mfcc *= lift # mean normalization mfcc -= (numpy.mean(mfcc, axis=0)) mfcc_result = numpy.zeros((63,12)) dim1 = len(mfcc) if (dim1 <= 63): mfcc_result[:dim1, :] = mfcc else: mfcc_result[:,:] = mfcc[:63, :] with open('./mfcc_data/' + file[:-4], 'wb') as f: numpy.save(f, mfcc_result.T) # + from scipy.fftpack import dct sample_rate, signal = scipy.io.wavfile.read('./data/0_jackson_20.wav') # create shorter-term frame for signal frame_size = 0.025 # number of seconds of each frame frame_stride = 0.01 # size of stride between two frames (frame_size - frame_stride = overlap between frames) frame_length, frame_step = frame_size * sample_rate, frame_stride * sample_rate signal_length = len(signal) frame_length = int(round(frame_length)) frame_step = int(round(frame_step)) if (signal_length > frame_length): num_steps = 
int(numpy.ceil(float(signal_length - frame_length) / frame_step)) else: num_steps = 1 num_frames = num_steps + 1 pad_signal_length = num_steps * frame_step + frame_length # number of zeros to pad at the end of signal pad_vector = numpy.zeros((pad_signal_length - signal_length)) pad_signal = numpy.append(signal, pad_vector) indices = numpy.tile(numpy.arange(0, frame_length), (num_frames, 1)) + \ numpy.tile(numpy.arange(0, num_frames * frame_step, frame_step), (frame_length, 1)).T # indices in emphasized_signal to slice to form frames frames = pad_signal[indices.astype(numpy.int32, copy=False)] # apply hamming function for FFT frames *= numpy.hamming(frame_length) # Fourier Transform and Power Spectrum NFFT = 512 mag_frames = numpy.absolute(numpy.fft.rfft(frames, NFFT)) # Magnitude of the FFT pow_frames = ((1.0 / NFFT) * ((mag_frames) ** 2)) # Power Spectrum # apply triangular filter nfilt = 40 low_freq_mel = 0 high_freq_mel = (2595 * numpy.log10(1 + (sample_rate / 2) / 700)) # Convert Hz to Mel mel_points = numpy.linspace(low_freq_mel, high_freq_mel, nfilt + 2) # Equally spaced in Mel scale (incl. 
low&high freq) hz_points = (700 * (10**(mel_points / 2595) - 1)) # Convert Mel to Hz bin = numpy.floor((NFFT + 1) * hz_points / sample_rate) fbank = numpy.zeros((nfilt, int(numpy.floor(NFFT / 2 + 1)))) for m in range(1, nfilt + 1): f_m_minus = int(bin[m - 1]) # left f_m = int(bin[m]) # center f_m_plus = int(bin[m + 1]) # right for k in range(f_m_minus, f_m): fbank[m - 1, k] = (k - bin[m - 1]) / (bin[m] - bin[m - 1]) for k in range(f_m, f_m_plus): fbank[m - 1, k] = (bin[m + 1] - k) / (bin[m + 1] - bin[m]) filter_banks = numpy.dot(pow_frames, fbank.T) filter_banks = numpy.where(filter_banks == 0, numpy.finfo(float).eps, filter_banks) # Numerical Stability filter_banks = 20 * numpy.log10(filter_banks) # dB #filter_banks = filter_banks.T # 40 nfilt * 63 nframes #filter_banks1d = filter_banks[:-1] - filter_banks[1:] #filter_banks2d = filter_banks1d[:-1] - filter_banks1d[1:] num_ceps = 12 mfcc = dct(filter_banks, type=2, axis=1, norm='ortho')[:, 1 : (num_ceps + 1)] # Keep 2-13 cep_lifter = 23 (nframes, ncoeff) = mfcc.shape n = numpy.arange(ncoeff) lift = 1 + (cep_lifter / 2) * numpy.sin(numpy.pi * n / cep_lifter) mfcc *= lift # mean normalization mfcc -= (numpy.mean(mfcc, axis=0)) mfcc_result = numpy.zeros((63,12)) dim1 = len(mfcc) if (dim1 <= 63): mfcc_result[:dim1, :] = mfcc else: mfcc_result[:,:] = mfcc[:63, :] # + with open('./mfcc_data/1_jackson_0', 'rb') as f: mfcc = numpy.load(f) plt.imshow(mfcc, cmap='hot') print(mfcc.shape)
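The Hz-to-Mel conversion is written inline in several cells above; factoring it into two small helpers makes the symmetry explicit. A sketch with a round-trip check (the 8 kHz sample rate is an assumption, typical for the spoken-digit recordings used here; adjust it to match your data):

```python
import numpy

def hz_to_mel(hz):
    # Same formula as the inline computation above: mel = 2595 * log10(1 + hz / 700)
    return 2595.0 * numpy.log10(1.0 + hz / 700.0)

def mel_to_hz(mel):
    # Inverse mapping: hz = 700 * (10**(mel / 2595) - 1)
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

sample_rate = 8000  # assumption, not read from the data
nfilt = 40
mel_points = numpy.linspace(0.0, hz_to_mel(sample_rate / 2), nfilt + 2)
hz_points = mel_to_hz(mel_points)

# The filter centers are equally spaced in Mel, so in Hz they bunch up at low frequencies
assert numpy.allclose(mel_to_hz(hz_to_mel(hz_points)), hz_points)  # round trip
print(hz_points[:5])
```

Equal spacing in Mel is what gives the filterbank its finer resolution at low frequencies, where human hearing discriminates pitch best.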
MFCC_forCNN_v2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: cvxpygen # language: python # name: cvxpygen # --- # ## Resource Allocation Example # # Assume we have to assign resources of $m$ classes to $n$ kinds of jobs. This resource allocation is encoded in $X \in \mathbb{R}^{n \times m}$, with $X_{i,j}$ denoting the amount of resource $j$ allocated to job $i$. Given the utility matrix $W \in \mathbb{R}^{n \times m}$, we want to solve the optimization problem # # \begin{equation} # \begin{array}{ll} # \text{maximize} \quad &\mathrm{tr} \left( \min \left( X W^T, S\right) \right)\\ # \text{subject to} \quad &X^\mathrm{min} \leq X \leq X^\mathrm{max} \\ # &X^T \mathbb{1} \leq r, # \end{array} # \end{equation} # # with variable $X \in \mathbb{R}^{n \times m}$. The utility for some job $i$ cannot be increased beyond the saturation value $S_{ii}$, with $S \in \mathbb{S}_+^{n}$ being diagonal. The minimum and maximum amounts of resources to be allocated are denoted by $X^\mathrm{min} \geq 0$ and $X^\mathrm{max} \geq X^\mathrm{min}$, respectively, while $r$ is the vector of available resources. The problem is feasible if $\left(X^\mathrm{min}\right)^T \mathbb{1} \leq r$ and $X^\mathrm{min} \leq X^\mathrm{max}$. # # Let's define the corresponding CVXPY problem. # + import cvxpy as cp import numpy as np # define dimensions n, m = 30, 10 # define variable X = cp.Variable((n, m), name='X') # define parameters W = cp.Parameter((n, m), name='W') S = cp.Parameter((n, n), diag=True, name='S') X_min = cp.Parameter((n, m), name='X_min') X_max = cp.Parameter((n, m), name='X_max') r = cp.Parameter(m, name='r') # define objective objective = cp.Maximize(cp.trace(cp.minimum(X@W.T, S))) # define constraints constraints = [X_min <= X, X<= X_max, X.T@np.ones(n) <= r] # define problem problem = cp.Problem(objective, constraints) # - # Assign parameter values and solve the problem. 
# + np.random.seed(0) W.value = np.ones((n, m)) + 0.1*np.random.rand(n, m) S.value = 100*np.eye(n) X_min.value = np.random.rand(n, m) X_max.value = 10 + np.random.rand(n, m) r.value = np.matmul(X_min.value.T, np.ones(n)) + 10*np.random.rand(m) val = problem.solve() # - # Generating C source for the problem is as easy as: # + from cvxpygen import cpg cpg.generate_code(problem, code_dir='resource_code') # - # Now, you can use a python wrapper around the generated code as a custom CVXPY solve method. # + from resource_code.cpg_solver import cpg_solve import numpy as np import pickle import time # load the serialized problem formulation with open('resource_code/problem.pickle', 'rb') as f: prob = pickle.load(f) # assign parameter values np.random.seed(0) prob.param_dict['S'].value = 100*np.eye(n) prob.param_dict['W'].value = 0.8*np.ones((n, m)) + 0.2*np.random.rand(n, m) prob.param_dict['X_min'].value = np.zeros((n, m)) prob.param_dict['X_max'].value = np.ones((n, m)) prob.param_dict['r'].value = np.matmul(prob.param_dict['X_min'].value.T, np.ones(n)) + np.random.rand(m) # solve problem conventionally t0 = time.time() # CVXPY chooses eps_abs=eps_rel=1e-5, max_iter=10000, polish=True by default, # however, we choose the OSQP default values here, as they are used for code generation as well val = prob.solve() t1 = time.time() print('\nCVXPY\nSolve time: %.3f ms' % (1000 * (t1 - t0))) print('Objective function value: %.6f\n' % val) # solve problem with C code via python wrapper prob.register_solve('CPG', cpg_solve) t0 = time.time() val = prob.solve(method='CPG') t1 = time.time() print('\nCVXPYgen\nSolve time: %.3f ms' % (1000 * (t1 - t0))) print('Objective function value: %.6f\n' % val) # + from visualization.resource import create_animation from IPython.display import Image create_animation(prob, 'resource_animation') with open('resource_animation.gif', 'rb') as f: display(Image(f.read()))
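The feasibility conditions stated at the top — $\left(X^\mathrm{min}\right)^T \mathbb{1} \leq r$ and $X^\mathrm{min} \leq X^\mathrm{max}$ — are cheap to verify before calling the solver. A sketch (the helper name `is_feasible` is our own, not part of CVXPY or CVXPYgen):

```python
import numpy as np

def is_feasible(X_min, X_max, r):
    """Check the two feasibility conditions of the resource allocation problem."""
    bounds_ok = np.all(X_min <= X_max)                           # X_min <= X_max
    resources_ok = np.all(X_min.T @ np.ones(X_min.shape[0]) <= r)  # X_min^T 1 <= r
    return bool(bounds_ok and resources_ok)

n, m = 30, 10
rng = np.random.default_rng(0)
X_min = rng.random((n, m))
X_max = 10 + rng.random((n, m))
r = X_min.T @ np.ones(n) + 10 * rng.random(m)

print(is_feasible(X_min, X_max, r))        # feasible by construction
print(is_feasible(X_min, X_max, 0 * r))    # infeasible: no resources at all
```

Running such a check before `problem.solve()` turns an opaque solver failure into an explicit diagnosis of which input is inconsistent.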
examples/resource.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <a href="http://cocl.us/pytorch_link_top"> # <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " /> # </a> # <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" /> # <h1>Linear regression 1D: Training Two Parameters</h1> # + [markdown] slideshow={"slide_type": "slide"} # <h2>Table of Contents</h2> # <p>In this lab, you will train a model with PyTorch by using the data that we created. The model will have a slope and a bias, and we will review how to make a prediction in several different ways by using PyTorch.</p> # # <ul> # <li><a href="#Makeup_Data">Make Some Data</a></li> # <li><a href="#Model_Cost">Create the Model and Cost Function (Total Loss)</a></li> # <li><a href="#Train">Train the Model</a></li> # </ul> # <p>Estimated Time Needed: <strong>20 min</strong></p> # # <hr> # - # <h2>Preparation</h2> # + [markdown] slideshow={"slide_type": "slide"} # We'll need the following libraries: # + slideshow={"slide_type": "slide"} # These are the libraries we are going to use in the lab. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits import mplot3d # - # The class <code>plot_error_surfaces</code> is just to help you visualize the data space and the parameter space during training and has nothing to do with PyTorch.
# + # The class for plotting the diagram class plot_error_surfaces(object): # Constructor def __init__(self, w_range, b_range, X, Y, n_samples = 30, go = True): W = np.linspace(-w_range, w_range, n_samples) B = np.linspace(-b_range, b_range, n_samples) w, b = np.meshgrid(W, B) Z = np.zeros((n_samples, n_samples)) count1 = 0 self.y = Y.numpy() self.x = X.numpy() for w1, b1 in zip(w, b): count2 = 0 for w2, b2 in zip(w1, b1): Z[count1, count2] = np.mean((self.y - (w2 * self.x + b2)) ** 2) count2 += 1 count1 += 1 self.Z = Z self.w = w self.b = b self.W = [] self.B = [] self.LOSS = [] self.n = 0 if go == True: plt.figure() plt.figure(figsize = (7.5, 5)) plt.axes(projection='3d').plot_surface(self.w, self.b, self.Z, rstride = 1, cstride = 1,cmap = 'viridis', edgecolor = 'none') plt.title('Cost/Total Loss Surface') plt.xlabel('w') plt.ylabel('b') plt.show() plt.figure() plt.title('Cost/Total Loss Surface Contour') plt.xlabel('w') plt.ylabel('b') plt.contour(self.w, self.b, self.Z) plt.show() # Setter def set_para_loss(self, W, B, loss): self.n = self.n + 1 self.W.append(W) self.B.append(B) self.LOSS.append(loss) # Plot diagram def final_plot(self): ax = plt.axes(projection = '3d') ax.plot_wireframe(self.w, self.b, self.Z) ax.scatter(self.W,self.B, self.LOSS, c = 'r', marker = 'x', s = 200, alpha = 1) plt.figure() plt.contour(self.w,self.b, self.Z) plt.scatter(self.W, self.B, c = 'r', marker = 'x') plt.xlabel('w') plt.ylabel('b') plt.show() # Plot diagram def plot_ps(self): plt.subplot(121) plt.plot(self.x, self.y, 'ro', label="training points") plt.plot(self.x, self.W[-1] * self.x + self.B[-1], label = "estimated line") plt.xlabel('x') plt.ylabel('y') plt.ylim((-10, 15)) plt.title('Data Space Iteration: ' + str(self.n)) plt.subplot(122) plt.contour(self.w, self.b, self.Z) plt.scatter(self.W, self.B, c = 'r', marker = 'x') plt.title('Total Loss Surface Contour Iteration: ' + str(self.n)) plt.xlabel('w') plt.ylabel('b') plt.show() # - # <!--Empty Space for separating topics--> # <h2
id="Makeup_Data">Make Some Data</h2> # Import PyTorch: # + # Import PyTorch library import torch # - # Start with generating values from -3 to 3 that create a line with a slope of 1 and a bias of -1. This is the line that you need to estimate. # + # Create f(X) with a slope of 1 and a bias of -1 X = torch.arange(-3, 3, 0.1).view(-1, 1) f = 1 * X - 1 # - # Now, add some noise to the data: # + # Add noise Y = f + 0.1 * torch.randn(X.size()) # - # Plot the line and <code>Y</code> with noise: # + # Plot out the line and the points with noise plt.plot(X.numpy(), Y.numpy(), 'rx', label = 'y') plt.plot(X.numpy(), f.numpy(), label = 'f') plt.xlabel('x') plt.ylabel('y') plt.legend() # - # <h2 id="Model_Cost">Create the Model and Cost Function (Total Loss)</h2> # Define the <code>forward</code> function: # + # Define the forward function def forward(x): return w * x + b # - # Define the cost or criterion function (MSE): # + # Define the MSE Loss function def criterion(yhat,y): return torch.mean((yhat-y)**2) # - # Create a <code>plot_error_surfaces</code> object to visualize the data space and the parameter space during training: # + # Create plot_error_surfaces for viewing the data get_surface = plot_error_surfaces(15, 15, X, Y, 30) # - # <!--Empty Space for separating topics--> # <h2 id="Train">Train the Model</h2> # Create model parameters <code>w</code>, <code>b</code> by setting the argument <code>requires_grad</code> to True because we must learn them using the data. # + # Define the parameters w, b for y = wx + b w = torch.tensor(-15.0, requires_grad = True) b = torch.tensor(-10.0, requires_grad = True) # - # Set the learning rate to 0.1 and create an empty list <code>LOSS</code> for storing the loss for each iteration. # + # Define learning rate and create an empty list for containing the loss for each iteration. lr = 0.1 LOSS = [] # - # Define the <code>train_model</code> function to train the model.
# + # The function for training the model def train_model(iter): # Loop for epoch in range(iter): # make a prediction Yhat = forward(X) # calculate the loss loss = criterion(Yhat, Y) # Section for plotting get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist()) if epoch % 3 == 0: get_surface.plot_ps() # store the loss in the list LOSS LOSS.append(loss) # backward pass: compute gradient of the loss with respect to all the learnable parameters loss.backward() # update parameters slope and bias w.data = w.data - lr * w.grad.data b.data = b.data - lr * b.grad.data # zero the gradients before running the backward pass w.grad.data.zero_() b.grad.data.zero_() # - # Run 15 iterations of gradient descent: <b>bug</b> data space is 1 iteration ahead of parameter space # + # Train the model with 15 iterations train_model(15) # - # Plot total loss/cost surface with loss values for different parameters in red: # + # Plot out the Loss Result get_surface.final_plot() plt.plot(LOSS) plt.tight_layout() plt.xlabel("Epoch/Iterations") plt.ylabel("Cost") # - # <!--Empty Space for separating topics--> # <h3>Practice</h3> # Experiment using a learning rate of 0.2 with the following parameters. Run 15 iterations. # + # Practice: train and plot the result with lr = 0.2 and the following parameters w = torch.tensor(-15.0, requires_grad = True) b = torch.tensor(-10.0, requires_grad = True) lr = 0.2 LOSS2 = [] def my_train_model(iter): for epoch in range(iter): Yhat = forward(X) loss = criterion(Yhat, Y) get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist()) if epoch % 3 == 0: get_surface.plot_ps() LOSS2.append(loss) loss.backward() w.data = w.data - lr * w.grad.data b.data = b.data - lr * b.grad.data w.grad.data.zero_() b.grad.data.zero_() my_train_model(15) # - # Double-click <b>here</b> for the solution.
# <!-- # # # --> # Plot the <code>LOSS</code> and <code>LOSS2</code> # + # Practice: Plot the LOSS and LOSS2 in order to compare the Total Loss # Type your code here # - # Double-click <b>here</b> for the solution. # <!-- # plt.plot(LOSS, label = "LOSS") # plt.plot(LOSS2, label = "LOSS2") # plt.tight_layout() # plt.xlabel("Epoch/Iterations") # plt.ylabel("Cost") # plt.legend() # --> # <!--Empty Space for separating topics--> # <a href="http://cocl.us/pytorch_link_bottom"> # <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" /> # </a> # # <h2>About the Authors:</h2> # # <a href="https://www.linkedin.com/in/joseph-s-50398b136/"><NAME></a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD. # Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/"><NAME></a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a"><NAME></a> # <hr> # Copyright &copy; 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
IBM_AI/4_Pytorch/2.3_training_slope_and_bias_v3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # %load_ext watermark # %watermark -v -n -m -p numpy,scipy,sklearn,pandas # + # %matplotlib inline # %load_ext autoreload # %autoreload 2 import pandas as pd import numpy as np import seaborn as sns import os PROJ_ROOT = os.path.abspath(os.path.join(os.pardir)) print(PROJ_ROOT) import sys sys.path.append(os.path.join(PROJ_ROOT, 'src')) from data.preprocess import read_raw_data, preprocess_data from visualization.exploratory import exploratory_visualization # - data_fname = os.path.join(PROJ_ROOT, 'data', 'raw', 'iris.csv') raw_data = read_raw_data(data_fname) raw_data.head() preprocessed_data = preprocess_data(raw_data) preprocessed_data.head() exploratory_visualization(preprocessed_data)
notebooks/00-initial-exploration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # GPyOpt: armed bandits optimization # # ### Written by <NAME>, Amazon Research Cambridge, UK. # # # *Last updated Monday, 22 May 2016.* # # In this notebook we will see how to do armed bandits optimization with GPyOpt. To do this we will use data of weather forecasts at weather stations across more than 10,000 locations in the United States. The [OpenWeatherMap project](http://openweathermap.org/) provides an API service to download this information, and this [dataset](https://github.com/WeatherStudy/weather_study) contains the weather forecasts for these stations. In this notebook we will use the file target_day_20140422.dat, which contains the weather forecasts for each station in the United States for April 22, 2014. The latitude and longitude of the stations are available, as well as the forecasts for the next 7 days. # # We start by loading the packages that we will need in our analysis. # %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import GPyOpt # Next, we load the data. filename='./data/target_day_20140422.dat' f = open(filename, 'r') contents = f.readlines() # Now, we process the dataset. We select only the data corresponding to April 22, 2014 and we remove the stations in Alaska and the US islands. The first part of the next cell was taken from this [matplotlib tutorial](https://github.com/ginaschmalzle/pyladies_matplotlib_ipython_notebooks/blob/master/Matplotlib_tutorial.ipynb).
# + ## Create a dictionary for the forecasts forecast_dict = {} for line in range(1, len(contents)): line_split = contents[line].split(' ') try: forecast_dict[line_split[0], line_split[1]][line_split[2]] = {'MaxT':float(line_split[3]), 'MinT':float(line_split[4][:-1])} except: forecast_dict[line_split[0], line_split[1]] = {} forecast_dict[line_split[0], line_split[1]][line_split[2]] = {'MaxT':float(line_split[3]), 'MinT':float(line_split[4][:-1])} keys = forecast_dict.keys() day_out = '0' # 0-7 temp = 'MaxT' # MaxT or MinT temperature = []; lat = []; lon = [] for key in keys: temperature.append(float(forecast_dict[key][day_out][temp])) lat.append(float(key[0])) lon.append(float(key[1])) ## Create numpy arrays for the analysis and remove Alaska and the islands lon = np.array(lon) lat = np.array(lat) sel = np.logical_and(np.logical_and(lat>24,lat<51),np.logical_and(lon> -130, lon <-65)) stations_coordinates_all = np.array([lon,lat]).T stations_maxT_all = np.array([temperature]).T stations_coordinates = stations_coordinates_all[sel,:] stations_maxT = stations_maxT_all[sel,:] # - # Check the total number of stations. stations_maxT.shape[0] # The array *stations_coordinates* contains the longitude and latitude of the weather stations and *stations_maxT* contains the maximum temperature value recorded in those locations on April 22, 2014. Next we make a plot of all available stations. plt.figure(figsize=(12,7)) sc = plt.scatter(stations_coordinates[:,0],stations_coordinates[:,1], c='b', s=2, edgecolors='none') plt.title('US weather stations',size=25) plt.xlabel('Longitude',size=15) plt.ylabel('Latitude',size=15) plt.ylim((25,50)) plt.xlim((-128,-65)) # Our goal is to find the **coldest stations** in this map using the **minimum number of queries**. We use the full dataset to create this objective function.
# Class that defines the function to optimize given the available locations class max_Temp(object): def __init__(self,stations_coordinates,stations_maxT): self.stations_coordinates = stations_coordinates self.stations_maxT = stations_maxT def f(self,x): return np.dot(0.5*(self.stations_coordinates == x).sum(axis=1),self.stations_maxT)[:,None] # The class *max_Temp* returns the temperature of a station every time it is queried with the coordinates of one of the available stations. To use it for this optimization example we create an instance of it. # Objective function given the current inputs func = max_Temp(stations_coordinates,stations_maxT) # Our design space is now the set of coordinates of the weather stations. We create it: domain = [{'name': 'stations', 'type': 'bandit', 'domain':stations_coordinates }] # armed bandit with the locations # Now we create the GPyOpt object. We will initialize the process with 5 stations, treat the evaluations as exact (noise-free), and leave the outputs unnormalized. A seed is used for reproducibility. from numpy.random import seed seed(123) myBopt = GPyOpt.methods.BayesianOptimization(f=func.f, # function to optimize domain=domain, initial_design_numdata = 5, acquisition_type='EI', exact_feval = True, normalize_Y = False, optimize_restarts = 10, acquisition_weight = 2, de_duplication = True) myBopt.model.model # We run the optimization for a maximum of 50 iterations. # Run the optimization max_iter = 50 # evaluation budget myBopt.run_optimization(max_iter) # GPyOpt prints a message to say that the optimization was stopped because the same location was selected twice. Let's have a look at the results. We plot the map with the true temperature of the stations, the coldest one, and the best found location.
# + plt.figure(figsize=(15,7)) jet = plt.cm.get_cmap('jet') sc = plt.scatter(stations_coordinates[:,0],stations_coordinates[:,1], c=stations_maxT, vmin=0, vmax =35, cmap=jet, s=3, edgecolors='none') cbar = plt.colorbar(sc, shrink = 1) cbar.set_label(temp) plt.plot(myBopt.x_opt[0],myBopt.x_opt[1],'ko',markersize=10, label ='Best found') plt.plot(myBopt.X[:,0],myBopt.X[:,1],'k.',markersize=8, label ='Observed stations') plt.plot(stations_coordinates[np.argmin(stations_maxT),0],stations_coordinates[np.argmin(stations_maxT),1],'k*',markersize=15, label ='Coldest station') plt.legend() plt.ylim((25,50)) plt.xlim((-128,-65)) plt.title('Max. temperature: April, 22, 2014',size=25) plt.xlabel('Longitude',size=15) plt.ylabel('Latitude',size=15) plt.text(-125,28,'Total stations =' + str(stations_maxT.shape[0]),size=20) plt.text(-125,26.5,'Sampled stations ='+ str(myBopt.X.shape[0]),size=20) # - # The coldest and the selected locations are very close. Note that only a few evaluations were necessary to find this station. Of course, different results can be obtained with different initializations, models, acquisitions, etc. To finish, we plot the temperature of the best found station over the histogram of all temperatures. plt.figure(figsize=(8,5)) xx= plt.hist(stations_maxT,bins =50) plt.title('Distribution of max. temperatures',size=25) plt.vlines(min(stations_maxT),0,1000,lw=3,label='Coldest station') plt.vlines(myBopt.fx_opt,0,1000,lw=3,linestyles=u'dotted',label='Best found') plt.legend() plt.xlabel('Max. temperature',size=15) plt.ylabel('Frequency',size=15) # We see that it is indeed one of the coldest stations.
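# As an aside, the lookup trick inside `max_Temp.f` can be mimicked without NumPy. The sketch below (with made-up coordinates and temperatures, not from the dataset) shows how halving the per-coordinate match count yields a 0/1 indicator for an exact match; note that, as in the original vectorized version, a station sharing just one coordinate with the query would contribute half its temperature, which exact queries over real station coordinates avoid.

```python
def lookup_temperature(coords, temps, query):
    """Return the temperature of the station whose (lon, lat) equals `query`.

    Mirrors max_Temp.f: for each station, count how many coordinates match
    the query (0, 1, or 2); halving gives 1.0 only for an exact match, and
    the dot product with the temperatures picks out that station's value.
    """
    indicators = [0.5 * ((c[0] == query[0]) + (c[1] == query[1])) for c in coords]
    return sum(ind * t for ind, t in zip(indicators, temps))

# Illustrative (made-up) stations and max temperatures
coords = [(-120.0, 40.0), (-80.0, 30.0)]
temps = [12.5, 28.0]
print(lookup_temperature(coords, temps, (-80.0, 30.0)))  # 28.0
```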
manual/GPyOpt_bandits_optimization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This report explains the queries run against the `wine`, `inn`, `maraton`, and `norte` indices, which are indexed into the Elasticsearch server by the `document-indexing.py` script. # # For each query we show both the code needed to run it against the Elasticsearch REST API (in Python) and the result obtained. # + import json import requests from datetime import datetime ELASTIC = "http://localhost:9200" # address where Elasticsearch is running # - # # Wine index # This index contains data about different wines. # ## Top 5 wines of each color # This query finds the 5 best-scored wines of each color (*White* and *Red*). It is implemented as two queries, one for the red wines and one for the whites. The results are limited to the first 5 documents sorted by the **Score** field.
# + query = { "size" : 5, "query":{ "match" : { "Color" : "Red" } }, "sort" : { "Score" : "desc" } } reds = json.loads(requests.get(ELASTIC+"/wine/_search",json=query).text)["hits"]["hits"] query = { "size" : 5, "query": { "match" : { "Color" : "White" } }, "sort" : { "Score" : "desc" } } whites = json.loads(requests.get(ELASTIC+"/wine/_search",json=query).text)["hits"]["hits"] # + print("Red wines:") print("----------------") for i, w in enumerate(reds): wine = w["_source"] print("#%d: %s, %d/100" % (i+1, wine["Name"], wine["Score"])) print("") print("White wines:") print("----------------") for i, w in enumerate(whites): wine = w["_source"] print("#%d: %s, %d/100" % (i+1, wine["Name"], wine["Score"])) print("") # - # ## Average price and maximum score per grape variety # This query groups the wines by grape variety and, for each grape, computes the average price and the maximum score obtained by its wines. It is implemented with an *aggregation* that first groups the documents by their **Grape** field and, inside each bucket, runs two further *aggregations*: an *avg* of the **Price** field and a *max* of the **Score** field. # + query = { "aggs" : { "uvas" : { "terms" : { "field" : "Grape.keyword" }, "aggs":{ "precio-medio-por-uva": { "avg" : { "field" : "Price" } }, "puntuacion-maxima-por-uva" : { "max" : { "field" : "Score" } } } } } } uvas = json.loads(requests.get(ELASTIC+"/wine/_search",json=query).text)["aggregations"]["uvas"]["buckets"] # - for obj in uvas: print("%s: %d wines" % (obj["key"], obj["doc_count"])) print("Average price: $%0.2f" % (obj["precio-medio-por-uva"]["value"])) print("Maximum score: %d/100" % (obj["puntuacion-maxima-por-uva"]["value"])) print("") # ## Average price of wines produced before 2007 # This query finds the wines produced before 2007 and computes their average price.
It is implemented with a *query* that restricts the **Year** field to values lower than 2007, plus an *aggregation* that computes the average of the **Price** field. # + query = { "query":{ "range" : { "Year" : { "lt" : 2007 } } }, "aggs":{ "precio-medio": { "avg" : { "field" : "Price" } } } } precio = json.loads(requests.get(ELASTIC+"/wine/_search",json=query).text)["aggregations"]["precio-medio"]["value"] # - print("Average price of wines from before 2007: $%0.2f" % precio) # # Inn index # This index contains information about reservations made at a motel. # ## Reservations longer than 10 days # This query finds the reservations that lasted more than a week and a half (rounded to 10 days). It uses a script query, a special kind of query that evaluates a piece of code returning true or false for each document and keeps the documents where the condition holds. Here the condition is that the difference between the **CheckOut** and **CheckIn** dates, expressed in days, is greater than 10. # # Elasticsearch does not allow arithmetic directly on dates, so we convert the values to milliseconds and, after subtracting, convert the result back to days. # + query = { "size": 10000, "query": { "script" : { "script" : { "source": "return (doc['CheckOut'].value.getMillis() - doc['CheckIn'].value.getMillis())/(1000*60*60*24) > 10" } } } } res = json.loads(requests.get(ELASTIC+"/inn/_search",json=query).text)["hits"]["hits"] # + for r in res: reservation = r["_source"] ckin = datetime.strptime(reservation["CheckIn"], "%d-%b-%y") ckout = datetime.strptime(reservation["CheckOut"], "%d-%b-%y") print("%s %s: checked in %s, checked out %s, duration = %d days" % (reservation["FirstName"], reservation["LastName"],reservation["CheckIn"], reservation["CheckOut"],(ckout-ckin).days)) print("") # - # ## Guests who have stayed more than once # This query finds people with more than one reservation in the system.
It uses an *aggregation* that groups the documents by full name (built with a script that concatenates the **FirstName** and **LastName** fields into one), and for each group applies a *bucket_selector* *aggregation*, which exposes the document count of each bucket and keeps only the buckets where that count is greater than 1. # + query = { "size":0, "aggs": { "persona":{ "terms" : { "script": "params['_source'].FirstName+' '+params['_source'].LastName" }, "aggs":{ "reservas":{ "bucket_selector": { "buckets_path": { "cuenta" : "_count" }, "script": { "source" : "params.cuenta > 1" } } } } } } } res = json.loads(requests.get(ELASTIC+"/inn/_search",json=query).text)["aggregations"]["persona"]["buckets"] # - for r in res: print("%s has stayed %d times" % (r["key"], r["doc_count"])) # ## Number of reservations per month for *Queen* beds # This query returns the number of reservations per month for rooms with a *Queen* bed. It first searches for the reservations whose room has the *Queen* bed type, via the **bedType** field inside the **Room** object, and then runs an *aggregation* that groups the results by the month of the reservation, obtained with a script that calls the *getMonth* method on the **CheckIn** date field.
# + query = { "size" : 0, "query" : { "match" : { "Room.bedType" : "Queen" } }, "aggs" : { "reservas-al-mes" : { "terms" : { "script" : "doc['CheckIn'].value.getMonth()" } } } } res = json.loads(requests.get(ELASTIC+"/inn/_search",json=query).text)["aggregations"]["reservas-al-mes"]["buckets"] # - for r in res: print("Month %s: %d reservations for Queen beds" % (r["key"],r["doc_count"])) # ## Total price of the reservations made by *EMERY VOLANTE* # This query computes the total price of the reservations of the customer EMERY VOLANTE, who, as we saw above, has stayed 3 times. It searches for the reservations whose **FirstName** field is EMERY and whose **LastName** field is VOLANTE, and applies an *aggregation* over the result that sums the cost of the reservations, computed by a script that multiplies the **Rate** field (room price per day) by the number of days of the stay (computed in the same way as in the first query). # + query = { "query" : { "bool" : { "must" : [{"match" : { "FirstName" : "EMERY"}}, {"match" : { "LastName" : "VOLANTE"}}] } }, "aggs" : { "precio-total" : { "sum" : { "script" : { "source" : "(doc.CheckOut.value.getMillis() - doc.CheckIn.value.getMillis())/(1000*60*60*24) * doc.Rate.value" } } } } } res = json.loads(requests.get(ELASTIC+"/inn/_search",json=query).text)["aggregations"] # - print("Total price of all reservations made by EMERY VOLANTE: $%0.2f" % res["precio-total"]["value"]) print("") # # Maraton index # This index contains data about the runners of a marathon. # ## Average, best, and worst time per age group # This query returns, for each age group, the average, minimum, and maximum time. It runs an *aggregation* that splits the documents into the different age groups and, for each group, applies the *avg*, *min*, and *max* operations to the **Time** field.
# + query = { "size" : 0, "aggs" : { "tiempos_por_grupo" : { "terms" : { "field" : "Group.keyword" }, "aggs" : { "tiempos" : { "avg" : { "field" : "Time" } }, "mejor-tiempo" : { "min" : { "field" : "Time" } }, "peor-tiempo" : { "max" : { "field" : "Time" } } } } } } res = json.loads(requests.get(ELASTIC+"/maraton/_search",json=query).text)["aggregations"]["tiempos_por_grupo"]["buckets"] # - for r in res: print("Age group %s: %d runners, average time %s, \ slowest time: %s, fastest time: %s" % (r["key"], r["doc_count"], r["tiempos"]["value_as_string"], r["peor-tiempo"]["value_as_string"], r["mejor-tiempo"]["value_as_string"])) # ## Best place per state # This query returns, for each state, the best (minimum) place obtained. It runs an *aggregation* that groups the documents by state and, for each group, computes the *min* of the **Place** field. # + query = { "size" : 0, "aggs" : { "estados" : { "terms" : { "field" : "State.keyword" }, "aggs" : { "mejor" : { "min" : { "field" : "Place" } }, } } } } res = json.loads(requests.get(ELASTIC+"/maraton/_search",json=query).text)["aggregations"]["estados"]["buckets"] # - for r in res: print("%s: %d runners, best place %d" % (r["key"], r["doc_count"], r["mejor"]["value"])) # ## Runners with a pace between 6 and 8 minutes/km who finished in the top 5 of their group # This query finds the runners whose pace is in the 6 to 8 minute range and who also finished in the top 5 places of their age group. It uses two search conditions: the **Pace** field must be greater than or equal to 6 minutes and lower than 8 minutes, and **GroupPlace** must be lower than or equal to 5.
# + query = { "size": 10000, "query" : { "bool" : { "must" : [{ "range" : { "Pace" : { "gte" : "0:6:00", "lt" : "0:8:00" } }}, {"range" : { "GroupPlace" : { "lte" : 5 } } }] } } } res = json.loads(requests.get(ELASTIC+"/maraton/_search",json=query).text)["hits"]["hits"] # - for re in res: r = re["_source"] print("%s, pace: %s, place in group (%s): %d" % (r["FirstName"]+" "+r["LasName"], r["Pace"], r["Group"], r["GroupPlace"])) # ## BIB numbers of the ten best runners # This query returns the BIB numbers of the 10 best runners, searching for the documents where the **Place** field is lower than or equal to 10. # + query = { "size": 10, "query" : { "range" : { "Place" : { "lte" : "10" } } } } res = json.loads(requests.get(ELASTIC+"/maraton/_search",json=query).text)["hits"]["hits"] # - for re in res: r = re["_source"] print("%s, place %s, BIB: %s" % (r["FirstName"]+" "+r["LasName"], r["Place"], r["BIBNumber"])) # # Norte index # This index contains news articles published by El Norte de Castilla during 2006. # ## Articles in the "Internacional" section containing the phrase "crisis económica" # This query searches for the articles in the Internacional section whose body contains the phrase "crisis económica". It uses a search with two conditions: the **seccion** field must be Internacional, and the **cuerpo** field must contain the phrase "crisis económica".
# + query = { "size" : 10000, "query" : { "bool" : { "must" : [ {"match" : {"seccion" : "Internacional"}}, {"match_phrase" : {"cuerpo" : "crisis económica"}} ] } } } res = json.loads(requests.get(ELASTIC+"/norte/_search",json=query).text)["hits"]["hits"] # - for re in res: r = re["_source"] print('"%s" [score %0.2f]' % (r["titulo"],re["_score"])) if "resumen" in r: print("\t"+r["resumen"]) print("") # ## Articles published on this day in 2006 in the Televisión section # This query searches for the articles published on dd-MM-2006, where dd and MM are the current day and month, that belong to the Televisión section. It takes the current day and month and searches with two conditions: the **seccion** field must be Televisión, and the **fecha** field must be dd-MM-2006. # + day = datetime.now().date().day month = datetime.now().date().month query = { "size" : 10000, "query" : { "bool" : { "must" : [ {"match" : { "seccion" : "Televisión" }}, {"match" : { "fecha" : str(day).zfill(2)+"-"+str(month).zfill(2)+"-2006" }} ] } } } res = json.loads(requests.get(ELASTIC+"/norte/_search",json=query).text)["hits"]["hits"] # - for re in res: r = re["_source"] print('"%s"' % (r["titulo"])) if "resumen" in r: print("\t"+r["resumen"]) print("") # ## Articles published during May containing the word "Eurovisión" # This query searches for the articles containing the word "eurovisión" among those published in May. It uses two conditions: the **cuerpo** field must contain the word eurovisión, and the date must be greater than or equal to 05-2006 (that is, May 1, 2006) and lower than 06-2006 (June 1, 2006).
# + query = { "size" : 10000, "query" : { "bool" : { "must" : [ {"match" : { "cuerpo" : "eurovisión" }}, {"range" : { "fecha" : { "gte" : "05-2006", "lt" : "06-2006", "format" : "MM-yyyy" } }} ] } } } res = json.loads(requests.get(ELASTIC+"/norte/_search",json=query).text)["hits"]["hits"] # - for re in res: r = re["_source"] print('"%s"' % (r["titulo"])) if "resumen" in r: print("\t"+r["resumen"]) print("")
Queries Elasticsearch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf with tf.device('/gpu:0'): a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a') b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b') c = tf.matmul(a, b) with tf.Session() as sess: print (sess.run(c)) # -
test GPU.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Math functions, strings, and objects # ## This chapter introduces Python functions that perform common mathematical operations # - A function is a group of statements that accomplishes a specific task; you can think of a function as a small piece of functionality. In practice, a function body is best kept to no more than one screen # - Python's built-in functions do not need to be imported # <img src="../Photo/15.png"></img> a,b,c,d,e = eval (input ("Enter five numbers separated by commas")) choose_method = input("Choose the operation to perform (max/min/pow_sum)") if choose_method == "max": max_ = max(a,b,c,d,e) print(max_) elif choose_method == "min": min_ = min(a,b,c,d,e) print(min_) elif choose_method == "pow_sum": pow_sum_ = pow(a,2) + pow(b,2) + pow(c,2) + pow(d,2) + pow(e,2) print(pow_sum_) # ## Try practicing Python's built-in functions # ## Python's math module provides many mathematical functions # <img src="../Photo/16.png"></img> # <img src="../Photo/17.png"></img> # import random random.randint(1,3) round(0.025) import matplotlib.pyplot as plt import math list_ = [ ] for z in range (-50 , 50): res = 1.0/(1.0 + math.exp(-z)) list_.append(res) plt.plot(list_) plt.show( ) import random import math sj = random.randint(-50 , 50) res = 1.0/(1.0 + math.exp(sj)) round(res) math.sqrt(4) # ## The two mathematical constants pi and e are available as math.pi and math.e # ## EP: # - Using the math library, write a program that reads three vertices (x,y) and returns the three angles # - Note: Python computes angles in radians, which must be converted to degrees # <img src="../Photo/18.png"> x1,y1 = eval (input("Enter the coordinates of the first vertex of the triangle")) x2,y2 = eval (input("Enter the coordinates of the second vertex of the triangle")) x3,y3 = eval (input("Enter the coordinates of the third vertex of the triangle")) a = math.sqrt( (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) ) b = math.sqrt( (x3 - x1) * (x3 - x1) + (y3 - y1) * (y3 - y1) ) c = math.sqrt( (x3 - x2) * (x3 - x2) + (y3 - y2) * (y3 - y2) ) A = math.acos((a * a - b * b - c * c ) / ( -2 * b * c)) B = math.acos((b * b - a * a - c * c ) / ( -2 * a * c)) C = math.acos((c * c - b * b - a * a ) / ( -2 * b * a)) print(math.degrees(A),math.degrees(B),math.degrees(C)) # ## Strings and characters # - In Python, strings must be enclosed in single or double quotes; multi-line strings can use triple quotes # - A triple-quoted block becomes a string when assigned to a variable; otherwise it serves as a multi-line comment # ## ASCII and Unicode # - <img src="../Photo/19.png"></img> # - <img 
src="../Photo/20.png"></img> # - <img src="../Photo/21.png"></img> # ## The functions ord and chr # - ord returns the ASCII code value # - chr returns the character ord("a") # accepts a character and returns its decimal code chr(100) # ## EP: # - Use ord and chr for simple email encryption title = "<EMAIL>" for i in title : print(chr( ord (i)+10),end=" ") title = "<EMAIL>" result = " " for i in title : result += (chr( ord (i)+10)) print(result) # ## Escape sequences \ # - a = "He said, "John's program is easy to read"" # - Escapes a character's usual meaning # - Generally, escaping is only needed when a statement collides with the default syntax import hashlib str_ = 'cxr123456789' hl = hashlib.md5() hl.update(str_.encode(encoding='utf-8')) print('Before MD5 hashing: ' + str_) print('After MD5 hashing: ' + hl.hexdigest()) import hashlib login = "8<PASSWORD>" password = input ("<PASSWORD>") h1 = hashlib.md5() h1.update(password.encode(encoding='utf-8')) if login == h1.hexdigest(): print("Login successful") else: print("Login failed") # ## Advanced print # - the end parameter controls how the printed output ends # - prints with a trailing newline by default print("a","b",sep = "",end = "!") # ## The str function # - forcibly converts a value to the string type # - other types will be covered later (list, set, tuple...) # ## String concatenation # - use "+" directly # - the join() function # %time "!" .join(["1","2","3","4"]) # ## EP: # - Concatenate "Welcome", "to", "Python" # - Concatenate the int 100 with "joker is a bad man" # - Read a string from the console # > Read a name and print a compliment print("Welcome" + " " + "to" + " " + "Python") print(str(100) + " " + "joker is a bad man" ) shuru = input( ">>") print (shuru + " " + "You look great") # ## Case study: minimum number of coins # - Develop a program that lets the user enter a total amount as a floating-point value in dollars and cents, and reports the number of dollars, quarters, dimes, nickels, and pennies # <img src="../Photo/22.png"></img> # - Handling floating-point values is a weak spot of Python; numeric work typically uses NumPy types instead # <img src="../Photo/23.png"></img> # ## id and type # - id shows the memory address, used in identity comparisons # - type shows the element's type id(1) type(1) id (2) # ## See the book for other formatting statements # # Homework # - 1 # <img src="../Photo/24.png"><img> # <img src="../Photo/25.png"><img> import math r = eval (input("Enter the distance from the center to a vertex")) s = 2 * r * math.sin( math.pi / 5) area = 5 * s * s / ( 4 * math.tan( math.pi / 5)) print("The area of the pentagon is " + str(area)) # - 2 # <img src="../Photo/26.png"><img> x1,y1 = eval (input("Enter the longitude and latitude of the first point")) x2,y2 = eval (input("Enter the longitude and latitude of the second point")) radians = 6371.01 d = radians * math.acos(math.sin(math.radians(x1)) * 
math.sin(math.radians(x2)) + math.cos(math.radians(x1)) * math.cos(math.radians(x2)) * math.cos(math.radians(y1) - math.radians(y2))) print("The great-circle distance is " + str(d)) # - 3 # <img src="../Photo/27.png"><img> s = eval (input("Enter the side length ")) area = 5 * s * s / ( 4 * math.tan( math.pi / 5)) print("The area of the pentagon is " + str(area)) # - 4 # <img src="../Photo/28.png"><img> n = eval (input ("Enter the number of sides ")) s = eval (input("Enter the side length ")) area = n * s * s / ( 4 * math.tan( math.pi / n)) print("The area of the polygon is " + str(area)) # - 5 # <img src="../Photo/29.png"><img> # <img src="../Photo/30.png"><img> shu = eval(input("Enter an integer between 0 and 127")) shu = chr(shu) print("The corresponding character is " + str(shu)) # - 6 # <img src="../Photo/31.png"><img> name = (input("Name: ")) time = eval (input( "Hours worked per week: ")) money = eval (input( "Hourly pay: ")) cess = eval (input( "Federal withholding rate: ")) shui = eval (input( "State withholding rate: ")) sum_ = money * time guoshui = sum_ * cess zhoushui = sum_ * shui sum_shui_ = guoshui + zhoushui sum_money_ = sum_ - sum_shui_ print ("Name: " + name) print ("Hours worked per week: " + str(time)) print ("Hourly pay: " + str(money)) print ("Gross pay: " + str(sum_)) print ("Federal tax: " + str(guoshui)) print ("State tax: " + str(zhoushui)) print ("Total tax: " + str(sum_shui_)) print ("Net pay: " + str(sum_money_)) # - 7 # <img src="../Photo/32.png"><img> int_ = eval (input("Enter a four-digit integer")) ge = int_ % 10 a = int_ // 10 shi = a % 10 b = a // 10 bai = b % 10 c = b // 10 qian = c % 10 print(str(ge) + str(shi) + str(bai) + str(qian)) # - 8 Advanced: # > Hash a piece of text and write the result to a local file import hashlib text = input("Enter some text") hl = hashlib.md5() hl.update(text.encode(encoding='utf-8')) print('Before MD5 hashing: ' + text) print('After MD5 hashing: ' + hl.hexdigest()) file_handle=open('1.text',mode='w') file_handle.write('Input text: ' + text + '\nMD5: ' + hl.hexdigest()) file_handle.close()
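# The minimum-coin case study above (dollars, quarters, dimes, nickels, pennies) is described in the chapter without code; a minimal sketch follows, working in integer cents to dodge the floating-point weakness the text mentions (the function name is illustrative):

```python
def coin_breakdown(amount):
    """Break a dollar amount into dollars, quarters, dimes, nickels, pennies."""
    cents = round(amount * 100)            # integer cents avoid float rounding error
    dollars, cents = divmod(cents, 100)    # 100 cents per dollar
    quarters, cents = divmod(cents, 25)
    dimes, cents = divmod(cents, 10)
    nickels, pennies = divmod(cents, 5)
    return dollars, quarters, dimes, nickels, pennies

print(coin_breakdown(11.56))   # (11, 2, 0, 1, 1)
```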
9.11.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # <a href="https://colab.research.google.com/github/kunkaweb/medinfo2019/blob/master/Medinfo2019_Modeling_Techniques.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="2j08gpeKMo-S" # # Model Development & Evaluation # # ## Medinfo 2019 # ### Data Science Workshop # #### August 26, 2019 # # ### Content Development: <NAME>, <NAME>, <NAME>, <NAME> # + [markdown] colab_type="text" id="cpoyAvqLMo-U" # **Objectives:** # 1. Describe at least 3 modeling/machine learning techniques used in biomedical data science. # 2. Develop a machine learning model for predicting a healthcare outcome. # + [markdown] colab_type="text" id="VABepFILMo-V" # ### 6.1 What is Machine Learning (ML)? # # Machine Learning (ML) is about coding programs that automatically adjust their performance from exposure to information encoded in data. This learning is achieved via **tunable parameters** that are automatically adjusted according to performance criteria. # # Machine Learning can be considered a subfield of Artificial Intelligence (AI). # # There are three major classes of ML: # + [markdown] colab_type="text" id="pEWGPVufMo-W" # **Supervised learning**: Algorithms which learn from a training set of *labeled* examples (exemplars) to generalize to the set of all possible inputs. # Examples: regression, support vector machines # + [markdown] colab_type="text" id="-ievHo6uMo-X" # **Unsupervised learning**: Algorithms which learn from a training set of *unlabeled* examples, using the features of the inputs to categorize inputs together according to some statistical criteria.
# Examples: k-means clustering, kernel density estimation # + [markdown] colab_type="text" id="GI2HeYNtMo-X" # **Reinforcement learning**: Algorithms that learn via reinforcement from a *critic* that provides information on the quality of a solution, but not on how to improve it. Improved solutions are achieved by iteratively exploring the solution space. # + [markdown] colab_type="text" id="__tm8tnzMo-Y" # ### 6.2 Introduction to `Scikit-learn` # + [markdown] colab_type="text" id="1ah86-q3Mo-Z" # The `scikit-learn` package is an open-source library that provides a robust set of machine learning algorithms for Python. It is built upon the core Python scientific stack (*i.e.* NumPy, SciPy, Cython), and has a simple, consistent interface, making it useful for many data science applications. # + [markdown] colab_type="text" id="oGDeXNAzMo-a" # <img src="http://1.bp.blogspot.com/-ME24ePzpzIM/UQLWTwurfXI/AAAAAAAAANw/W3EETIroA80/s1600/drop_shadows_background.png" width="90%"/> # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="LuP1RWUXMo-b" outputId="e23f9000-6da9-4b7f-df41-b06ad2856896" # previously loaded modules import pandas as pd import numpy as np from matplotlib import pyplot as plt import seaborn as sns import matplotlib as mplot # %matplotlib inline import IPython from IPython.core.display import HTML from IPython.core.debugger import set_trace from distutils.version import StrictVersion import xlrd print("numpy version: %s" % np.__version__) print("pandas version: %s" % pd.__version__) print("matplotlib version: %s" % mplot.__version__) print("IPython version: %s" % IPython.__version__) print("seaborn version: %s" % sns.__version__) if StrictVersion(np.__version__) >= StrictVersion('1.13.0') and \ StrictVersion(pd.__version__) >= StrictVersion('0.20.0') and \ StrictVersion(mplot.__version__) >= StrictVersion('2.0.0') and \ StrictVersion(IPython.__version__) >= StrictVersion('5.5.0') and \ StrictVersion(sns.__version__) >= 
StrictVersion('0.7.0'): print('\nCongratulations, your environment is setup correctly!') else: print('\nEnvironment is NOT setup correctly!') # + colab={} colab_type="code" id="C0y-UdW7Mo-d" # load scikit-learn modules from sklearn import preprocessing from sklearn import metrics from sklearn import model_selection from sklearn.model_selection import train_test_split import random as rnd from random import random, randint # + [markdown] colab_type="text" id="FxgQ842dMo-f" # ### 6.3 Representing Data in `scikit-learn` # # Most machine learning algorithms implemented in scikit-learn expect data to be stored in a # **two-dimensional array or matrix**. The arrays can be # either ``numpy`` arrays, or in some cases ``scipy.sparse`` matrices. # The size of the array is expected to be `[n_samples, n_features]` # # - **n_samples:** The number of samples: each sample is an item to process (e.g. classify). # A sample can be a document, a picture, a sound, a video, an astronomical object, # a row in database or CSV file, # or whatever you can describe with a fixed set of quantitative traits. # - **n_features:** The number of features or distinct traits that can be used to describe each # item in a quantitative manner. Features are generally real-valued, but may be boolean or # discrete-valued in some cases. # # The number of features must be fixed in advance. However it can be very high dimensional # (e.g. millions of features) with most of them being zeros for a given sample. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ytqQyNnRMo-g" outputId="08d292bd-9611-422b-f469-664a68993941" # load data created from previous steps import os, shutil cwd = os.getcwd() datadir = cwd + '/data_oh' print('Data directory is: {}'.format(datadir)) # + colab={"base_uri": "https://localhost:8080/", "height": 666} colab_type="code" id="mZ-Efw4iQWvW" outputId="80c824f8-92c5-4f54-e575-77dd3546e958" # See if the data exists. 
If not, try to download it from github. if not os.path.exists(datadir+'/patients.csv'): print("Data directory doesn't exist!") print("Checking out the data from github...") # !git clone https://github.com/kunkaweb/Medinfo2019.git # Move the checked-out files into the /data directory files = os.listdir('Medinfo2019') for f in files: print('Moving %s...' % (f,)) try: shutil.move('Medinfo2019/'+f,'.') except: print(" Unable to move %s" % (f,)) try: shutil.rmtree('Medinfo2019') # Remove the version control (git) information except: pass # Ignore errors. On Windows, this sometimes fails and leaves the .git directory print('Data directory contains:\n',os.listdir(datadir)) df = pd.read_pickle(datadir + '/data_cleaned_oh.pkl') df.head() # + [markdown] colab_type="text" id="PDt2rmVGMo-k" # ### 6.4 Encoding Categorical Variables # Most of the models in scikit-learn require the categorical variables be turned into numeric variables. There are two approaches to this: # 1. One Hot Encoding - each item in the categorical variable is turned into its own variable representing the presence or absence of that item. For example, 'Gender' would turn into 2 variables: 'Gender_M' and 'Gender_F' # 2. Label Encoding - assign an integer to each item. For the 'Gender' variable, 0 might mean Male and 1 might mean Female. # # We will use the `LabelEncoder` transformation to change categorical variables into integers. As a convenience, we can also keep the original (human readable) variable in the Dataframe.
# + colab={"base_uri": "https://localhost:8080/", "height": 131} colab_type="code" id="B1Bv9s6EMo-l" outputId="c02c295b-d94d-4d8e-a2dc-750816e2876d" # Let's use the following variables as our initial set of predictors cat_cols = ['gender', 'marital', 'race', 'ethnicity'] cat_cols_encoded = [c + '_encoded' for c in cat_cols] numeric_cols = ['prior_opioid_abuse_diag', 'age', 'opioid_discharge_days_supply'] pred_cols = numeric_cols + cat_cols_encoded target_col = 'overdose' all_cols = cat_cols+numeric_cols+[target_col] df_opioids = df[df['prescribed_opioids'] == 1] # Encode the categorical variables dfe = df_opioids[cat_cols] # Replace missing data with an 'Unknown' category # so the missing data will also be encoded dfe = dfe.replace(np.NaN,'Unknown') # Encode the categorical variables encoded = dfe.apply(preprocessing.LabelEncoder().fit_transform) # Append the non-categorical variables and the encoded variables # into a single Dataframe # Name the new variables as <name>_encoded dfe = pd.concat([df_opioids[all_cols], encoded.add_suffix('_encoded')],axis=1) display(dfe.head(2)) # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="5-eoWlxsMo-o" outputId="01328f91-bcc8-41f7-806e-00978ec35cc4" # let's build a model using this set of variables... pred_cols = ['age', 'opioid_discharge_days_supply', 'prior_opioid_abuse_diag', \ 'gender_encoded', 'marital_encoded', 'race_encoded', \ 'ethnicity_encoded'] #pred_cols = ['prior_abuse_diag', 'adult', 'age_at_visit', \ # 'opioid_discharge_days_supply', 'gender_encoded', \ # 'marital_encoded', 'race_encoded', 'ethnicity_encoded'] LR_pred_cols = pred_cols X = dfe[pred_cols].to_numpy() y = dfe['overdose'].to_numpy() print('Using predictor variables of:',pred_cols) # + [markdown] colab_type="text" id="AtH2jwG3Mo-q" # ### 6.5 How do we approach problems from a Data Science perspective? 
# + [markdown] colab_type="text" id="dAwy3tikMo-r"
# Imagine a set of observational (empirical) data that we want to *learn* from...
# <img src="https://www.learnopencv.com/wp-content/uploads/2017/02/data-points.png" width='60%'/>

# + [markdown] colab_type="text" id="Gi6ubxEOMo-s"
# We can fit a variety of models ranging from extremely *simple* to highly *complex* models, e.g.,
# <img src='https://www.learnopencv.com/wp-content/uploads/2017/02/bias-variance-tradeoff.png' width='60%'/>
#
# **What are potential problems with each of these cases?**

# + [markdown] colab_type="text" id="XTgL2RbAMo-s"
# When applying these same models to a *new* set of data we held out for testing...
# <img src='https://www.learnopencv.com/wp-content/uploads/2017/02/bias-variance-tradeoff-test-error.png' width='60%'/>
# **We were overfit with the more complex, polynomial model.**

# + [markdown] colab_type="text" id="9RCCw-vCMo-t"
# ### Bias vs. Variance
# <img src='https://www.learnopencv.com/wp-content/uploads/2017/02/Bias-Variance-Tradeoff-In-Machine-Learning-1.png' width='50%'/>
#
# **Rule of Thumb:** Fit model complexity to the data resources (not the target complexity)

# + [markdown] colab_type="text" id="OiRw6_hPMo-u"
# ### 6.6 Practical Approaches
# In Data Science, we tend to do the following with a data set:
# 1. Learn a model based on training data (e.g., 60% of data)
# 2. Iteratively modify model based on a validation set:
#    2a. Cross-validation/bootstrap with the training data
#    2b. Separate validation set (e.g., 20% of data)
# 3. Estimate generalization error with a test set (e.g., 20% of data) that you only look at once
#
# *Food for Thought:* With small validation sets, the error measure is a bad estimator of the best hypothesis. With large validation sets, the error measure is a great estimator of a terrible hypothesis.

# + [markdown] colab_type="text" id="DtTuekGYMo-v"
# ### 6.7 Let's Build Some Models!
#
# Let's begin preparing our pain data for machine learning algorithms.
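Before touching the pain data, the under/overfitting behavior pictured above can be reproduced numerically — a sketch on made-up sine data (illustrative only, not part of the workshop dataset):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)  # noisy observations

x_new = np.linspace(0, 1, 100)      # "new" data the model never saw
y_new = np.sin(2 * np.pi * x_new)   # the true underlying signal

train_mse, new_mse = {}, {}
for degree in (1, 3, 9):            # simple -> complex models
    coefs = np.polyfit(x, y, degree)    # least-squares polynomial fit
    train_mse[degree] = np.mean((np.polyval(coefs, x) - y) ** 2)
    new_mse[degree] = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
    print('degree %d: train MSE %.3f, new-data MSE %.3f'
          % (degree, train_mse[degree], new_mse[degree]))
```

Training error can only go down as the model gets more complex; error on new data is what reveals the overfit.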
# + [markdown] colab_type="text" id="2zA520wWMo-w" # ### How do these models work? # + [markdown] colab_type="text" id="1n8uM4kRMo-x" # #### Logistic Regression # <img src='http://www.saedsayad.com/images/LogReg_1.png' width='80%'/> # + [markdown] colab_type="text" id="spVySm42Mo-y" # #### Linear Discriminant Analysis # <img src='http://sebastianraschka.com/images/blog/2014/linear-discriminant-analysis/lda_1.png' width='80%'/> # + [markdown] colab_type="text" id="4ZWD2aVUMo-z" # #### K-Nearest Neighbors # <img src='https://upload.wikimedia.org/wikipedia/commons/thumb/e/e7/KnnClassification.svg/220px-KnnClassification.svg.png' width='30%'/> # The test sample (green circle) should be classified either to the first class of blue squares or to the second class of red triangles. If k = 3 (solid line circle) it is assigned to the second class because there are 2 triangles and only 1 square inside the inner circle. If k = 5 (dashed line circle) it is assigned to the first class (3 squares vs. 2 triangles inside the outer circle). # + [markdown] colab_type="text" id="F5-leWiCMo-0" # #### Decision Trees # <img src='https://qph.fs.quoracdn.net/main-qimg-b17755d2e0ffb326d8c39b7f3e07e03b-c' width='80%'/> # + [markdown] colab_type="text" id="7CBThVlLMo-1" # #### Random Forest # <img src='https://i.ytimg.com/vi/ajTc5y3OqSQ/hqdefault.jpg' width='65%'/> # + [markdown] colab_type="text" id="JpRQg-9RMo-3" # #### Gaussian Naive Bayes # <img src='https://chrisalbon.com/images/machine_learning_flashcards/Gaussian_Naive_Bayes_Classifier_print.png' width='80%'/> # + [markdown] colab_type="text" id="_uhULC42Mo-4" # ### 6.8 Let's (Really) Build Some Models! 
# + colab={} colab_type="code" id="nUYOO14UMo-5"
# models we'll consider
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# + colab={"base_uri": "https://localhost:8080/", "height": 428} colab_type="code" id="1VXZNYRyMo-8" outputId="14255df9-c07f-4943-f319-04e63376ce85"
# Let's try a simple logistic regression model to see how predictive our data is

# perform a model fit using all of the data (a proper train/test split comes later)
LR = LogisticRegression()
result = LR.fit(X, y)

# calculate predicted values from the model to compare with actual outcomes
expected = y
predicted = LR.predict(X)

print('\nClassification Report\n',metrics.classification_report(expected, predicted))
print('\nConfusion Matrix\n',metrics.confusion_matrix(expected, predicted))
print('\nAccuracy score =',metrics.accuracy_score(expected, predicted))
print('\nAUC score =',metrics.roc_auc_score(expected, predicted))
print('\nf1 score =',metrics.f1_score(expected, predicted))

# + [markdown] colab_type="text" id="P3h5WuRCMo-_"
# ### What do these numbers mean?
# According to Wikipedia:
#
# ![Confusion Matrix](https://github.com/joh06288/AMIA2019_W07/blob/master/images/Sensitivity-Wikipedia.png?raw=1)

# + [markdown] colab_type="text" id="S4stkwq7Mo_A"
# Precision = $\frac{tp}{tp + fp}$ ,
# where $tp$ is the number of true positives and $fp$ the number of false positives. The precision is intuitively the ability of the classifier to not label a sample as positive if it is negative.

# + [markdown] colab_type="text" id="sSBxe3JEMo_B"
# Recall = $\frac{tp}{tp + fn}$
# The recall is intuitively the ability of the classifier to find all the positive samples.
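These formulas can be checked by hand against scikit-learn on a toy set of labels (the numbers below are invented for illustration):

```python
from sklearn.metrics import precision_score, recall_score

# toy predictions: 3 true positives, 1 false positive, 2 false negatives
y_true = [1, 1, 1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 3
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 1
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 2

print(precision_score(y_true, y_pred), tp / (tp + fp))  # 0.75 0.75
print(recall_score(y_true, y_pred), tp / (tp + fn))     # 0.6 0.6
```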
# + [markdown] colab_type="text" id="UVMSLHJ6Mo_C"
# Accuracy = $\frac{tp + tn}{N}$

# + [markdown] colab_type="text" id="Ds7tWSs_Mo_C"
# $F1 = 2 * \frac{precision * recall}{precision + recall}$
# The $F1$ score can be interpreted as a weighted average of the precision and recall, where an $F1$ score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the $F1$ score is equal.

# + [markdown] colab_type="text" id="Gf0yVePSMo_D"
# The support is the number of occurrences of each class in $y_{test}$.

# + colab={"base_uri": "https://localhost:8080/", "height": 314} colab_type="code" id="5-cf1gm2Mo_E" outputId="6d7cc9ba-f6a0-4925-8e61-9947689eb346"
# Let's graph an ROC curve for the model
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve

# note: we're evaluating on the same data we trained on;
# a held-out test set comes in the next section
auc = roc_auc_score(y, LR.predict(X))
probs = LR.predict_proba(X)[:,1]
fpr, tpr, thresholds = roc_curve(y, probs)

plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve for Model')
plt.legend(loc="lower right")
plt.show()

# + [markdown] colab_type="text" id="ul-XZ5hlMo_G"
# # 7.0 Model Performance and Evaluation
#
# ### Goals of Performance Evaluation
#
# * To determine the most accurate predictive model
#
# * To determine generalizability of the model
#
# * To prevent overlearning
#
# * To quantify the performance of a model and how we can improve it
#

# + [markdown] colab_type="text" id="6fwF4WlrMo_H"
# ## Cost of Unreliable Predictive Models
#
# - Patient Safety and Risk
#
# - Poorer Quality and Patient Outcomes
#
# - Time Waste (development and application)
#
# - Financial
#

# + [markdown] colab_type="text" id="O5ku01IFMo_I"
# ## Problems to Mitigate
#
# The model development process
must strike a balance between learning too much and learning too little
#
# * Overlearning
# * It's quite easy to create a model that "memorizes" the data
# * It perfectly fits your training data, but is useless when shown new data
# * The number of possible ways to train a model is exponential
# * Model parameters are too specific to training data
# * Insufficient training data
# * Inadequate data set aside for testing and cross-validation
# * Application of an improper model (e.g., a linear model on variables with non-linear relationships).
#

# + [markdown] colab_type="text" id="vEEGEPobMo_J"
# ## Dataset Partitioning
#
# Experimental validation using an external data set is the best method of validating a model and ensuring generalizability.
#
# * Training Dataset - used to build the model via a learning algorithm and to identify discriminating features of the predictor variables
#
# * Test Dataset - used to assess the prediction error of the final model
#
# * Validation Dataset - used to assess how well the model performs against real data, to ensure stability and, in some cases, to fine-tune the model
#

# + [markdown] colab_type="text" id="bQpju4TtMo_J"
# ## Evaluating Performance
#
# * It’s impossible to design a learning method that’s guaranteed not to overlearn¹.
# * Hold aside some data to test the model (20-30%)
# * The remaining portion of data (training set) will be used to create the model.
# * Use a model evaluation metric to compare the performance of predictive models
# * Metrics include the f-statistic, Lift, Area Under the ROC Curve (AUC), etc.
#
# <sub>¹Siegel, E (2016). Predictive Analytics: the power to predict who will click, buy, lie or die. Wiley</sub>

# + [markdown] colab_type="text" id="r6G20UBdMo_L"
# ### 7.1 Preventing Overfitting
#
# - One way to prevent overfitting is to set aside some portion of the data to test with the model. Thus the data is split into Training and Test partitions.
# # - We can build a Logistic Regression model that is predictive of overdose risk. But we trained the model and tested it with our entire data set, which isn't exactly a fair test of the model. # # - We should split our dataset into a training dataset and a test dataset. After we train the model using just the training dataset, we will evaluate the model using the test dataset which has data that the model has never seen before. # # - `scikit-learn` has a helpful function called train_test_split that randomly splits our dataset. # # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="q_opJjvjMo_L" outputId="097d1020-a8db-4a18-9f0d-4e211086e7eb" # Create the training and test datasets # Partition 30% of the data to the Test set train, test = train_test_split(dfe, test_size=0.3, random_state=987) X_train = train[pred_cols].to_numpy() y_train = train['overdose'].to_numpy() X_test = test[pred_cols].to_numpy() y_test = test['overdose'].to_numpy() print('X_train shape = ',X_train.shape) print('y_train shape = ',y_train.shape) print('X_test shape = ',X_test.shape) print('y_test shape = ',y_test.shape) # + [markdown] colab_type="text" id="Ms2lg07dMo_Q" # ### 7.2 Training and Test Errors # # - Training Error: calculated by applying the statistical learning method to the observations used in the training. # # - Test Error: the average error that results from using a statistical learning method to predict the response on a new observation. # # + [markdown] colab_type="text" id="j0RdbttoMo_Q" # ## Training vs Test Performance # # The validation estimate of the test error can be highly variable, depending on precisely which observations are included in the training set and which observations are included in the validation set. # # <img src='https://github.com/joh06288/AMIA2019_W07/blob/master/images/TrainingTestPerformance.png?raw=1' width='70%'> # # <sub>¹Cross-validation and bootstrap. Stanford Lagunita. Humanities Science. 
Statistical Learning.</sub>
#

# + [markdown] colab_type="text" id="wpGUQ2H3Mo_R"
# ### 7.3 Example: Evaluating Performance of a Decision Tree
#
# There is always a tension (tug-of-war) between learning and overlearning. The recommended approach is to overlearn and then cut back on the tree.
#
# Let's create a decision tree that overlearns the data
#

# + colab={} colab_type="code" id="O2iGg_CsMo_S"
# Create Decision Tree models with tree depths of 1 to 25
from sklearn.metrics import roc_curve, auc

# use integer depths; DecisionTreeClassifier requires an integer max_depth
tree_depths = np.arange(1, 26)
train_auc = []
test_auc = []
for d in tree_depths:
    dt = DecisionTreeClassifier(max_depth=d)
    dt.fit(X_train, y_train)  # Fit a model to a tree at the current tree depth

    # Calc the false positive rate and the true positive rates by comparing the training answers to the model
    pred = dt.predict(X_train)
    # Compute the AUC from the rates and append it to the training auc data
    fpr, tpr, thresholds = roc_curve(y_train, pred)
    train_auc.append(auc(fpr, tpr))

    # Calc the false positive rate and the true positive rates by comparing the test answers to the model
    pred = dt.predict(X_test)
    # Compute the AUC from the rates and append it to the test auc data
    fpr, tpr, thresholds = roc_curve(y_test, pred)
    test_auc.append(auc(fpr, tpr))

# + colab={"base_uri": "https://localhost:8080/", "height": 299} colab_type="code" id="38CtlBFSMo_W" outputId="ba24ceee-2201-414f-d737-4f63c44a44a5"
# Graph the AUCs of the training and test models at various tree depths
# Notice that the training scores continue to improve until they hit 1.0 (Perfect model, complete memorization)
# But the test scores get worse as the training model improves
from matplotlib import pyplot as plt
plt.plot(tree_depths, train_auc, 'g', label='Train AUC')
plt.plot(tree_depths, test_auc, 'r', label='Test AUC')
plt.legend(loc=3)
plt.ylim((0.5,1.01))
plt.ylabel('AUC')
plt.xlabel('Tree Depth')
plt.show()

# + [markdown] colab_type="text" id="Y1Qcw3YXMo_Z"
# ### 7.4 Resampling Methods
#
# Splitting the data into a training dataset and a test dataset has a drawback in that it doesn't allow us to train the model on all of the data.
#
# Two common resampling methods are k-fold cross-validation and the bootstrap.
#
# - They involve repeatedly drawing samples from a training set and refitting a model of interest on each sample, in order to obtain additional information about the fitted model while using all of the data for training and preventing overfitting
#
# - They provide estimates of test-set prediction error, and of the standard deviation and bias of parameter estimates

# + [markdown] colab_type="text" id="-Epgpyp7Mo_Z"
# ### 7.41 K-fold Cross Validation
#
# - K-fold cross validation randomly divides the data into k equal sized subsets and then trains the model on the remaining k-1 segments of the data and tests it on the data that was held out.
# - The process is repeated k times so that in the end all of the data is used at some point to train the model.
# - The k model results are averaged to produce a single estimate of model performance.

# + [markdown] colab_type="text" id="L5B3rjhLMo_a"
# ## K-fold Cross Validation
# <img src='images/kfold.png' width='70%'>
#
#
# A schematic display of 5-fold CV.
# A set of n observations is randomly split into five non-overlapping groups. Each of these fifths acts as a validation set (shown in beige), and the remainder as a training set (shown in blue). The test error is estimated by averaging the five resulting MSE estimates.
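A minimal sketch of what that averaging looks like under the hood — this is essentially what `cross_val_score` automates (toy data from `make_classification`, not the opioid data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X_toy, y_toy = make_classification(n_samples=100, random_state=0)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

fold_scores = []
for train_idx, test_idx in kfold.split(X_toy):
    model = LogisticRegression(solver='liblinear')
    model.fit(X_toy[train_idx], y_toy[train_idx])                      # train on k-1 folds
    fold_scores.append(model.score(X_toy[test_idx], y_toy[test_idx]))  # score on the held-out fold

print('per-fold accuracy:', np.round(fold_scores, 2))
print('mean accuracy: %.3f' % np.mean(fold_scores))
```

Every observation is used for training in k-1 of the folds and for testing in exactly one.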
#

# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="9CNDLP-LMo_a" outputId="e473585f-3ff8-4e86-8894-e5550595c4b7"
# Split the data using KFold cross validation
# (shuffle=True is required when random_state is set in recent scikit-learn versions)
kfold = model_selection.KFold(n_splits=3, shuffle=True, random_state=123)
data = list(range(0,9))
print("Data:",data)
for train, test in kfold.split(data):
    print("Train: ", train, 'Test:', test)

# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="fCjB1QfWMo_d" outputId="652edee1-318a-4395-f00f-fc8977dd6e3f"
# Perform k-fold cross validation
LR = LogisticRegression(solver='liblinear')
result = LR.fit(X,y)
print(LR)
kfold = model_selection.KFold(n_splits=10)
results = model_selection.cross_val_score(LR, X, y, cv=kfold, scoring='roc_auc')
print('Model score = %.4f (%.4f)' %(results.mean(), results.std()))

# + [markdown] colab_type="text" id="rO8CE5F6Mo_g"
# ### 7.42 Bootstrap
#
# Bootstrap theory says that the distance between the population mean and the sample mean is similar to the distance between the sample mean and the bootstrap ‘subsample’ mean.
#
#
# - “To Pull Oneself up by one’s bootstraps” – <NAME>
#
# - A flexible and powerful statistical tool that can be used to quantify the uncertainty associated with a given estimator or statistical learning method¹
#
# - Allows us to use a computer to mimic the process of obtaining new data sets without generating additional samples (sampling with replacement).
#
# - A new model is trained on a subset of the data and uses the remaining test data to score the model. The scores are averaged over many iterations to produce the model score.

# + [markdown] colab_type="text" id="yDwA6AnAMo_g"
# ## Bootstrap Example
#
# A graphical illustration of the bootstrap approach on a small sample containing n = 3 observations. Each bootstrap data set contains n observations, sampled with replacement from the original data set. Each bootstrap data set is used to obtain an estimate of α¹.
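That resampling idea can be sketched directly: estimate the standard error of a sample mean purely by resampling with replacement, and compare it with the classic analytic formula (toy normal data, for illustration only):

```python
import numpy as np

rng = np.random.RandomState(42)
sample = rng.normal(loc=50, scale=10, size=100)   # the one sample we actually observed

# resample WITH replacement many times and look at the spread of the resampled means
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(2000)]
boot_se = np.std(boot_means)

# classic formula s / sqrt(n), for comparison
analytic_se = sample.std(ddof=1) / np.sqrt(len(sample))

print('bootstrap SE of the mean: %.3f' % boot_se)
print('analytic SE of the mean:  %.3f' % analytic_se)
```

The two estimates agree closely, but the bootstrap version never needed a formula — which is why it also works for estimators with no known analytic standard error.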
#
# ![Bootstrap](https://github.com/joh06288/AMIA2019_W07/blob/master/images/bootstrap.png?raw=1)
#
# <sub>¹<NAME>., <NAME>., & <NAME>. (2013). An introduction to statistical learning with applications in R. New York: Springer New York.</sub>

# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="ADNsavaFMo_h" outputId="c4376310-f532-43ac-9dbc-4cf72d04e9e8"
# Bootstrap with replacement
data = np.array(range(0,9))
print("Data:",data)
bootstrap = []
for i in range(3): # 3 splits, 70% sample
    indexes = np.random.choice(len(data),int(0.7*len(data)),replace=True)
    bootstrap.append(list(data[indexes]))
for train in bootstrap:
    print("Train: ", train)

# + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="x0zMDtZOMo_k" outputId="83d15ea9-d82a-4139-f1b0-78b47d889f83"
# Perform bootstrap (with replacement)
from sklearn.ensemble import BaggingClassifier
bootstrap = BaggingClassifier(LogisticRegression(solver='liblinear'), n_estimators=3, max_samples=0.7, bootstrap=True, random_state=123)
fit = bootstrap.fit(X,y)
print(bootstrap)
score = bootstrap.score(X,y)
print("Model score = ",score)

# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="Vchq3siTMo_n" outputId="c9c74da9-fd4e-4854-8ef6-8b933ecd0bfa"
# Perform bootstrap (without replacement)
LR = LogisticRegression(solver='liblinear')
result = LR.fit(X,y)
print(LR)
bootstrap = model_selection.ShuffleSplit(10, test_size=0.3)
results = model_selection.cross_val_score(LR, X, y, cv=bootstrap, scoring='roc_auc')
print('Model score = %.4f (%.4f)' %(results.mean(), results.std()))

# + [markdown] colab_type="text" id="W8U_ew8EMo_q"
# # 8.0 Ok Finally, Let's Build Lots of Models!
#
# Now that we know how to avoid some of the predictive model building pitfalls, let's build a number of different types of models and measure their performance. We will use "AUC" as our performance measure.
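As a reminder of what AUC measures — the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case — here is a toy check against scikit-learn (the labels and scores are invented):

```python
from itertools import product
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1]
scores = [0.1, 0.4, 0.75, 0.8, 0.7]   # model-assigned risk scores

# AUC = fraction of (positive, negative) pairs the model ranks correctly
pos = [s for s, t in zip(scores, y_true) if t == 1]
neg = [s for s, t in zip(scores, y_true) if t == 0]
auc_pairwise = sum(p > n for p, n in product(pos, neg)) / (len(pos) * len(neg))

print('pairwise AUC: %.3f' % auc_pairwise)                   # 0.833
print('sklearn AUC:  %.3f' % roc_auc_score(y_true, scores))  # 0.833
```

One (positive, negative) pair out of six is ranked wrongly (0.7 vs 0.75), giving 5/6 ≈ 0.833 by both calculations.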
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="xVJqv69cMo_r" outputId="cc100fa5-5f67-4391-cf41-d9a333a5f532"
# prepare configuration for cross validation test harness
# (from https://machinelearningmastery.com/compare-machine-learning-algorithms-python-scikit-learn/)
seed = 123

# prepare models
models = []
models.append(('LR', LogisticRegression(solver='liblinear')))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('RF', RandomForestClassifier(n_estimators=10)))
models.append(('NB', GaussianNB()))

# evaluate each model in turn
results = []
names = []
scoring = 'roc_auc' # others include: 'accuracy', 'f1', 'roc_auc',
                    # or found here: http://scikit-learn.org/stable/modules/model_evaluation.html
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

# + colab={"base_uri": "https://localhost:8080/", "height": 294} colab_type="code" id="ts_sONlPMo_u" outputId="398fa6b2-a4eb-47ae-fb07-85b1562575d5"
# boxplot algorithm comparison
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
plt.ylabel(scoring)
ax.set_xticklabels(names)
plt.show()

# + [markdown] colab_type="text" id="ElA6Su2wMo_x"
# ### Which model did "best"?
#
# We'll tackle this more in the next section on **Model Performance and Evaluation.**
#
# For now, let's just say *higher* is *better*.
#
# ### *Exercise:* Look at F1 scores instead of AUC scores.
#
# Hint: You can specify different scores like 'roc_auc', 'f1', 'precision', 'recall', etc.

# + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="xmuky0VRMo_y" outputId="25079a0a-9d3c-4936-d1d4-6f23320eefd0"
# evaluate each model in turn
results = []
names = []
####### EDIT HERE #######
#scoring = '???'
#########################
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

# + colab={"base_uri": "https://localhost:8080/", "height": 314} colab_type="code" id="Egou12ohMo_z" outputId="53bc0e39-9575-43da-bb9d-24e4a10a92bd"
# boxplot algorithm comparison
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
plt.ylabel(scoring)
ax.set_xticklabels(names)
plt.show()

# + [markdown] colab_type="text" id="CEbNIKtZMo_1"
# ### 8.1 Hyperparameter Tuning

# + colab={"base_uri": "https://localhost:8080/", "height": 513} colab_type="code" id="GaIFU7pcMo_2" outputId="75429737-fb20-44b3-bf47-d67f17cfbc32"
# look at the default settings you used
models

# + [markdown] colab_type="text" id="d2TSpugOMo_4"
# ### *Exercise:* Attempt parameter tuning on your own
#
#

# + colab={} colab_type="code" id="oHS4hyJ6Mo_5"
# use Logistic Regression & Random Forests
models = []
####### EDIT HERE #######
#models.append(('LR', LogisticRegression(solver='liblinear')))
#models.append(('RF', RandomForestClassifier(n_estimators=10)))
#########################

# evaluate each model in turn
results = []
names = []
####### EDIT HERE #######
#scoring = '?????'
#########################
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

# + [markdown] colab_type="text" id="wKwNzSLpMo_6"
# ### scikit-learn can help with parameter tuning

# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" id="NiMy57OhMo_7" outputId="db67b9ef-a5fd-4dd1-9b5e-bd50647dc7a5"
### automated grid search
from sklearn.model_selection import GridSearchCV

param_grid = [
    {'n_estimators': [50, 100, 250], 'class_weight': [None, 'balanced'], 'max_features': [2, 'sqrt', None]}
]

rfc = RandomForestClassifier()
grid_search = GridSearchCV(rfc, param_grid, cv=5, scoring='f1')
grid_search.fit(X_train, y_train)
cvres = grid_search.cv_results_

# + colab={"base_uri": "https://localhost:8080/", "height": 343} colab_type="code" id="-B0YO8cqMo_8" outputId="4823794f-23e7-444f-ec68-1ebc711e3f04"
# print the mean cross-validated f1 score for each parameter combination
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
    print(round(mean_score, 3), params)

# + [markdown] colab_type="text" id="1ic9ZAFAMo_-"
# ### 8.2 Finalizing the Model

# + colab={} colab_type="code" id="0XcxxgLOMo__"
# assign parameters from best fit
final_fit = RandomForestClassifier(class_weight=None, max_features=2, n_estimators=100)
final_fit.fit(X_train, y_train)

# store predicted values using the final model
pred_train = final_fit.predict(X_train)
pred_test = final_fit.predict(X_test)

# + colab={"base_uri": "https://localhost:8080/", "height": 142} colab_type="code" id="gcP_tW8rMpAA" outputId="02e49069-ce38-4be9-b0ea-ca6c51519cc0"
# explore performance on training data
pd.crosstab(y_train, pred_train, rownames=["Actual"], colnames=["Predicted"])

# + colab={"base_uri": "https://localhost:8080/", "height": 142} colab_type="code"
id="pWYfKpLbMpAD" outputId="d22d3bc8-cdbd-471c-a00b-a46284070475"
# explore performance on testing data
pd.crosstab(y_test, pred_test, rownames=["Actual"], colnames=["Predicted"])

# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="slXfAy1IMpAF" outputId="23eb11a4-b92b-45d2-dca1-6b337bfb4337"
# Show how to use the resulting model to predict opioid overdose
# age, opioid_discharge_days_supply, prior_opioid_abuse_diag,
# gender (F), marital (M), race (white), ethnicity (english)
new_patient = [45,10,1,0,1,4,7]
pred = final_fit.predict(np.array([new_patient]))  # predict expects a 2-D array
if pred[0] == 0:
    print('Patient has no overdose risk.')
elif pred[0] == 1:
    print('Patient has overdose risk.')

# + [markdown] colab_type="text" id="8fq4l1IQMpAH"
# ---
# ## References
#
# - [`scikit-learn` user's guide](http://scikit-learn.org/stable/user_guide.html)
# - <NAME>. (2016) [Python Data Science Handbook: Essential Tools for Working with Data](http://shop.oreilly.com/product/0636920034919.do). O'Reilly Media.
# - Much of this content can be attributed to the work of <NAME> with source data found at: https://github.com/fonnesbeck/Bios8366
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## A dataset of 200 structured product labels annotated for adverse drug reactions
# The annotations were performed by <NAME>'s team in order to create a standard set to encourage the development of NLP tools for ADR annotation from FDA Structured Product labels.
#
# The publication can be found here: https://www.nature.com/articles/sdata20181
#
# We have been given explicit permission to import this data set so long as we make it clear that it is not an NLM-sanctioned gold standard database and that the curation was done by a small team of biocurators to the best of their ability, and not by experts versed in pharmacology.
#
# It should be noted that the dataset is not expected to change in the future; however, coverage of corresponding entries in Wikidata may grow over time, so re-running the dataset on a schedule may help to improve representation of the dataset in Wikidata over time.

# +
from wikidataintegrator import wdi_core, wdi_login, wdi_helpers
from wikidataintegrator.ref_handlers import update_retrieved_if_new_multiple_refs
import pandas as pd
from pandas import read_csv
import requests
from tqdm.notebook import trange, tqdm
import ipywidgets
import widgetsnbextension
import time

datasrc = 'data/FinalReferenceStandard200Labels.csv'
exppath = 'results/'
# -

print("Logging in...")
import wdi_user_config ## Credentials stored in a wdi_user_config file
login_dict = wdi_user_config.get_credentials()
login = wdi_login.WDLogin(login_dict['WDUSER'], login_dict['WDPASS'])

spl_adr_raw = read_csv(datasrc, delimiter="|", header=0, dtype={'Index':int,'PT ID':str,'LLT ID':str}).fillna('None')
print(spl_adr_raw.head(n=2))

# ## Retrieve and map WDIDs
# The Risk Factor property is how adverse effects appear to currently be modeled in Wikidata.
The disease entity is the subject, with risk factor as a predicate and the drug as the object. The diseases in this data set appear to be normalized to UMLS CUIs, which aren't great due to one-to-many and many-to-one mappings, but we can filter those out and limit our dataset to just the ones that are unique.
#
# Unfortunately, the DailyMed drug IDs don't appear to be in use in Wikidata yet, which means that the drugs will still need to be mapped to some extent. That said, there were only 200 drug labels annotated in this data set, so manual mapping is not unreasonable. We'll try mapping via sparql query, which can be quite stringent, and then attempt to manually map anything that fails.

# +
## Retrieve the QIDs for each UMLS CUI ID (The property for UMLS CUI IDs is P2892)
sparqlQuery = "SELECT * WHERE {?topic wdt:P2892 ?CUI}"
result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery)

## Format the data for analysis
wdmap = []
for binding in result["results"]["bindings"]:
    umls_qid = binding["topic"]["value"].replace("http://www.wikidata.org/entity/", "")
    cui_id = binding["CUI"]["value"]
    wdmap.append({'UMLS CUI':cui_id,'disease_WDID':umls_qid})

wdid_umls_all = pd.DataFrame(wdmap)

## Drop any entries that are not of interest
umls_cui_list = spl_adr_raw['UMLS CUI'].unique().tolist()
wdid_umls_df = wdid_umls_all.loc[wdid_umls_all['UMLS CUI'].isin(umls_cui_list)]
wdid_umls_df.to_csv(exppath+'cui_wdid_xref.tsv',sep='\t',header=True)
# -

wdid_umls_df = read_csv(exppath+'cui_wdid_xref.tsv',delimiter='\t',header=0,index_col=0)

# +
## Exclude entities with one to many OR many to one mappings
wdid_umls_df_unique = wdid_umls_df.drop_duplicates(subset='disease_WDID').copy()
wdid_umls_df_unique.drop_duplicates(subset='UMLS CUI',inplace=True)
print("initial mapping table size: ",len(wdid_umls_df), " de-duplicated: ",len(wdid_umls_df_unique))
# -

## Merge the mapping table to the
original table
spl_with_disease_wdids = spl_adr_raw.merge(wdid_umls_df_unique, on='UMLS CUI', how='left')
print(len(spl_adr_raw),len(spl_with_disease_wdids))

# ## Query Wikidata for instances of drugs whose names match to product label names
# We can limit the query by selecting for instances of pharmaceutical products, medications, or chemical compounds. The queries should be run in that order: only search for medications if a label doesn't match a pharmaceutical product, and only search for chemical compounds if a label doesn't match a medication OR a pharmaceutical product:
#
# * pharm_wdid = 'Q28885102'
# * chem_wdid = 'Q11173'
# * medi_wdid = 'Q12140'

"""
## Unit test
query_start = 'SELECT ?item ?itemLabel WHERE {?item wdt:P31 wd:Q28885102; rdfs:label ?itemLabel. FILTER(CONTAINS(LCASE(?itemLabel), "'
query_subject = 'NUCYNTA'
query_end = '"@en)).}'
sparqlQuery = query_start+query_subject.lower()+query_end
result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery)
drug_qid = result["results"]["bindings"][0]["item"]["value"].replace("http://www.wikidata.org/entity/", "")
drug_label = result["results"]["bindings"][0]["itemLabel"]["value"]
print(drug_qid, drug_label)
print(len(result["results"]["bindings"]))
"""

# +
#drug_list = ['NUCYNTA','Natazia','EDURANT'] ## Loop test
drug_list = spl_with_disease_wdids['Drug Name'].unique().tolist()

pharm_start = 'SELECT ?item ?itemLabel WHERE {?item wdt:P31 wd:Q28885102; rdfs:label ?itemLabel. FILTER(CONTAINS(LCASE(?itemLabel), "'
med_start = 'SELECT ?item ?itemLabel WHERE {?item wdt:P31 wd:Q12140; rdfs:label ?itemLabel. FILTER(CONTAINS(LCASE(?itemLabel), "'
chem_start = 'SELECT ?item ?itemLabel WHERE {?item wdt:P31 wd:Q11173; rdfs:label ?itemLabel.
FILTER(CONTAINS(LCASE(?itemLabel), "' query_end = '"@en)).}' drug_wdid_list = [] drug_match_failed = [] for i in tqdm(range(len(drug_list))): query_subject = drug_list[i].lower() try: sparqlQuery = pharm_start+query_subject+query_end result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery) drug_qid = result["results"]["bindings"][0]["item"]["value"].replace("http://www.wikidata.org/entity/", "") drug_label = result["results"]["bindings"][0]["itemLabel"]["value"] drug_wdid_list.append({'Drug Name':drug_list[i],'drug_WDID':drug_qid,'drug_wd_label':drug_label,'instance_of':'pharmaceutical product'}) except: try: sparqlQuery = med_start+query_subject+query_end result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery) drug_qid = result["results"]["bindings"][0]["item"]["value"].replace("http://www.wikidata.org/entity/", "") drug_label = result["results"]["bindings"][0]["itemLabel"]["value"] drug_wdid_list.append({'Drug Name':drug_list[i],'drug_WDID':drug_qid,'drug_wd_label':drug_label,'instance_of':'medication'}) except: try: sparqlQuery = chem_start+query_subject+query_end result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery) drug_qid = result["results"]["bindings"][0]["item"]["value"].replace("http://www.wikidata.org/entity/", "") drug_label = result["results"]["bindings"][0]["itemLabel"]["value"] drug_wdid_list.append({'Drug Name':drug_list[i],'drug_WDID':drug_qid,'drug_wd_label':drug_label,'instance_of':'chemical'}) except: drug_match_failed.append(drug_list[i]) drug_wdid_df = pd.DataFrame(drug_wdid_list) drug_wdid_df.to_csv(exppath+'drug_wdid_df.tsv',sep='\t',header=True) print(i) # - print(drug_match_failed) ## In the future, consider only running these with open(exppath+'drug_match_failed.txt','w') as store_it: for eachfailure in drug_match_failed: store_it.write(eachfailure+'\n') drug_match_failed = [] with open(exppath+'drug_match_failed.txt','r') as stored_it: for eachline in stored_it: drug_match_failed.append(eachline.strip()) 
drug_wdid_df = pd.read_csv(exppath+'drug_wdid_df.tsv', delimiter='\t', header=0, index_col=0)
print(drug_wdid_df.head(n=2))
print(drug_match_failed)
print(len(drug_wdid_df)+len(drug_match_failed))

# ## Merge tables to convert drug names to WDID products
# Filter out the entries that could not be mapped to Wikidata.

# +
df_to_write = spl_with_disease_wdids.merge(drug_wdid_df, on='Drug Name', how='left')
print(len(df_to_write))
all_data_available = df_to_write.loc[(~df_to_write['disease_WDID'].isnull()) & (~df_to_write['drug_WDID'].isnull())]
not_attempted = df_to_write.loc[(df_to_write['disease_WDID'].isnull()) | (df_to_write['drug_WDID'].isnull())]
print(len(all_data_available))
#print(not_attempted.head(n=2))
print(all_data_available.head(n=1))

## Save the failures
not_attempted.to_csv(exppath+'qid_missing_not_attempted.tsv', sep='\t', header=True)
# -

# ## Convert triples to WD statements
#
# The adverse effect of "lactic acidosis" from metformin use was modeled on the Risk Factor property page and discussed there. These adverse effects can be expected to be modeled similarly.
#
# We can use rank as a means to indicate the severity of the warning. For example, a Boxed Warning would get a higher priority rank than text mined from 'adverse effect'. Alternatively, we can try to include a reference statement that would indicate where the ADR was derived.
# E.g., using "P958" (paragraph/section/clause)
# in conjunction with:
# * Q879952 (Boxed Warning)
# * Q45959 (Adverse Drug Reactions)
# * Q21010924 (Safety Precautions)
#
# Edit: P958 takes a string as input instead of a QID, so the source section name can be added directly.

from datetime import datetime
import copy

def create_reference(spl_url, source_type):
    timeStringNow = datetime.now().strftime("+%Y-%m-%dT00:00:00Z")
    archived_date = datetime.strptime('9/29/2015', '%m/%d/%Y').strftime("+%Y-%m-%dT00:00:00Z")
    refStatedIn = wdi_core.WDItemID(value="Q73670648", prop_nr="P248", is_reference=True)
    refRetrieved = wdi_core.WDTime(timeStringNow, prop_nr="P813", is_reference=True)
    refRetrieved2 = wdi_core.WDTime(archived_date, prop_nr="P2960", is_reference=True)
    refURL = wdi_core.WDUrl(value=spl_url, prop_nr="P854", is_reference=True)
    reftype = wdi_core.WDString(value=source_type, prop_nr="P958", is_reference=True)
    return [refStatedIn, refRetrieved, refRetrieved2, refURL, reftype]

# +
## Unit test -- write a single statement
fda_base_spl_url = 'https://dailymed.nlm.nih.gov/dailymed/drugInfo.cfm?setid='
i = 0
drug_qid = all_data_available.iloc[i]['drug_WDID']
#disease_qid = all_data_available.iloc[i]['disease_WDID']
disease_qid = 'Q4115189'  # sandbox run
spl_drug_id = all_data_available.iloc[i]['Drug ID']
spl_url = fda_base_spl_url + spl_drug_id
source_type = all_data_available.iloc[i]['Section Display Name']
reference = create_reference(spl_url, source_type)
statement = [wdi_core.WDItemID(value=drug_qid, prop_nr="P5642", references=[copy.deepcopy(reference)])]
wikidata_item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value="P5642",
                                      global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
#wikidata_item.get_wd_json_representation()
wikidata_item.write(login)
print(i, disease_qid, drug_qid)

# +
wd_revision_list = []
run_list = all_data_available[0:3]  ## test run
#run_list = all_data_available
for i in range(len(run_list)):
    drug_qid = run_list.iloc[i]['drug_WDID']
    disease_qid = run_list.iloc[i]['disease_WDID']
    spl_drug_id = run_list.iloc[i]['Drug ID']
    spl_url = fda_base_spl_url + spl_drug_id
    source_type = run_list.iloc[i]['Section Display Name']
    reference = create_reference(spl_url, source_type)
    statement = [wdi_core.WDItemID(value=drug_qid, prop_nr="P5642", references=[copy.deepcopy(reference)])]
    wikidata_item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value="P5642",
                                          global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
    wikidata_item.write(login, edit_summary='added ADR relationship from FDA SPLs')
    wd_revision_list.append({'drug': drug_qid, 'disease': disease_qid, 'wd_revid': wikidata_item.lastrevid})

wd_edit_results = pd.DataFrame(wd_revision_list)
print(wd_edit_results)
wd_edit_results.to_csv(exppath+'run_results.tsv', sep='\t', header=True)
# -
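Because each iteration of the run loop writes to the live Wikidata API, it can help to derive the per-row payload separately, so a failed write can be retried later without re-deriving the URL and section name. A minimal sketch — `build_adr_record` is an illustrative helper, not part of the bot, and the dict layout is an assumption:

```python
def build_adr_record(row, base_url='https://dailymed.nlm.nih.gov/dailymed/drugInfo.cfm?setid='):
    """Collect everything one Wikidata write needs from a dataframe row
    (any mapping with the columns used above), so failures can be retried."""
    return {
        'disease_qid': row['disease_WDID'],
        'drug_qid': row['drug_WDID'],
        'spl_url': base_url + row['Drug ID'],
        'source_type': row['Section Display Name'],
    }

# Illustrative row; not from the real dataset.
example = build_adr_record({'disease_WDID': 'Q4115189', 'drug_WDID': 'Q42',
                            'Drug ID': 'abc123', 'Section Display Name': 'boxed warnings'})
print(example['spl_url'])
# -> https://dailymed.nlm.nih.gov/dailymed/drugInfo.cfm?setid=abc123
```

Records that fail could then be dumped to a TSV alongside `run_results.tsv` and replayed on a later run.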
scheduled_bots/SPL_ADR_standard_dataset/SPL ADR Standard Data set.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from datetime import datetime import os import urllib.request SHUTDOWN_EVENT = 'Shutdown initiated' # prep: read in the logfile logfile = os.path.join('/tmp', 'log') urllib.request.urlretrieve('http://bit.ly/2AKSIbf', logfile) def convert_to_datetime(line): """TODO 1: Extract timestamp from logline and convert it to a datetime object. For example calling the function with: INFO 2014-07-03T23:27:51 supybot Shutdown complete. returns: datetime(2014, 7, 3, 23, 27, 51) """ timestamp = get_timestamp(line) return parse_timestamp(timestamp) def time_between_shutdowns(loglines): """TODO 2: Extract shutdown events ("Shutdown initiated") from loglines and calculate the timedelta between the first and last one. Return this datetime.timedelta object. """ shutdown_lines = extract_shutdown_lines(loglines) first_shutdown = convert_to_datetime(shutdown_lines[0]) last_shutdown = convert_to_datetime(shutdown_lines[-1]) return last_shutdown - first_shutdown with open(logfile) as f: loglines = f.readlines() print(time_between_shutdowns(loglines)) # - # `ERROR 2014-07-03T23:24:31 supybot Invalid user dictionary file, resetting to empty.` # + test_line = "ERROR 2014-07-03T23:24:31 supybot Invalid user dictionary file, resetting to empty." def get_timestamp(line): """ Takes full log line and returns just the timestamp column (as string) """ return line.split(" ")[1] print(get_timestamp(test_line)) # - print("a b c".split(" ")) # + test_input = "2014-07-03T23:24:31" def parse_timestamp(timestamp_string): """ Takes timestamp as string and returns corresponding instance of datetime. 
""" return datetime.strptime(timestamp_string, "%Y-%m-%dT%H:%M:%S") parse_timestamp(test_input) # + test_ipt_1 = ";lksadjf;lkajsdf Shutdown initiated s;ldkjf;alkjds" test_ipt_2 = ";lksdja;flkjasd;flkjsd" def is_shutdown_initiated_line(line): """ Return true if line contains shutdown initiated event. Otherwise false. """ return "Shutdown initiated" in line print(is_shutdown_initiated_line(test_ipt_1)) print(is_shutdown_initiated_line(test_ipt_2)) # + test_input = [ "WARNING 2014-07-03T23:24:31 supybot Couldn't open ignore database: [Errno 2] No such file or directory: 'conf/ignores.conf'", "INFO 2014-07-03T23:27:51 supybot Shutdown initiated.", "INFO 2014-07-03T23:27:51 supybot Killing Driver objects.", "INFO 2014-07-03T23:31:22 supybot No more Irc objects, exiting.", "INFO 2014-07-03T23:31:22 supybot Shutdown initiated.", "INFO 2014-07-03T23:31:22 supybot Killing Driver objects." ] def extract_shutdown_lines(lines): """ Filters input and returns only the lines that contain shutdown initiated events. """ return list(filter(is_shutdown_initiated_line, test_input)) print(extract_shutdown_lines(test_input)) # -
Collections_module/timedelta_exercise1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Commentary on Twitter: # # https://twitter.com/pjhawron/status/1447740094438416389 # # Import data and libraries # # #### Data from https://basketballdao.com and https://kongstats.com # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error import datetime as dt import warnings warnings.filterwarnings("ignore") % matplotlib inline # - data = pd.read_csv("rkl.csv", nrows=10000) data.head() data.info() # # Boost Distributions sns.set_style('darkgrid') sns.set_palette('plasma') fig, axs = plt.subplots(ncols=2, nrows=3) fig.set_size_inches(10, 10) sns.distplot(data['Shooting'], ax=axs[0][0], bins=25) sns.distplot(data['Vision'], ax=axs[0][1], bins=25) sns.distplot(data['Finish'], ax=axs[1][0], bins=25) sns.distplot(data['Defense'], ax=axs[1][1], bins=25) sns.distplot(data['Cumulative'], ax=axs[2][0], bins=25) # # Formatting Sales Columns np.where(data['Current Price'].str.contains('ETH')) np.where(data['Current Price'] != " ") data.iloc[np.where(data['Current Price'].str.contains('ETH'))].head() print("Listed:", len(data.iloc[np.where(data['Current Price'].str.contains('ETH'))])) print("Has Sold:", len(data.iloc[np.where(data['Last Sale Price'].str.contains('ETH'))])) temp1 = data.iloc[np.where(data['Current Price'].str.contains('ETH'))] temp2 = data.iloc[np.where(data['Last Sale Price'].str.contains('ETH'))] join = temp1.merge(temp2, how='inner') join.head() formatted = join formatted.head() formatted['Current Price'] = formatted['Current Price'].str.replace(' ETH', '').astype(float) formatted['Last Sale Price'] = formatted['Last Sale Price'].str.replace(' 
ETH', '').astype(float) formatted.head() formatted = formatted.drop(columns='Price ') formatted.head() # # Graphing relationships between boosts and last sale price sns.distplot(formatted['Last Sale Price']) fig, axs = plt.subplots(ncols=2, nrows=3) fig.set_size_inches(10, 10) sns.lineplot(x=formatted['Shooting'], y=formatted['Last Sale Price'], ax=axs[0][0]) sns.lineplot(x=formatted['Vision'], y=formatted['Last Sale Price'], ax=axs[0][1]) sns.lineplot(x=formatted['Finish'], y=formatted['Last Sale Price'], ax=axs[1][0]) sns.lineplot(x=formatted['Defense'], y=formatted['Last Sale Price'], ax=axs[1][1]) sns.lineplot(x=formatted['Cumulative'], y=formatted['Last Sale Price'], ax=axs[2][0]) sns.lineplot(x=formatted['Last Sale Date'], y=formatted['Last Sale Price'], ax=axs[2][1]) # # Normalize with transformations print(formatted['Last Sale Price'].mean()) print(formatted['Last Sale Price'].median()) print(formatted['Last Sale Price'].std()) no_outliers = formatted[formatted['Last Sale Price'] <= 3] sns.distplot(no_outliers[['Last Sale Price']]) transformed = np.sqrt(no_outliers[['Last Sale Price', 'Cumulative', 'Vision', 'Shooting', 'Defense', 'Finish']]) transformed['Last Sale Date'] = no_outliers['Last Sale Date'] transformed.head() sns.distplot(transformed['Last Sale Price']) print(transformed['Last Sale Price'].mean()) print(transformed['Last Sale Price'].median()) print(transformed['Last Sale Price'].std()) fig, axs = plt.subplots(ncols=2, nrows=3) fig.set_size_inches(10, 10) sns.lineplot(x=transformed['Shooting'], y=transformed['Last Sale Price'], ax=axs[0][0]) sns.lineplot(x=transformed['Vision'], y=transformed['Last Sale Price'], ax=axs[0][1]) sns.lineplot(x=transformed['Finish'], y=transformed['Last Sale Price'], ax=axs[1][0]) sns.lineplot(x=transformed['Defense'], y=transformed['Last Sale Price'], ax=axs[1][1]) sns.lineplot(x=transformed['Cumulative'], y=transformed['Last Sale Price'], ax=axs[2][0]) sns.lineplot(x=transformed['Last Sale Date'], 
y=transformed['Last Sale Price'], ax=axs[2][1]) # # Regression to find the importance of cumulative boost value on sales price # ## Raw values X = np.array(formatted['Cumulative']).reshape(-1, 1) y = formatted['Last Sale Price'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LinearRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("R-square:", clf.score(X_test, y_test)) print("MSE:", mean_squared_error(y_test, y_pred)) # ## Square root values X = np.array(transformed['Cumulative']).reshape(-1, 1) y = transformed['Last Sale Price'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LinearRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("R-square:", clf.score(X_test, y_test)) print("MSE:", mean_squared_error(y_test, y_pred)) # # Regression to find the importance of each boost value on sales price # ## Raw Values X = formatted[['Shooting', 'Vision', 'Defense', 'Finish']] y = formatted['Last Sale Price'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LinearRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("R-square:", clf.score(X_test, y_test)) print("MSE:", mean_squared_error(y_test, y_pred)) # ## Square root values X = transformed[['Shooting', 'Vision', 'Defense', 'Finish']] y = transformed['Last Sale Price'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LinearRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("R-square:", clf.score(X_test, y_test)) print("MSE:", mean_squared_error(y_test, y_pred)) # # Regression to find the importance of last sale date on price # ## Raw Values formatted['Last Sale Date'] = pd.to_datetime(formatted['Last Sale Date']) formatted['Last Sale Date'] = formatted['Last Sale Date'].map(dt.datetime.toordinal) X = np.array(formatted['Last Sale 
Date']).reshape(-1, 1) y = formatted['Last Sale Price'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LinearRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("R-square:", clf.score(X_test, y_test)) print("MSE:", mean_squared_error(y_test, y_pred)) # ## Square root values transformed['Last Sale Date'] = pd.to_datetime(transformed['Last Sale Date']) transformed['Last Sale Date'] = transformed['Last Sale Date'].map(dt.datetime.toordinal) X = np.array(transformed['Last Sale Date']).reshape(-1, 1) y = transformed['Last Sale Price'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LinearRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("R-square:", clf.score(X_test, y_test)) print("MSE:", mean_squared_error(y_test, y_pred))
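As a sanity check on the single-feature fits above: with one predictor, ordinary least squares has a closed form, so the slope/intercept/R² pipeline can be reproduced with NumPy alone. This sketch uses synthetic data (stand-ins for `Cumulative` and `Last Sale Price`), not the rkl dataset, so its numbers say nothing about boost importance:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 40, size=200)           # stand-in for 'Cumulative'
y = 0.05 * x + rng.normal(0, 0.1, 200)     # stand-in for 'Last Sale Price'

# Closed-form OLS for a single feature: slope = cov(x, y) / var(x).
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()
pred = slope * x + intercept

# R^2 = 1 - SS_res / SS_tot, the same score LinearRegression.score reports.
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(slope, 3), round(r2, 3))
```

With strong signal and little noise the recovered slope sits near 0.05 and R² near 1; the much lower R² values printed above are therefore a property of the data, not of the fitting procedure.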
rkl_boost_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # ## Interface to different programs # # PPQM has interfaces to different QM programs, making it easy to calculate properties with different programs on the same RDKit molobj # # # %load_ext autoreload # %autoreload 2 # %matplotlib inline import logging import sys import pandas as pd from rdkit import Chem from rdkit.Chem.Draw import MolsToGridImage try: import ppqm except ModuleNotFoundError: import pathlib cwd = pathlib.Path().resolve().parent sys.path.append(str(cwd)) import ppqm # ## Set logging level # + logging.basicConfig(stream=sys.stdout, level=logging.INFO) logging.getLogger("ppqm").setLevel(logging.INFO) logging.getLogger("xtb").setLevel(logging.DEBUG) logging.getLogger("gamess").setLevel(logging.DEBUG) logging.getLogger("mopac").setLevel(logging.DEBUG) show_progress = False # - # ## Define a molecule you like smiles = "NCC(=O)N[C@H]1CO[C@@H](c2ccc([N+](=O)[O-])cc2)OC1" # CHEMBL260511 molobj = Chem.MolFromSmiles(smiles) molobj # ## Get some 3D conformers (RDKit) molobj = ppqm.tasks.generate_conformers(molobj) molobj.GetNumConformers() # ## Different programs, requires different settings # + # Define different programs calculator_options = {"scr": "_tmp_directory_", "n_cores": 2, "show_progress": show_progress} mopac = ppqm.MopacCalculator(cmd="mopac", **calculator_options) gamess = ppqm.GamessCalculator( cmd="rungms", gamess_userscr="~/scr", gamess_scr="~/scr", **calculator_options ) xtb = ppqm.XtbCalculator(cmd="xtb", **calculator_options) # - mopac xtb gamess # ## Different input and different output # + # %%time mopac_options = { "pm3": None, "precise": None, "mullik": None, "eps": 78.4, } results_mopac = mopac.calculate(molobj, mopac_options) # + # %%time xtb_options = { "gfn": 1, "alpb": "water", "opt": None, } results_xtb = xtb.calculate(molobj, xtb_options) # + # %%time gamess_options = { "basis": 
{"gbasis": "pm3"}, "contrl": {"runtyp": "optimize"}, "statpt": {"opttol": 0.0005, "nstep": 300, "projct": False}, "system": {"mwords": 125}, "pcm": { "solvnt": "water", "mxts": 15000, "icav": 1, "idisp": 1, }, "tescav": {"mthall": 4, "ntsall": 60}, } results_gamess = gamess.calculate(molobj, gamess_options) # - # ## Results df_gamess = pd.DataFrame(results_gamess) df_mopac = pd.DataFrame(results_mopac) df_xtb = pd.DataFrame(results_xtb) df_mopac df_xtb.head() df_gamess # ## TODO # # - properties # - timings # - n_steps # - rmsd # # - conformer ranking # #
notebooks/example_differentprograms.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 (AIMS SA) # language: python # name: aims # --- # + [markdown] colab_type="text" id="JndnmDMp66FL" # #### Copyright 2017 Google LLC. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="hMqWDc_m6rUC" # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="4f3CKqFUqL2-" slideshow={"slide_type": "slide"} # # First Steps with TensorFlow # + [markdown] colab_type="text" id="Bd2Zkk1LE2Zr" # **Learning Objectives:** # * Learn fundamental TensorFlow concepts # * Use the `LinearRegressor` class in TensorFlow to predict median housing price, at the granularity of city blocks, based on one input feature # * Evaluate the accuracy of a model's predictions using Root Mean Squared Error (RMSE) # * Improve the accuracy of a model by tuning its hyperparameters # + [markdown] colab_type="text" id="MxiIKhP4E2Zr" # The [data](https://developers.google.com/machine-learning/crash-course/california-housing-data-description) is based on 1990 census data from California. # + [markdown] colab_type="text" id="6TjLjL9IU80G" # ## Setup # In this first cell, we'll load the necessary libraries. 
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="rVFf5asKE2Zt" from __future__ import print_function import math from IPython import display from matplotlib import cm from matplotlib import gridspec from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import metrics import tensorflow as tf from tensorflow.python.data import Dataset tf.logging.set_verbosity(tf.logging.ERROR) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format # + [markdown] colab_type="text" id="ipRyUHjhU80Q" # Next, we'll load our data set. # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="9ivCDWnwE2Zx" california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",") # + [markdown] colab_type="text" id="vVk_qlG6U80j" # We'll randomize the data, just to be sure not to get any pathological ordering effects that might harm the performance of Stochastic Gradient Descent. Additionally, we'll scale `median_house_value` to be in units of thousands, so it can be learned a little more easily with learning rates in a range that we usually use. # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="r0eVyguIU80m" california_housing_dataframe = california_housing_dataframe.reindex( np.random.permutation(california_housing_dataframe.index)) california_housing_dataframe["median_house_value"] /= 1000.0 california_housing_dataframe # + [markdown] colab_type="text" id="HzzlSs3PtTmt" slideshow={"slide_type": "-"} # ## Examine the Data # # It's a good idea to get to know your data a little bit before you work with it. # # We'll print out a quick summary of a few useful statistics on each column: count of examples, mean, standard deviation, max, min, and various quantiles. 
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "test": {"output": "ignore", "timeout": 600}} colab_type="code" id="gzb10yoVrydW" slideshow={"slide_type": "slide"} california_housing_dataframe.describe() # + [markdown] colab_type="text" id="Lr6wYl2bt2Ep" slideshow={"slide_type": "-"} # ## Build the First Model # # In this exercise, we'll try to predict `median_house_value`, which will be our label (sometimes also called a target). We'll use `total_rooms` as our input feature. # # **NOTE:** Our data is at the city block level, so this feature represents the total number of rooms in that block. # # To train our model, we'll use the [LinearRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearRegressor) interface provided by the TensorFlow [Estimator](https://www.tensorflow.org/get_started/estimator) API. This API takes care of a lot of the low-level model plumbing, and exposes convenient methods for performing model training, evaluation, and inference. # + [markdown] colab_type="text" id="0cpcsieFhsNI" # ### Step 1: Define Features and Configure Feature Columns # + [markdown] colab_type="text" id="EL8-9d4ZJNR7" # In order to import our training data into TensorFlow, we need to specify what type of data each feature contains. There are two main types of data we'll use in this and future exercises: # # * **Categorical Data**: Data that is textual. In this exercise, our housing data set does not contain any categorical features, but examples you might see would be the home style, the words in a real-estate ad. # # * **Numerical Data**: Data that is a number (integer or float) and that you want to treat as a number. As we will discuss more later sometimes you might want to treat numerical data (e.g., a postal code) as if it were categorical. # # In TensorFlow, we indicate a feature's data type using a construct called a **feature column**. 
Feature columns store only a description of the feature data; they do not contain the feature data itself. # # To start, we're going to use just one numeric input feature, `total_rooms`. The following code pulls the `total_rooms` data from our `california_housing_dataframe` and defines the feature column using `numeric_column`, which specifies its data is numeric: # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="rhEbFCZ86cDZ" # Define the input feature: total_rooms. my_feature = california_housing_dataframe[["total_rooms"]] # Configure a numeric feature column for total_rooms. # Feature columns store only a description of the feature data; they do not contain the feature data itself. feature_columns = [tf.feature_column.numeric_column("total_rooms")] # + [markdown] colab_type="text" id="K_3S8teX7Rd2" # **NOTE:** The shape of our `total_rooms` data is a one-dimensional array (a list of the total number of rooms for each block). This is the default shape for `numeric_column`, so we don't have to pass it as an argument. # + [markdown] colab_type="text" id="UMl3qrU5MGV6" # ### Step 2: Define the Target # + [markdown] colab_type="text" id="cw4nrfcB7kyk" # Next, we'll define our target, which is `median_house_value`. Again, we can pull it from our `california_housing_dataframe`: # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="l1NvvNkH8Kbt" # Define the label. targets = california_housing_dataframe["median_house_value"] # + [markdown] colab_type="text" id="4M-rTFHL2UkA" # ### Step 3: Configure the LinearRegressor # + [markdown] colab_type="text" id="fUfGQUNp7jdL" # Next, we'll configure a linear regression model using LinearRegressor. We'll train this model using the `GradientDescentOptimizer`, which implements Mini-Batch Stochastic Gradient Descent (SGD). The `learning_rate` argument controls the size of the gradient step. 
# # **NOTE:** To be safe, we also apply [gradient clipping](https://developers.google.com/machine-learning/glossary/#gradient_clipping) to our optimizer via `clip_gradients_by_norm`. Gradient clipping ensures the magnitude of the gradients do not become too large during training, which can cause gradient descent to fail. # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="ubhtW-NGU802" # Use gradient descent as the optimizer for training the model. my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0000001) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) # Configure the linear regression model with our feature columns and optimizer. # Set a learning rate of 0.0000001 for Gradient Descent. linear_regressor = tf.estimator.LinearRegressor( feature_columns=feature_columns, optimizer=my_optimizer ) # + [markdown] colab_type="text" id="-0IztwdK2f3F" # ### Step 4: Define the Input Function # + [markdown] colab_type="text" id="S5M5j6xSCHxx" # To import our California housing data into our `LinearRegressor`, we need to define an input function, which instructs TensorFlow how to preprocess # the data, as well as how to batch, shuffle, and repeat it during model training. # # First, we'll convert our *pandas* feature data into a dict of NumPy arrays. We can then use the TensorFlow [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) to construct a dataset object from our data, and then break # our data into batches of `batch_size`, to be repeated for the specified number of epochs (num_epochs). # # **NOTE:** When the default value of `num_epochs=None` is passed to `repeat()`, the input data will be repeated indefinitely. # # Next, if `shuffle` is set to `True`, we'll shuffle the data so that it's passed to the model randomly during training. The `buffer_size` argument specifies # the size of the dataset from which `shuffle` will randomly sample. 
# # Finally, our input function constructs an iterator for the dataset and returns the next batch of data to the LinearRegressor. # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="RKZ9zNcHJtwc" def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Trains a linear regression model of one feature. Args: features: pandas DataFrame of features targets: pandas DataFrame of targets batch_size: Size of batches to be passed to the model shuffle: True or False. Whether to shuffle the data. num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely Returns: Tuple of (features, labels) for next data batch """ # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(buffer_size=10000) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels # + [markdown] colab_type="text" id="wwa6UeA1V5F_" # **NOTE:** We'll continue to use this same input function in later exercises. For more # detailed documentation of input functions and the `Dataset` API, see the [TensorFlow Programmer's Guide](https://www.tensorflow.org/programmers_guide/datasets). # + [markdown] colab_type="text" id="4YS50CQb2ooO" # ### Step 5: Train the Model # + [markdown] colab_type="text" id="yP92XkzhU803" # We can now call `train()` on our `linear_regressor` to train the model. We'll wrap `my_input_fn` in a `lambda` # so we can pass in `my_feature` and `target` as arguments (see this [TensorFlow input function tutorial](https://www.tensorflow.org/get_started/input_fn#passing_input_fn_data_to_your_model) for more details), and to start, we'll # train for 100 steps. 
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="5M-Kt6w8U803" _ = linear_regressor.train( input_fn = lambda:my_input_fn(my_feature, targets), steps=100 ) # + [markdown] colab_type="text" id="7Nwxqxlx2sOv" # ### Step 6: Evaluate the Model # + [markdown] colab_type="text" id="KoDaF2dlJQG5" # Let's make predictions on that training data, to see how well our model fit it during training. # # **NOTE:** Training error measures how well your model fits the training data, but it **_does not_** measure how well your model **_generalizes to new data_**. In later exercises, you'll explore how to split your data to evaluate your model's ability to generalize. # # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="pDIxp6vcU809" # Create an input function for predictions. # Note: Since we're making just one prediction for each example, we don't # need to repeat or shuffle the data here. prediction_input_fn =lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False) # Call predict() on the linear_regressor to make predictions. predictions = linear_regressor.predict(input_fn=prediction_input_fn) # Format predictions as a NumPy array, so we can calculate error metrics. predictions = np.array([item['predictions'][0] for item in predictions]) # Print Mean Squared Error and Root Mean Squared Error. mean_squared_error = metrics.mean_squared_error(predictions, targets) root_mean_squared_error = math.sqrt(mean_squared_error) print("Mean Squared Error (on training data): %0.3f" % mean_squared_error) print("Root Mean Squared Error (on training data): %0.3f" % root_mean_squared_error) # + [markdown] colab_type="text" id="AKWstXXPzOVz" slideshow={"slide_type": "slide"} # Is this a good model? How would you judge how large this error is? # # Mean Squared Error (MSE) can be hard to interpret, so we often look at Root Mean Squared Error (RMSE) # instead. 
A nice property of RMSE is that it can be interpreted on the same scale as the original targets. # # Let's compare the RMSE to the difference of the min and max of our targets: # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="7UwqGbbxP53O" min_house_value = california_housing_dataframe["median_house_value"].min() max_house_value = california_housing_dataframe["median_house_value"].max() min_max_difference = max_house_value - min_house_value print("Min. Median House Value: %0.3f" % min_house_value) print("Max. Median House Value: %0.3f" % max_house_value) print("Difference between Min. and Max.: %0.3f" % min_max_difference) print("Root Mean Squared Error: %0.3f" % root_mean_squared_error) # + [markdown] colab_type="text" id="JigJr0C7Pzit" # Our error spans nearly half the range of the target values. Can we do better? # # This is the question that nags at every model developer. Let's develop some basic strategies to reduce model error. # # The first thing we can do is take a look at how well our predictions match our targets, in terms of overall summary statistics. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "test": {"output": "ignore", "timeout": 600}} colab_type="code" id="941nclxbzqGH" slideshow={"slide_type": "-"} calibration_data = pd.DataFrame() calibration_data["predictions"] = pd.Series(predictions) calibration_data["targets"] = pd.Series(targets) calibration_data.describe() # + [markdown] colab_type="text" id="E2-bf8Hq36y8" slideshow={"slide_type": "-"} # Okay, maybe this information is helpful. How does the mean value compare to the model's RMSE? How about the various quantiles? # # We can also visualize the data and the line we've learned. Recall that linear regression on a single feature can be drawn as a line mapping input *x* to output *y*. # # First, we'll get a uniform random sample of the data so we can make a readable scatter plot. 
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="SGRIi3mAU81H" sample = california_housing_dataframe.sample(n=300) # + [markdown] colab_type="text" id="N-JwuJBKU81J" # Next, we'll plot the line we've learned, drawing from the model's bias term and feature weight, together with the scatter plot. The line will show up red. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "test": {"output": "ignore", "timeout": 600}} colab_type="code" id="7G12E76-339G" slideshow={"slide_type": "-"} # Get the min and max total_rooms values. x_0 = sample["total_rooms"].min() x_1 = sample["total_rooms"].max() # Retrieve the final weight and bias generated during training. weight = linear_regressor.get_variable_value('linear/linear_model/total_rooms/weights')[0] bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights') # Get the predicted median_house_values for the min and max total_rooms values. y_0 = weight * x_0 + bias y_1 = weight * x_1 + bias # Plot our regression line from (x_0, y_0) to (x_1, y_1). plt.plot([x_0, x_1], [y_0, y_1], c='r') # Label the graph axes. plt.ylabel("median_house_value") plt.xlabel("total_rooms") # Plot a scatter plot from our data sample. plt.scatter(sample["total_rooms"], sample["median_house_value"]) # Display graph. plt.show() # + [markdown] colab_type="text" id="t0lRt4USU81L" # This initial line looks way off. See if you can look back at the summary stats and see the same information encoded there. # # Together, these initial sanity checks suggest we may be able to find a much better line. # + [markdown] colab_type="text" id="AZWF67uv0HTG" slideshow={"slide_type": "slide"} # ## Tweak the Model Hyperparameters # For this exercise, we've put all the above code in a single function for convenience. You can call the function with different parameters to see the effect. 
# # In this function, we'll proceed in 10 evenly divided periods so that we can observe the model improvement at each period. # # For each period, we'll compute and graph training loss. This may help you judge when a model is converged, or if it needs more iterations. # # We'll also plot the feature weight and bias term values learned by the model over time. This is another way to see how things converge. # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="wgSMeD5UU81N" def train_model(learning_rate, steps, batch_size, input_feature="total_rooms"): """Trains a linear regression model of one feature. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. A training step consists of a forward and backward pass using a single batch. batch_size: A non-zero `int`, the batch size. input_feature: A `string` specifying a column from `california_housing_dataframe` to use as input feature. """ periods = 10 steps_per_period = steps / periods my_feature = input_feature my_feature_data = california_housing_dataframe[[my_feature]] my_label = "median_house_value" targets = california_housing_dataframe[my_label] # Create feature columns. feature_columns = [tf.feature_column.numeric_column(my_feature)] # Create input functions. training_input_fn = lambda:my_input_fn(my_feature_data, targets, batch_size=batch_size) prediction_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False) # Create a linear regressor object. my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) linear_regressor = tf.estimator.LinearRegressor( feature_columns=feature_columns, optimizer=my_optimizer ) # Set up to plot the state of our model's line each period. 
plt.figure(figsize=(15, 6)) plt.subplot(1, 2, 1) plt.title("Learned Line by Period") plt.ylabel(my_label) plt.xlabel(my_feature) sample = california_housing_dataframe.sample(n=300) plt.scatter(sample[my_feature], sample[my_label]) colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)] # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print("Training model...") print("RMSE (on training data):") root_mean_squared_errors = [] for period in range (0, periods): # Train the model, starting from the prior state. linear_regressor.train( input_fn=training_input_fn, steps=steps_per_period ) # Take a break and compute predictions. predictions = linear_regressor.predict(input_fn=prediction_input_fn) predictions = np.array([item['predictions'][0] for item in predictions]) # Compute loss. root_mean_squared_error = math.sqrt( metrics.mean_squared_error(predictions, targets)) # Occasionally print the current loss. print(" period %02d : %0.2f" % (period, root_mean_squared_error)) # Add the loss metrics from this period to our list. root_mean_squared_errors.append(root_mean_squared_error) # Finally, track the weights and biases over time. # Apply some math to ensure that the data and line are plotted neatly. y_extents = np.array([0, sample[my_label].max()]) weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0] bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights') x_extents = (y_extents - bias) / weight x_extents = np.maximum(np.minimum(x_extents, sample[my_feature].max()), sample[my_feature].min()) y_extents = weight * x_extents + bias plt.plot(x_extents, y_extents, color=colors[period]) print("Model training finished.") # Output a graph of loss metrics over periods. plt.subplot(1, 2, 2) plt.ylabel('RMSE') plt.xlabel('Periods') plt.title("Root Mean Squared Error vs. 
Periods") plt.tight_layout() plt.plot(root_mean_squared_errors) # Output a table with calibration data. calibration_data = pd.DataFrame() calibration_data["predictions"] = pd.Series(predictions) calibration_data["targets"] = pd.Series(targets) display.display(calibration_data.describe()) print("Final RMSE (on training data): %0.2f" % root_mean_squared_error) # + [markdown] colab_type="text" id="kg8A4ArBU81Q" # ## Task 1: Achieve an RMSE of 180 or Below # # Tweak the model hyperparameters to improve loss and better match the target distribution. # If, after 5 minutes or so, you're having trouble beating an RMSE of 180, check the solution for a possible combination. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "test": {"output": "ignore", "timeout": 600}} colab_type="code" id="UzoZUSdLIolF" slideshow={"slide_type": "slide"} train_model( learning_rate=0.00002, steps=500, batch_size=5 ) # + [markdown] colab_type="text" id="ajVM7rkoYXeL" # ### Solution # + [markdown] colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="T3zmldDwYy5c" # Double-click __here__ for one possible solution. # # <!-- Your answer is below: # # train_model( # learning_rate=0.00002, # steps=500, # batch_size=5 # ) # --> # + [markdown] colab_type="text" id="M8H0_D4vYa49" # This is just one possible configuration; there may be other combinations of settings that also give good results. Note that in general, this exercise isn't about finding the *one best* setting, but about building your intuition for how tweaking the model configuration affects prediction quality. # + [markdown] colab_type="text" id="QU5sLyYTqzqL" slideshow={"slide_type": "slide"} # ### Is There a Standard Heuristic for Model Tuning? # # This is a commonly asked question. The short answer is that the effects of different hyperparameters are data dependent. So there are no hard-and-fast rules; you'll need to test on your data.
# # That said, here are a few rules of thumb that may help guide you: # # * Training error should steadily decrease, steeply at first, and should eventually plateau as training converges. # * If the training has not converged, try running it for longer. # * If the training error decreases too slowly, increasing the learning rate may help it decrease faster. # * But sometimes the exact opposite may happen if the learning rate is too high. # * If the training error varies wildly, try decreasing the learning rate. # * Lower learning rate plus larger number of steps or larger batch size is often a good combination. # * Very small batch sizes can also cause instability. First try larger values like 100 or 1000, and decrease until you see degradation. # # Again, never go strictly by these rules of thumb, because the effects are data dependent. Always experiment and verify. # + [markdown] colab_type="text" id="GpV-uF_cBCBU" slideshow={"slide_type": "slide"} # ## Task 2: Try a Different Feature # # See if you can do any better by replacing the `total_rooms` feature with the `population` feature. # # Don't take more than 5 minutes on this portion. # + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="YMyOxzb0ZlAH" # YOUR CODE HERE train_model( learning_rate=0.00002, steps=1000, batch_size=100, input_feature='population' ) # + [markdown] colab_type="text" id="ci1ISxxrZ7v0" # ### Solution # + [markdown] colab_type="text" id="ci1ISxxrZ7v0" # Double-click __here__ for one possible solution. # # <!-- Your answer is below: # # train_model( # learning_rate=0.00002, # steps=1000, # batch_size=5, # input_feature="population" # ) # --> # -
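# The rules of thumb above can be sanity-checked without TensorFlow at all. The cell below is an illustrative addition (not part of the original exercise): it runs plain-NumPy gradient descent on a one-feature toy problem, standing in for `train_model`, and shows a too-small learning rate converging slowly, a moderate one converging well, and a too-large one diverging.

```python
import numpy as np

def train_toy(learning_rate, steps=100):
    """Fit y = w*x with plain gradient descent; returns the final MSE.

    A framework-free stand-in for `train_model` above, so the effect
    of the learning rate is easy to see in isolation.
    """
    x = np.linspace(0.0, 1.0, 50)
    y = 2.0 * x  # the true weight is 2.0
    w = 0.0
    for _ in range(steps):
        grad = -2.0 * np.mean((y - w * x) * x)  # d(MSE)/dw
        w -= learning_rate * grad
    return float(np.mean((y - w * x) ** 2))

# Too small: training error decreases too slowly.
# Moderate: converges quickly. Too large: the error grows wildly.
for lr in [0.001, 1.0, 4.0]:
    print("learning_rate=%g -> final MSE=%.4g" % (lr, train_toy(lr)))
```

The same qualitative behavior is what you should look for in the loss curves plotted by `train_model`.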
exercises/01 First Steps with TensorFlow/first_steps_with_tensor_flow.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # This code is referenced from # https://github.com/probml/pmtk3/blob/master/demos/fisherDiscrimVowelDemo.m # Author:Srikar-Reddy-Jilugu(@always-newbie161) import numpy as np import matplotlib.pyplot as plt from scipy.io import loadmat try: from sklearn.decomposition import PCA except ModuleNotFoundError: # %pip install scikit-learn from sklearn.decomposition import PCA from sklearn.discriminant_analysis import LinearDiscriminantAnalysis try: from probml_utils.fisher_lda_fit import fisher_lda_fit except ModuleNotFoundError: # %pip install git+https://github.com/probml/probml-utils.git from probml_utils.fisher_lda_fit import fisher_lda_fit import probml_utils as pml import requests from io import BytesIO url = "https://github.com/probml/probml-data/blob/main/data/vowelTrain.mat?raw=true" r = requests.get(url, allow_redirects=True) rawdata = BytesIO(r.content) data = loadmat(rawdata) X = data["Xtrain"] y = data["ytrain"] nsamples, ndims = X.shape nclasses = np.max(y) def plot_projection_data(X, y, mu, nclasses, figure_num): """ 2d data is visualized with their respective symbol and color and the centroids of the data are plotted with black filled-in color. 
""" # To match the Hastie color scheme lightblue = [55, 155, 255] orange = [255, 128, 0] magenta = [255, 0, 128] green2 = [132, 199, 71] cyan = [61, 220, 176] yellow = [255, 255, 0] brown = [128, 64, 0] blue = [0, 0, 255] red = [255, 0, 0] black = [0, 0, 0] gray = [128, 128, 128] colors = [lightblue, blue, brown, magenta, orange, cyan, gray, yellow, black, red, green2] plt.figure(figure_num) for c in range(0, nclasses): colors[c] = [col / 255 for col in colors[c]] ndx = np.where(y == (c + 1)) plt.scatter(X[ndx, 0], X[ndx, 1], marker=symbols[c], s=30, facecolor="none", edgecolor=colors[c]) plt.scatter(mu[c, 0], mu[c, 1], marker=symbols[c], s=40, facecolor="black") # ------------------------ K = 2 # PCA projection pca = PCA(K) X_pca = pca.fit_transform(X) X_pca = -X_pca # make it look like the Hastie figure muC = np.zeros((nclasses, ndims)) for c in range(0, nclasses): muC[c, :] = np.mean((X[np.where(y == (c + 1))[0], :]), axis=0) muC2d_pca = pca.fit_transform(muC) symbols = "+ovd*.xs^d><ph" plot_projection_data(X_pca, y, muC2d_pca, nclasses, figure_num=0) plt.title("PCA projection of vowel data to 2d") pml.savefig("fisherDiscrimVowelPCA.pdf") # ------------------------ # FLDA projection W = fisher_lda_fit(X, y, K) W[:, 0] = -W[:, 0] # make it look like the Hastie figure X_lda = X @ W muC2d_lda = muC @ W plot_projection_data(X_lda, y, muC2d_lda, nclasses, figure_num=1) plt.title("FLDA projection of vowel data to 2d") pml.savefig("fisherDiscrimVowelLDA.pdf") plt.show() # -
notebooks/book1/09/fisher_discrim_vowel.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/darshana-22/Music-Genre-Classification/blob/main/Implementing_a_Neuron.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="9dOtwQqVDvOC" import math # + id="5kgten3yDZxj" def sigmoid(x): y = 1.0 / (1 + math.exp(-x)) return y # + id="_AHh3N97C22r" def activate(inputs, weights): h = 0 for x, w in zip(inputs, weights): h += x*w return sigmoid(h) # + colab={"base_uri": "https://localhost:8080/"} id="YFjYOCeA_ucz" outputId="7d1be4c2-9f7e-4dcf-f66b-b021348fc32b" inputs = [0.5, 0.3, 0.2] weights = [0.4, 0.7, 0.3] output = activate(inputs, weights) print(output) # + id="XeT2DsPRJ77G" # multilayer perceptron import numpy as np # + id="QFig5TaUJ_J1" class MLP: def __init__(self, num_inputs=3, num_hidden=[3,5], num_outputs=2): self.num_inputs = num_inputs self.num_hidden = num_hidden self.num_outputs = num_outputs layers = [self.num_inputs] + self.num_hidden + [self.num_outputs] # weights self.weights = [] for i in range(len(layers)-1): w = np.random.rand(layers[i], layers[i+1]) self.weights.append(w) def forward_propagate(self, inputs): activations = inputs for w in self.weights: net_inputs = np.dot(activations, w) activations = self._sigmoid(net_inputs) return activations def _sigmoid(self, x): return 1 / (1 + np.exp(-x)) # + colab={"base_uri": "https://localhost:8080/"} id="TrDUVmI-ylXx" outputId="a298a998-34ca-452b-c45a-0e69420b1385" mlp = MLP() inputs = np.random.rand(mlp.num_inputs) output = mlp.forward_propagate(inputs) print("The network input is: {}".format(inputs)) print("The network output is: {}".format(output)) # + id="jqQjtwMhTRwk"
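# As a quick arithmetic check (added here for illustration), the single-neuron output above can be reproduced by hand: the net input is 0.5·0.4 + 0.3·0.7 + 0.2·0.3 = 0.47, and the sigmoid squashes it to about 0.615.

```python
import math

# net input h: sum of input*weight products
h = 0.5 * 0.4 + 0.3 * 0.7 + 0.2 * 0.3   # = 0.47
y = 1.0 / (1.0 + math.exp(-h))          # sigmoid(0.47) ~ 0.615
print(h, y)
```

This matches the value printed by `activate(inputs, weights)` above.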
Implementing_a_Neuron.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Logistic Regression for image classification with PyTorch # # Credits: \ # https://jovian.ai/aakashns/03-logistic-regression import torch import torchvision from torchvision.datasets import MNIST # Download training dataset dataset = MNIST(root='data/', download=True) len(dataset) test_dataset = MNIST(root='data/', train=False) len(test_dataset) import matplotlib.pyplot as plt # %matplotlib inline image, label = dataset[0] plt.imshow(image, cmap='gray') print('Label:', label) # We need to convert the images into tensors. We can do this by specifying a transform while creating our dataset. PyTorch datasets allow us to specify one or more transformation functions which are applied to the images as they are loaded. `torchvision.transforms` contains many such predefined functions, and we'll use the `ToTensor` transform to convert images into PyTorch tensors. import torchvision.transforms as transforms # MNIST dataset (images and labels) dataset = MNIST(root='data/', train=True, transform=transforms.ToTensor()) img_tensor, label = dataset[0] print(img_tensor.shape, label) # The image is now converted to a 1x28x28 tensor. The first dimension is used to keep track of the color channels. Since images in the MNIST dataset are grayscale, there's just one channel. Other datasets have images with color, in which case there are 3 channels: red, green and blue (RGB). print(img_tensor[:,10:15,10:15]) print(torch.max(img_tensor), torch.min(img_tensor)) # ### Training and Validation Datasets # + from torch.utils.data import random_split train_ds, val_ds = random_split(dataset, [50000, 10000]) len(train_ds), len(val_ds) # - # We can now create data loaders to help us load the data in batches. We'll use a batch size of 128.
# + from torch.utils.data import DataLoader batch_size = 128 train_loader = DataLoader(train_ds, batch_size, shuffle=True) val_loader = DataLoader(val_ds, batch_size) # - # ### Model # # Since `nn.Linear` expects each training example to be a vector, each `1x28x28` image tensor needs to be flattened out into a vector of size `784 (28*28)`, before being passed into the model. # + import torch.nn as nn input_size = 28*28 num_classes = 10 # Logistic regression model model = nn.Linear(input_size, num_classes) # - print(model.weight.shape) model.weight print(model.bias.shape) model.bias # Our images are of the shape `1x28x28`, but we need them to be vectors of size 784, i.e. we need to flatten them out. We'll use the `.reshape` method of a tensor, which will allow us to efficiently 'view' each image as a flat vector, without really changing the underlying data. To include this additional functionality within our model, we need to define a custom model, by extending the `nn.Module` class from PyTorch. # + class MnistModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(input_size, num_classes) def forward(self, xb): xb = xb.reshape(-1, 784) out = self.linear(xb) return out model = MnistModel() # - # Note that the model no longer has `.weight` and `.bias` attributes (as they are now inside the `.linear` attribute), but it does have a `.parameters` method which returns a list containing the weights and bias, and can be used by a PyTorch optimizer. print(model.linear.weight.shape, model.linear.bias.shape) list(model.parameters()) # + for images, labels in train_loader: outputs = model(images) break print('outputs.shape : ', outputs.shape) print('Sample outputs :\n', outputs[:2].data) # - # The softmax function is included in the `torch.nn.functional` package, and requires us to specify a dimension along which the softmax must be applied.
# + import torch.nn.functional as F # Apply softmax for each output row probs = F.softmax(outputs, dim=1) # Look at sample probabilities print("Sample probabilities:\n", probs[:2].data) # Add up the probabilities of an output row print("Sum: ", torch.sum(probs[0]).item()) # - max_probs, preds = torch.max(probs, dim=1) print(preds) print(max_probs) labels # ### Evaluation Metric and Loss Function def accuracy(outputs, labels): _, preds = torch.max(outputs, dim=1) return torch.tensor(torch.sum(preds == labels).item() / len(preds)) accuracy(outputs, labels) # + loss_fn = F.cross_entropy # Loss for current batch of data loss = loss_fn(outputs, labels) print(loss) # - # ### Training the model # + class MnistModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(input_size, num_classes) def forward(self, xb): xb = xb.reshape(-1, 784) out = self.linear(xb) return out def training_step(self, batch): images, labels = batch out = self(images) # Generate predictions loss = F.cross_entropy(out, labels) # Calculate loss return loss def validation_step(self, batch): images, labels = batch out = self(images) # Generate predictions loss = F.cross_entropy(out, labels) # Calculate loss acc = accuracy(out, labels) # Calculate accuracy return {'val_loss': loss, 'val_acc': acc} def validation_epoch_end(self, outputs): batch_losses = [x['val_loss'] for x in outputs] epoch_loss = torch.stack(batch_losses).mean() # Combine losses batch_accs = [x['val_acc'] for x in outputs] epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()} def epoch_end(self, epoch, result): print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc'])) model = MnistModel() # + def evaluate(model, val_loader): outputs = [model.validation_step(batch) for batch in val_loader] return model.validation_epoch_end(outputs) def fit(epochs, lr, model, train_loader, val_loader, 
opt_func=torch.optim.SGD): history = [] optimizer = opt_func(model.parameters(), lr) for epoch in range(epochs): # Training Phase for batch in train_loader: loss = model.training_step(batch) loss.backward() optimizer.step() optimizer.zero_grad() # Validation phase result = evaluate(model, val_loader) model.epoch_end(epoch, result) history.append(result) return history # - result0 = evaluate(model, val_loader) result0 ## Train for 5 epochs: history1 = fit(5, 0.001, model, train_loader, val_loader) ## 5 more epochs: history2 = fit(5, 0.001, model, train_loader, val_loader) history3 = fit(5, 0.001, model, train_loader, val_loader) history4 = fit(5, 0.001, model, train_loader, val_loader) history = [result0] + history1 + history2 + history3 + history4 accuracies = [result['val_acc'] for result in history] plt.plot(accuracies, '-x') plt.xlabel('epoch') plt.ylabel('accuracy') plt.title('Accuracy vs. No. of epochs') # ### Testing with individual images test_dataset = MNIST(root='data/', train=False, transform=transforms.ToTensor()) img, label = test_dataset[0] plt.imshow(img[0], cmap='gray') print('Shape:', img.shape) print('Label:', label) img.unsqueeze(0).shape # `img.unsqueeze` simply adds another dimension at the beginning of the `1x28x28` tensor, making it a `1x1x28x28` tensor, which the model views as a batch containing a single image. def predict_image(img, model): xb = img.unsqueeze(0) yb = model(xb) _, preds = torch.max(yb, dim=1) return preds[0].item() img, label = test_dataset[0] plt.imshow(img[0], cmap='gray') print('Label:', label, ', Predicted:', predict_image(img, model)) test_loader = DataLoader(test_dataset, batch_size=256) result = evaluate(model, test_loader) result # ### Saving and loading the model torch.save(model.state_dict(), 'mnist-logistic.pth') model.state_dict() # To load the model weights, we can instantiate a new object of the class `MnistModel`, and use the `.load_state_dict` method.
model2 = MnistModel() model2.load_state_dict(torch.load('mnist-logistic.pth')) model2.state_dict() test_loader = DataLoader(test_dataset, batch_size=256) result = evaluate(model2, test_loader) result
Tutorial/03-logistic-regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Diagnostics for drift correction # ### Configure environment # + import os os.chdir("/home/hbabcock/Data/storm_analysis/sa_diagnostics/drift_correction") print(os.getcwd()) import numpy numpy.random.seed(1) # - import storm_analysis.diagnostics.drift_correction.settings as settings import storm_analysis.diagnostics.drift_correction.configure as configure import storm_analysis.diagnostics.drift_correction.make_data as makeData import storm_analysis.diagnostics.drift_correction.analyze_data as analyzeData import storm_analysis.diagnostics.drift_correction.collate as collate # ### Configure configure.configure() # ### Make Data makeData.makeData() # ### Analyze data # %time analyzeData.analyzeData() # ### Reference results # + active="" # ... # # Analysis complete # Analysis completed in 183.81 seconds. # # ... # # Analysis complete # Analysis completed in 179.50 seconds. # # 2 directories analyzed in 363.31 seconds. # CPU times: user 5min 52s, sys: 11.7 s, total: 6min 4s # Wall time: 6min 3s # - # ### Collate data collate.collate() # ### Reference results # + active="" # 2019-10-30 # commit <PASSWORD> # # Drift correction RMS error (nanometers): # drift_xy.txt # X 1.247 # Y 1.473 # Z 0.570 # # drift_xyz.txt # X 1.452 # Y 1.502 # Z 1.834
storm_analysis/diagnostics/jpy_notebooks/drift_correction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # AI4M Course 2 Week 4 lecture notebook # ## Outline # # [One-hot encode categorical variables](#one-hot-encoding) # # [Hazard function](#hazard-function) # # [Permissible pairs with censoring and time](#permissible-pairs) # <a name="one-hot-encoding"></a> # ## One-hot encode categorical variables import pandas as pd # ### Which features are categorical? df = pd.DataFrame({'ascites': [0,1,0,1], 'edema': [0.5,0,1,0.5], 'stage': [3,4,3,4], 'cholesterol': [200.5,180.2,190.5,210.3] }) df # In this small sample dataset, 'ascites', 'edema', and 'stage' are categorical variables: # - ascites: value is either 0 or 1 # - edema: value is either 0, 0.5 or 1 # - stage: is either 3 or 4 # # 'cholesterol' is a continuous variable, since it can be any decimal value greater than zero. # ### Which categorical variables to one-hot encode? # # Of the categorical variables, which one should be one-hot encoded (turned into dummy variables)? # # - ascites: is already 0 or 1, so there is no need to one-hot encode it. # - We could one-hot encode ascites, but it is not necessary when there are just two possible values that are 0 or 1. # - When values are 0 or 1, 1 means a disease is present, and 0 means normal (no disease). # - edema: Edema is swelling in any part of the body. This data set's 'edema' feature has 3 categories, so we will want to one-hot encode it so that there is one feature column for each of the three possible values. # - 0: No edema # - 0.5: Patient has edema, but did not receive diuretic therapy (which is used to treat edema) # - 1: Patient has edema, despite also receiving diuretic therapy (so the condition may be more severe). # - stage: has values of 3 and 4. We will want to one-hot encode these because they are not values of 0 or 1.
# - the "stage" of cancer is either 0, 1, 2, 3, or 4. # - Stage 0 means there is no cancer. # - Stage 1 is cancer that is limited to a small area of the body, also known as "early stage cancer". # - Stage 2 is cancer that has spread to nearby tissues. # - Stage 3 is cancer that has spread to nearby tissues, but more so than stage 2. # - Stage 4 is cancer that has spread to distant parts of the body, also known as "metastatic cancer". # - We could convert stage 3 to 0 and stage 4 to 1 for the sake of training a model. This may be confusing for anyone reviewing our code and data, though, so we will one-hot encode the 'stage'. # - You'll actually see that we end up with 0 representing stage 3 and 1 representing stage 4 (see the next section). # ### Multi-collinearity of one-hot encoded features # # Let's see what happens when we one-hot encode the 'stage' feature. # # We'll use [pandas.get_dummies](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html) df_stage = pd.get_dummies(data=df, columns=['stage'] ) df_stage[['stage_3','stage_4']] # What do you notice about the 'stage_3' and 'stage_4' features? # # Given that stage 3 and stage 4 are the only possible values for stage, # if you know that patient 0 (row 0) has stage_3 set to 1, # what can you say about that same patient's value for the stage_4 feature? # - When stage_3 is 1, then stage_4 must be 0 # - When stage_3 is 0, then stage_4 must be 1 # # This means that one of the feature columns is actually redundant. We should drop one of these features to avoid multicollinearity (where one feature can predict another feature). df_stage df_stage_drop_first = df_stage.drop(columns='stage_3') df_stage_drop_first # Note, there's actually a parameter of pandas.get_dummies() that lets you drop the first one-hot encoded column. You'll practice doing this in this week's assignment! # ### Make the numbers decimals # # We can cast the one-hot encoded values as floats by setting the data type to numpy.float64.
# - This is helpful if we are feeding data into a model, where the model expects a certain data type (such as a 64-bit float, 32-bit float, etc.) import numpy as np df_stage = pd.get_dummies(data=df, columns=['stage'], ) df_stage[['stage_4']] df_stage_float64 = pd.get_dummies(data=df, columns=['stage'], dtype=np.float64 ) df_stage_float64[['stage_4']] # ### This is the end of this practice section. # # Please continue on with the lecture videos! # # --- # <a name="hazard-function"></a> # ## Hazard function # Let's say we fit the hazard function # $$ # \lambda(t, x) = \lambda_0(t)e^{\theta^T X_i} # $$ # # So that we have the coefficients $\theta$ for the features in $X_i$. # # If you have a new patient, let's predict their hazard $\lambda(t,x)$. import numpy as np import pandas as pd lambda_0 = 1 coef = np.array([0.5,2.]) coef X = pd.DataFrame({'age': [20,30,40], 'cholesterol': [180,220,170] }) X # - First, let's multiply the coefficients by the features. # - Check the shapes of the coefficients and the features to decide which one to transpose. coef.shape X.shape # It looks like the coefficient is a 1D array, so transposing it won't do anything. # - We can transpose the X so that we're multiplying a (2,) array by a (2,3) dataframe. # # So the formula looks more like this (transpose $X_i$ instead of $\theta$): # $$ # \lambda(t, x) = \lambda_0(t)e^{\theta X_i^T} # $$ # # - Let's multiply $\theta X_i^T$ np.dot(coef,X.T) # Calculate the hazard for the three patients (there are 3 rows in X) lambdas = lambda_0 * np.exp(np.dot(coef,X.T)) patients_df = X.copy() patients_df['hazards'] = lambdas patients_df # ### This is the end of this practice section. # # Please continue on with the lecture videos!
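# Before moving on, here is a standard-library-only recap (added for illustration) of the hazard computation above: for each patient, the hazard is $\lambda_0 e^{\theta \cdot x}$. With these un-normalized toy features the hazards come out astronomically large, which hints at why features are usually standardized first.

```python
import math

lambda_0 = 1.0
theta = [0.5, 2.0]                        # coefficients for age, cholesterol
patients = [[20, 180], [30, 220], [40, 170]]

# theta . x for each patient, then the hazard lambda_0 * exp(theta . x)
scores = [sum(t * xi for t, xi in zip(theta, x)) for x in patients]
hazards = [lambda_0 * math.exp(s) for s in scores]
for s, hz in zip(scores, hazards):
    print("theta.x = %.1f  hazard = %.3g" % (s, hz))
```

The scores (370, 455, 360) match the `np.dot(coef, X.T)` result above.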
# # --- # <a name="permissible-pairs"></a> # ## Permissible pairs with censoring and time import pandas as pd df = pd.DataFrame({'time': [2,4,2,4,2,4,2,4], 'event': [1,1,1,1,0,1,1,0], 'risk_score': [20,40,40,20,20,40,40,20] }) df # We made this data sample so that you can compare pairs of patients visually. # ### When at least one patient is not censored # - A pair may be permissible if at least one patient is not censored. # - If both patients are censored, then they are definitely not a permissible pair. pd.concat([df.iloc[0:1],df.iloc[1:2]],axis=0) if df['event'][0] == 1 or df['event'][1] == 1: print(f"May be a permissible pair: 0 and 1") else: print(f"Definitely not permissible pair: 0 and 1") pd.concat([df.iloc[4:5],df.iloc[7:8]],axis=0) if df['event'][4] == 1 or df['event'][7] == 1: print(f"May be a permissible pair: 4 and 7") else: print(f"Definitely not permissible pair: 4 and 7") # ### If neither patient was censored: # - If both patients had an event (neither one was censored), this is definitely a permissible pair. pd.concat([df.iloc[0:1],df.iloc[1:2]],axis=0) if df['event'][0] == 1 and df['event'][1] == 1: print(f"Definitely a permissible pair: 0 and 1") else: print(f"May be a permissible pair: 0 and 1") # ### When one patient is censored: # - If we know that one patient was censored and one had an event, then we can check if the censored patient's time is at least as great as the uncensored patient's time. If so, it's a permissible pair as well. pd.concat([df.iloc[6:7],df.iloc[7:8]],axis=0) if df['time'][7] >= df['time'][6]: print(f"Permissible pair: Censored patient 7 lasted at least as long as uncensored patient 6") else: print("Not a permissible pair") pd.concat([df.iloc[4:5],df.iloc[5:6]],axis=0) if df['time'][4] >= df['time'][5]: print(f"Permissible pair") else: print("Not a permissible pair: censored patient 4 was censored before patient 5 had their event") # ### This is the end of this practice section.
# # Please continue on with the lecture videos! # # ---
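# The pairwise checks above can be consolidated into a single helper. This is an illustrative sketch (using plain lists instead of the DataFrame for brevity), not part of the original notebook:

```python
# Same toy data as the DataFrame above.
times  = [2, 4, 2, 4, 2, 4, 2, 4]
events = [1, 1, 1, 1, 0, 1, 1, 0]

def is_permissible(i, j):
    """A pair is permissible if neither patient is censored, or if the
    censored patient's time is at least the uncensored patient's time."""
    if events[i] == 1 and events[j] == 1:
        return True                  # both had events
    if events[i] == 0 and events[j] == 0:
        return False                 # both censored
    if events[i] == 0:               # only patient i is censored
        return times[i] >= times[j]
    return times[j] >= times[i]      # only patient j is censored

print(is_permissible(0, 1))  # True: both had events
print(is_permissible(6, 7))  # True: censored patient 7 lasted as long as 6
print(is_permissible(4, 5))  # False: 4 was censored before 5's event
```

These results agree with the printed checks for pairs (0, 1), (6, 7), and (4, 5) above.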
AI for Medical Prognosis/Week 4/.ipynb_checkpoints/C2_W4_lecture-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import os
import glob

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# import seaborn as sns
import plotly
import plotly_express as px
# import plotly.plotly as py
from plotly import tools
from plotly.offline import plot
import plotly.graph_objs as go
import plotly.io as pio
from plotly.subplots import make_subplots
from os import listdir
from os.path import isfile, join
from pathlib import Path
import country_converter as coco
import ipywidgets as widgets
# from floweaver import *
# # %matplotlib inline
# from IPython.display import display
# from ipysankeywidget import SankeyWidget
# -

# Setting working directory
os.chdir('C:\\Users\\KarlaC\\MAT-DP\\')

# +
# Make folders for figures
if not os.path.exists('figures'):
    os.makedirs('figures')
if not os.path.exists('figures\\countries'):
    os.makedirs('figures\\countries')
if not os.path.exists('outputs'):
    os.makedirs('outputs')
# -

# # Load E+M data

# +
# Define matrices and load data

# country energy projection [kWh]
C = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                  sheet_name='Matrices', skiprows=2, usecols="C:I", nrows=1)

# material per energy technology [g/kWh]
M = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                  sheet_name='Matrices', skiprows=2, usecols="K:AG", nrows=10)
M = M.rename(columns={'[g/kWh]': 'tech'}).set_index('tech')

# embodied emissions (GHG) per material [gCO2e/g]
E = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                  sheet_name='Matrices', skiprows=2, usecols="AJ", nrows=22)

# water usage per material [l/kg]
W = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                  sheet_name='Matrices', skiprows=2, usecols="AK", nrows=22)

# recycling rate in current supply per material [%]
R = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                  sheet_name='Matrices', skiprows=2, usecols="AL", nrows=22)

# costs per material [€/kg]
K = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                  sheet_name='Matrices', skiprows=2, usecols="AM", nrows=22)
K = K / 1000

# Calculating all effects
# E: emissions per energy technology [gCO2/kWh]
# W: water usage per energy technology
# R: recycling rate per energy technology
# K: costs per energy technology
for ef, ne in zip([E, W, R, K], ['E', 'W', 'R', 'K']):
    # effect per energy technology e.g. [gCO2/kWh]
    globals()['{}_tech'.format(ne)] = M.dot(ef.values)
    globals()['{}_tech_sep'.format(ne)] = M.multiply(ef.T.values)
    # improve later: add the index to C so the tech's names are used in calc
    # total factor of country e.g. embodied emissions [gCO2]
    globals()['{}_country'.format(ne)] = C.dot(M[0:len(C.T)].values).dot(ef.values)
    globals()['{}_country_sep'.format(ne)] = globals()['{}_tech_sep'.format(ne)][0:len(C.T)].multiply(C.T.values)
    globals()['country_tech_{}'.format(ne)] = C.T.values * globals()['{}_tech'.format(ne)][0:len(C.T)]
    globals()['country_tech_{}'.format(ne)] = globals()['country_tech_{}'.format(ne)].T
# -

# # Load E+M data for all countries

# +
# Df w all countries and scenarios
dfC = pd.read_excel(r'data/EnergyProjection.xlsx',
                    sheet_name='All countries', skiprows=2, usecols="B:R")

# material per energy technology [g/kWh]
cM = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                   sheet_name='Matrices (country)', skiprows=2, usecols="R:AN", nrows=15)
cM = cM.rename(columns={'[g/kWh]': 'tech'}).set_index('tech')

# embodied emissions (GHG) per material [gCO2e/g]
cE = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                   sheet_name='Matrices (country)', skiprows=2, usecols="AQ", nrows=22)

# water usage per material [l/kg]
cW = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                   sheet_name='Matrices (country)', skiprows=2, usecols="AR", nrows=22)

# recycling rate in current supply per material [%]
cR = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                   sheet_name='Matrices (country)', skiprows=2, usecols="AS", nrows=22)

# costs per material [€/kg]
cK = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                   sheet_name='Matrices (country)', skiprows=2, usecols="AT", nrows=22)
cK = cK / 1000
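The single-country cell above is a chain of matrix products whose units cancel: energy [kWh] times material intensity [g/kWh] gives material mass [g], and mass times an emission factor [gCO2e/g] gives embodied emissions [gCO2e]. A minimal numpy sketch with made-up numbers (2 technologies, 3 materials — not values from the spreadsheet):

```python
import numpy as np

# Toy data, not from the Excel model: 2 technologies, 3 materials.
C_toy = np.array([[100.0, 50.0]])           # energy per technology [kWh]
M_toy = np.array([[2.0, 0.5, 1.0],          # material intensity [g/kWh]
                  [1.0, 3.0, 0.0]])
E_toy = np.array([[0.1], [0.2], [0.05]])    # emission factor [gCO2e/g]

# [kWh] . [g/kWh] -> [g] per material, then . [gCO2e/g] -> total [gCO2e]
masses = C_toy.dot(M_toy)                   # [[250., 200., 100.]]
total = masses.dot(E_toy)                   # [[70.]]
print(total)
```

This is the same `C.dot(M...).dot(ef.values)` pattern the notebook applies per impact factor (emissions, water, recycling, cost).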
# total factor of all countries e.g. embodied emissions [gCO2]

# df w all countries, years and scenarios
dfC['Scenario'] = dfC['Scenario'].replace(np.nan, 'Baseline')
dfC['Year'] = dfC['Year'].replace(np.nan, 0).astype('int')

# Ordering dfC columns for the same order as M
coln = list(cM.index)
# +
# Df w TEMBA results
shnames = ['results_ref', 'results_1.5deg', 'results_2.0deg']
power = ['Power Generation (Aggregate)', 'Power Generation Capacity (Aggregate)',
         'New power generation capacity (Aggregate)']
upow = ['PJ', 'GW', 'GW']
results = pd.DataFrame(columns=['variable', 'scenario', 'country', 'parameter'] +
                               [str(y) for y in range(2015, 2066)])
for sn in shnames:
    globals()[sn] = pd.read_csv(r'data/{}.csv'.format(sn))
    globals()[sn] = globals()[sn].drop(columns='Unnamed: 0')
    globals()[sn] = globals()[sn][globals()[sn]['parameter'].isin(power)]
    results = results.append(globals()[sn])

dtech = pd.DataFrame(list(zip(['Wind (Onshore)', 'Wind (Offshore)', 'Solar CSP', 'Solar PV',
                               'Hydro', 'Geothermal', 'Gas CCS', 'Oil', 'Gas', 'Coal',
                               'Biomass', 'Nuclear', 'BECCS', 'Hydrogen', 'Coal CCS'],
                              ['Wind', 'Wind', 'Solar CSP', 'Solar PV', 'Hydro', 'Geothermal',
                               'Gas with ccs', 'Oil', 'Gas', 'Coal', 'Biomass', 'Nuclear',
                               'Biomass with ccs', 'Hydrogen', 'Coal with ccs'])),
                     columns=['tech', 'variable'])
dtech = dtech.set_index('variable')['tech'].to_dict()
results['tech'] = results['variable'].map(dtech)

# 'power_trade' include as non-country embodied emissions
results_df = pd.melt(results.drop(columns='variable'),
                     id_vars=['tech', 'scenario', 'country', 'parameter'],
                     var_name='Year', value_name='Value')
results_piv = pd.pivot_table(results_df, values='Value',
                             index=['Year', 'scenario', 'country', 'parameter'],
                             columns='tech', aggfunc=np.sum)
results_piv = results_piv.reset_index()

generation_df = results_df[results_df['parameter'] == 'Power Generation (Aggregate)']
# TEMBA generation is in PJ; convert to kWh
generation_df['Value'] = [x * 277778000 for x in generation_df['Value']]
generation_piv = pd.pivot_table(generation_df, values='Value',
                                index=['Year', 'scenario', 'country', 'parameter'],
                                columns='tech', aggfunc=np.sum)
generation_piv = generation_piv.reset_index()
generation_piv = generation_piv.drop(columns='parameter')
generation_piv.rename(columns={'scenario': 'Scenario', 'country': 'Country'}, inplace=True)
# -

# ## Appending Uganda, UK and TEMBA data

dfC['Coal CCS'] = 0
dfC = dfC.append(generation_piv)
dfC = dfC[['Year', 'Scenario', 'Country', 'Wind (Onshore)', 'Wind (Offshore)', 'Solar CSP',
           'Solar PV', 'Hydro', 'Oil', 'Gas CCS', 'Gas', 'Nuclear', 'Geothermal', 'Coal',
           'Coal CCS', 'Biomass', 'BECCS', 'Hydrogen']]

# ## Calculating mass of materials for all countries

mat_country = pd.DataFrame(columns=['Year', 'Scenario', 'Country', 'tech', 'Aluminium',
                                    'Bentonite', 'Carbon Fiber', 'Cast Iron', 'Cement',
                                    'Ceramics', 'Concrete', 'Copper', 'Epoxy', 'EVA ',
                                    'Fibre Glass', 'Glass', 'Lubricant', 'Non-Ferrous Metal',
                                    'Paint', 'Plastic', 'PVC', 'Resin', 'Sand', 'Silicon',
                                    'Steel', 'Stainless Steel'])
coln = ['Wind (Onshore)', 'Wind (Offshore)', 'Solar CSP', 'Solar PV', 'Hydro', 'Oil',
        'Gas CCS', 'Gas', 'Nuclear', 'Geothermal', 'Coal', 'Biomass', 'BECCS', 'Hydrogen',
        'Coal CCS']
# [coln]
for c in dfC['Country'].unique():
    for sc in dfC[dfC['Country'] == c]['Scenario'].unique():
        for y in dfC[(dfC['Country'] == c) & (dfC['Scenario'] == sc)]['Year'].unique():
            mat_c = dfC[(dfC['Country'] == c) & (dfC['Scenario'] == sc) &
                        (dfC['Year'] == y)][coln].T.values * cM
            # mat_c = mat_c.sum().reset_index()
            mat_c['Country'] = c
            mat_c['Year'] = y
            mat_c['Scenario'] = sc
            mat_country = mat_country.append(mat_c.reset_index())
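The 277778000 factor applied to the TEMBA generation values is the PJ-to-kWh conversion, rounded to six significant figures. A quick check from the definitions (1 PJ = 1e15 J, 1 kWh = 3.6e6 J):

```python
# Derive the PJ -> kWh factor from SI definitions and compare with the
# rounded constant 277778000 used in the notebook.
PJ_IN_J = 1e15
KWH_IN_J = 3.6e6

kwh_per_pj = PJ_IN_J / KWH_IN_J   # ≈ 277,777,778 kWh per PJ
print(kwh_per_pj)
```

The rounded constant differs from the exact value by well under 0.001%, which is negligible next to the input-data uncertainties handled in the error sections below.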
# val = pd.DataFrame(row).fillna(0).T.dot(cM.fillna(0)[0:len(dfC.iloc[:, 3:].T)].values).dot(cE.values)

mat_country = mat_country[['Year', 'Scenario', 'Country', 'tech', 'Aluminium', 'Bentonite',
                           'Carbon Fiber', 'Cast Iron', 'Cement', 'Ceramics', 'Concrete',
                           'Copper', 'Epoxy', 'EVA ', 'Fibre Glass', 'Glass', 'Lubricant',
                           'Non-Ferrous Metal', 'Paint', 'Plastic', 'PVC', 'Resin', 'Sand',
                           'Silicon', 'Steel', 'Stainless Steel']]
mat_country.to_csv(r'outputs/massmat_bytech_bycountry.csv', index=False)

# ## Calculating E, W, K, R for all countries

# +
# Calculating and transforming country data
techcols = ['tech', 'Country', 'Scenario', 'Year', 'Aluminium', 'Bentonite', 'Carbon Fiber',
            'Cast Iron', 'Cement', 'Ceramics', 'Concrete', 'Copper', 'Epoxy', 'EVA ',
            'Fibre Glass', 'Glass', 'Lubricant', 'Non-Ferrous Metal', 'Paint', 'Plastic',
            'PVC', 'Resin', 'Sand', 'Silicon', 'Steel', 'Stainless Steel']

for ef, ne in zip([cE, cW, cR, cK], ['E', 'W', 'R', 'K']):
    # use the loop's factor matrix ef (not cE) so each impact gets its own factors
    globals()['df{}_tech'.format(ne)] = cM.fillna(0).dot(ef.values)
    globals()['df{}_tech_sep'.format(ne)] = cM.fillna(0).multiply(ef.T.values)
    globals()['df{}c_tech_sep'.format(ne)] = pd.DataFrame(columns=techcols)
    # 'df{ne}_{country}_{scenario}_{year}_sep'
    for c in dfC['Country'].unique():
        for sc in dfC[dfC['Country'] == c]['Scenario'].unique():
            for y in dfC[(dfC['Country'] == c) & (dfC['Scenario'] == sc)]['Year'].unique():
                df_sep = globals()['df{}_tech_sep'.format(ne)].reindex(index=coln).\
                    multiply(dfC[(dfC['Country'] == c) &
                                 (dfC['Scenario'] == sc) &
                                 (dfC['Year'] == y)][coln].T.values)
                df_sep = df_sep.reset_index()
                df_sep['Country'] = c
                df_sep['Scenario'] = sc
                df_sep['Year'] = y
                # df_sep used to be called globals()['df{0}_{1}_{2}_{3}_sep'.format(ne, c, sc[0:3], str(y))]
                globals()['df{}c_tech_sep'.format(ne)] = globals()['df{}c_tech_sep'.format(ne)].append(df_sep)
    globals()['df{}c_tech_sep'.format(ne)] = globals()['df{}c_tech_sep'.format(ne)][techcols]
    globals()['df{}c_tech_sep'.format(ne)].to_csv(r'outputs/{}_matbytech_bycountry.csv'.format(ne))

    # 'df{}_country'.format(ne)
    globals()['df{}_country'.format(ne)] = dfC[['Year', 'Scenario', 'Country']]
    globals()['df{}_country'.format(ne)]['value'] = 0
    # 'dfcountry_tech_{}'.format(ne)
    globals()['dfcountry_tech_{}'.format(ne)] = dfC.copy()
    for col in globals()['dfcountry_tech_{}'.format(ne)].iloc[:, 3:].columns:
        globals()['dfcountry_tech_{}'.format(ne)][col] = 0
    # 'df{}_country'.format(ne)
    for index, row in dfC[coln].iterrows():
        val = pd.DataFrame(row).fillna(0).T.dot(cM.fillna(0)[0:len(dfC.iloc[:, 3:].T)].values).dot(ef.values)
        globals()['df{}_country'.format(ne)].loc[index, 'value'] = val.values[0]
        # 'dfcountry_tech_{}'.format(ne)
        valt = pd.DataFrame(row).fillna(0).values * (globals()['df{}_tech'.format(ne)])
        for col in valt.T.columns:
            globals()['dfcountry_tech_{}'.format(ne)].loc[index, col] = valt.T.loc[0, col]
    globals()['df{}_country'.format(ne)].to_csv(r'outputs/df{}_total_bycountry.csv'.format(ne))
    globals()['dfcountry_tech_{}'.format(ne)].to_csv(r'outputs/df{}_tech_bycountry.csv'.format(ne))
# +
# Socio-economic parameters
tecec_df = pd.read_excel(r'data/technoeconomic_params.xlsx')
lf_df = tecec_df[['Technology', 'Load factor']].drop([0])

sec_techs = ['Diesel (centralised)', 'Diesel 1 kW system (decentralised)', 'HFO', 'OCGT',
             'CCGT', 'CCGT - CCS', 'Supercritical coal', 'Coal + CCS', 'Hydro (large scale)',
             'Hydro (small scale)', 'Hydro (med. scale)', 'Biomass', 'Biomass (CHP small)',
             'Biomass CCS', 'Nuclear', 'Geothermal', 'Wind onshore', 'Wind offshore',
             'Solar PV (centr.)', 'Solar PV (decentralised)', 'Solar PV with battery',
             'Solar CSP', 'Solar CSP with storage']
o_techs = ['Diesel', 'Diesel', 'Oil', 'Gas', 'Gas', 'Gas CCS', 'Coal', 'Coal CCS', 'Hydro',
           'Hydro', 'Hydro', 'Biomass', 'Biomass', 'Biomass CCS', 'Nuclear', 'Geothermal',
           'Wind (Onshore)', 'Wind (Offshore)', 'Solar PV', 'Solar PV', 'Solar PV',
           'Solar CSP', 'Solar CSP']
lf_d = pd.DataFrame(list(zip(sec_techs, o_techs)), columns=['Technology', 'tech'])
lf_d = lf_d.set_index('Technology')['tech'].to_dict()
lf_df['tech'] = lf_df['Technology'].map(lf_d)
lf_df = lf_df[['tech', 'Technology', 'Load factor']]
lf_df.loc[lf_df['Load factor'] == 'Varies', 'Load factor'] = 0
lf_df['Load factor'] = lf_df['Load factor'].astype('int')

# Completing Wind values with data from the UK
# Source: https://www.renewableuk.com/page/UKWEDExplained
lf_df.loc[lf_df['tech'] == 'Wind (Onshore)', 'Load factor'] = 26.62
lf_df.loc[lf_df['tech'] == 'Wind (Offshore)', 'Load factor'] = 38.86
lf_df.loc[lf_df['tech'] == 'Wind (Offshore)', 'Load factor'] = 58.4  # new build offshore wind (2023/24/25) is 58.4%

# Averaging load factors (improve this when technologies are more specific)
lf_df = (lf_df[['tech', 'Load factor']].groupby(['tech'], as_index=False).agg('mean'))
lf_df.to_csv(r'outputs/load_factors.csv')
# -

# # Errors

# +
# country energy projection [kWh]
e_C = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                    sheet_name='Matrices', skiprows=2, usecols="AQ:AW", nrows=1)

# material per energy technology [g/kWh]
e_M = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                    sheet_name='Matrices', skiprows=2, usecols="AY:BU", nrows=10)

# embodied emissions (GHG) per material [gCO2e/g]
e_E = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                    sheet_name='Matrices', skiprows=2, usecols="BX:BX", nrows=22)

# water usage per material [l/kg]
e_W = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                    sheet_name='Matrices', skiprows=2, usecols="BY:BY", nrows=22)

# recycling rate in current supply per material [%]
e_R = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                    sheet_name='Matrices', skiprows=2, usecols="BZ:BZ", nrows=22)

# costs per material [€/kg]
e_K = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                    sheet_name='Matrices', skiprows=2, usecols="CA:CA", nrows=22)
e_K = e_K / 1000

for ef in [e_C, e_M, e_E, e_W, e_R, e_K]:
    eco = ef.columns.values
    ef.columns = [str(x).replace('.1', '') for x in eco]
e_M = e_M.rename(columns={'[g/kWh]': 'tech'}).set_index('tech')
# -

# # Errors for all countries

# +
# Errors for all countries and scenarios
e_dfC = pd.read_excel(r'data/EnergyProjection.xlsx',
                      sheet_name='All countries (e)', skiprows=2, usecols="B:R")

# material per energy technology [g/kWh]
e_cM = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Matrices (country)', skiprows=2, usecols="BM:CI", nrows=15)

# embodied emissions (GHG) per material [gCO2e/g]
e_cE = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Matrices (country)', skiprows=2, usecols="CL", nrows=22)

# water usage per material [l/kg]
e_cW = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Matrices (country)', skiprows=2, usecols="CM", nrows=22)

# recycling rate in current supply per material [%]
e_cR = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Matrices (country)', skiprows=2, usecols="CN", nrows=22)

# costs per material [€/kg]
e_cK = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Matrices (country)', skiprows=2, usecols="CO", nrows=22)
e_cK = e_cK / 1000

# e_dfC
for ef in [e_cM, e_cE, e_cW, e_cR, e_cK]:
    eco = ef.columns.values
    ef.columns = [str(x).replace('.1', '') for x in eco]
e_cM = e_cM.rename(columns={'[g/kWh]': 'tech'}).set_index('tech')
# -

# ## Temba errors

# +
egeneration_df = generation_df.copy()  # copy so the 20% error does not overwrite generation_df
egeneration_df['Value'] = [x * 0.2 for x in egeneration_df['Value']]
egeneration_piv = pd.pivot_table(egeneration_df, values='Value',
                                 index=['Year', 'scenario', 'country', 'parameter'],
                                 columns='tech', aggfunc=np.sum)
egeneration_piv = egeneration_piv.reset_index()
egeneration_piv = egeneration_piv.drop(columns='parameter')
egeneration_piv.rename(columns={'scenario': 'Scenario', 'country': 'Country'}, inplace=True)

e_dfC['Coal CCS'] = 0
e_dfC = e_dfC.append(egeneration_piv)
e_dfC = e_dfC[['Year', 'Scenario', 'Country', 'Wind (Onshore)', 'Wind (Offshore)',
               'Solar CSP', 'Solar PV', 'Hydro', 'Oil', 'Gas CCS', 'Gas', 'Nuclear',
               'Geothermal', 'Coal', 'Coal CCS', 'Biomass', 'BECCS', 'Hydrogen']]
# -

# ## Error calculations

# +
# clean calculations trying .dot
for ef, ne in zip([e_E, e_W, e_R, e_K], ['E', 'W', 'R', 'K']):
    # separate errors per material per kWh for different impacts
    globals()['em_{}'.format(ne)] = ((ef / globals()['{}'.format(ne)]).pow(2)).T
    globals()['em_{}'.format(ne)].columns = e_cM.columns
    globals()['em_{}'.format(ne)] = globals()['em_{}'.format(ne)].reset_index().drop(columns='index', axis=1)
    t1 = pd.DataFrame([x + y for x in ((e_cM / cM).pow(2)).values
                       for y in globals()['em_{}'.format(ne)].values]).pow(0.5)
    t1.columns = globals()['{}_tech_sep'.format(ne)].columns
    t1.index = e_cM.index
    # e_E_tech_sep = E_tech_sep.*sqrt((e_M./M).^2+(e_E'./E').^2)
    globals()['e_{}_tech_sep'.format(ne)] = globals()['{}_tech_sep'.format(ne)].mul(t1, fill_value=0)
    globals()['e_{}_tot'.format(ne)] = globals()['e_{}_tech_sep'.format(ne)].T.sum()
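The MATLAB-style comment carried through the loops above, `e_E_tech_sep = E_tech_sep.*sqrt((e_M./M).^2+(e_E'./E').^2)`, is standard uncertainty propagation for a product: for independent errors, the relative errors add in quadrature. A scalar sketch with toy numbers:

```python
import numpy as np

# For a product x = m * e with independent absolute errors e_m and e_e,
# relative errors combine in quadrature:
#     e_x / x = sqrt((e_m / m)**2 + (e_e / e)**2)
m, e_m = 10.0, 1.0   # toy value and absolute error (10%)
e, e_e = 4.0, 0.2    # toy value and absolute error (5%)

x = m * e
e_x = x * np.sqrt((e_m / m) ** 2 + (e_e / e) ** 2)
print(x, e_x)        # 40.0 with roughly an 11.2% combined error
```

The notebook applies exactly this elementwise, with an extra `(e_C/C)**2` term added inside the square root once the country energy projection's own uncertainty enters the product.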
# Do I need to remove any rows with this? [0:len(C.T)]
for ef, ne in zip([e_E, e_W, e_R, e_K], ['E', 'W', 'R', 'K']):
    # separate errors per material per kWh for different impacts
    globals()['em_{}'.format(ne)] = ((ef / globals()['{}'.format(ne)]).pow(2)).T
    globals()['em_{}'.format(ne)].columns = e_M.columns
    globals()['em_{}'.format(ne)] = globals()['em_{}'.format(ne)].reset_index().drop(columns='index', axis=1)
    # add index of tech to t2, otherwise the calc gives error
    tc = (e_C / C).T
    t2 = pd.DataFrame([x + y + z for x in ((e_M[0:len(C.T)] / M[0:len(C.T)]).pow(2)).values
                       for y in globals()['em_{}'.format(ne)].values
                       for z in (tc.pow(2))]).pow(0.5)
    t2.columns = globals()['e_{}_tech_sep'.format(ne)].columns
    t2.index = tc.index
    globals()['e_{}_country_sep'.format(ne)] = globals()['{}_country_sep'.format(ne)].mul(t2, fill_value=0)
    globals()['e_{}c_tot'.format(ne)] = globals()['e_{}_country_sep'.format(ne)].T.sum()
# +
# Ordering dfC columns for the same order as M
coln = list(cM.index)
techs = ['Wind (Onshore)', 'Solar CSP', 'Solar PV', 'Hydro', 'Geothermal', 'Gas CCS', 'Oil',
         'Gas', 'Coal', 'Wind (Offshore)', 'Biomass', 'Nuclear', 'BECCS', 'Hydrogen',
         'Coal CCS']
mats = list(cM.columns)

for ef, ne in zip([e_cE, e_cW, e_cR, e_cK], ['E', 'W', 'R', 'K']):
    # separate errors per material per kWh for different impacts
    globals()['em_{}'.format(ne)] = ((ef / globals()['{}'.format(ne)]).pow(2)).T
    globals()['em_{}'.format(ne)].columns = e_cM.columns
    globals()['em_{}'.format(ne)] = globals()['em_{}'.format(ne)].reset_index().drop(columns='index', axis=1)
    t1 = pd.DataFrame([x + y for x in ((e_cM / cM).pow(2)).values
                       for y in globals()['em_{}'.format(ne)].values]).pow(0.5)
    t1.columns = globals()['df{}_tech_sep'.format(ne)].columns
    t1.index = e_cM.index
    # e_E_tech_sep = E_tech_sep.*sqrt((e_M./M).^2+(e_E'./E').^2)
    globals()['e_df{}_tech_sep'.format(ne)] = globals()['df{}_tech_sep'.format(ne)].mul(t1, fill_value=0)
    globals()['e_df{}_tot'.format(ne)] = globals()['e_df{}_tech_sep'.format(ne)].T.sum()

# Do I need to remove any rows with this? [0:len(C.T)]
for ef, ne in zip([e_cE, e_cW, e_cR, e_cK], ['E', 'W', 'R', 'K']):
    # separate errors per material per kWh for different impacts
    globals()['em_{}'.format(ne)] = ((ef / globals()['{}'.format(ne)]).pow(2)).T
    globals()['em_{}'.format(ne)].columns = e_cM.columns
    globals()['em_{}'.format(ne)] = globals()['em_{}'.format(ne)].reset_index().drop(columns='index', axis=1)
    globals()['e_df{0}_countries_sep'.format(ne)] = pd.DataFrame(columns=['Country', 'Year', 'Scenario', 'tech'] + mats)
    globals()['e_df{0}c_tot'.format(ne)] = pd.DataFrame(columns=['Country', 'Year', 'Scenario'] + techs)
    for c in dfC['Country'].unique():
        for sc in dfC[dfC['Country'] == c]['Scenario'].unique():
            for y in dfC[(dfC['Country'] == c) & (dfC['Scenario'] == sc)]['Year'].unique():
                # print(c, sc, y)
                # add index of tech to t2, otherwise the calc gives error
                tc = (e_dfC[(e_dfC['Country'] == c) &
                            (e_dfC['Scenario'] == sc) &
                            (e_dfC['Year'] == y)][coln] / dfC[(dfC['Country'] == c) &
                                                              (dfC['Scenario'] == sc) &
                                                              (dfC['Year'] == y)])[coln].T
                t2 = pd.DataFrame([x + y + z for x in ((e_cM / cM).pow(2)).values
                                   for y in globals()['em_{}'.format(ne)].values
                                   for z in (tc.pow(2))]).pow(0.5)
                t2.columns = globals()['e_{}_tech_sep'.format(ne)].columns
                t2.index = tc.index
                # news = pd.DataFrame(globals()['df{0}_{1}_{2}_{3}_sep'.format(ne, c, sc[0:3], str(y))].mul(t2, fill_value=0))
                news = globals()['df{}c_tech_sep'.format(ne)]
                news = news[(news['Country'] == c) & (news['Scenario'] == sc) & (news['Year'] == y)]
                news = news.drop(columns=['Country', 'Scenario', 'Year']).set_index('tech').mul(t2, fill_value=0)
                news = news.reset_index()
                news['Country'] = c
                news['Scenario'] = sc
                news['Year'] = y
                globals()['e_df{0}_countries_sep'.format(ne)] = globals()['e_df{0}_countries_sep'.format(ne)].append(news)
                # new = pd.DataFrame(globals()['df{0}_{1}_{2}_{3}_sep'.format(ne, c, sc[0:3], str(y))].mul(t2, fill_value=0))
                new = globals()['df{}c_tech_sep'.format(ne)]
                new = new[(new['Country'] == c) & (new['Scenario'] == sc) & (new['Year'] == y)]
                new = new.drop(columns=['Country', 'Scenario', 'Year']).set_index('tech').mul(t2, fill_value=0)
                new = new.T.sum().reset_index().set_index('tech').T
                new['Country'] = c
                new['Scenario'] = sc
                new['Year'] = y
                globals()['e_df{0}c_tot'.format(ne)] = globals()['e_df{0}c_tot'.format(ne)].append(new)
    globals()['e_df{0}_countries_sep'.format(ne)] = globals()['e_df{0}_countries_sep'.format(ne)][['Country', 'Year', 'Scenario', 'tech'] + mats]
    globals()['e_df{0}_countries_sep'.format(ne)].to_csv(r'outputs/errors{}_bymat_bycountry.csv'.format(ne))
    globals()['e_df{0}c_tot'.format(ne)] = globals()['e_df{0}c_tot'.format(ne)][['Country', 'Year', 'Scenario'] + techs]
    globals()['e_df{0}c_tot'.format(ne)].to_csv(r'outputs/errors{}_total_bycountry.csv'.format(ne))
# -

# # Employment and land-use

# +
# Employment by stage: [job-years/MW] for Manuf, C&I and Dec, [jobs/MW] for O&M, [jobs/PJ] for fuel
jobs = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Employment', skiprows=37, usecols="B:G,T", nrows=23)
jobs.columns = ['tech', 'Manufacturing', 'Construction and Installation',
                'Operation and Maintenance', 'Fuel', 'Decommissioning', 'Total']
jobs['tech_specific'] = jobs['tech']

# In [jobs-lifetime/kWh]
op_jobs = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                        sheet_name='Employment', skiprows=65, usecols="B:G", nrows=23)
op_jobs.columns = ['tech', 'Capacity (MW)', 'kWh/lifetime', 'Load factor', 'Lifetime',
                   'Operation and Maintenance']
# country_jobs = jobs.merge(op_jobs[['tech','Capacity (MW)','kWh/lifetime']], on = 'tech')
# Ram2020 has Regional Employment multipliers that should be useful for country evaluations

# Land required [km^2/1000 MW]
land = pd.read_excel(r'data/Excel_model_material_implications_energy_systems.xlsx',
                     sheet_name='Employment', skiprows=19, usecols="B:C", nrows=15)
land.columns = ['tech', 'land']
# country_jobs['Operation and Maintenance (jobs)'] = country_jobs['Capacity (MW)'] * country_jobs['Operation and Maintenance']

# Mapping specific techs to general tech names
tech_u = ['Solar CSP', 'Solar PV', 'Solar PV', 'Hydro', 'Hydro', 'Geothermal', 'Gas CCS',
          'Oil', 'Gas', 'Gas', 'Coal', 'Wind (Onshore)', 'Wind (Offshore)', 'Nuclear',
          'Biomass', 'Biogas', 'BECCS', 'Hydrogen', 'Waste-to-energy', 'Storage', 'Storage',
          'Storage', 'Storage', 'Storage', 'Power to Heat', 'Methanation', 'Steam Turbine']
tech_sp = ['Solar CSP', 'Solar PV (utility)', 'Solar PV (rooftop)', 'Hydro (Dam)',
           'Hydro (RoR)', 'Geothermal', 'Gas CCS', 'Oil', 'Gas (OCGT)', 'Gas (CCGT)', 'Coal',
           'Wind (Onshore)', 'Wind (Offshore)', 'Nuclear', 'Biomass', 'Biogas', 'BECCS',
           'Hydrogen', 'Waste-to-energy', 'Storage Pumped Hydro',
           'Storage Battery (large scale)', 'Storage Battery (prosumer)', 'Storage Gas',
           'Storage Adiabatic Compressed Air Energy', 'Power to Heat (PtH)', 'Methanation',
           'Steam Turbine (ST)']
techd = pd.DataFrame(list(zip(tech_u, tech_sp)), columns=['tech', 'tech_specific'])
techd = techd.set_index('tech_specific')['tech'].to_dict()
jobs['tech'] = jobs['tech_specific'].map(techd)

# Averaging jobs by tech
jobs_m = pd.melt(jobs.drop(columns='Total'), id_vars=['tech_specific', 'tech'],
                 var_name='Type', value_name='Value')
av_jobs = jobs_m.groupby(['tech', 'Type']).mean().reset_index()
jobs_piv = pd.pivot_table(av_jobs, values='Value', index=['tech'], columns='Type',
                          aggfunc=np.sum)
jobs_piv.to_csv(r'outputs/jobs_piv.csv')
# -

# ## Employment calculations using multipliers

# +
# Regional multipliers
job_regfact = pd.read_excel(r'data/1-s2.0-S0040162518314112-mmc1.xlsx',
                            sheet_name='Regional Factors', skiprows=1, usecols="A:I", nrows=10)
job_regfact = job_regfact.rename(columns={'Unnamed: 0': 'Region'})
job_regfact['Multiplier_name'] = 'Regional_factor'

job_expfac = pd.read_excel(r'data/1-s2.0-S0040162518314112-mmc1.xlsx',
                           sheet_name='Import-Export Shares', skiprows=1, usecols="A:I", nrows=10)
job_expfac = job_expfac.rename(columns={'Unnamed: 0': 'Region'})
job_expfac['Multiplier_name'] = 'Export_factor'

job_multipliers = pd.DataFrame(columns=['Region', 2015, 2020, 2025, 2030, 2035, 2040, 2045,
                                        2050, 'Multiplier_name'])
for df in [job_regfact, job_expfac]:
    job_multipliers = job_multipliers.append(df)

job_multipliers_m = pd.melt(job_multipliers.reset_index(),
                            id_vars=['Region', 'Multiplier_name'],
                            var_name='Year', value_name='Value')
job_multipliers_piv = pd.pivot_table(job_multipliers_m, values='Value',
                                     index=['Region', 'Year'], columns='Multiplier_name',
                                     aggfunc=np.sum).reset_index()
# +
# Filling in multipliers for missing years
job_multipliers_pred = pd.DataFrame(columns=job_multipliers_m.columns)
for r in job_multipliers_m[(job_multipliers_m['Region'].notnull()) &
                           (job_multipliers_m['Region'] != 'Global')]['Region'].unique():
    for m in job_multipliers_m['Multiplier_name'].unique():
        print(r)
        x = job_multipliers_m[(job_multipliers_m['Year'] != 'index') &
                              (job_multipliers_m['Region'] == r) &
                              (job_multipliers_m['Multiplier_name'] == m)]['Year'].astype(str).astype(int)
        y = job_multipliers_m[(job_multipliers_m['Year'] != 'index') &
                              (job_multipliers_m['Region'] == r) &
                              (job_multipliers_m['Multiplier_name'] == m)]['Value'].values
        # fit function
        z = np.polyfit(x, y, 2)
        f = np.poly1d(z)
        # calculate new x's and y's
        x_new = list(range(2015, 2066))
        y_new = f(x_new)
        m1 = pd.DataFrame(list(zip(x_new, y_new)), columns=['Year', 'Value'])
        m1['Multiplier_name'] = m
        m1['Region'] = r
        if (r == 'Europe' or r == 'Southeast Asia') and m == 'Export_factor':
            xx = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                   (job_multipliers_m['Multiplier_name'] == m) &
                                   (job_multipliers_m['Year'] == 2020)]['Value'].values[0]
            m1.loc[m1['Year'] < 2035, 'Value'] = xx
            xx2 = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                    (job_multipliers_m['Multiplier_name'] == m) &
                                    (job_multipliers_m['Year'] == 2035)]['Value'].values[0]
            m1.loc[m1['Year'] >= 2035, 'Value'] = xx2
        if r == 'North America' and m == 'Export_factor':
            xx0 = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                    (job_multipliers_m['Multiplier_name'] == m) &
                                    (job_multipliers_m['Year'] == 2015)]['Value'].values[0]
            m1.loc[m1['Year'] < 2020, 'Value'] = xx0
            xx = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                   (job_multipliers_m['Multiplier_name'] == m) &
                                   (job_multipliers_m['Year'] == 2020)]['Value'].values[0]
            m1.loc[m1[(m1['Year'] < 2035) & (m1['Year'] >= 2020)].index, 'Value'] = xx
            xx2 = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                    (job_multipliers_m['Multiplier_name'] == m) &
                                    (job_multipliers_m['Year'] == 2035)]['Value'].values[0]
            m1.loc[m1['Year'] >= 2035, 'Value'] = xx2
        if r == 'SAARC' and m == 'Export_factor':
            xx = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                   (job_multipliers_m['Multiplier_name'] == m) &
                                   (job_multipliers_m['Year'] == 2030)]['Value'].values[0]
            m1.loc[m1['Year'] >= 2030, 'Value'] = xx
        if r == 'Northeast Asia' and m == 'Regional_factor':
            xx = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                   (job_multipliers_m['Multiplier_name'] == m) &
                                   (job_multipliers_m['Year'] == 2040)]['Value'].values[0]
            m1.loc[m1['Year'] >= 2035, 'Value'] = xx
        if r == 'South America' and m == 'Export_factor':
            xx = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                   (job_multipliers_m['Multiplier_name'] == m) &
                                   (job_multipliers_m['Year'] == 2040)]['Value'].values[0]
            m1.loc[m1['Year'] >= 2040, 'Value'] = xx
        if (r == 'Southeast Asia' or r == 'South America' or r == 'Eurasia' or
                r == 'MENA' or r == 'Sub-Saharan Africa' or r == 'SAARC') and m == 'Regional_factor':
            xx = job_multipliers_m[(job_multipliers_m['Region'] == r) &
                                   (job_multipliers_m['Multiplier_name'] == m) &
                                   (job_multipliers_m['Year'] == 2050)]['Value'].values[0]
            m1.loc[m1['Year'] > 2050, 'Value'] = xx
        job_multipliers_pred = job_multipliers_pred.append(m1)
        # If graphs are needed
        # print(m)
        # plt.plot(x, y, 'o', x_new, y_new, m1['Year'], m1['Value'])
        # plt.xlim([2014, 2066])
        # plt.show()
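The year-filling loop above fits a low-order polynomial through the 5-yearly multiplier values and evaluates it on an annual grid; several regions are then clamped to fixed values past given years, because polynomial extrapolation beyond the last data point can drift. A self-contained sketch of just the fit-and-evaluate step, with invented multiplier values (not numbers from the mmc1 spreadsheet):

```python
import numpy as np

# Toy 5-yearly multiplier series (invented values for illustration only).
years = np.array([2015, 2020, 2025, 2030, 2035, 2040, 2045, 2050])
vals = np.array([1.00, 0.95, 0.91, 0.88, 0.86, 0.85, 0.84, 0.84])

# Quadratic least-squares fit, then evaluation on an annual grid 2015-2065,
# mirroring np.polyfit / np.poly1d as used in the notebook.
coeffs = np.polyfit(years, vals, 2)
f = np.poly1d(coeffs)
annual = f(np.arange(2015, 2066))
print(len(annual))  # one value per year, 2015 through 2065
```

Note that a convex quadratic fitted to a flattening series turns back upward when extrapolated past 2050, which is why the notebook overwrites post-2050 values with the last observed multiplier for several regions.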
['Region','Year'], columns = 'Multiplier_name', aggfunc=np.sum).reset_index() job_multipliers_pred_piv.to_csv(r'outputs/job_multipliers.csv') # + # This is for technologies (not regions) job_decfactC = pd.read_excel(r'data/1-s2.0-S0040162518314112-mmc1.xlsx', sheet_name = 'Decline Factors', skiprows = 1, usecols = "A,K:R", nrows = 25) job_decfactC.columns = ['tech_spec',2015,2020,2025,2030,2035,2040,2045,2050] job_decfactC['Multiplier_name'] = 'Capex_declinefactor' job_decfactO = pd.read_excel(r'data/1-s2.0-S0040162518314112-mmc1.xlsx', sheet_name = 'Decline Factors', skiprows = 30, usecols = "A,K:R", nrows = 25) job_decfactO.columns = ['tech_spec',2015,2020,2025,2030,2035,2040,2045,2050] job_decfactO['Multiplier_name'] = 'Opex_declinefactor' job_decfact = job_decfactC.append(job_decfactO) tech_o = ['Wind onshore', 'Wind offshore', 'PV Utility-scale', 'PV rooftop', 'Biomass', 'Hydro Dam', 'Hydro RoR', 'Geothermal', 'CSP', 'CHP Biogas', 'Waste-to-energy', 'Methanation', 'Coal PP (Hard Coal)', 'Nuclear PP', 'OCGT', 'CCGT', 'Steam Turbine (ST)', 'Power to Heat (PtH) ', 'Internal Combustion Engine (ICE)', 'Gas Storage', 'Power to Gas (PtG)', 'Battery Storage large-scale', 'Battery Storage prosumer', 'Pumped Hydro Storage (PHS)', 'Adiabatic Compressed Air Energy Storage (A-CAES)', 'PV Utility scale', 'PV roof top'] tech_sp = ['Wind (Onshore)', 'Wind (Offshore)', 'Solar PV (utility)', 'Solar PV (rooftop)', 'Biomass', 'Hydro (Dam)', 'Hydro (RoR)', 'Geothermal', 'Solar CSP', 'Biogas', 'Waste-to-energy', 'Methanation', 'Coal', 'Nuclear', 'Gas (OCGT)', 'Gas (CCGT)', 'Steam Turbine (ST)', 'Power to Heat (PtH)', 'Internal Combustion Engine (ICE)', 'Storage Gas', 'Power to Gas (PtG)', 'Storage Battery (large-scale)', 'Storage Battery (prosumer)', 'Storage Pumped Hydro', 'Storage Adiabatic Compressed Air Energy', 'Solar PV (utility)', 'Solar PV (rooftop)'] techd2 = pd.DataFrame(list(zip(tech_o,tech_sp)), columns = ['tech_specific','tech_spec']) techd2 = 
techd2.set_index('tech_specific')['tech_spec'].to_dict() job_decfact['tech_specific'] = job_decfact['tech_spec'].map(techd2) job_decfact['tech'] = job_decfact['tech_specific'].map(techd) job_decfact= job_decfact.drop(columns = ['tech_spec']).set_index('tech') job_decfact_m = pd.melt(job_decfact.reset_index(), id_vars = ['tech','tech_specific','Multiplier_name'], var_name = 'Year', value_name = 'Value') job_decfact_piv = pd.pivot_table(job_decfact_m, values = 'Value', index = ['tech','tech_specific','Year'], columns = 'Multiplier_name', aggfunc=np.sum).reset_index() # + # Filling in declining factors for missing years job_decfact_pred = pd.DataFrame(columns = job_decfact_m.columns) for t in job_decfact_m[job_decfact_m['tech_specific'].notnull()]['tech_specific'].unique(): for m in job_decfact_m['Multiplier_name'].unique(): print(t) x = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)]['Year'].astype(str).astype(int) y = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)]['Value'].values if not x.empty: if t == 'Storage Battery (prosumer)' and m == 'Opex_declinefactor': # fit function z = np.polyfit(x, y, 2) f = np.poly1d(z) # fit function z = np.polyfit(x, y, 3) f = np.poly1d(z) # calculate new x's and y's x_new = list(range(2015, 2051)) y_new = f(x_new) m1 = pd.DataFrame(list(zip(x_new,y_new)), columns = ['Year','Value']) m1['Multiplier_name'] = m m1['tech_specific'] = t m1['tech'] = job_decfact_m[(job_decfact_m['tech_specific']== t)]['tech'].values[0] # Adding 2050 onwards values x_new2 = list(range(2051, 2066)) m2 = pd.DataFrame(x_new2, columns = ['Year']) m2['Value'] = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2050)]['Value'].values[0] m2['Multiplier_name'] = m m2['tech_specific'] = t m2['tech'] = job_decfact_m[(job_decfact_m['tech_specific']== t)]['tech'].values[0] m1 = m1.append(m2) # Fixing defined 
intervals if t == 'Nuclear' and (m == 'Capex_declinefactor' or m == 'Opex_declinefactor'): # xx0 = job_multipliers_m[(job_decfact_m['tech_specific']== t)&\ # (job_decfact_m['Multiplier_name']== m)&\ # (job_decfact_m['Year']== 2015)]['Value'].values[0] # m1.loc[m1['Year']<2020, 'Value'] = xx0 xx = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2020)]['Value'].values[0] m1.loc[m1[(m1['Year']<2030)&(m1['Year']>=2020)].index, 'Value'] = xx xx1 = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2030)]['Value'].values[0] m1.loc[m1[(m1['Year']<2040)&(m1['Year']>=2030)].index, 'Value'] = xx1 xx2 = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2040)]['Value'].values[0] m1.loc[m1[(m1['Year']<2050)&(m1['Year']>=2040)].index, 'Value'] = xx2 xx3 = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2050)]['Value'].values[0] m1.loc[m1['Year']>=2050, 'Value'] = xx3 if t == 'Power to Heat (PtH)' and (m == 'Capex_declinefactor' or m == 'Opex_declinefactor'): xx = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2025)]['Value'].values[0] m1.loc[m1[(m1['Year']<2035)&(m1['Year']>=2025)].index, 'Value'] = xx xx3 = job_decfact_m[(job_decfact_m['tech_specific']== t)&\ (job_decfact_m['Multiplier_name']== m)&\ (job_decfact_m['Year']== 2035)]['Value'].values[0] m1.loc[m1['Year']>=2035, 'Value'] = xx3 job_decfact_pred = job_decfact_pred.append(m1) # If graphs are needed # print(m) # plt.plot(x,y,'o', x_new, y_new, m1['Year'],m1['Value']) # plt.xlim([2014, 2066 ]) # plt.show() # FUTURE WORK: Fix storage curves to match values better --> if Storage is needed job_decfact_pred_piv = pd.pivot_table(job_decfact_pred, values = 'Value', index = 
['tech','tech_specific','Year'], columns = 'Multiplier_name', aggfunc=np.sum).reset_index() job_decfact_pred_piv.to_csv(r'outputs/job_DeclineFact.csv') # - ### Read files again from here if needed # + # Adding country names and regions # Namibia NM in TEMBA, NA in ISO2 results_df.loc[results_df['country']=='NM','country']='NA' temba_codes = list(results_df['country'].unique()) standard_names = coco.convert(names=temba_codes, to='name_short') iso3_codes = coco.convert(names=temba_codes, to='ISO3', not_found=None) ts_d = pd.DataFrame(list(zip(temba_codes,standard_names)), columns = ['country','Country']) ts_d = ts_d.set_index('country')['Country'].to_dict() ti3_d = pd.DataFrame(list(zip(temba_codes,iso3_codes)), columns = ['country','ISO3']) ti3_d = ti3_d.set_index('country')['ISO3'].to_dict() nmresults_df = results_df nmresults_df['ISO3'] = nmresults_df['country'].map(ti3_d) nmresults_df['Country'] = nmresults_df['country'].map(ts_d) # Adding Ramm's country classifications cclassif = pd.read_csv(r'data/UNSD — Methodology.csv', usecols = ['ISO-alpha2 Code', 'ISO-alpha3 Code','Country or Area', 'Ramm_region', 'Ramm_region2','Ramm_region3']) cclassif.rename(columns = {'ISO-alpha2 Code':'country', 'ISO-alpha3 Code':'ISO3'}, inplace = True) rr1_d = cclassif[cclassif['Ramm_region'].notnull()][['ISO3', 'Ramm_region']].set_index('ISO3')['Ramm_region'].to_dict() rr2_d = cclassif[cclassif['Ramm_region2'].notnull()][['ISO3', 'Ramm_region2']].set_index('ISO3')['Ramm_region2'].to_dict() rr3_d = cclassif[cclassif['Ramm_region3'].notnull()][['ISO3', 'Ramm_region3']].set_index('ISO3')['Ramm_region3'].to_dict() # since the regions are not a one-to-one match, I need to group the country data by # regions and then perform the employment opertation on them for totals. # Country employment values can use averages of regions. 
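The dict-based country-code mapping used here (build a code-to-name dictionary from a lookup table, then `Series.map` it onto the results) can be sketched stand-alone. The two-row code table below is hypothetical, standing in for the real TEMBA/ISO data:

```python
import pandas as pd

# Hypothetical mini code table standing in for the TEMBA country codes
codes = pd.DataFrame({'country': ['UG', 'KE'],
                      'Country': ['Uganda', 'Kenya'],
                      'ISO3': ['UGA', 'KEN']})

# Build code -> name dictionaries, analogous to ts_d / ti3_d above
ts_d = codes.set_index('country')['Country'].to_dict()
ti3_d = codes.set_index('country')['ISO3'].to_dict()

# Map them onto a results table; unknown codes would become NaN
results = pd.DataFrame({'country': ['UG', 'KE', 'UG'], 'Value': [1.0, 2.0, 3.0]})
results['Country'] = results['country'].map(ts_d)
results['ISO3'] = results['country'].map(ti3_d)
print(results)
```

Using `.map` with a dict keeps the lookup vectorised, which is why the notebook converts each lookup table to a dict first instead of merging.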
nmresults_df['Ramm_region'] = nmresults_df['ISO3'].map(rr1_d) nmresults_df['Ramm_region2'] = nmresults_df['ISO3'].map(rr2_d) nmresults_df['Ramm_region3'] = nmresults_df['ISO3'].map(rr3_d) # + # From Ram2020 # Manuf jobs (local) = Installed Capacity * EF (Manufacturing)*decline factor of the technology* # local manufacturing factor*regional employment multiplier # Manuf jobs (export) = Exported Capacity * EF (Manufacturing)*decline factor of the technology* # *regional employment multiplier # C&I jobs = Installed Capacity * EF (C&I)*decline factor of the technology*regional multiplier # O&M jobs = Installed Capacity * EF (O&M)*decline factor of the technology*regional multiplier # Fuel = Electricty Generation*EF (Fuel)/Efficiency of technology*regional multiplier # Transmission jobs = Investments in Grids*EF (Grids)*regional multiplier # Decommisioning jobs = Decommisioned Capacity*EF (Decommissioning)*regional employment multiplier # + # Calculation of employment based on new capacity and multipliers cap = ['Power Generation Capacity (Aggregate)','New power generation capacity (Aggregate)'] employment_df = nmresults_df[nmresults_df['parameter'].isin(cap)] employment_df = employment_df.drop(columns = ['Ramm_region2', 'Ramm_region3']).rename(columns = {'Ramm_region':'Region', 'Value':'Capacity'}) # power = ['Power Generation Capacity (Aggregate)','New power generation capacity (Aggregate)'] # upow = ['GW','GW'] # + # Using multipliers # For now only using Ramm_region since data is mostly for SSA employment_df['Year'] = employment_df['Year'].astype('int') job_multipliers_pred_piv['Year'] = job_multipliers_pred_piv['Year'].astype('int') job_decfact_pred_piv['Year'] = job_decfact_pred_piv['Year'].astype('int') employment_df = employment_df.merge(jobs_piv.reset_index()[['tech','Construction and Installation', 'Decommissioning', 'Fuel','Manufacturing', 'Operation and Maintenance']], on = 'tech') employment_df = employment_df.merge(job_multipliers_pred_piv, on = 
['Region','Year']) employment_df = employment_df.merge(job_decfact_pred_piv.groupby(['tech','Year']).mean().reset_index(), on = ['tech','Year']) # - for j in [ 'Construction and Installation', 'Manufacturing']: # ,'Decommissioning', 'Fuel' 'Operation and Maintenance']: if j =='Manufacturing': employment_df['{} (local jobs)'.format(j)] = [x*1000*y*z*(1-a)*b for x, y, z, a, b in zip(employment_df['Capacity'], employment_df['{}'.format(j)], employment_df['Capex_declinefactor'], employment_df['Export_factor'], employment_df['Regional_factor'])] employment_df['{} (external jobs)'.format(j)] = [x*1000*y*z*a*b for x, y, z, a,b in zip(employment_df['Capacity'], employment_df['{}'.format(j)], employment_df['Capex_declinefactor'], employment_df['Export_factor'], employment_df['Regional_factor'])] if j =='Construction and Installation': employment_df['{} (jobs)'.format(j)] = [x*1000*y*z*b for x, y,z,b in zip(employment_df['Capacity'], employment_df['{}'.format(j)], employment_df['Capex_declinefactor'], employment_df['Regional_factor'])] # O&M jobs need cumulative capacity # Fuel jobs need primary energy generation # Decommissioning jobs need decommissioned capacity # + jobs_forplot = pd.melt(employment_df, id_vars = ['tech','scenario','country','ISO3','Country','Year','parameter'], value_vars = ['Capacity','Construction and Installation (jobs)', 'Manufacturing (local jobs)', 'Manufacturing (external jobs)'], var_name = 'Indicator', value_name = 'Value') jobs_forplot.to_csv(r'outputs/jobs_forplot.csv') # the values from this df will include new jobs from new power generation and "old jobs" from existing capcity. 
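As a sanity check on the Ram2020-style manufacturing-jobs formula applied above, here is the arithmetic on made-up numbers (all inputs hypothetical, not real TEMBA or Ram2020 values; the ×1000 converts GW to MW to match per-MW employment factors):

```python
# Hypothetical inputs -- not real TEMBA/Ram2020 values
capacity_gw = 0.5            # new installed capacity [GW]
ef_manufacturing = 6.0       # employment factor [jobs/MW]
decline_factor = 0.8         # Capex decline factor for the year
export_factor = 0.25         # share of capacity manufactured abroad
regional_multiplier = 1.2    # regional employment multiplier

# Manuf jobs (local)    = Capacity * EF * decline * (1 - export share) * regional multiplier
# Manuf jobs (external) = Capacity * EF * decline * export share       * regional multiplier
local_jobs = capacity_gw * 1000 * ef_manufacturing * decline_factor \
             * (1 - export_factor) * regional_multiplier
external_jobs = capacity_gw * 1000 * ef_manufacturing * decline_factor \
                * export_factor * regional_multiplier

print(local_jobs, external_jobs)   # 2160.0 720.0
```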
# In plots, ignore old jobs, because those would now be only "fuel" and decommissioning that are not in the data yet # - # # Plots for single country values # ## Per kWh # + # per KWH colors = px.colors.qualitative.Alphabet labels = ['Embodied Material Emissions [gCO<sub>2</sub>/kWh]','Embodied Material Water Usage [L/kWh]', 'Material Costs [Euro‚cent/kWh]','Material Recycling Rate [g/kWh]'] for ne, l in zip(['E', 'W', 'K', 'R'], labels): fig = go.Figure() if ne == 'E' or ne == 'R': data = globals()['{}_tech_sep'.format(ne)] datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/2 datae.index = data.index if ne == 'W': data = globals()['{}_tech_sep'.format(ne)]/1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/2000 datae.index = data.index if ne == 'K': data = globals()['{}_tech_sep'.format(ne)]*1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/0.002 datae.index = data.index for i, c in zip(data.columns, colors): # print(i) fig.add_trace(go.Bar(x = data[i], y = data.index, name = i, marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae[i]) )) fig.update_layout( xaxis_title = l, template = 'simple_white+presentation', barmode='stack') pio.write_image(fig, r'figures/kWh_{}.pdf'.format(ne), width = 500, height = 450) pio.write_image(fig, r'figures/kWh_{}.eps'.format(ne), width = 500, height = 450) # plotly.offline.plot(fig, filename = r'figures/kWh_{}.html'.format(ne), auto_open=False) # 
fig.show() # - # ## For Country, all factors # + # for Country colors = px.colors.qualitative.Alphabet labels = ['Embodied Material Emissions [MtCO<sub>2</sub>]','Embodied Material Water Usage [L]', 'Material Costs [million Euro]','Material Recycling Rate [t]'] for ne, l in zip(['E', 'W', 'K', 'R'], labels): fig = go.Figure() if ne == 'E': data = globals()['{}_country_sep'.format(ne)]/1e12 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_country_sep'.format(ne)])), columns=globals()['{}_country_sep'.format(ne)].columns) error = globals()['e_{}c_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/1e12/2 datae.index = data.index if ne == 'W': data = globals()['{}_country_sep'.format(ne)]/1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_country_sep'.format(ne)])), columns=globals()['{}_country_sep'.format(ne)].columns) error = globals()['e_{}c_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/1000/2 datae.index = data.index if ne == 'K' or ne =='R': data = globals()['{}_country_sep'.format(ne)]/1e6 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_country_sep'.format(ne)])), columns=globals()['{}_country_sep'.format(ne)].columns) error = globals()['e_{}c_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/1e6/2 datae.index = data.index for i, c in zip(data.columns, colors): # print(i) fig.add_trace(go.Bar(x = data[i], y = data.index, name = i, marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae[i]) )) fig.update_layout( xaxis_title = l, legend=dict(orientation="h", yanchor="bottom", # traceorder="reversed", y=1.01, xanchor="center", x=0.4), template = 'simple_white+presentation', barmode='stack') pio.write_image(fig, r'figures/country_{}.pdf'.format(ne), width = 900, height = 550) pio.write_image(fig, r'figures/country_{}.eps'.format(ne), width = 900, height = 550) # plotly.offline.plot(fig, filename = 
r'figures/country_{}.html'.format(ne), auto_open=False)
    # fig.show()
# -

# ## Totals and percentages in country

# +
# Bar with totals and total percentages for Country
colors = px.colors.qualitative.Alphabet
labels = ['Total material mass across all technologies for Uganda (%)',
          'Total CO<sub>2</sub> emissions across all technologies for Uganda (%)',
          'Total material costs across all technologies for Uganda [million Euro]']
names = ['material_mass', 'total_emiss', 'total_mat']
colu = ['pct','pct','value']

# emissions per energy technology [gCO2/kWh]
mat_country = C.T.values*M[0:len(C.T)]

for l, n, co in zip(labels, names, colu):
    fig = go.Figure()
    if n == 'material_mass':
        totaal = mat_country.sum().reset_index()
        totaal = totaal.rename(columns = {'index':'Material', 0:'value'})
    if n == 'total_emiss':
        totaal = E_country_sep.sum().reset_index()
        totaal = totaal.rename(columns = {'index':'Material', 0:'value'})
    if n == 'total_mat':  # was `ne`, a stale variable left over from the previous cell
        totaal = K_country_sep.sum().reset_index()
        totaal = (totaal/1e6).rename(columns = {'index':'Material', 0:'value'})
    ts = totaal['value'].sum().round(2)
    totaal.loc[totaal.index,'pct'] = totaal.loc[totaal.index,'value']/ts*100
    tsdf = totaal[['Material','value']]
    tsdf.loc[totaal.index,'value'] = np.nan
    tsdf.loc[totaal['Material']=='Stainless Steel','value'] = ts
    for i, c in zip(totaal.Material.unique(), colors):
        # print(i)
        fig.add_trace(go.Bar(x = totaal[totaal['Material']==i][co], y = ['Total'], width=0.5,
                             name = i, marker_color = c, orientation = 'h',
                             text = tsdf[totaal['Material']==i], textposition='auto'))
    fig.update_layout(xaxis_title = l,
                      legend=dict(orientation="h", yanchor="bottom",
                                  # traceorder="reversed",
                                  y=1.02, xanchor="right", x=1),
                      template = 'simple_white+presentation', barmode='stack')
    # fig.update_xaxes(range=[0,110])
    pio.write_image(fig, r'figures/country_{}.pdf'.format(n), width = 800, height = 500)
    pio.write_image(fig, r'figures/country_{}.eps'.format(n), width = 800, height = 500)
    # plotly.offline.plot(fig, filename =
r'figures/country_{}.html'.format(n), auto_open=False) # fig.show() # - # # Plots comparing countries/scenarios # ## Per kWh # + # per KWH colors = px.colors.qualitative.Alphabet labels = ['Embodied Material Emissions [gCO_2/kWh]','Embodied Material Water Usage [L/kWh]', 'Material Costs [Euro‚cent/kWh]','Material Recycling Rate [g/kWh]'] for ne, l in zip(['E', 'W', 'K', 'R'], labels): fig = go.Figure() if ne == 'E' or ne == 'R': data = globals()['{}_tech_sep'.format(ne)] datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/2 datae.index = data.index if ne == 'W': data = globals()['{}_tech_sep'.format(ne)]/1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/2000 datae.index = data.index if ne == 'K': data = globals()['{}_tech_sep'.format(ne)]*1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/0.002 datae.index = data.index for i, c in zip(data.columns, colors): # print(i) fig.add_trace(go.Bar(x = data[i], y = data.index, name = i, marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae[i]) )) fig.update_layout( xaxis_title = l, template = 'simple_white+presentation', barmode='stack') pio.write_image(fig, r'figures/kWh_{}.pdf'.format(ne), width = 500, height = 450) pio.write_image(fig, r'figures/kWh_{}.eps'.format(ne), width = 500, height = 450) # plotly.offline.plot(fig, filename = r'figures/kWh_{}.html'.format(ne), auto_open=False) # fig.show() # - # ## For Country, all factors # + # for 
Country # make subplots for all scenario/year combos for each country # MISSING: ADD ERROR DATA # for ef, ne in zip([cE, cW, cR, cK],['E', 'W', 'R', 'K'] ): colors = px.colors.qualitative.Alphabet labels = ['Embodied Material Emissions [MtCO<sub>2</sub>]','Embodied Material Water Usage [L]', 'Material Costs [million Euro]','Material Recycling Rate [t]'] atechs = ['Wind (Onshore)', 'Solar CSP', 'Solar PV', 'Hydro', 'Geothermal', 'Gas CCS', 'Oil', 'Gas', 'Coal', 'Wind (Offshore)', 'Biomass', 'Nuclear', 'BECCS', 'Hydrogen','Coal CCS'] mats = list(cM.columns) # e_dfK_countries_sep # e_dfEc_tot # make list w names for plot titles for c in dfC['Country'].unique(): globals()['l_{}'.format(c)] = [] for sc, y in zip(dfC[dfC['Country']==c]['Scenario'],dfC[dfC['Country']==c]['Year']): globals()['l_{}'.format(c)].append('{0} {1}'.format(y,sc)) for ne, l in zip(['E', 'W', 'K', 'R'], labels): # 'df{ne}_{country}_{scenario}_{year}_sep' for c in dfC['Country'].unique(): fig = make_subplots(rows=len(dfC[dfC['Country']==c]['Scenario'].unique()), cols=1, subplot_titles=globals()['l_{}'.format(c)], x_title = l, shared_xaxes = True ) print(c) h = len(globals()['l_{}'.format(c)])*330 slegend = [True, False, False, False, False] for sc, ns,sl in zip(dfC[dfC['Country']==c]['Scenario'].unique(),range(1,len(dfC[dfC['Country']==c]['Scenario'].unique())+1),slegend): print(sc,ns) for y in dfC[(dfC['Country']==c)&(dfC['Scenario']==sc)]['Year'].unique(): # range(1,len(dfC[dfC['Country']==c]['Scenario'].unique())+1) # globals()['df{0}_{1}_{2}_{3}_sep'.format(ne,c,sc[0:3],str(y))] if ne == 'E': # data = globals()['df{0}_{1}_{2}_{3}_sep'.format(ne,c,sc[0:3],str(y))] /1e12 data = globals()['df{}c_tech_sep'.format(ne)] data = data[(data['Country']==c)&(data['Scenario']==sc)&(data['Year']==y)] data=data.drop(columns=['Country','Year','Scenario']).set_index('tech') data = data/1e12 datae = pd.DataFrame(np.nan, # index=np.arange(len(globals()['df{0}_{1}_{2}_{3}_sep'.format(ne,c,sc[0:3], # str(y))])), 
index=np.arange(len(data)), # columns=globals()['df{0}_{1}_{2}_{3}_sep'.format(ne,c,sc[0:3],str(y))].columns columns=data.columns ) # ERROR error = globals()['e_df{}c_tot'.format(ne)] error = error[(error['Country']==c)&(error['Scenario']==sc)&(error['Year']==y)][atechs] datae.loc[datae.index,'Stainless Steel'] = error.values/1e12/2 datae.index = data.index if ne == 'W': # data = globals()['df{0}_{1}_{2}_{3}_sep'.format(ne,c,sc[0:3],str(y))] /1000 data = globals()['df{}c_tech_sep'.format(ne)] data = data[(data['Country']==c)&(data['Scenario']==sc)&(data['Year']==y)] data=data.drop(columns=['Country','Year','Scenario']).set_index('tech') data = data/1000 datae = pd.DataFrame(np.nan, index=np.arange(len(data)), columns=data.columns) # ERROR error = globals()['e_df{}c_tot'.format(ne)] error = error[(error['Country']==c)&(error['Scenario']==sc)&(error['Year']==y)][atechs] datae.loc[datae.index,'Stainless Steel'] = error.values/1000/2 datae.index = data.index if ne == 'K' or ne =='R': # data = globals()['df{0}_{1}_{2}_{3}_sep'.format(ne,c,sc[0:3],str(y))] /1e6 data = globals()['df{}c_tech_sep'.format(ne)] data = data[(data['Country']==c)&(data['Scenario']==sc)&(data['Year']==y)] data=data.drop(columns=['Country','Year','Scenario']).set_index('tech') data = data/1e6 datae = pd.DataFrame(np.nan, index=np.arange(len(data)), columns=data.columns) # ERROR error = globals()['e_df{}c_tot'.format(ne)] error = error[(error['Country']==c)&(error['Scenario']==sc)&(error['Year']==y)][atechs] datae.loc[datae.index,'Stainless Steel'] = error.values/1e6/2 datae.index = data.index for i, cl in zip(data.columns, colors): d = data.T.sum().reset_index() techs = d[d[0]>0]['tech'] fig.add_trace(go.Bar(x = data.loc[techs][i], y = data.loc[techs].index, name = i, marker_color = cl, orientation = 'h', showlegend = sl, error_x=dict(type='data', array=datae.loc[techs][i]) ), row = ns, col = 1) fig.update_layout( template = 'simple_white+presentation', barmode='stack') fig.update_layout(legend = 
dict(font = dict(size = 15)))
        pio.write_image(fig, r'figures/countries/{0}_{1}.pdf'.format(c,ne), width = 900, height = h)
        pio.write_image(fig, r'figures/countries/{0}_{1}.eps'.format(c,ne), width = 900, height = h)
        # plotly.offline.plot(fig, filename = r'figures/countries/{0}_{1}.html'.format(c,ne), auto_open=False)
        # fig.show()
# -

# ## Totals and percentages in country

# +
# Bar with totals and total percentages for Country
colors = px.colors.qualitative.Alphabet
labels = ['Total material mass across all technologies for Uganda (%)',
          'Total CO<sub>2</sub> emissions across all technologies for Uganda (%)',
          'Total material costs across all technologies for Uganda [million Euro]']
names = ['material_mass', 'total_emiss', 'total_mat']
colu = ['pct','pct','value']

# emissions per energy technology [gCO2/kWh]
mat_country = C.T.values*M[0:len(C.T)]

for l, n, co in zip(labels, names, colu):
    fig = go.Figure()
    if n == 'material_mass':
        totaal = mat_country.sum().reset_index()
        totaal = totaal.rename(columns = {'index':'Material', 0:'value'})
    if n == 'total_emiss':
        totaal = E_country_sep.sum().reset_index()
        totaal = totaal.rename(columns = {'index':'Material', 0:'value'})
    if n == 'total_mat':  # was `ne`, a stale variable left over from the previous cell
        totaal = K_country_sep.sum().reset_index()
        totaal = (totaal/1e6).rename(columns = {'index':'Material', 0:'value'})
    ts = totaal['value'].sum().round(2)
    totaal.loc[totaal.index,'pct'] = totaal.loc[totaal.index,'value']/ts*100
    tsdf = totaal[['Material','value']]
    tsdf.loc[totaal.index,'value'] = np.nan
    tsdf.loc[totaal['Material']=='Stainless Steel','value'] = ts
    for i, c in zip(totaal.Material.unique(), colors):
        # print(i)
        fig.add_trace(go.Bar(x = totaal[totaal['Material']==i][co], y = ['Total'], width=0.5,
                             name = i, marker_color = c, orientation = 'h',
                             text = tsdf[totaal['Material']==i], textposition='auto'))
    fig.update_layout(xaxis_title = l,
                      legend=dict(orientation="h", yanchor="bottom", y=1.02, xanchor="right", x=1),
                      template = 'simple_white+presentation', barmode='stack')
pio.write_image(fig, r'figures/country_{}.pdf'.format(n), width = 800, height = 500) pio.write_image(fig, r'figures/country_{}.eps'.format(n), width = 800, height = 500) # plotly.offline.plot(fig, filename = r'figures/country_{}.html'.format(n), auto_open=False) # fig.show() # - # ## Employment and tech # year in x axis, jobs in y per type jobs_forplot = pd.read_csv(r'outputs/jobs_forplot.csv') # + colors = px.colors.qualitative.Vivid for c in jobs_forplot['Country'].unique(): for sc in jobs_forplot[jobs_forplot['Country']==c]['scenario'].unique(): data = jobs_forplot[(jobs_forplot['Country']==c)&\ (jobs_forplot['scenario']==sc)&\ (jobs_forplot['parameter']=='New power generation capacity (Aggregate)')&\ (jobs_forplot['Indicator']!='Capacity')] data = data[data['Value']>0] if not data.empty: fig = go.Figure() # print(c, sc) for t, co in zip(data['tech'].unique(), colors): for i, d in zip(data['Indicator'].unique(),['solid','dash','dot']):# ('circle','square','diamond')): fig.add_trace(go.Scatter(x = data[(data['tech']==t)&(data['Indicator']==i)].Year, y = data[(data['tech']==t)&(data['Indicator']==i)].Value, mode = 'lines', name = '{}- {}'.format(t,i) , # marker = dict(symbol=m), # marker_color = co, line = dict(dash = d), line_color = co, stackgroup='one') ) fig.update_layout(xaxis_title = 'Year', yaxis_title = 'Number of jobs', template = 'simple_white+presentation', # barmode='stack' ) # fig.show() pio.write_image(fig, r'figures/countries/{0}_{1}_employment.pdf'.format(c,sc[:3]), width = 1500, height = 1000) pio.write_image(fig, r'figures/countries/{0}_{1}_employment.eps'.format(c,sc[:3]), width = 1500, height = 1000) # - # # Plots for general values # ## Embodied vs use-phase # + # Embodied vs use phase EU = pd.read_excel(mypath+'Excel_model_material_implications_energy_systems.xlsx', sheet_name = 'Graph', skiprows = 44, usecols = "B,T:V", nrows = 15) EU.rename(columns = {'Unnamed: 1':'tech'}, inplace = True) EU = EU[EU['tech']!= 'Wind'] EE_u = 
pd.read_excel(mypath+'Excel_model_material_implications_energy_systems.xlsx', sheet_name = 'Graph', skiprows = 63, usecols = "B:C", nrows = 15) EE_u.rename(columns = {'Unnamed: 1':'tech'}, inplace = True) # EE is the same as e_E_tot colors = px.colors.qualitative.Vivid df = [EU,E_tech] dnames = ['EU','E_tech'] datae_u = EU[['tech','STANDARD ERROR']].rename(columns = {'STANDARD ERROR':'error'}) datae_u = datae_u[~datae_u['tech'].isin(['Hydrogen','BECCS','Coal CCS'])] datae_e = datae.append(e_E_tot.reset_index().rename(columns = {0:'error'})) datae_e = datae_e[~datae_e['tech'].isin(['Hydrogen','BECCS','Coal CCS'])] # sorting data based on use phase fig = go.Figure() for d, dn, c in zip(df, dnames, colors[0:2]): if dn == 'EU': d = EU.sort_values(by = 'AVERAGE', ascending = True) d = d[~d['tech'].isin(['Hydrogen','BECCS','Coal CCS'])] fig.add_trace(go.Bar(x = d.AVERAGE, y = d.tech, name = 'Use-Phase', marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae_u[['error']]) )) if dn == 'E_tech': d = E_tech.reset_index().rename(columns = {0:'energy'}) d = d[~d['tech'].isin(['Hydrogen','BECCS','Coal CCS'])] fig.add_trace(go.Bar(x = d.energy, y = d.tech, name = 'Embodied (materials)' , marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae_e['error']) )) fig.update_layout(xaxis_title = 'Emissions per kWh [gCO<sub>2</sub>/kWh]', template = 'simple_white+presentation', barmode='stack', legend=dict(orientation="h", y=-0.2)) pio.write_image(fig, r'figures/embodied_direct_CO2.pdf', width = 900, height = 700) pio.write_image(fig, r'figures/embodied_direct_CO2.eps', width = 900, height = 700) # plotly.offline.plot(fig, filename = r'figures/embodied_direct_CO2.html', auto_open=False) fig.show() # - # ## General - Embodied, water and costs # + # Embodied, water and costs colors = px.colors.qualitative.Alphabet labels = ['Embodied Material Emissions [gCO<sub>2</sub>/kWh]','Water Usage [L/kWh]', 'Costs [Euro cent/kWh]','Recycling Rate [g/kWh]'] for 
ne, l in zip(['E', 'W', 'K', 'R'], labels): fig = go.Figure() if ne == 'E' or ne =='R': data = globals()['{}_tech_sep'.format(ne)] datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/2 datae.index = data.index if ne == 'W': data = globals()['{}_tech_sep'.format(ne)]/1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values/1000/2 datae.index = data.index if ne == 'K': data = globals()['{}_tech_sep'.format(ne)]*1000 datae = pd.DataFrame(np.nan, index=np.arange(len(globals()['{}_tech_sep'.format(ne)])), columns=globals()['{}_tech_sep'.format(ne)].columns) error = globals()['e_{}_tot'.format(ne)] datae.loc[datae.index,'Stainless Steel'] = error.values*1000/2 datae.index = data.index for i, c in zip(data.columns, colors): # print(i) fig.add_trace(go.Bar(x = data[i], y = data.index, name = i, marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae[i]) )) fig.update_layout( xaxis_title = l, legend=dict(orientation="h", yanchor="bottom", # traceorder="reversed", y=1.01, xanchor="center", x=0.4), template = 'simple_white+presentation', barmode='stack') pio.write_image(fig, r'figures/general_{}.pdf'.format(ne), width = 1000, height = 700) pio.write_image(fig, r'figures/general_{}.eps'.format(ne), width = 1000, height = 700) # plotly.offline.plot(fig, filename = r'figures/general_{}.html'.format(ne), auto_open=False) # fig.show() # - # ## General - Materials per kWh and embodied-material-emissions per kWh # + ## Materials per kWh and embodied-material-emissions per kWh colors = px.colors.qualitative.Alphabet labels = ['Materials [g/kWh]','Embodied Material Emissions [gCO<sub>2</sub>/kWh]'] 
for ne, l in zip(['Mat','Em'], labels): fig = go.Figure() if ne == 'Mat': data = M# /1e12 this would be for Mtonne datae = pd.DataFrame(np.nan, index=np.arange(len(M)), columns=M.columns) error = e_M.T.sum() datae.loc[datae.index,'Stainless Steel'] = error/2 #/1e12 this would be for Mtonne datae.index = data.index # the Mat error needs checking if ne == 'Em': data = E_tech_sep# /1e12 this would be for Mtonne datae = pd.DataFrame(np.nan, index=np.arange(len(E_tech_sep)), columns=E_tech_sep.columns) error = e_E_tech_sep.T.sum() datae.loc[datae.index,'Stainless Steel'] = error/2 datae.index = data.index data = data.sort_values(by = 'tech') for i, c in zip(data.columns, colors): fig.add_trace(go.Bar(x = data[i], y = data.index, name = i, marker_color = c, orientation = 'h', error_x=dict(type='data', array=datae[i]) )) fig.update_layout( xaxis_title = l, legend=dict(orientation="h", yanchor="bottom", # traceorder="reversed", y=1.01, xanchor="center", x=0.4), template = 'simple_white+presentation', barmode='stack') pio.write_image(fig, r'figures/general_{}.pdf'.format(ne), width = 1000, height = 600) pio.write_image(fig, r'figures/general_{}.eps'.format(ne), width = 1000, height = 600) # plotly.offline.plot(fig, filename = r'figures/general_{}.html'.format(ne), auto_open=False) # fig.show() # - # ## General - Bars with total percentage of grams for materials and total percentage of CO2 for materials # + ## Bar with total percentage of material mass and total CO2 for materials general colors = px.colors.qualitative.Alphabet labels = ['Total material mass accross all technologies (%)', 'Total CO<sub>2</sub> emissions accross all technologies (%)'] names = ['material_mass', 'total_emiss'] colu = ['pct','pct'] for l, n, co in zip( labels, names, colu): fig = go.Figure() if n == 'material_mass': totaal = M.sum().reset_index() totaal = totaal.rename(columns = {'index':'Material', 0:'value'}) if n == 'total_emiss': totaal = E_tech_sep.sum().reset_index() totaal = 
totaal.rename(columns = {'index':'Material', 0:'value'}) ts = totaal['value'].sum().round(2) totaal.loc[totaal.index,'pct'] = totaal.loc[totaal.index,'value']/ts*100 tsdf = totaal[['Material','value']] tsdf.loc[totaal.index,'value']=np.nan tsdf.loc[totaal['Material']=='Stainless Steel','value']=ts for i, c in zip(totaal.Material.unique(), colors): # print(i) fig.add_trace(go.Bar(x = totaal[totaal['Material']==i][co], y = ['Total'], width=0.5, name = i , marker_color = c, orientation = 'h', text = tsdf[totaal['Material']==i], textposition='auto' )) fig.update_layout( xaxis_title = l, legend=dict(orientation="h", yanchor="bottom", # traceorder="reversed", y=1.02, xanchor="right", x=1), template = 'simple_white+presentation', barmode='stack') # fig.update_xaxes(range=[0,110]) pio.write_image(fig, r'figures/general_{}.pdf'.format(n), width = 800, height = 500) pio.write_image(fig, r'figures/general_{}.eps'.format(n), width = 800, height = 500) # plotly.offline.plot(fig, filename = r'figures/general_{}.html'.format(n), auto_open=False) # fig.show()
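The totals-and-percentages bookkeeping driving the stacked bars above reduces to a single normalisation step; a small stand-alone version with toy values (not the model outputs):

```python
import pandas as pd

# Toy per-material totals standing in for M.sum() or E_tech_sep.sum()
totaal = pd.DataFrame({'Material': ['Steel', 'Copper', 'Aluminium'],
                       'value': [60.0, 25.0, 15.0]})

# Grand total, then each material's share in percent -- as in the plotting loop
ts = totaal['value'].sum().round(2)
totaal['pct'] = totaal['value'] / ts * 100

print(ts, totaal['pct'].tolist())
```

Each bar trace then plots one material's `pct`, and stacking them reconstructs 100 %.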
notebooks/.ipynb_checkpoints/MAT-DP-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %pylab inline

# ## Notebook magic

from IPython.core.magic import Magics, magics_class, line_cell_magic
from IPython.core.magic import cell_magic, register_cell_magic, register_line_magic
from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
import subprocess
import os

# +
@magics_class
class PyboardMagic(Magics):
    @cell_magic
    @magic_arguments()
    @argument('-skip')
    @argument('-unix')
    @argument('-pyboard')
    @argument('-file')
    @argument('-data')
    @argument('-time')
    @argument('-memory')
    def micropython(self, line='', cell=None):
        args = parse_argstring(self.micropython, line)
        if args.skip: # doesn't care about the cell's content
            print('skipped execution')
            return None # do not parse the rest

        if args.unix: # tests the code on the unix port. Note that this works on unix only
            with open('/dev/shm/micropython.py', 'w') as fout:
                fout.write(cell)
            proc = subprocess.Popen(["../../micropython/ports/unix/micropython", "/dev/shm/micropython.py"],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            print(proc.stdout.read().decode("utf-8"))
            print(proc.stderr.read().decode("utf-8"))
            return None

        if args.file: # can be used to copy the cell content onto the pyboard's flash
            spaces = "    "
            try:
                with open(args.file, 'w') as fout:
                    fout.write(cell.replace('\t', spaces))
                print('written cell to {}'.format(args.file))
            except:
                print('Failed to write to disc!')
            return None # do not parse the rest

        if args.data: # can be used to load data from the pyboard directly into kernel space
            message = pyb.exec(cell)
            if len(message) == 0:
                print('pyboard >>>')
            else:
                print(message.decode('utf-8'))
                # register new variable in user namespace
                self.shell.user_ns[args.data] = string_to_matrix(message.decode("utf-8"))

        if args.time: # measures the time of executions
            pyb.exec('import utime')
            message = pyb.exec('t = utime.ticks_us()\n' + cell +
                               '\ndelta = utime.ticks_diff(utime.ticks_us(), t)' +
                               "\nprint('execution time: {:d} us'.format(delta))")
            print(message.decode('utf-8'))

        if args.memory: # prints out memory information
            message = pyb.exec('from micropython import mem_info\nprint(mem_info())\n')
            print("memory before execution:\n========================\n", message.decode('utf-8'))
            message = pyb.exec(cell)
            print(">>> ", message.decode('utf-8'))
            message = pyb.exec('print(mem_info())')
            print("memory after execution:\n========================\n", message.decode('utf-8'))

        if args.pyboard:
            message = pyb.exec(cell)
            print(message.decode('utf-8'))

ip = get_ipython()
ip.register_magics(PyboardMagic)
# -

# ## pyboard

import pyboard
pyb = pyboard.Pyboard('/dev/ttyACM0')
pyb.enter_raw_repl()

pyb.exit_raw_repl()
pyb.close()

# +
# %%micropython -pyboard 1

import utime
import ulab as np

def timeit(n=1000):
    def wrapper(f, *args, **kwargs):
        func_name = str(f).split(' ')[1]
        def new_func(*args, **kwargs):
            run_times = np.zeros(n, dtype=np.uint16)
            for i in range(n):
                t = utime.ticks_us()
                result = f(*args, **kwargs)
                run_times[i] = utime.ticks_diff(utime.ticks_us(), t)
            print('{}() execution times based on {} cycles'.format(func_name, n))
            print('\tbest: %d us'%np.min(run_times))
            print('\tworst: %d us'%np.max(run_times))
            print('\taverage: %d us'%np.mean(run_times))
            print('\tdeviation: +/-%.3f us'%np.std(run_times))
            return result
        return new_func
    return wrapper

def timeit(f, *args, **kwargs):
    func_name = str(f).split(' ')[1]
    def new_func(*args, **kwargs):
        t = utime.ticks_us()
        result = f(*args, **kwargs)
        print('execution time: ', utime.ticks_diff(utime.ticks_us(), t), ' us')
        return result
    return new_func
# -

# __END_OF_DEFS__

# # Polynomials
#
# Functions in the polynomial sub-module can be invoked by importing the module first.
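On CPython, without `ulab`, the behaviour of `polyval` can be imitated with a few lines of Horner's rule (a sketch of the semantics, not the ulab implementation):

```python
def polyval(p, x):
    """Evaluate the polynomial with coefficients p (highest power first)
    at each point of the iterable x, via Horner's rule."""
    out = []
    for xi in x:
        acc = 0
        for coeff in p:
            acc = acc * xi + coeff
        out.append(acc)
    return out

p = [1, 1, 1, 0]                     # x^3 + x^2 + x
print(polyval(p, [0, 1, 2, 3, 4]))   # [0, 3, 14, 39, 84], as in the ulab example
```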
# ## polyval # # `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyval.html # # `polyval` takes two arguments, both arrays or other iterables. # + # %%micropython -unix 1 import ulab as np from ulab import poly p = [1, 1, 1, 0] x = [0, 1, 2, 3, 4] print('coefficients: ', p) print('independent values: ', x) print('\nvalues of p(x): ', poly.polyval(p, x)) # the same works with one-dimensional ndarrays a = np.array(x) print('\nndarray (a): ', a) print('value of p(a): ', poly.polyval(p, a)) # - # ## polyfit # # `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html # # polyfit takes two, or three arguments. The last one is the degree of the polynomial that will be fitted, the last but one is an array or iterable with the `y` (dependent) values, and the first one, an array or iterable with the `x` (independent) values, can be dropped. If that is the case, `x` will be generated in the function, assuming uniform sampling. # # If the length of `x`, and `y` are not the same, the function raises a `ValueError`. # + # %%micropython -unix 1 import ulab as np from ulab import poly x = np.array([0, 1, 2, 3, 4, 5, 6]) y = np.array([9, 4, 1, 0, 1, 4, 9]) print('independent values:\t', x) print('dependent values:\t', y) print('fitted values:\t\t', poly.polyfit(x, y, 2)) # the same with missing x print('\ndependent values:\t', y) print('fitted values:\t\t', poly.polyfit(y, 2)) # - # ### Execution time # # `polyfit` is based on the inversion of a matrix (there is more on the background in https://en.wikipedia.org/wiki/Polynomial_regression), and it requires the intermediate storage of `2*N*(deg+1)` floats, where `N` is the number of entries in the input array, and `deg` is the fit's degree. The additional computation costs of the matrix inversion discussed in [inv](#inv) also apply. 
The example from above needs around 150 microseconds to return: # + # %%micropython -pyboard 1 import ulab as np from ulab import poly @timeit def time_polyfit(x, y, n): return poly.polyfit(x, y, n) x = np.array([0, 1, 2, 3, 4, 5, 6]) y = np.array([9, 4, 1, 0, 1, 4, 9]) time_polyfit(x, y, 2)
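For a quick host-side sanity check, the same quadratic fit can be reproduced with CPython's numpy (assuming numpy is available on the host; this is not part of the on-board timing run itself, but `ulab`'s `poly.polyfit` mirrors this API):

```python
import numpy as np

# Same data as the pyboard example: y = (x - 3)**2, an exact quadratic
x = np.array([0, 1, 2, 3, 4, 5, 6])
y = np.array([9, 4, 1, 0, 1, 4, 9])
coeffs = np.polyfit(x, y, 2)
print(np.round(coeffs, 6))  # [ 1. -6.  9.]
```

Because the data are an exact quadratic, the fitted coefficients recover x² − 6x + 9 up to rounding.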
docs/ulab-poly.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Load TERMINO X MATERIA Data # # Builds the DataFrame of case terminations (terminos) by subject matter (materia) # # Rev: 29-10-2020 # + import os import pandas as pd import numpy as np from pyarrow import feather from tqdm import tqdm from src.data import cleandata # - path_raw = "../data/raw/pjud" archivos = os.listdir(path_raw) tqdm.pandas() # + # Build a dataframe with the "Términos por Materia Penal" data for the years 2015 to 2020 dataframes = [] for archivo in archivos: if archivo.find("Términos por Materia Penal") != -1: df = pd.read_csv(f"{path_raw}/{archivo}", sep = ";", encoding = 'cp850', low_memory = True) dataframes.append(df) df_termino_materia = pd.concat(dataframes, axis = 0) # - df_termino_materia.columns df_termino_materia['SISTEMA'].unique() # Drop the records tied to the METGE system (management goals), since those cases exist only to meet KPI targets df_metge = df_termino_materia[df_termino_materia['SISTEMA']=='METGE'] df_termino_materia.drop(df_metge.index, axis=0, inplace=True) # + # Standardize variable names df_termino_materia.rename(columns = {'CÓD. CORTE':'COD. CORTE', 'CÓD. TRIBUNAL':'COD. TRIBUNAL', 'CÓD. MATERIA':'COD. MATERIA', 'MOTIVO DE TÉRMINO':'MOTIVO TERMINO', 'DURACIÓN CAUSA':'DURACION CAUSA', 'FECHA TÉRMINO':'FECHA TERMINO', 'MES TÉRMINO':'MES TERMINO', 'AÑO TÉRMINO':'AÑO TERMINO', 'TOTAL TÉRMINOS':'TOTAL TERMINOS' },inplace = True) df_termino_materia.drop(['N°','SISTEMA'], axis = 'columns', inplace = True) # + # CONVERT FROM FLOAT TO INTEGER df_termino_materia['COD. CORTE'] = df_termino_materia['COD. CORTE'].fillna(0).astype(np.int16) df_termino_materia['COD. TRIBUNAL'] = df_termino_materia['COD. TRIBUNAL'].fillna(0).astype(np.int16) df_termino_materia['COD. MATERIA'] = df_termino_materia['COD. MATERIA'].fillna(0).astype(np.int16) df_termino_materia['DURACION CAUSA'] = df_termino_materia['DURACION CAUSA'].fillna(0).astype(np.int16) df_termino_materia['AÑO TERMINO'] = df_termino_materia['AÑO TERMINO'].fillna(0).astype(np.int16) df_termino_materia['TOTAL TERMINOS'] = df_termino_materia['TOTAL TERMINOS'].fillna(0).astype(np.int8) # + # Convert the date columns df_termino_materia['FECHA INGRESO'] = df_termino_materia['FECHA INGRESO'].progress_apply(cleandata.convierte_fecha) df_termino_materia['FECHA TERMINO'] = df_termino_materia['FECHA TERMINO'].progress_apply(cleandata.convierte_fecha) # + # Strip whitespace from object-type columns df_termino_materia = df_termino_materia.progress_apply(cleandata.elimina_espacios, axis=0) # + # Remove accents from object columns cols = df_termino_materia.select_dtypes(include = ["object"]).columns df_termino_materia[cols] = df_termino_materia[cols].progress_apply(cleandata.elimina_tilde) # + # Convert variables to categoricals df_termino_materia['CORTE'] = df_termino_materia['CORTE'].astype('category') df_termino_materia['MOTIVO TERMINO'] = df_termino_materia['MOTIVO TERMINO'].astype('category') # - # Keep only 'Ordinaria' (ordinary) cases df_termino_materia['TIPO CAUSA'].unique() tipo_causa = df_termino_materia[df_termino_materia['TIPO CAUSA']!='Ordinaria'] df_termino_materia.drop(tipo_causa.index, axis=0, inplace=True) # + # Reset the index before writing to feather df_termino_materia.reset_index(inplace = True) # - df_termino_materia # + # Save the dataset as a feather file path_interim = "../data/interim/pjud" os.makedirs(path_interim, exist_ok = True) df_termino_materia.to_feather(f'{path_interim}/TerminoMateria_feather') # -
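The `cleandata` helpers are project-specific and not shown here; a minimal, hypothetical sketch of the accent-removal step applied above (the real `src.data.cleandata.elimina_tilde` may differ) could look like:

```python
import unicodedata
import pandas as pd

def elimina_tilde(serie):
    # Hypothetical stand-in for src.data.cleandata.elimina_tilde:
    # decompose each string (NFKD) and drop the combining accent marks
    return serie.map(lambda s: ''.join(
        c for c in unicodedata.normalize('NFKD', s)
        if not unicodedata.combining(c)) if isinstance(s, str) else s)

col = pd.Series(['TÉRMINO', 'Duración Causa'])
print(elimina_tilde(col).tolist())  # ['TERMINO', 'Duracion Causa']
```

Non-string values pass through untouched, which matters because the columns still contain NaN before the fillna steps.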
notebooks/.ipynb_checkpoints/3.2-jalvaradoruiz-carga-limpieza-data-terminos-materia-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.9 64-bit # language: python # name: python3 # --- import glob import numpy as np import pandas as pd import re import ast # Constants PHARMA_PATH = '../../../data/twitter/combined data/pharma companies' GOVT_INSTITUTES_PATH = '../../../data/twitter/combined data/public health agencies' NGO_PATH = '../../../data/twitter/combined data/ngo' # ### Load data topic_df = pd.read_csv('../../../data/topic-keywords.csv') topic_df.columns def isPhraseIn(phrase, text): ''' Returns a boolean value testifying if the phrase exists in the tweet ''' return re.search(r"\b{}\b".format(phrase), text, re.IGNORECASE) is not None # + # pre_covid_topic_df = pd.DataFrame(columns=['username','topic','tweetCount']) during_covid_topic_df = pd.DataFrame(columns=['username','topic','tweetCount']) for file in glob.glob(PHARMA_PATH + "/*.csv"): user_df = pd.read_csv(file) username = user_df['username'].unique()[0] # Divide dataframe into pre and post-covid times # Convert 'created_at' column to datetime user_df['created_at'] = user_df['created_at'].str[:-6] user_df['created_at'] = pd.to_datetime(user_df['created_at']) # Sort by datetime ascending user_df = user_df.sort_values(by='created_at') # Divide as per dates # pre_covid_df = user_df.loc[user_df['created_at'] <= '2020-02-26 23:59:59'] during_covid_df = user_df.loc[user_df['created_at'] >= '2020-02-27 00:00:00'] # print('*'*100) # print('Pre-COVID') # for topic_index, topic_row in topic_df[topic_df['time-phase']=='Pre-COVID'].iterrows(): # topic = topic_row['topic'] # keywords = ast.literal_eval(topic_row['topic-keywords']) # topic_user_tweet_count = 0 # for index, row in pre_covid_df.iterrows(): # for phrase in keywords: # if(isinstance(row.tweet, float)): # row.tweet = str(row.tweet) # if phrase in row.tweet: # topic_user_tweet_count += 1 # print(username, 
topic, topic_user_tweet_count) # pre_covid_topic_df = pre_covid_topic_df.append({'username':username, 'topic':topic, 'tweetCount':topic_user_tweet_count}, ignore_index=True) print('='*100) print('During COVID: ', username) for topic_index, topic_row in topic_df[topic_df['time-phase']=='During COVID'].iterrows(): topic = topic_row['topic'] keywords = ast.literal_eval(topic_row['topic-keywords']) topic_user_tweet_count = 0 for index, row in during_covid_df.iterrows(): for phrase in keywords: if(isinstance(row.tweet, float)): row.tweet = str(row.tweet) if phrase in row.tweet: topic_user_tweet_count += 1 print(topic, topic_user_tweet_count) during_covid_topic_df = during_covid_topic_df.append({'username':username, 'topic':topic, 'tweetCount':topic_user_tweet_count}, ignore_index=True) # pre_covid_topic_df.to_csv('pre-covid-pharma-companies.csv', index=False) during_covid_topic_df.to_csv('during-covid-pc.csv', index=False) # -
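The nested per-row loops above can be replaced by a vectorized whole-word count per phrase. A sketch assuming the same `tweet` column — note the loop's plain `phrase in row.tweet` is a substring test, which differs from `isPhraseIn`'s word-boundary match:

```python
import re
import pandas as pd

def count_tweets_with_phrase(df, phrase):
    # Whole-word, case-insensitive match over the 'tweet' column;
    # missing tweets (NaN) count as no match
    pattern = r"\b{}\b".format(re.escape(phrase))
    return int(df['tweet'].fillna('').str.contains(pattern, case=False, regex=True).sum())

tweets = pd.DataFrame({'tweet': ['COVID vaccine news', 'vaccinated today', None]})
print(count_tweets_with_phrase(tweets, 'vaccine'))  # 1
```

Here "vaccinated" does not match "vaccine" because of the `\b` boundaries, whereas the substring test in the loop would count it.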
code/content-analysis/topic-modelling/topic-propagation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Univariate time series prediction with energy consumption data # In this example, we will be solving a problem of the domain of regression. # For this reason we will build a multi layer RNN with two LSTMs. # The type of regression we will do is of the "Many to one" type, because the network will receive a sequence of energy consumption values, and will try to output the next value, based on the previous 4 registers. # # The dataset we will be working on is a compendium of many measurements of power consumption of one # home, throughout a period of time. As we could infer, this kind of behaviour can easily # follow patterns (It increases when the persons uses the microwave to prepare breakfast, and # computers after the wake up hour, can decrease a bit in the afternoon, and then increase at # night with all the lights, decreasing to zero starting from midnight until next wake up # hour). So let's try to model for this behavior in a sample case. # + # %matplotlib inline # %config InlineBackend.figure_formats = {'png', 'retina'} import numpy as np import pandas as pd import tensorflow as tf from matplotlib import pyplot as plt from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.layers.recurrent import LSTM from keras.layers import Dropout # - # ## Dataset description and loading # In this example we will be using the Electricity Load Diagrams Data Sets, from <NAME> # (site: https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014). # This is the description of the original dataset: # # Data set has no missing values. # Values are in kW of each 15 min. To convert values in kWh values must be divided by 4. # Each column represent one client. Some clients were created after 2011. 
In these cases # consumption was considered zero. # All time labels report to Portuguese hour. However all days present 96 measures (24*4). # Every year in March time change day (which has only 23 hours) the values between 1:00 # am and 2:00 am are zero for all points. Every year in October time change day (which has # 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two # hours. # # # In order to simplify our model description, we took just one client's complete measurements, # and converted its format to standard CSV. It is located in the data subfolder of this chapter's # code folder. # # So we will load the first 1500 values of the consumption of a sample home of the dataset. df = pd.read_csv("data/elec_load.csv", error_bad_lines=False) plt.subplot() plot_test, = plt.plot(df.values[:1500], label='Load') plt.legend(handles=[plot_test]) # If we take a look at this representation (looking at the first 1500 samples) we see an initial # transient state, probably from when the measurements were put in place, and then a really # clear cycle of high and low consumption levels. # From simple observation we also see that the cycles are roughly 100 samples long, pretty # close to the 96 samples per day this dataset has. # # ### Dataset preprocessing # To ensure better convergence of the backpropagation method, we should # normalize the input data. # So we will apply the classic scale-and-center technique, subtracting the mean # value and scaling by the maximum value. # To get the needed values, we use pandas' describe() method.
print(df.describe()) array=(df.values - 145.33) /338.21 plt.subplot() plot_test, = plt.plot(array[:1500], label='Normalized Load') plt.legend(handles=[plot_test]) # In this step we will prepare our input dataset: we need an input x [the previous 5 values] with a corresponding target y [the value after 5 timesteps]. # Then we will assign the first 13000 elements to the train set, and the following 1000 samples to the testing set. # + listX = [] listy = [] X={} y={} for i in range(0,len(array)-6): listX.append(array[i:i+5].reshape([5,1])) listy.append(array[i+6]) arrayX=np.array(listX) arrayy=np.array(listy) X['train']=arrayX[0:13000] X['test']=arrayX[13000:14000] y['train']=arrayy[0:13000] y['test']=arrayy[13000:14000] # - # Now we will build the model, which will be a dual LSTM, with a dropout layer at the end of each. Additionally, we will add a dense layer at the end, and a linear activation final unit, to obtain a single float prediction. # + #Build the model model = Sequential() model.add(LSTM( units=50, input_shape=(None, 1), return_sequences=True)) model.add(Dropout(0.2)) # input_shape is only needed on the first layer; the second LSTM infers its input model.add(LSTM( units=200, return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense(units=1)) model.add(Activation("linear")) model.compile(loss="mse", optimizer="rmsprop") # - # Now it's time to run the model and adjust the weights. The model fitter will use 8% of the dataset values as the validation set. # + #Fit the model to the data model.fit(X['train'], y['train'], batch_size=512, epochs=10, validation_split=0.08) # - # After rescaling, it's time to see how our model predicts the values, compared with the actual test values, which didn't participate in the training of the models, to understand how the model is able to generalize the behavior of the sample home.
# + # Rescale the test dataset and predicted data test_results = model.predict( X['test']) test_results = test_results * 338.21 + 145.33 y['test'] = y['test'] * 338.21 + 145.33 plt.figure(figsize=(15,10)) plot_predicted, = plt.plot(test_results, label='predicted') plot_test, = plt.plot(y['test'] , label='test'); plt.legend(handles=[plot_predicted, plot_test]);
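The windowing cell above can be generalized into a small helper (a sketch; the notebook's own loop keeps a one-step gap between the window and its target, while this version targets the value immediately after the window):

```python
import numpy as np

def make_windows(series, lookback=5, horizon=1):
    # Slide a window of `lookback` values over the series; the target is
    # the value `horizon` steps past the end of each window
    X, y = [], []
    for i in range(len(series) - lookback - horizon):
        X.append(series[i:i + lookback].reshape(lookback, 1))
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

data = np.arange(10, dtype=float)
X, y = make_windows(data)
print(X.shape, y[0])  # (4, 5, 1) 5.0
```

Returning the `(samples, timesteps, 1)` shape directly matches what the LSTM's `input_shape=(None, 1)` expects.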
Chapter07/CH7_time_series.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd from scipy.io import loadmat # machine learning libraries from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split #import xgboost as xgb behav8 = pd.read_csv('data_behavioral/gamble.data.s08.csv') behav8.head() # + from scipy.io import loadmat badTrials = loadmat('bad_trials_OFC.mat') badTrials #subjects except for subject 8 & 9 don't match for trial #s #behavior some are timed out # - neur8 = loadmat('s08_ofc_hg_events.mat') # + list(neur8) ''' Electrophysiological data Each .mat file contains the following variables: 1. game_events_hg: a nTrials x nTimePoints x nElectrodes matrix containing the HG activity across all electrodes for each subject, sampled at 1KHz. Each trial contains the data at [-1,2]s around each game presentation event. 2. game_window_events_hg: same data as game_events_hg after window averaging (200ms windows at 50ms increments; see paper Methods for details). 3. buttonpress_events_hg: as game_events_hg, but centered around each button press (subject choice) event. 4. buttonpress_window_events_hg: as game_window_events_hg, but derived from buttonpress_events_hg.
''' neur8['buttonpress_events_hg'] electrodes = ['Electrode_' + str(x) for x in range(1, 11)] #button8 = pd.DataFrame(data = neur8['buttonpress_events_hg'], columns = electrodes) neur8['buttonpress_events_hg'].shape #button8.head() neur8['game_events_hg'].shape # - electrodes neur8['buttonpress_events_hg'].shape # + #create data frames for each electrode. use a dictionary for convenience and naming. #this is for buttonpress_events_hg & game_events_hg electrodesBPE = {} electrodesGE = {} for x in range(0, 10): electrodesBPE[electrodes[x]] = neur8['buttonpress_events_hg'][:, :, x] electrodesGE[electrodes[x]] = neur8['game_events_hg'][:, :, x] # reduce to 0 to 1000 ms # + ''' data to generate: - average wave for each electrode, each trial (each second) - min, max for each electrode, each trial (each second) --> add these points to behav8 can split dataset into 3 parts for seconds. ''' # do an example for electrode 1, trial 1. # what does the data represent? difference between game events & button press event? import matplotlib.pyplot as plt plt.plot(neur8['buttonpress_events_hg'][0,:,0].T) plt.show() # - plt.plot(neur8['game_events_hg'][0,:,0].T) plt.show() neur8['buttonpress_window_events_hg'].shape # + # do an example for electrode 1, trial 1. 
electrodesBPE['Electrode_1'].shape avgE1 = [np.mean(i) for i in electrodesBPE['Electrode_1']] #np.average(electrodesBPE['Electrode_1'][0, :]) len(avgE1) # initialize columns for electrode in electrodes: behav8[electrode + '_avgBP'] = [np.mean(i) for i in electrodesBPE[electrode]] behav8[electrode + '_minBP'] = [min(i) for i in electrodesBPE[electrode]] behav8[electrode + '_maxBP'] = [max(i) for i in electrodesBPE[electrode]] behav8[electrode + '_sdBP'] = [np.std(i) for i in electrodesBPE[electrode]] # also for game events behav8[electrode + '_avgGE'] = [np.mean(i) for i in electrodesGE[electrode]] behav8[electrode + '_minGE'] = [np.min(i) for i in electrodesGE[electrode]] behav8[electrode + '_maxGE'] = [np.max(i) for i in electrodesGE[electrode]] behav8[electrode + '_sdGE'] = [np.std(i) for i in electrodesGE[electrode]] # there are many other things to do with the data, such as adding in avg per second. # - np.max(electrodesBPE['Electrode_1'][0, :]) behav8.head() # + #convert left and right in choice location to 0 and 1 (respectively) behav8['choice.location'].isnull().any() behav8['convertedLocation'] = np.nan for index, row in behav8.iterrows(): if row['choice.location'] == 'Left': behav8.at[index, 'convertedLocation'] = 0 else: behav8.at[index, 'convertedLocation'] = 1 behav8.head() # + from sklearn.model_selection import train_test_split X = behav8.drop(['choice.class', 'outcome', 'choice.location'], axis = 1) Y = behav8['outcome'] x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=100) # + from sklearn import linear_model # Name our logistic regression object LogisticRegressionModel = linear_model.LogisticRegression() # we create an instance of logistic Regression Classifier and fit the data. 
print ('Training a logistic Regression Model..') LogisticRegressionModel.fit(x_train, y_train) training_accuracy=LogisticRegressionModel.score(x_train,y_train) print ('Training Accuracy: ', training_accuracy) # - test_accuracy=LogisticRegressionModel.score(x_test,y_test) print('Accuracy of the model on unseen test data: ',test_accuracy) #serious overfitting in the above. # going to check it out in r... behav8.to_csv('behav8_v1.csv') # + import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression import mne from mne.datasets import sample from mne.decoding import (SlidingEstimator, GeneralizingEstimator, cross_val_multiscore, LinearModel, get_coef) import pandas as pd # - best_trials = pd.read_csv('best_trials_master_df.csv') neur8 = loadmat('s08_ofc_hg_events.mat') neur8
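The per-electrode feature engineering above (mean/min/max/std per trial) boils down to numpy axis reductions; a sketch on a stand-in trials×time array:

```python
import numpy as np

def summarize_trials(trials):
    # trials: (n_trials, n_timepoints) HG matrix for one electrode
    # -> one summary value per trial for each statistic
    return {'avg': trials.mean(axis=1),
            'min': trials.min(axis=1),
            'max': trials.max(axis=1),
            'sd': trials.std(axis=1)}

hg = np.array([[0.0, 1.0, 2.0],
               [3.0, 3.0, 3.0]])
stats = summarize_trials(hg)
print(stats['avg'])  # [1. 3.]
```

Calling the reductions on the full array replaces the list comprehensions over `electrodesBPE[electrode]` with a single vectorized pass each.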
Model History/explore_lillian.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using some methods on lists # Print the list given below. list = ["Afridi",2,3.14,True] list # * Add an element using the append() method list.append("Lol") list # * Add an element at a specific position using the insert() method list.insert(1,4) list # * Add elements using the extend() method list.extend(["af",4]) list # * Add elements using the addition " + " operator list = list + [False] list # * Delete an element using the del statement del list[4] list del list[-1] list # * Delete an element by value using the remove() method list.remove(4) list # * Delete the last element using the pop() method list.pop() # * Reverse the elements using the reverse() method list.reverse() list # * Clear the whole list list.clear() list type(list) # * Sort using the sort() method list2 = [0,8,7] list2.sort() list2 a = [1,8,5,3] a.sort() print(a) # * Nested lists & Access elements dhaka = ['gazipur',20,False] ctg = ['anwara',30,True] bangladesh = [dhaka,ctg,['comilla',50,True]] print(bangladesh) print(bangladesh[0][2]) # # Let's Rock !
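One caveat about the cells above: naming a variable `list` shadows the built-in constructor, so `list(range(3))` stops working for the rest of the session. A safer pattern is to pick another name:

```python
# Using a non-reserved name keeps the built-in `list` available
items = ["Afridi", 2, 3.14, True]
items.append("Lol")
print(items)           # ['Afridi', 2, 3.14, True, 'Lol']
print(list(range(3)))  # [0, 1, 2] -- the built-in still works
```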
Allinone py/List2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt titanic = sns.load_dataset('titanic') titanic.head(10) titanic.shape titanic.describe() titanic['survived'].value_counts() sns.countplot(titanic['survived'],label="Count") # + cols = ['who', 'sex', 'pclass', 'sibsp', 'parch', 'embarked'] n_rows = 2 n_cols = 3 fig, axs = plt.subplots(n_rows, n_cols, figsize=(n_cols*3.2,n_rows*3.2)) for r in range(0,n_rows): for c in range(0,n_cols): i = r*n_cols+ c ax = axs[r][c] sns.countplot(titanic[cols[i]], hue=titanic["survived"], ax=ax) ax.set_title(cols[i]) ax.legend(title="survived", loc='upper right') plt.tight_layout() # - titanic.groupby('sex')[['survived']].mean() titanic.pivot_table('survived', index='sex', columns='class') titanic.pivot_table('survived', index='sex', columns='class').plot() sns.barplot(x='class', y='survived', data=titanic) age = pd.cut(titanic['age'], [0, 18, 80]) titanic.pivot_table('survived', ['sex', age], 'class') titanic.isna().sum() for val in titanic: print(titanic[val].value_counts()) print() titanic = titanic.drop(['deck', 'embark_town', 'alive', 'class', 'alone', 'adult_male', 'who'], axis=1) titanic = titanic.dropna(subset =['embarked', 'age']) titanic.shape titanic.dtypes print(titanic['sex'].unique()) print(titanic['embarked'].unique()) from sklearn.preprocessing import LabelEncoder labelencoder = LabelEncoder() titanic.iloc[:,2]= labelencoder.fit_transform(titanic.iloc[:,2].values) titanic.iloc[:,7]= labelencoder.fit_transform(titanic.iloc[:,7].values) print(titanic['sex'].unique()) print(titanic['embarked'].unique()) X = titanic.iloc[:, 1:8].values Y = titanic.iloc[:, 0].values from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = 
train_test_split(X, Y, test_size = 0.2, random_state = 0) from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) def models(X_train,Y_train): from sklearn.linear_model import LogisticRegression log = LogisticRegression(random_state = 0) log.fit(X_train, Y_train) from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2) knn.fit(X_train, Y_train) from sklearn.svm import SVC svc_lin = SVC(kernel = 'linear', random_state = 0) svc_lin.fit(X_train, Y_train) from sklearn.svm import SVC svc_rbf = SVC(kernel = 'rbf', random_state = 0) svc_rbf.fit(X_train, Y_train) from sklearn.naive_bayes import GaussianNB gauss = GaussianNB() gauss.fit(X_train, Y_train) from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(criterion = 'entropy', random_state = 0) tree.fit(X_train, Y_train) from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0) forest.fit(X_train, Y_train) print('[0]Logistic Regression Training Accuracy:', log.score(X_train, Y_train)) print('[1]K Nearest Neighbor Training Accuracy:', knn.score(X_train, Y_train)) print('[2]Support Vector Machine (Linear Classifier) Training Accuracy:', svc_lin.score(X_train, Y_train)) print('[3]Support Vector Machine (RBF Classifier) Training Accuracy:', svc_rbf.score(X_train, Y_train)) print('[4]Gaussian Naive Bayes Training Accuracy:', gauss.score(X_train, Y_train)) print('[5]Decision Tree Classifier Training Accuracy:', tree.score(X_train, Y_train)) print('[6]Random Forest Classifier Training Accuracy:', forest.score(X_train, Y_train)) return log, knn, svc_lin, svc_rbf, gauss, tree, forest model = models(X_train,Y_train) from sklearn.metrics import confusion_matrix for i in range(len(model)): cm = confusion_matrix(Y_test, model[i].predict(X_test)) #extracting TN, FP, FN, TP TN, FP, 
FN, TP = confusion_matrix(Y_test, model[i].predict(X_test)).ravel() print(cm) print('Model[{}] Testing Accuracy = "{} !"'.format(i, (TP + TN) / (TP + TN + FN + FP))) print()# Print a new line forest = model[6] importances = pd.DataFrame({'feature':titanic.iloc[:, 1:8].columns,'importance':np.round(forest.feature_importances_,3)}) importances = importances.sort_values('importance',ascending=False).set_index('feature') importances importances.plot.bar() pred = model[6].predict(X_test) print(pred) print() print(Y_test) # + my_survival = [[3,1,21,0, 0, 0, 1]] # scale the new sample with the scaler fitted on the training data pred = model[6].predict(sc.transform(my_survival)) print(pred) if pred == 0: print('Oh no! You didn\'t make it') else: print('Nice! You survived') # -
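Since the models were fitted on scaled features, new samples must go through the same scaler. Bundling the scaler and model in a pipeline makes that mistake impossible; a sketch with random stand-in data (not the notebook's titanic arrays):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

# The pipeline applies the fitted scaler automatically inside predict()
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))
y = (X[:, 0] > 0).astype(int)

pipe = make_pipeline(StandardScaler(),
                     RandomForestClassifier(n_estimators=10, random_state=0))
pipe.fit(X, y)
print(pipe.predict(X[:1]).shape)  # (1,)
```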
Titanic Survival Prediction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + ## Strings Python programs are full of strings, and you run into them constantly when reading code. Strings are a very common data type in Python: log printing, function docstrings, database access, basic variable operations and so on all use strings. ### String basics What is a string? A string is a sequence of individual characters, usually enclosed in single quotes (''), double quotes ("") or triple quotes (''' ''' or """ """, which are equivalent), for example: ```python name = 'jason' city = 'beijing' text = "welcome to jike shijian" ``` ### docstring Python's triple-quoted strings are mainly used for multi-line text such as function docstrings: ```python def calculate_similarity(item1, item2): """ Calculate similarity between two items Args: item1: 1st item item2: 2nd item Returns: similarity score between item1 and item2 """ ``` Think about it: why do we need to write a docstring when we write a function? # + def calculate_similarity(item1, item2): """ Calculate similarity between two items Args: item1: 1st item item2: 2nd item Returns: similarity score between item1 and item2 """ pass help(calculate_similarity) # - # # # ### Common string operations # * Slicing # ```python # name = 'jason' # name[1:3] # 'as' # ``` # # * Indexing # ```python # name = 'jason' # name[0] # 'j' # name[-1] # 'n' # ``` # * Iteration # ```python # name = 'jason' # for _ in name: # print(_) # ``` # * Concatenation # ```python # str1 = 'abc' # str2 = 'efg' # str1 += str2 # equivalent to str1 = str1 + str2 # ``` # * Stripping whitespace # ```python # string.strip(str): removes str characters from both ends # string.lstrip(str): removes str characters from the start only # string.rstrip(str): removes str characters from the end only # ``` # # * Counting character occurrences # ```python # s = 'testtesttt' # s.count('t') # # 6 # ``` # ### Strings are immutable # Python strings are immutable. # ```python # s = 'hello' # s[0] = 'H' # raises TypeError # # # s = 'H' + s[1:] # s = s.replace('h', 'H') # ``` # + s = 'hello' s[0] = 'H' s = 'H' + s[1:] s = s.replace('h', 'H') # - s = 'hello' print(s) s = 'H' + s[1:] print(s)
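A small experiment makes the immutability point concrete (the item assignment raises `TypeError`, so we catch it and build a new string instead):

```python
s = 'hello'
try:
    s[0] = 'H'              # strings are immutable: item assignment fails
except TypeError as e:
    print('TypeError:', e)

# Instead, construct a new string
print('H' + s[1:])          # Hello
print(s.replace('h', 'H'))  # Hello
```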
module1/jupyter/string.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # %load_ext autoreload # %autoreload 2 from comet_ml import Experiment import numpy as np import scipy.spatial import pandas as pd import comet_ml import sklearn.decomposition import matplotlib.pyplot as plt # import keras from sklearn import preprocessing from sklearn.metrics import pairwise_distances,mean_absolute_error, mean_squared_error import matplotlib.pyplot as plt import seaborn as sns from utils.readProfiles import readMergedProfiles,readMergedProfiles2 from utils.pred_models import * from utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile # from utils import networksEvol, tsne, readProfiles import umap # - # #### In this notebook we test how combining two data modalities will improve performance of the following tasks: # 1 - MOA prediction # 2 - ?
# ## MOA Prediction: # - Methods: # - Baseline: simple concatenation # - SNF # - probabilistic modeling # list(set(mergProf_treatLevel.columns.tolist())-set(l1k_features)) import os os.listdir('./preprocessed_data/LINCS-Pilot1/CellPainting/') # ## Treatment level # + dataset_rootDir='./';pertColName='PERT' # datasets=['LUAD', 'TAORF', 'LINCS', 'CDRP-bio']; # dataset options: 'CDRP' , 'LUAD', 'TAORF', 'LINCS','CDRP-bio' dataset='CDRP'; # CP Profile Type options: 'augmented' , 'normalized', 'normalized_variable_selected' # lincs --> normalized_feature_select_dmso profileType='normalized' profileLevel='treatment'; #'replicate' or 'treatment' highRepOverlapEnabled=0 # n of samples for replicate picking options: numbers or, 'max' nRep=1 mergProf_repLevel,mergProf_treatLevel,cp_features,l1k_features=\ readMergedProfiles(dataset_rootDir,dataset,profileType,profileLevel,nRep,highRepOverlapEnabled); # mergProf_repLevel,mergProf_treatLevel,l1k_features,cp_features,pertColName=readMergedProfiles(dataset,profileType,nRep) # cp_features,l1k_features=cp_features.tolist(),l1k_features.tolist() # mergProf_repLevel['Compounds']=mergProf_repLevel['PERT'].str[0:13] if profileLevel=='replicate': l1k=mergProf_repLevel[[pertColName]+l1k_features] cp=mergProf_repLevel[[pertColName]+cp_features] elif profileLevel=='treatment': l1k=mergProf_treatLevel[list(set(mergProf_treatLevel.columns.tolist())-set(cp_features))] cp=mergProf_treatLevel[list(set(mergProf_treatLevel.columns.tolist())-set(l1k_features))] scaler_ge = preprocessing.StandardScaler() scaler_cp = preprocessing.StandardScaler() l1k_scaled=l1k.copy() l1k_scaled[l1k_features] = scaler_ge.fit_transform(l1k[l1k_features].values) cp_scaled=cp.copy() cp_scaled[cp_features] = scaler_cp.fit_transform(cp[cp_features].values.astype('float64')) if 1: cp_scaled[cp_features] =preprocessing.MinMaxScaler(feature_range=(0, 1)).fit_transform(cp_scaled[cp_features].values) l1k_scaled[l1k_features] =preprocessing.MinMaxScaler(feature_range=(0,
1)).fit_transform(l1k_scaled[l1k_features].values) # moa_col='moa' moa_col='Metadata_moa' if 1: cp=cp_scaled.copy() l1k=l1k_scaled.copy() # merged_scaled=pd.merge(cp, l1k, how='inner',on=['PERT',moa_col]); # ### l1k[moa_col]=cp[moa_col] # for CDRP #### merged_scaled=pd.merge(cp, l1k, how='inner',on=['PERT',moa_col]); # for CDRP merged_scaled=pd.concat([cp, l1k], axis=1) merged_scaled = merged_scaled.loc[:,~merged_scaled.columns.duplicated()] # - merged_scaled = merged_scaled.loc[:,~merged_scaled.columns.duplicated()] # just_comp_treatLevel=mergProf_treatLevel[mergProf_treatLevel['PERT']=='DMSO'] # mergProf_treatLevel[mergProf_treatLevel['PERT']=='DMSO'].shape l1k.shape,cp.shape mergProf_treatLevel[mergProf_treatLevel['Metadata_moa'].isnull()].shape mergProf_treatLevel.shape mergProf_treatLevel[~mergProf_treatLevel['Metadata_moa'].isnull()].shape[0]/mergProf_treatLevel.shape[0] mergProf_treatLevel.shape # LINCS: Replicate Level Shapes (nSamples x nFeatures): cp: 52223 , 1670 , l1k: 27837 , 978 # l1k n of rep: 3.0 # # cp n of rep: 5.0 # CP: from 9394 to 4647 # l1k: from 8369 to 2338 # CP and l1k high rep overlap: 1140 # Treatment Level Shapes (nSamples x nFeatures+metadata): (1141, 1671) (1141, 979) Merged Profiles Shape: (1141, 2649) repp_df=mergProf_treatLevel.groupby(['Metadata_moa']).size().reset_index().rename(columns={0:'nrep'}).groupby(['nrep']).size().reset_index() plt.bar(repp_df['nrep'].values, repp_df[0].values); repp_df=mergProf_treatLevel.groupby(['moa']).size().reset_index().rename(columns={0:'nrep'}).groupby(['nrep']).size().reset_index() plt.bar(repp_df['nrep'].values, repp_df[0].values); # mergProf_treatLevel.groupby(['moa']).size().reset_index().rename(columns={0:'nrep'}).sort_values(by='nrep') moa_col='moa' nSamplesforEachMOAclass=mergProf_treatLevel.groupby(['Metadata_moa']).size().reset_index().rename(columns={0:'size'}).sort_values(by=['size'],ascending=False).reset_index(drop=True) 
listOfSelectedMoAs=nSamplesforEachMOAclass[nSamplesforEachMOAclass['size']>1]['Metadata_moa'].tolist() print(len(listOfSelectedMoAs)) # mergProf_treatLevel['Metadata_moa']=mergProf_treatLevel['Metadata_moa'].str.lower() nSamplesforEachMOAclass=mergProf_treatLevel.groupby(['Metadata_moa']).size().reset_index().rename(columns={0:'size'}).sort_values(by=['size'],ascending=False).reset_index(drop=True) listOfSelectedMoAs=nSamplesforEachMOAclass[nSamplesforEachMOAclass['size']>1]['Metadata_moa'].tolist() print(len(listOfSelectedMoAs)) listOfSelectedMoAs # + # mergProf_treatLevel['moa'].str.lower().unique().shape # + # mergProf_treatLevel['moa'].unique().shape # - # mergProf_treatLevel.Metadata_moa.unique().shape nSamplesforEachMOAclass merged_scaled.columns[merged_scaled.columns.str.contains('moa')] merged_scaled[['Metadata_alternative_moa_x', 'Metadata_moa_x', 'moa_x', 'moa_y', 'Metadata_moa_y', 'Metadata_alternative_moa_y']] xxx=mergProf_treatLevel.groupby(['Compounds']).size().reset_index() # xxx[xxx[0]==2] # xxx[xxx[0]==2] xxx nSamplesforEachMOAclass # mergProf_treatLevel[mergProf_treatLevel['Compounds']=='BRD-K73323637'].Metadata_moa nSamplesforEachMOAclass2 # ### Single Modalities Classification performance # + from sklearn.decomposition import PCA # # %matplotlib inline # Dimension reduction and clustering libraries import umap # import hdbscan import sklearn.cluster as cluster from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score from sklearn.model_selection import LeaveOneOut,cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier # nSamplesMOA=10 # results in grant for CDRP are for this number of MOAs nSamplesMOA=5 # from MulticoreTSNE import MulticoreTSNE as TSNE # df_1 = df_1.interpolate() ############# ########## # mergProf_treatLevel['Metadata_moa']=mergProf_treatLevel['Metadata_moa'].str.lower() 
mergProf_treatLevel=mergProf_treatLevel[~mergProf_treatLevel[moa_col].isnull()].reset_index(drop=True) mergProf_treatLevel['Compounds']=mergProf_treatLevel['PERT'].str[0:13] nSamplesforEachMOAclass=mergProf_treatLevel.groupby(['Compounds']).sample(1).groupby([moa_col]).size().\ reset_index().rename(columns={0:'size'}).sort_values(by=['size'],ascending=False).reset_index(drop=True) nSamplesforEachMOAclass2=mergProf_treatLevel.groupby([moa_col]).size().reset_index().rename(columns={0:'size'}).sort_values(by=['size'],ascending=False).reset_index(drop=True) # lkjklj listOfSelectedMoAs=nSamplesforEachMOAclass[nSamplesforEachMOAclass['size']>nSamplesMOA][moa_col].tolist() le = preprocessing.LabelEncoder() le.fit(listOfSelectedMoAs) # corresPertID=[mergProf_treatLevel[mergProf_treatLevel['Metadata_moa']==i]['Metadata_pert_id'] for i in listOfSelectedMoAs] # filteredMOAs=mergProf_treatLevel[mergProf_treatLevel['Metadata_moa'].isin(listOfSelectedMoAs)].reset_index(drop=True) IDs4filteredMOAs=mergProf_treatLevel[mergProf_treatLevel[moa_col].isin(listOfSelectedMoAs)][pertColName].tolist() cp['Compounds']=cp['PERT'].str[0:13] l1k['Compounds']=l1k['PERT'].str[0:13] merged_scaled['Compounds']=merged_scaled['PERT'].str[0:13] data4eval=[[cp,cp_features],[l1k,l1k_features],[merged_scaled,cp_features+l1k_features]] # for r in range(len(data4eval)): #range(1):# # print(r) # domXdata=data4eval[r][0]; # domXfeats=data4eval[r][1] # domXfeats['Metadata_moa_num']=le.transform(domXfeats['Metadata_moa'].tolist()) # filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) # data = filteredMOAs[domXfeats].values; # labels=filteredMOAs.Metadata_moa.tolist() # loocv = LeaveOneOut() # model_loocv = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000) # results_loocv = cross_val_score(model_loocv, data, labels, cv=loocv) # print("Accuracy: %.2f%%" % (results_loocv.mean()*100.0)) # - # cp.shape,l1k.shape,merged_scaled.shape # merged_scaled['PERT'] # 
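# The commented-out baseline above — leave-one-out cross-validation with a
# logistic-regression classifier — can be sketched on small synthetic data
# (all data below is illustrative, not the notebook's profiles):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Two well-separated synthetic "MoA" classes in a 10-dimensional profile space.
data = np.vstack([rng.normal(0, 1, (15, 10)), rng.normal(2, 1, (15, 10))])
labels = [0] * 15 + [1] * 15

# Each sample is held out once; the model is refit on the remaining samples.
loocv = LeaveOneOut()
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, data, labels, cv=loocv)
print("Accuracy: %.2f%%" % (scores.mean() * 100.0))
```

LOOCV fits one model per sample, so on the real treatment-level profiles this
is far more expensive than the k-fold splits used later in the notebook.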
mergProf_treatLevel[moa_col].unique() # mergProf_treatLevel # IDs4filteredMOAs mergProf_treatLevel[mergProf_treatLevel[moa_col].isin(listOfSelectedMoAs)]['Compounds'].shape len(listOfSelectedMoAs),len(IDs4filteredMOAs),mergProf_treatLevel.shape # merged_scaled['PERT'] # listOfSelectedMoAs # cp_features pertColName # + # import os # os.mkdir('../../results/dataIntegration') # - len(listOfSelectedMoAs) #filt set (n>1) CDRP len(listOfSelectedMoAs) #full set (n>1) cdrp len(listOfSelectedMoAs) #full set (n>3) len(listOfSelectedMoAs) #filt set(n>3) len(listOfSelectedMoAs) #filt set (n>1) len(listOfSelectedMoAs) #full set (n>1) # + cp['Compounds']=cp['PERT'].str[0:13] l1k['Compounds']=l1k['PERT'].str[0:13] data4eval=[[cp,cp_features],[l1k,l1k_features],[merged_scaled,cp_features+l1k_features]] # - # ls ../../results/ # DataFuseResults # pd.to_csv('../../results/dataIntegration/fusion_res.xlsx') res_path='../../results/dataIntegration/fusion_res.xlsx' saveAsNewSheetToExistingFile(res_path,DataFuseResults,'logisticReg') saveAsNewSheetToExistingFile(res_path,DataFuseResults_loaded,'logisticReg') # + # DataFuseResults_loaded=pd.read_excel(res_path, sheet_name=None)['logisticReg'] # DataFuseResults_loaded # - DataFuseResults=pd.DataFrame(columns=["Data","Modality"]) i=0 for d in ['Filtered','All']: for m in ['CP','GE','CP+GE']: temp_df=pd.DataFrame(data=Acc_all2[i],columns=['acc']) temp_df['Modality']=m temp_df['Data']=d i+=1 DataFuseResults=DataFuseResults.append(temp_df) plt.figure(figsize=(4,5)) sns.set_theme(style="whitegrid") ax = sns.boxplot(x="Modality", y="acc", hue="Data",data=DataFuseResults, palette="Set1") ax = sns.swarmplot(x="Modality", y="acc", hue="Data",data=DataFuseResults,dodge=True,color=".2") plt.figure(figsize=(4,5)) sns.set_theme(style="whitegrid") ax = sns.boxplot(x="Data", y="acc", hue="Modality",data=DataFuseResults, palette="Set1") # ax = sns.swarmplot(x="Modality", y="acc", hue="Data",data=DataFuseResults,dodge=True,color=".2") # 
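# Collecting per-fold accuracies into one long-format table (as `DataFuseResults`
# above) can also be done with `pd.concat`, which avoids `DataFrame.append`
# (deprecated and later removed in pandas). The accuracy values below are
# illustrative:

```python
import pandas as pd

acc_arrays = {"CP": [62.0, 58.5, 60.1], "GE": [51.0, 49.5, 52.2],
              "CP+GE": [65.3, 63.8, 66.0]}  # illustrative per-fold accuracies

frames = []
for modality, accs in acc_arrays.items():
    tmp = pd.DataFrame({"acc": accs})
    tmp["Modality"] = modality
    tmp["Data"] = "All"
    frames.append(tmp)
# One long-format table: one row per (fold, modality), ready for sns.boxplot.
results = pd.concat(frames, ignore_index=True)
print(results.shape)  # (9, 3)
```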
DataFuseResults_loaded['NMI']=DataFuseResults_loaded['NMI']*100 DataFuseResults # + # DataFuseResults=DataFuseResults_loaded.copy() DataFuseResults=DataFuseResults.rename(columns={'acc':'Accuracy'}) fig, axes = plt.subplots(1,4,figsize=(10,5)) sns.set_context("paper") sns.set_style("whitegrid") # sns.rcParams['patch.force_edgecolor'] = True # for d in range(5):#(len(datasets)): sns.boxplot(x="Data", y="Accuracy", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples'],\ palette="Set1",ax=axes[0]) axes[0].axhline(y=(1/557)*100,linestyle=':',color='r'); axes[0].set_ylim(0,20) axes[0].set_title('(a)'); sns.boxplot(x="Data", y="Accuracy", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='Filtered Samples'],\ palette="Set1",ax=axes[1]) axes[1].axhline(y=(1/179)*100,linestyle=':',color='r'); axes[1].set_ylim(0,70) axes[1].set_title('(b)'); sns.boxplot(x="Data", y="Accuracy", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples - CCA'],\ hue_order=['CP','GE','CP+GE'],palette="Set1",ax=axes[2]) axes[2].axhline(y=(1/557)*100,linestyle=':',color='r'); axes[2].set_ylim(0,20) axes[2].set_title('(c)'); sns.boxplot(x="Data", y="NMI", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples - SNF'],\ palette="Set1",ax=axes[3]) # axes[3].axhline(y=(1/557)*100,linestyle=':',color='r'); axes[3].set_ylim(10,100) axes[3].set_title('(d)'); fig.tight_layout() # sns.distplot(pred_scoress,kde=True,hist=True,bins=100,label=datasets[d],ax=axes[d,m],norm_hist=True,color='r') # sns.distplot(rand_scoress,kde=True,hist=True,bins=100,label='random',ax=axes[d,m],norm_hist=True) # print(np.percentile(rand_scoress,90)) # axes[d,m].set_xlim(-1,1) # axes[d,m].set_xlim(-0.5,0.6) # # axes[d,m].set_ylim(0,15) # axes[d,m].axvline(x=np.percentile(rand_scoress,90),linestyle=':',color='r'); # axes[len(datasets)-1,m].set_xlabel("Accuracy ($R^2$)"); # axes[d,m].legend(); # axes[0,m].set_title(models[m]); # + import matplotlib.style as style 
style.use('seaborn-colorblind') DataFuseResults=DataFuseResults.rename(columns={'acc':'Accuracy'}) hfont = {'fontname':'sans-serif'} # plt.title('title',**csfont) fig, axes = plt.subplots(1,3,figsize=(9,6)) fig.suptitle('MoA Classification', fontsize=15,**hfont) sns.set_context("paper") sns.set_style("whitegrid") # sns.rcParams['patch.force_edgecolor'] = True # for d in range(5):#(len(datasets)): sns.boxplot(x="Data", y="Accuracy", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples'],\ ax=axes[0]) axes[0].axhline(y=(1/557)*100,linestyle=':',color='r'); axes[0].set_ylim(0,30) axes[0].set_title('(a)'); sns.boxplot(x="Data", y="Accuracy", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='Filtered Samples'],\ ax=axes[2]) axes[2].axhline(y=(1/179)*100,linestyle=':',color='r'); axes[2].set_ylim(0,100) axes[2].set_title('(c)'); axes[2].set(ylabel=None) sns.boxplot(x="Data", y="Accuracy", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples - CCA'],\ ax=axes[1]) # hue_order=['CP','GE','CP+GE'] axes[1].axhline(y=(1/557)*100,linestyle=':',color='r'); axes[1].set_ylim(0,30) axes[1].set_title('(b)'); axes[1].set(ylabel=None) # sns.boxplot(x="Data", y="NMI", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples - SNF'],\ # palette="cividis",ax=axes[3]) # # axes[3].axhline(y=(1/557)*100,linestyle=':',color='r'); # axes[3].set_ylim(0,100) # axes[3].set_title('(d)'); # fig.tight_layout() fig.savefig('moa_clussif.eps') # + fig, axes = plt.subplots(1,1,figsize=(2.5,6)) fig.suptitle('MoA Clustering', fontsize=15,**hfont) sns.boxplot(x="Data", y="NMI", hue="Modality",data=DataFuseResults[DataFuseResults['Data']=='All Samples - SNF'],\ palette="cividis",ax=axes) # axes[3].axhline(y=(1/557)*100,linestyle=':',color='r'); axes.set_ylim(0,100) axes.set_title('(d)'); # fig.tight_layout() fig.savefig('moa_clustering.eps',bbox_inches='tight') # - # DataFuseResults_loaded 
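# The dotted red `axhline`s in the panels above mark chance-level accuracy,
# i.e. 100/n_classes for uniform random guessing; 557 and 179 are the MoA-class
# counts used in the full and filtered settings above:

```python
def chance_accuracy(n_classes: int) -> float:
    # Expected accuracy (in percent) of uniform random guessing over n classes.
    return 100.0 / n_classes

print(round(chance_accuracy(557), 3))  # 0.18
print(round(chance_accuracy(179), 3))  # 0.559
```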
DataFuseResults_loaded=DataFuseResults_loaded.append(ress_df,ignore_index=True) # + # listOfSelectedMoAs # list(le.classes_) # train_index.shape,filteredMOAs.shape # test_index # Acc_all # DataFuseResults[DataFuseResults['Data']=='All'] # DataFuseResults # temp_df DataFuseResults=pd.DataFrame(columns=["Data","Modality"]) # DataFuseResults_loaded['acc']=DataFuseResults_loaded['acc']*100 # i=0 for d in ['All Samples - CCA']: # for d in ['All Samples - CCA']: for n,m in zip([0,1,2,3],['CP','GE','Early Fusion','Late Fusion']): temp_df=pd.DataFrame(data=acc_array_cca_full2_cdrp[:,n],columns=['acc']) temp_df['Modality']=m temp_df['Data']=d # i+=1 DataFuseResults=DataFuseResults.append(temp_df,ignore_index=True) # for d in ['Filt Samples - CCA']: # for n,m in zip([0,1,2,3],['CP','GE','Early Fusion','Late Fusion']): # temp_df=pd.DataFrame(data=acc_array_filt2[:,n],columns=['acc']) # temp_df['Modality']=m # temp_df['Data']=d # # i+=1 # DataFuseResults=DataFuseResults.append(temp_df,ignore_index=True) for d in ['All Samples']: for n,m in zip([0,1,2,3],['CP','GE','Early Fusion','Late Fusion']): temp_df=pd.DataFrame(data=acc_array_fullSet2_cdrp[:,n],columns=['acc']) temp_df['Modality']=m temp_df['Data']=d # i+=1 DataFuseResults=DataFuseResults.append(temp_df,ignore_index=True) for d in ['Filtered Samples']: for n,m in zip([0,1,2,3],['CP','GE','Early Fusion','Late Fusion']): temp_df=pd.DataFrame(data=acc_array_filtSet2_cdrp[:,n],columns=['acc']) temp_df['Modality']=m temp_df['Data']=d # i+=1 DataFuseResults=DataFuseResults.append(temp_df,ignore_index=True) # ress_df['Data']='All Samples - SNF' # ress_df_filt['Data']='All Samples - SNF' ress_df_full_cdrp['Data']='All Samples - SNF' # ress_df_full_cdrp DataFuseResults=DataFuseResults.append(ress_df_full_cdrp,ignore_index=True) DataFuseResults['NMI']=DataFuseResults['NMI']*100 # + # acc_array_fullSet=np.copy(acc_array_filtSet) # - DataFuseResults_lincs=DataFuseResults.copy() # test_index # filteredMOAs # len(data4eval) # 
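# The cross-validation loops below guard against leakage at the compound level:
# any compound that appears in the test fold is removed from the training fold
# entirely, not just the held-out rows. A minimal sketch of that split logic
# (toy data; column names are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Compounds": ["c1", "c1", "c2", "c3", "c3", "c4"],
    "moa": ["a", "a", "b", "a", "a", "b"],
})

test_index = np.array([0, 2])  # e.g. one fold produced by KFold
# Compounds present in the test fold ...
unq_comp_test = df.loc[test_index, "Compounds"].unique().tolist()
# ... and every row sharing those compounds, test rows included.
leaky = df[df["Compounds"].isin(unq_comp_test)].index.values
# Train on everything that is neither held out nor compound-shared with it.
train_index = np.array(sorted(set(df.index) - set(test_index) - set(leaky)))
print(train_index.tolist())  # [3, 4, 5]
```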
xx=filteredMOAs.groupby(['Metadata_moa_num']).sample(1) # xx filteredMOAs # + # # xx.sample(20) # # filteredMOAs['Compounds'].unique().shape # from sklearn.model_selection import KFold # kf = KFold(n_splits=n_of_random_sel,random_state=2,shuffle=True) # # KFold(n_splits=2, random_state=None, shuffle=False) # for train_index, test_index in kf.split(filteredMOAs): # print(test_index) # len(train_index) # filteredMOAs[moa_col].unique() # filteredMOAs['Metadata_moa_num'].unique() # - # filteredMOAs.groupby([moa_col]).size().describe() filteredMOAs.shape # + from sklearn.utils import class_weight from sklearn.naive_bayes import GaussianNB,ComplementNB from sklearn.metrics import accuracy_score from sklearn.model_selection import KFold domXdata=merged_scaled.copy(); # domXfeats=data4eval[r][1] # outdim_size=40 filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) filteredMOAs['Metadata_moa_num']=le.transform(filteredMOAs[moa_col].tolist()) Acc_list=[] # n_of_random_sel=20 # n_of_random_sel=50 n_of_random_sel=20 acc_array_fullSet2_cdrp=np.zeros((n_of_random_sel,4)); # acc_array_filtSet2_cdrp=np.zeros((n_of_random_sel,4)); # for i in range(n_of_random_sel): kf = KFold(n_splits=n_of_random_sel,random_state=1,shuffle=True) i=0 for train_index0, test_index in kf.split(filteredMOAs): print('rand ',i) # for outdim_size in range(10,110,10): # test_index=filteredMOAs.groupby(['Metadata_moa_num']).sample(1).sample(50).index.values # hgfhf unq_comp_test=filteredMOAs.loc[test_index,'Compounds'].unique().tolist() # print(filteredMOAs.loc[test_index,'Metadata_moa_num'].unique().tolist()) # print(len(filteredMOAs.loc[train_index,'Metadata_moa_num'].unique().tolist())) testFiltMoA=filteredMOAs.loc[test_index,:] comp_to_remove_from_train=filteredMOAs[filteredMOAs['Compounds'].isin(unq_comp_test)].index.values # print(test_index[0:30]) # khhk train_index=np.array(list(set(filteredMOAs.index.values)-set(test_index)-set(comp_to_remove_from_train))) 
train_moaClassess=filteredMOAs.loc[train_index,'Metadata_moa_num'].unique().tolist() test_moaClassess=filteredMOAs.loc[test_index,'Metadata_moa_num'].unique().tolist() # if len(filteredMOAs.loc[train_index,'Metadata_moa_num'].unique().tolist())=70: # sfsdssf test_cl_toRemove=list(set(test_moaClassess)-set(train_moaClassess)) test_ind_toRemo=testFiltMoA[testFiltMoA['Metadata_moa_num'].isin(test_cl_toRemove)].index print(test_cl_toRemove,test_ind_toRemo) test_index=np.array(list(set(test_index)-set(test_ind_toRemo))) # data_train = filteredMOAs.loc[train_index,domXfeats].values; labels_train=filteredMOAs.loc[train_index,'Metadata_moa_num'].tolist() # data_test = filteredMOAs.loc[test_index,domXfeats].values; labels_test=filteredMOAs.loc[test_index,'Metadata_moa_num'].tolist() class_weightt = class_weight.compute_class_weight(class_weight='balanced',classes=np.unique(labels_train),y=labels_train) # model_tr = RandomForestClassifier(n_estimators=10,max_features=100,class_weight="balanced") probs=[] for n,dt_modality in zip([0,1,2],data4eval): data_m=dt_modality[0][dt_modality[0][pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) dt_train=data_m.loc[train_index,dt_modality[1]].values; dt_test=data_m.loc[test_index,dt_modality[1]].values; # model_tr = RandomForestClassifier(n_estimators=10,max_features=100,class_weight="balanced") # model_tr = GaussianNB() # model_tr = ComplementNB() model_tr = LogisticRegression(multi_class='multinomial',n_jobs=1,max_iter=1000,class_weight="balanced") model_tr.fit(dt_train,labels_train) accc=model_tr.score(dt_test,labels_test) probs.append(model_tr.predict_proba(dt_test)) model_tr.classes_ # print(accc) acc_array_fullSet2_cdrp[i,n]=accc*100 # acc_array_filtSet2_cdrp[i,n]=accc*100 Acc_list.append(accc); # labels_lateFusion=list(np.argmax((probs[0]+probs[1])/2,axis=1)) labels_lateFusion=model_tr.classes_[np.argmax((probs[0]+probs[1])/2,axis=1)] acc_array_fullSet2_cdrp[i,n+1]=accuracy_score(labels_test,labels_lateFusion)*100 # 
acc_array_filtSet2_cdrp[i,n+1]=accuracy_score(labels_test,labels_lateFusion)*100 i+=1 # hfh print(np.median(acc_array_fullSet2_cdrp,axis=0)) # print('Accuracy: ',r, np.mean(Acc_list)*100) # - # train_index # data_m # len(dt_modality[1]) print(np.median(acc_array_fullSet2_cdrp,axis=0)) # + # acc_array_fullSet2 # list(set(test_moaClassess)-set(train_moaClassess) # model_tr.classes_[np.argmax((probs[0]+probs[1])/2,axis=1)] np.median(acc_array_filtSet2,axis=0) # - np.argmax((probs[0]+probs[1])/2,axis=1) model_tr.predict(dt_test) np.argmax((probs[0]+probs[1])/2,axis=1) # accuracy_score(labels_test,labels_lateFusion) # len(labels_lateFusion),len(labels_test) # labels_lateFusion np.median(acc_array_filtSet2,axis=0) np.median(acc_array_fullSet,axis=0) np.median(acc_array_fullSet2,axis=0) # np.argmax(model_tr.predict_proba(dt_train),axis=1) # labels_lateFusion=np.argmax((probs[0]+probs[1])/2,axis=1) # accc accuracy_score(model_tr.predict(dt_test),labels_test) # model_tr.predict(dt_train) accuracy_score(labels_test,labels_test) # dt_modality[0] len(labels_test),dt_test.shape,accc # model_tr.predict(dt_test) # train_index test_index # + Acc_all=[] Acc_all2=[] n_of_random_sel=10 acc_array_fullSet=np.zeros((n_of_random_sel,3)); from sklearn.utils import class_weight for r in range(len(data4eval)): #range(1):# print(r) domXdata=data4eval[r][0]; domXfeats=data4eval[r][1] filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) filteredMOAs['Metadata_moa_num']=le.transform(filteredMOAs[moa_col].tolist()) # filteredMOAs['Compounds']=filteredMOAs['PERT'].str[0:13] Acc_list=[] for i in range(n_of_random_sel): test_index=filteredMOAs.groupby(['Metadata_moa_num']).sample(1).index.values unq_comp_test=filteredMOAs.loc[test_index,'Compounds'].unique().tolist() comp_to_remove_from_train=filteredMOAs[filteredMOAs['Compounds'].isin(unq_comp_test)].index.values # print(test_index[0:30]) # khhk 
        train_index=np.array(list(set(filteredMOAs.index.values)-set(test_index)-set(comp_to_remove_from_train)))

        data_train = filteredMOAs.loc[train_index,domXfeats].values;
        labels_train=filteredMOAs.loc[train_index,'Metadata_moa_num'].tolist()
        data_test = filteredMOAs.loc[test_index,domXfeats].values;
        labels_test=filteredMOAs.loc[test_index,'Metadata_moa_num'].tolist()

        class_weightt = class_weight.compute_class_weight(class_weight='balanced',classes=np.unique(labels_train),y=labels_train)
#         model_tr = RandomForestClassifier(n_estimators=10,max_features=100,class_weight="balanced")
        # class_weight expects a dict (or 'balanced'), not the raw array:
        model_tr = LogisticRegression(multi_class='multinomial',n_jobs=100,max_iter=1000,
                                      class_weight=dict(zip(np.unique(labels_train),class_weightt)))
        model_tr.fit(data_train,labels_train)
        accc=model_tr.score(data_test,labels_test)
        print(accc)
        Acc_list.append(accc)
        acc_array_fullSet[i,r]=accc*100
#         khjlhglg
    print(Acc_list)
    print('Accuracy: ',r, np.mean(Acc_list)*100)
    Acc_all2.append(Acc_list)
    Acc_all.append(np.mean(Acc_list)*100)

#     loocv = LeaveOneOut()
#     model_loocv = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000)
#     results_loocv = cross_val_score(model_loocv, data, labels, cv=loocv)
#     print("Accuracy: %.2f%%" % (results_loocv.mean()*100.0))
# -

domXfeats

Acc_all2

Acc_all3=Acc_all2.copy()

Acc_all3

# Example output for the merged modality (r=2):
# 2 [0.09156193895870736, 0.08797127468581688, 0.09694793536804308, 0.09694793536804308]
# Accuracy:  2 9.33572710951526

# domXfeats['Metadata_moa']

# domXdata['Metadata_moa'].tolist()

model_tr.predict(data_test)

# Acc_list

class_weightt.shape

# +
import multiprocessing
multiprocessing.cpu_count()
# -

Acc_all2

Acc_all

from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(target_Y), y=target_Y)
model = LogisticRegression(class_weight=dict(zip(np.unique(target_Y), class_weights)))

labels_train

# +
# filteredMOAs.loc[test_index].groupby(['Metadata_moa']).sample(1).index.values

# +
# filteredMOAs
# -

# - CP
#     - Accuracy: 66.83%
# - L1k
#     - Accuracy: 55.74%
# - CP + L1k
#     - Accuracy: 67.43%

filteredMOAs['Metadata_moa'].unique().shape
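# The late-fusion step above averages the per-class probabilities of the two
# single-modality classifiers and takes the arg-max; note that `np.argmax`
# returns a column position, so it must be mapped back through
# `model.classes_`. A self-contained sketch on synthetic two-modality data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
labels = np.array([0] * 20 + [1] * 20)
# Two "modalities" of the same samples (e.g. CP and GE features).
X_cp = rng.normal(labels[:, None], 1.0, (40, 5))
X_ge = rng.normal(labels[:, None] * 2, 1.0, (40, 3))

m_cp = LogisticRegression(max_iter=1000).fit(X_cp, labels)
m_ge = LogisticRegression(max_iter=1000).fit(X_ge, labels)

# Average the class-probability matrices, then map column indices to labels.
probs = (m_cp.predict_proba(X_cp) + m_ge.predict_proba(X_ge)) / 2
fused_pred = m_cp.classes_[np.argmax(probs, axis=1)]
print((fused_pred == labels).mean())
```

This assumes both models were fit on the same label set, so their `classes_`
arrays (and hence `predict_proba` column order) agree.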
# ### Clustering performance

# +
from sklearn.decomposition import PCA
# # %matplotlib inline

# Dimension reduction and clustering libraries
import umap
# import hdbscan
import sklearn.cluster as cluster
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score
from sklearn.model_selection import LeaveOneOut,cross_val_score
from sklearn.linear_model import LogisticRegression

# nSamplesMOA=10 # results in grant for CDRP are for this number of MOAs
nSamplesMOA=1
# from MulticoreTSNE import MulticoreTSNE as TSNE
# df_1 = df_1.interpolate()
#############
##########
# mergProf_treatLevel['Metadata_moa']=mergProf_treatLevel['Metadata_moa'].str.lower()
mergProf_treatLevel=mergProf_treatLevel[~mergProf_treatLevel['Metadata_moa'].isnull()].reset_index(drop=True)

nSamplesforEachMOAclass=mergProf_treatLevel.groupby(['Metadata_moa']).size().reset_index().rename(columns={0:'size'}).sort_values(by=['size'],ascending=False).reset_index(drop=True)
listOfSelectedMoAs=nSamplesforEachMOAclass[nSamplesforEachMOAclass['size']>nSamplesMOA]['Metadata_moa'].tolist()

# corresPertID=[mergProf_treatLevel[mergProf_treatLevel['Metadata_moa']==i]['Metadata_pert_id'] for i in listOfSelectedMoAs]
# filteredMOAs=mergProf_treatLevel[mergProf_treatLevel['Metadata_moa'].isin(listOfSelectedMoAs)].reset_index(drop=True)
IDs4filteredMOAs=mergProf_treatLevel[mergProf_treatLevel['Metadata_moa'].isin(listOfSelectedMoAs)][pertColName].tolist()

data4eval=[[cp,cp_features],[l1k,l1k_features],[mergProf_treatLevel,cp_features+l1k_features]]

for r in range(len(data4eval)): #range(1):#
    print(r)
    domXdata=data4eval[r][0];
    domXfeats=data4eval[r][1]
    filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True)
    data = filteredMOAs[domXfeats].values;
    labels=filteredMOAs.Metadata_moa.tolist()

    loocv = LeaveOneOut()
    model_loocv = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000)
    results_loocv = cross_val_score(model_loocv, data, labels, cv=loocv)
    print("Accuracy: %.2f%%" % (results_loocv.mean()*100.0))
# -

# ## Modality Integration using CCA

# +
from sklearn.cross_decomposition import CCA
# from DeepCCAmaster import DeepCCA,models,objectives

def cca_analysis2(l1k_train, cp_train, l1k_test, cp_test, outdim_size):
    GE_train = np.asarray(l1k_train)[:,1:]
    MF_train = np.asarray(cp_train)[:,1:]
    GE_test = np.asarray(l1k_test)[:,1:]
    MF_test = np.asarray(cp_test)[:,1:]

    cca = CCA(n_components=outdim_size)
    cca.fit(GE_train, MF_train)
    X_c, Y_c = cca.transform(GE_test, MF_test)
    # wwmm=DeepCCA.linear_cca(new_data[0][0], new_data[0][1], outdim_size)
    return X_c, Y_c, []
# -

data4eval=[[cp,cp_features],[l1k,l1k_features],[merged_scaled,cp_features+l1k_features]]

filteredMOAs['Compounds'].unique().shape

# +
from sklearn.utils import class_weight

domXdata=merged_scaled.copy();
# domXfeats=data4eval[r][1]
outdim_size=40
filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True)
filteredMOAs['Metadata_moa_num']=le.transform(filteredMOAs[moa_col].tolist())
Acc_list=[]

# n_of_random_sel=50
# # acc_array=np.zeros((n_of_random_sel,4));
# acc_array_filt=np.zeros((n_of_random_sel,4));
# for i in range(n_of_random_sel):
#     print(i)
# # for outdim_size in range(10,110,10):
#     test_index=filteredMOAs.groupby(['Metadata_moa_num']).sample(1).sample(50).index.values

n_of_random_sel=50 # full
# n_of_random_sel=20 # filt
# acc_array_fullSet=np.zeros((n_of_random_sel,3));
acc_array_cca_full2_cdrp=np.zeros((n_of_random_sel,4));
# acc_array_cca_filt2=np.zeros((n_of_random_sel,4));
# acc_array_filtSet=np.zeros((n_of_random_sel,4));
# for i in range(n_of_random_sel):
kf = KFold(n_splits=n_of_random_sel,random_state=1,shuffle=True)
i=0
for train_index0, test_index in kf.split(filteredMOAs):
    print(i)
#     print(test_index[0:30])
    unq_comp_test=filteredMOAs.loc[test_index,'Compounds'].unique().tolist()
    comp_to_remove_from_train=filteredMOAs[filteredMOAs['Compounds'].isin(unq_comp_test)].index.values
#     print(test_index[0:30])
#     khhk
train_index=np.array(list(set(filteredMOAs.index.values)-set(test_index)-set(comp_to_remove_from_train))) train_moaClassess=filteredMOAs.loc[train_index,'Metadata_moa_num'].unique().tolist() test_moaClassess=filteredMOAs.loc[test_index,'Metadata_moa_num'].unique().tolist() # if len(filteredMOAs.loc[train_index,'Metadata_moa_num'].unique().tolist())=70: # sfsdssf test_cl_toRemove=list(set(test_moaClassess)-set(train_moaClassess)) test_ind_toRemo=testFiltMoA[testFiltMoA['Metadata_moa_num'].isin(test_cl_toRemove)].index # print(test_cl_toRemove,test_ind_toRemo) test_index=np.array(list(set(test_index)-set(test_ind_toRemo))) # train_index=np.array(list(set(filteredMOAs.index.values)-set(test_index))) data_train_l1k = filteredMOAs.loc[train_index,l1k_features].values; data_train_cp = filteredMOAs.loc[train_index,cp_features].values; cca = CCA(n_components=outdim_size) cca.fit(data_train_l1k, data_train_cp) X_c, Y_c = cca.transform(data_train_l1k, data_train_cp) data_train=np.concatenate((X_c, Y_c), axis=1) # data_train = filteredMOAs.loc[train_index,domXfeats].values; labels_train=filteredMOAs.loc[train_index,'Metadata_moa_num'].tolist() data_test_l1k = filteredMOAs.loc[test_index,l1k_features].values; data_test_cp = filteredMOAs.loc[test_index,cp_features].values; X_c_2, Y_c_2 = cca.transform(data_test_l1k, data_test_cp) data_test=np.concatenate((X_c_2, Y_c_2), axis=1) labels_test=filteredMOAs.loc[test_index,'Metadata_moa_num'].tolist() probs=[] for n,dt_train,dt_test in zip([0,1,2],[Y_c, X_c,data_train],[Y_c_2,X_c_2,data_test]): # print(n) # class_weightt = class_weight.compute_class_weight(class_weight='balanced',classes=np.unique(labels_train),y=labels_train) # class_weightt = class_weight.compute_class_weight(class_weight='balanced',np.unique(labels_train),labels_train) # model_tr = RandomForestClassifier(n_estimators=10,max_features=100,class_weight="balanced") # model_tr= model_tr = 
LogisticRegression(multi_class='multinomial',n_jobs=3,max_iter=1000,class_weight="balanced") model_tr.fit(dt_train,labels_train) accc=model_tr.score(dt_test,labels_test) probs.append(model_tr.predict_proba(dt_test)) acc_array_cca_full2_cdrp[i,n]=accc*100 # acc_array_cca_filt2[i,n]=accc*100 # acc_array_f[i,n]=accc*100 # Acc_list.append(accc); # labels_lateFusion=list(np.argmax((probs[0]+probs[1])/2,axis=1)) labels_lateFusion=model_tr.classes_[np.argmax((probs[0]+probs[1])/2,axis=1)] # acc_array_cca_filt2[i,n+1]=accuracy_score(labels_test,labels_lateFusion)*100 acc_array_cca_full2_cdrp[i,n+1]=accuracy_score(labels_test,labels_lateFusion)*100 i+=1 print(np.median(acc_array_cca_full2_cdrp,axis=0)) # print('Accuracy: ',r, np.mean(Acc_list)*100) # - # acc_array_filt # model_tr.fit(dt_train,labels_train) # dt_train.shape,labels_train np.median(acc_array_cca_filt2,axis=0) np.median(acc_array_cca_full,axis=0) np.median(acc_array_cca_full,axis=0) np.median(acc_array_cca_full,axis=0) np.median(acc_array_fullSet2,axis=0) # import sklearn as sk # sk.__version__ acc_array_cca_full acc_array [0.06463195691202872, 0.09874326750448834, 0.12567324955116696, 0.13285457809694792, 0.12567324955116696, 0.12208258527827648, 0.0843806104129264, 0.11131059245960502, 0.09874326750448834, 0.11490125673249552] Accuracy: 2 13.789946140035909 # + data_train=np.concatenate((X_c, Y_c), axis=1) # data_train = filteredMOAs.loc[train_index,domXfeats].values; labels_train=filteredMOAs.loc[train_index,'Metadata_moa_num'].tolist() data_test_l1k = filteredMOAs.loc[test_index,l1k_features].values; data_test_cp = filteredMOAs.loc[test_index,cp_features].values; X_c, Y_c = cca.transform(data_test_l1k, data_test_cp) data_test=np.concatenate((X_c, Y_c), axis=1) labels_test=filteredMOAs.loc[test_index,'Metadata_moa_num'].tolist() class_weightt = class_weight.compute_class_weight('balanced',np.unique(labels_train),labels_train) # model_tr = 
RandomForestClassifier(n_estimators=10,max_features=100,class_weight="balanced") model_tr = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000,class_weight=class_weightt) model_tr.fit(data_train,labels_train) print((model_tr.score(data_test,labels_test))) # + X_c, Y_c = cca.transform(data_train_l1k, data_train_cp) data_train=X_c; #np.concatenate((X_c, Y_c), axis=1) # data_train = filteredMOAs.loc[train_index,domXfeats].values; labels_train=filteredMOAs.loc[train_index,'Metadata_moa_num'].tolist() data_test_l1k = filteredMOAs.loc[test_index,l1k_features].values; data_test_cp = filteredMOAs.loc[test_index,cp_features].values; X_c_2, Y_c_2 = cca.transform(data_test_l1k, data_test_cp) data_test=X_c_2; #np.concatenate((X_c, Y_c), axis=1) labels_test=filteredMOAs.loc[test_index,'Metadata_moa_num'].tolist() class_weightt = class_weight.compute_class_weight('balanced',np.unique(labels_train),labels_train) # model_tr = RandomForestClassifier(n_estimators=10,max_features=100,class_weight="balanced") model_tr = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000,class_weight=class_weightt) model_tr.fit(data_train,labels_train) print((model_tr.score(data_test,labels_test))) # - # np.concate(X_c) data_train.shape # + from snf import compute # ress_df=pd.DataFrame(columns=['Data','Modality','NMI']) # ress_df_filt=pd.DataFrame(columns=['Data','Modality','NMI']) ress_df_full_cdrp=pd.DataFrame(columns=['Data','Modality','NMI']) # n_rand_moaClass=100 n_rand_moaClass=40 for i in range(20): print(i) listOfSelectedMoAs2=np.random.choice(merged_scaled[moa_col].unique(),n_rand_moaClass) IDs4filteredMOAs=merged_scaled[merged_scaled[moa_col].isin(listOfSelectedMoAs2)][pertColName].tolist() filteredMOAs=merged_scaled[merged_scaled[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) snfInput=[filteredMOAs[cp_features].values,filteredMOAs[l1k_features].values] affinities = compute.make_affinity(snfInput, metric='euclidean') fused = compute.snf(affinities) 
labels=filteredMOAs[moa_col].tolist() le_2 = preprocessing.LabelEncoder() labels_categorical_2=filteredMOAs[moa_col].unique().tolist(); le_2.fit(labels_categorical_2) labels_numerical_2=le_2.transform(labels) for m,d in zip(["CP","GE","fused(CP,GE)"],[affinities[0],affinities[1],fused]): pred_labels = spectral_clustering(d, n_clusters=n_rand_moaClass) nmi_snf=v_measure_score(np.random.permutation(pred_labels), labels) temp_df = pd.DataFrame(data={'NMI': [nmi_snf], 'Modality': [m]}) # print(temp_df) # temp_df=pd.DataFrame(data=np.array([[nmi_snf],[m]]),columns=['NMI','Modality']) # ress_df_filt=ress_df_filt.append(temp_df) ress_df_full_cdrp=ress_df_full_cdrp.append(temp_df) # ress_df_filt['Data']='All-snf' ress_df_full_cdrp['Data']='All-snf' # - # v_measure_score(np.random.permutation(labels), labels) i ress_df_rand=ress_df.copy() ress_df_rand.groupby(['Modality']).describe() # ### SNF data fusion # + from sklearn.cluster import SpectralClustering filteredMOAs=merged_scaled[merged_scaled[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) snfInput=[filteredMOAs[cp_features].values,filteredMOAs[l1k_features].values] # snfInput=[filteredMOAs[cp_features].values,filteredMOAs[cp_features].values] # snfInput=[filteredMOAs[l1k_features].values,filteredMOAs[l1k_features].values] from snf import compute affinities = compute.make_affinity(snfInput, metric='euclidean') # fuse the similarity matrices with SNF fused = compute.snf(affinities) # domXdata=mergProf_treatLevel; # # domXfeats=data4eval[r][1] # filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) # data = fused[domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].index]; labels=filteredMOAs[moa_col].tolist() le_2 = preprocessing.LabelEncoder() labels_categorical_2=filteredMOAs[moa_col].unique().tolist(); le_2.fit(labels_categorical_2) labels_numerical_2=le_2.transform(labels) pred_labels = spectral_clustering(fused, n_clusters=179) # pred_labels = 
#     SpectralClustering(n_clusters=179,assign_labels="discretize",random_state=0)\
#     .fit(filteredMOAs[cp_features].values).labels_

print('nmi: ',v_measure_score(pred_labels, labels))

# # accuracy_coclus(labels_numerical_2, pred_labels)
# cm = confusion_matrix(labels_numerical_2, pred_labels)
# print(cm.shape)
# # deprecated: indexes = linear_assignment(_make_cost_m(cm))
# indexes = linear_sum_assignment(_make_cost_m(cm))
# # print(indexes)
# total = 0
# # for row, column in indexes:
# for i in range(cm.shape[0]):
#     row, column=indexes[0][i],indexes[1][i]
#     value = cm[row][column]
#     total += value
# acc=(total * 1. / np.sum(cm))
# print(acc)

# # loocv = LeaveOneOut()
# # model_loocv = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000)
# # results_loocv = cross_val_score(model_loocv, data, labels, cv=loocv)
# # print("Accuracy: %.2f%%" % (results_loocv.mean()*100.0))
# -

# +
# affinities
# -

fused.shape

# #### Clustering results

# > Filtered set:
# - CP
#     - nmi: 0.78
# - L1k
#     - nmi: 0.73
# - CP + L1k (snf)
#     - nmi: 0.79
#
# > Full set:
# - CP
#     - nmi: 0.45
# - L1k
#     - nmi: 0.51
# - CP + L1k (snf)
#     - nmi: 0.63

pred_labels.shape

# Reference pattern from the sklearn docs (X is a placeholder feature matrix):
# clustering = SpectralClustering(n_clusters=2, assign_labels="discretize",
#                                 random_state=0).fit(X)

# +
from sklearn.cluster import spectral_clustering
from sklearn.metrics import v_measure_score, accuracy_score

pred_labels = spectral_clustering(fused, n_clusters=557)
v_measure_score(pred_labels, labels)

# +
n_unq_labels=np.unique(labels_numerical_2).shape[0]

pred_labels = spectral_clustering(affinities[0], n_clusters=n_unq_labels)
print('CP nmi: ',v_measure_score(pred_labels, labels))

pred_labels = spectral_clustering(affinities[1], n_clusters=n_unq_labels)
print('L1k nmi: ',v_measure_score(pred_labels, labels))

pred_labels = spectral_clustering(fused, n_clusters=n_unq_labels)
print('Fused nmi: ',v_measure_score(pred_labels, labels))

# +
n_unq_labels=np.unique(labels_numerical_2).shape[0]

pred_labels = spectral_clustering(compute.snf([affinities[0],affinities[0]]), n_clusters=n_unq_labels)
print('CP nmi: ',v_measure_score(pred_labels, labels))

pred_labels = spectral_clustering(compute.snf([affinities[1],affinities[1]]), n_clusters=n_unq_labels)
print('L1k nmi: ',v_measure_score(pred_labels, labels))

# pred_labels = spectral_clustering(fused, n_clusters=n_unq_labels)
# print('Fused nmi: ',v_measure_score(pred_labels, labels))
# -

v_measure_score(labels_numerical_2, pred_labels)

# len(indexes)
indexes[1].shape

# +
# from coclust.evaluation.external import accuracy
le_2 = preprocessing.LabelEncoder()
labels_categorical_2=filteredMOAs.Metadata_moa.unique().tolist();
le_2.fit(labels_categorical_2)
labels_numerical_2=le_2.transform(labels)

# accuracy_coclus(labels_numerical_2, pred_labels)
cm = confusion_matrix(labels_numerical_2, pred_labels)
print(cm.shape)
# deprecated: indexes = linear_assignment(_make_cost_m(cm))
indexes = linear_sum_assignment(_make_cost_m(cm))
# print(indexes)
total = 0
# for row, column in indexes:
for i in range(cm.shape[0]):
    row, column=indexes[0][i],indexes[1][i]
    value = cm[row][column]
    total += value
acc=(total * 1.
/ np.sum(cm)) print(acc) # accuracy_score(pred_labels, labels) # + # set(labels_numerical_2) # - # labels_numerical_2.shape,pred_labels.shape y_pred, confusion_matrix3 = get_y_preds(pred_labels, labels_numerical_2, 179); accuracy_score(pred_labels, labels_numerical_2),accuracy_score(y_pred, labels_numerical_2) # + # labels # - v_measure_score(labels_numerical_2, y_pred) pred_labels # + import numpy as np from sklearn.metrics import confusion_matrix # deprecated: from sklearn.utils.linear_assignment_ import linear_assignment from scipy.optimize import linear_sum_assignment def accuracy_coclus(true_row_labels, predicted_row_labels): """Get the best accuracy. Parameters ---------- true_row_labels: array-like The true row labels, given as external information predicted_row_labels: array-like The row labels predicted by the model Returns ------- float Best value of accuracy """ cm = confusion_matrix(true_row_labels, predicted_row_labels) print(cm.shape) # deprecated: indexes = linear_assignment(_make_cost_m(cm)) indexes = linear_sum_assignment(_make_cost_m(cm)) print(indexes) total = 0 for row, column in indexes: value = cm[row][column] total += value return (total * 1. 
/ np.sum(cm)) def _make_cost_m(cm): s = np.max(cm) return (- cm + s) # - # a=1 pred_labels.shape len(set(labels)) domXdata=mergProf_treatLevel; # domXfeats=data4eval[r][1] fused[domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].index] domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True).shape # + domXdata=mergProf_treatLevel; # domXfeats=data4eval[r][1] filteredMOAs=domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) data = fused[domXdata[domXdata[pertColName].isin(IDs4filteredMOAs)].index]; labels=filteredMOAs.Metadata_moa.tolist() loocv = LeaveOneOut() model_loocv = LogisticRegression(multi_class='ovr',n_jobs=100,max_iter=1000) results_loocv = cross_val_score(model_loocv, data, labels, cv=loocv) print("Accuracy: %.2f%%" % (results_loocv.mean()*100.0)) # + from snf import datasets simdata = datasets.load_simdata() # sorted(simdata.keys()) # ['data', 'labels'] # this dataset has two data arrays representing features from 200 samples # >>> len(simdata.data) # 2 # >>> len(simdata.labels) # 200 # convert raw data arrays into sample x sample affinity matrices from snf import compute affinities = compute.make_affinity(simdata.data, metric='euclidean',K=20, mu=0.5) # fuse the similarity matrices with SNF fused = compute.snf(affinities,K=20) # # estimate the number of clusters present in the fused matrix, derived via # # an "eigengap" method (i.e., largest difference in eigenvalues of the # # laplacian of the graph). 
note this function returns the top two options; # # we'll only use the first # first, second = compute.get_n_clusters(fused) # # >>> first, second # # (2, 5) # # apply clustering procedure # # you can use any clustering method here, but since SNF returns an affinity # # matrix (i.e., all entries are positively-valued and indicate similarity) # # spectral clustering makes a lot of sense # >>> from sklearn import cluster # >>> fused_labels = cluster.spectral_clustering(fused, n_clusters=first) # # compute normalized mutual information for clustering solutions # >>> from snf import metrics # >>> labels = [simdata.labels, fused_labels] # >>> for arr in affinities: # ... labels += [cluster.spectral_clustering(arr, n_clusters=first)] # >>> nmi = metrics.nmi(labels) # # compute silhouette score to assess goodness-of-fit for clustering # >>> silhouette = metrics.silhouette_score(fused, fused_labels) # - le_2 = preprocessing.LabelEncoder() labels_categorical_2=filteredMOAs.Metadata_moa.unique().tolist(); le_2.fit(labels_categorical_2) labels_numerical_2=le_2.transform(labels) y_pred, confusion_matrix = get_y_preds(cluster_assignments, y_true, n_clusters); y_pred, confusion_matrix = get_y_preds(cluster_assignments, y_true, n_clusters); from munkres import Munkres def get_y_preds(cluster_assignments, y_true, n_clusters): ''' Computes the predicted labels, where label assignments now correspond to the actual labels in y_true (as estimated by Munkres) cluster_assignments: array of labels, outputted by kmeans y_true: true labels n_clusters: number of clusters in the dataset returns: a tuple containing the accuracy and confusion matrix, in that order ''' confusion_matrix = sklearn.metrics.confusion_matrix(y_true, cluster_assignments, labels=None) # compute accuracy based on optimal 1:1 assignment of clusters to labels cost_matrix = calculate_cost_matrix(confusion_matrix, n_clusters) indices = Munkres().compute(cost_matrix) kmeans_to_true_cluster_labels = 
get_cluster_labels_from_indices(indices)
    y_pred = kmeans_to_true_cluster_labels[cluster_assignments]
    return y_pred, confusion_matrix

def calculate_cost_matrix(C, n_clusters):
    cost_matrix = np.zeros((n_clusters, n_clusters))
    # cost_matrix[i,j] will be the cost of assigning cluster i to label j
    for j in range(n_clusters):
        s = np.sum(C[:,j]) # number of examples in cluster j
        for i in range(n_clusters):
            t = C[i,j]
            cost_matrix[j,i] = s-t
    return cost_matrix

def get_cluster_labels_from_indices(indices):
    n_clusters = len(indices)
    clusterLabels = np.zeros(n_clusters)
    for i in range(n_clusters):
        clusterLabels[i] = indices[i][1]
    return clusterLabels

def AccMeasure(T,idx):
    # Measure the percentage accuracy and the Rand index of clustering results.
    # The number of classes must equal the number of clusters.
    #
    # Output
    #   Acc        = accuracy of the clustering results
    #   rand_index = Rand's index, measuring agreement of the clustering results
    #   match      = 2xk matrix giving the best match of target and cluster labels
    #
    # Input
    #   T   = 1xn target labels (values in 1..k)
    #   idx = 1xn clustering results (values in 1..k)
    #
    # (Translated to Python from the original MATLAB implementation that was
    # pasted here; the logic is kept one-to-one.)
    from itertools import permutations
    T = np.asarray(T)
    idx = np.asarray(idx)
    k = int(np.max([T.max(), idx.max()]))
    n = len(T)
    a = [np.where(T == i + 1)[0] for i in range(k)]
    b1 = np.zeros((k, k), dtype=int)
    for i in range(k):
        tt1 = np.where(idx == i + 1)[0]
        for j in range(k):
            b1[i, j] = np.sum(np.in1d(tt1, a[j]))
    Acc1 = 0
    match = list(range(k))
    for P in permutations(range(k)):
        Members = np.array([b1[P[ki], ki] for ki in range(k)])
        if Members.sum() > Acc1:
            match = list(P)
            Acc1 = Members.sum()
    rand_ss1 = 0
    rand_dd1 = 0
    for xi in range(n - 1):
        for xj in range(xi + 1, n):
            rand_ss1 += (idx[xi] == idx[xj]) and (T[xi] == T[xj])
            rand_dd1 += (idx[xi] != idx[xj]) and (T[xi] != T[xj])
    rand_index = 200.0 * (rand_ss1 + rand_dd1) / (n * (n - 1))
    Acc = Acc1 / float(n) * 100
    match = np.vstack([np.arange(1, k + 1), np.array(match) + 1])
    return Acc, rand_index, match

# # Using MildInt

# +
nSamplesMOA=5 # from MulticoreTSNE import MulticoreTSNE as TSNE # df_1 = df_1.interpolate() ############# ########## # mergProf_treatLevel['Metadata_moa']=mergProf_treatLevel['Metadata_moa'].str.lower() mergProf_treatLevel=mergProf_treatLevel[~mergProf_treatLevel['Metadata_moa'].isnull()].reset_index(drop=True) nSamplesforEachMOAclass=mergProf_treatLevel.groupby(['Metadata_moa']).size().reset_index().rename(columns={0:'size'}).sort_values(by=['size'],ascending=False).reset_index(drop=True) listOfSelectedMoAs=nSamplesforEachMOAclass[nSamplesforEachMOAclass['size']>nSamplesMOA]['Metadata_moa'].tolist() # corresPertID=[mergProf_treatLevel[mergProf_treatLevel['Metadata_moa']==i]['Metadata_pert_id'] for i in listOfSelectedMoAs] # filteredMOAs=mergProf_treatLevel[mergProf_treatLevel['Metadata_moa'].isin(listOfSelectedMoAs)].reset_index(drop=True) IDs4filteredMOAs=mergProf_treatLevel[mergProf_treatLevel['Metadata_moa'].isin(listOfSelectedMoAs)][pertColName].tolist() filteredMOAs=mergProf_treatLevel[mergProf_treatLevel[pertColName].isin(IDs4filteredMOAs)].reset_index(drop=True) labels=filteredMOAs.Metadata_moa.tolist() print(filteredMOAs.shape) le_2 = preprocessing.LabelEncoder() labels_categorical_2=filteredMOAs.Metadata_moa.unique().tolist(); le_2.fit(labels_categorical_2) labels_numerical_2=le_2.transform(labels) # Data m has a shape (#samples, length of time series, size of input dimension). 
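The CP and GE profiles are 2-D (samples x features), so a singleton time axis has to be added to match the (#samples, length of time series, size of input dimension) layout described above. A minimal numpy sketch of that reshape:

```python
import numpy as np

X = np.random.rand(10, 4)          # (#samples, feature dimension)
X_seq = X[:, np.newaxis, :]        # add a length-1 time axis
assert X_seq.shape == (10, 1, 4)   # (#samples, time steps, input dimension)
```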
import sys sys.path.insert(1, '../MildInt-master_v2') from mmrnn import * m = MMRNN() cp_hidden=2 ge_hidden=2 # snfInput=[filteredMOAs[cp_features].values,filteredMOAs[l1k_features].values] cp_m=filteredMOAs[cp_features].values[:,np.newaxis,:] ge_m=filteredMOAs[l1k_features].values[:,np.newaxis,:] m.append_component('cp', cp_m.shape[2], cp_hidden, cp_m.shape[1]) m.append_component('ge', ge_m.shape[2], ge_hidden, ge_m.shape[1]) IDs=pd.Series(list(range(cp_m.shape[0]))) m.append_data('cp', IDs, cp_m, labels_numerical_2, np.ones(cp_m.shape[0])) m.append_data('ge', IDs, ge_m, labels_numerical_2, np.ones(cp_m.shape[0])) ## testIDs=np.array(range(cp_m.shape[1])) # + ###m.append_test_overlapIDs(testIDs) ###m.append_training_overlapIDs(trainIDs) # from DataManager import * # dm = DataManager() # 5-fold CV overlapIDs=IDs.copy() # test_folds = dm.generate_crossvalidation_set(IDs) accuracy = [] for i in range(len(overlapIDs)): test_folds_IDs=[overlapIDs[i]] m.append_test_overlapIDs(pd.Series(test_folds_IDs)) # trainIDs = overlapIDs[~overlapIDs.isin(test_folds[i])] trainIDs=list(set(overlapIDs)-set(test_folds_IDs)) m.append_training_overlapIDs(pd.Series(trainIDs)) # with tf.varialbe_scope('fold run'): m.build_integrative_network() m.training(len(trainIDs)) accuracy.append(m.evaluate_accuracy()) # tf.reset_default_graph() print(accuracy) print(np.mean(accuracy)) # + # filteredMOAs[cp_features].values # del m # [overlapIDs[i]] # cp_m.shape # m.training(len(trainIDs)) # pd.Series(m.IDs['cp'])[~pd.Series(m.IDs['cp']).isin(m.test_overlapIDs)] # + # m.IDs['cp'] # m.IDs['cp'][not (pd.Series(m.IDs['cp']).isin(m.test_overlapIDs).tolist())] # len(overlapIDs) # test_folds_IDs=[overlapIDs[0]] # + overlapIDs=IDs.copy() # test_folds = dm.generate_crossvalidation_set(IDs) accuracy = [] for i in range(len(overlapIDs)): test_folds_IDs=[overlapIDs[i]] m.append_test_overlapIDs(pd.Series(test_folds_IDs)) # trainIDs = overlapIDs[~overlapIDs.isin(test_folds[i])] 
trainIDs=list(set(overlapIDs)-set(test_folds_IDs)) m.append_training_overlapIDs(pd.Series(trainIDs)) # with tf.varialbe_scope('fold run'): m.build_integrative_network() m.training(len(trainIDs)) accuracy.append(m.evaluate_accuracy()) # tf.reset_default_graph() print(accuracy) print(np.mean(accuracy)) # - filteredMOAs.shape filteredMOAs.Metadata_moa.tolist()
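The manual fold construction above (hold out one overlap ID per iteration, train on the rest) mirrors scikit-learn's `LeaveOneOut` splitter; a small sketch of the equivalent index generation:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

ids = np.arange(5)
splits = list(LeaveOneOut().split(ids))  # one (train, test) pair per sample
train_idx, test_idx = splits[0]          # first fold: test on sample 0
```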
5-Modality_Integration.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 2A.i - Serialization - correction
#
# Serialization of objects, in particular of dataframes. Speed measurements.

from jyquickhelper import add_notebook_menu
add_notebook_menu()

# ## Exercise 1: serializing a large dataframe
#
# **Step 1:** build a large dataframe filled with random numbers

import random
values = [ [random.random() for i in range(0,20)] for _ in range(0,100000) ]
col = [ "col%d" % i for i in range(0,20) ]

import pandas
df = pandas.DataFrame( values, columns = col )

# **Step 2:** save this dataframe in two formats, text and serialized (binary)

df.to_csv("df_text.txt", sep="\t")

df.to_pickle("df_text.bin")

# **Step 3:** measure the loading time

# %timeit pandas.read_csv("df_text.txt", sep="\t")

# %timeit pandas.read_pickle("df_text.bin")

# ## Exercise 2: json
#
# A first attempt.

# +
obj = dict(a=[50, "r"], gg=(5, 't'))

import jsonpickle
frozen = jsonpickle.encode(obj)
frozen
# -

# This module is equivalent to the [json](https://docs.python.org/3/library/json.html) module on the standard types of the Python language (lists, dictionaries, numbers, ...). But the [json](https://docs.python.org/3/library/json.html) module does not work on dataframes.

frozen = jsonpickle.encode(df)
len(frozen), type(frozen), frozen[:55]

# The [to_json](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html) method would also give a satisfactory result, but it cannot be applied to a machine learning model produced by [scikit-learn](http://scikit-learn.org/).
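As a side note on the comparison above: the built-in `json` module handles only standard types, and even there it is lossy, since the tuple in an object like `obj` comes back as a list after a round trip, which is one reason to reach for `jsonpickle`. A stdlib-only sketch:

```python
import json

obj = {"a": [50, "r"], "gg": (5, "t")}
round_trip = json.loads(json.dumps(obj))
# the tuple was silently converted to a list by plain json
print(round_trip["gg"])
```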
# +
def to_json(obj, filename):
    frozen = jsonpickle.encode(obj)
    with open(filename, "w", encoding="utf-8") as f:
        f.write(frozen)

def read_json(filename):
    with open(filename, "r", encoding="utf-8") as f:
        enc = f.read()
    return jsonpickle.decode(enc)
# -

to_json(df, "df_text.json")

try:
    df = read_json("df_text.json")
except Exception as e:
    print(e)

# Clearly, this does not work on DataFrames. One would have to take inspiration from the [numpyson](https://github.com/hpk42/numpyson) module.

# ## json + scikit-learn
#
# Read issue [147](https://github.com/jsonpickle/jsonpickle/issues/147) to understand the purpose of the next two lines.

import jsonpickle.ext.numpy as jsonpickle_numpy
jsonpickle_numpy.register_handlers()

from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features.
y = iris.target

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X,y)

clf.predict_proba([[0.1, 0.2]])

to_json(clf, "logreg.json")

try:
    clf2 = read_json("logreg.json")
except AttributeError as e:
    # For some unknown reason, probably a bug, the code does not work.
    print(e)

# So we try another way. If the previous code does not work and the next one does, it is a bug in [jsonpickle](https://github.com/jsonpickle/jsonpickle).

# +
class EncapsulateLogisticRegression:
    def __init__(self, obj):
        self.obj = obj
    def __getstate__(self):
        return {k: v for k, v in sorted(self.obj.__getstate__().items())}
    def __setstate__(self, data):
        self.obj = LogisticRegression()
        self.obj.__setstate__(data)

enc = EncapsulateLogisticRegression(clf)
to_json(enc, "logreg.json")
# -

enc2 = read_json("logreg.json")
clf2 = enc2.obj
clf2.predict_proba([[0.1, 0.2]])

with open("logreg.json", "r") as f:
    content = f.read()
content
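Beyond load time (Step 3 of Exercise 1), on-disk size also differs between the text and binary formats. A stdlib-only sketch on a smaller random matrix (no pandas required; `tempfile` paths are illustrative):

```python
import csv, os, pickle, random, tempfile

values = [[random.random() for _ in range(20)] for _ in range(1000)]

tmp = tempfile.mkdtemp()
txt_path = os.path.join(tmp, "values.txt")
bin_path = os.path.join(tmp, "values.bin")

# text format: tab-separated, one row per line
with open(txt_path, "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(values)

# binary format: pickle, which also round-trips floats exactly
with open(bin_path, "wb") as f:
    pickle.dump(values, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(bin_path, "rb") as f:
    restored = pickle.load(f)

print(os.path.getsize(txt_path), os.path.getsize(bin_path))
```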
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Scratchwork 1 - Subalgebras from finite_algebras import * from cayley_table import * from permutations import * import os aa_path = os.path.join(os.getenv("PYPROJ"), "abstract_algebra") alg_dir = os.path.join(aa_path, "Algebras") ex = Examples(alg_dir) # ## Subalgebras of a Semigroup sg = ex[11] sg.about() alg = sg alg_subs = alg.proper_subalgebras() partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(sg, partitions) # ### A closer look at the commutative Groups print("\n" + "-"*40) for sub in alg_subs: if isinstance(sub, Group): sub.about(use_table_names=True) print("\n" + "-"*40) # ## Subgroups of an Alternating Group a4 = ex[0] a4.about() alg = a4 alg_subs = alg.proper_subalgebras(divisors_only=False, include_inverses=False) partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(alg, partitions) # **NOTE**: The single commutative normal (sub)group, above, is actually the **commutator subgroup** of A4, as shown by the derivation below: a4.commutator_subgroup() # ## Subalgebras of a Powerset Group psr = generate_powerset_group(4) psr.about(max_size=16) alg = psr # %time alg_subs = alg.proper_subalgebras() partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(alg, partitions) psr.commutators() psr.commutator_subgroup() q8 = ex[13] q8.about() alg = q8 alg_subs = alg.proper_subalgebras() partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(alg, partitions) # The order 2 normal subgroup, above, is the **commutator subgroup**, as shown below: alg.commutator_subgroup().about() sd16 = ex[14] sd16.about(max_size=16) alg = sd16 # %time alg_subs = alg.proper_subalgebras() # %time partitions = partition_into_isomorphic_lists(alg_subs) 
about_isomorphic_partitions(alg, partitions) # **ISSUE**: subs 4 & 9 are normal, as shown below, but that is not indicated in the summary above. print("\n" + "-"*40) for sub in alg_subs: if alg.is_normal(sub): # sub.about(use_table_names=True) print("\n" + "-"*40) # **ISSUE**: Why isn't the order 4 commutator subgroup (derived below) one of the subgroups reported by the summary above? alg.commutator_subgroup().about() # **Observation**: The intersections of the element sets belonging to the 3 order 8 normal subgroups is the set of elements of the commutator subgroup. x1 = ['e', 't', 's^2', 's^2t', 'tsts', 'sts', 'ts^2t', 'ts^2'] x2 = ['e', 'st', 's^2', 'ts', 'tsts', 's^2ts', 'ts^2t', 'sts^2'] x3 = ['e', 's', 's^2', 'tst', 'tsts', 'tsts^2', 'ts^2t', 'ts^2ts'] set(x1) & set(x2) set(x1) & set(x3) set(x2) & set(x3) alg.commutators() alg_subs[3].about() alg_subs[8].about() alg_subs[10].about() # ## More Work Needed Past This Point # Deriving subalgebras of Rings and Fields needs more work, as does the method for reporting on them: *about_isomorphic_partitions* f4 = ex[9] f4.about() alg = f4 alg_subs = alg.proper_subalgebras() partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(alg, partitions) ex6 = ex[12] ex6.about() alg = ex6 alg_subs = alg.proper_subalgebras() partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(alg, partitions) alg1 = generate_algebra_mod_n(12) alg1.about(use_table_names=True) alg = alg1 alg_subs = alg.proper_subalgebras() partitions = partition_into_isomorphic_lists(alg_subs) about_isomorphic_partitions(alg, partitions) alg.zero_divisors() alg.mult_identity
notebooks/scratchwork1-subalgebras.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %pylab inline import pybedtools import pickle from matplotlib_venn import venn3, venn3_circles,venn3_unweighted,venn2 import seaborn as sns from pandas import DataFrame import os import csv import matplotlib.patches as patches import pysam # # Initialization methods=["Tophat","STAR","HISAT2"] sample="NA12878" reliable_est_bed="/path/to/reliable/EST/junctions.bed" # # Predictions bed_files={'Tophat':'/path/to/TopHat/junctions.bed', 'STAR':'/path/to/STAR/SJ.out.tab', 'HISAT2':'/path/to/HISAT/splicesites.txt', } bam_files={'Tophat':'/path/to/TopHat/alignments.bam', 'STAR':'/path/to/STAR/alignments.bam', 'HISAT2':'/path/to/HISAT2/alignments.bam', } # # Functions # + def find_stats(bamfile,statfile): sam_file = pysam.Samfile(bamfile, "rb") seq={"1":[],"2":[]} current_qname="" uniqmap_uniqmap=0 uniqmap_multimap=0 multimap_multimap=0 uniqmap_unmap=0 multimap_unmap=0 unmap_unmap=0 cnts=0 for line in sam_file: qname=line.qname if current_qname=="": current_qname=qname if qname!=current_qname: uniqed_multi_un={} for fs in ["1","2"]: NHs=map(lambda x:x[1],seq[fs]) if len(set(NHs))==1: NH=NHs[0] if NH==1: uniqed_multi_un[fs]=0 elif NH==-1: uniqed_multi_un[fs]=2 else: uniqed_multi_un[fs]=1 if uniqed_multi_un["1"]==0 and uniqed_multi_un["2"]==0: uniqmap_uniqmap+=1 elif (uniqed_multi_un["1"]==0 and uniqed_multi_un["2"]==1) or ( uniqed_multi_un["1"]==1 and uniqed_multi_un["2"]==0): uniqmap_multimap+=1 elif (uniqed_multi_un["1"]==1 and uniqed_multi_un["2"]==1): multimap_multimap+=1 elif (uniqed_multi_un["1"]==0 and uniqed_multi_un["2"]==2) or ( uniqed_multi_un["1"]==2 and uniqed_multi_un["2"]==0): uniqmap_unmap+=1 elif (uniqed_multi_un["1"]==1 and uniqed_multi_un["2"]==2) or ( uniqed_multi_un["1"]==2 and uniqed_multi_un["2"]==1): multimap_unmap+=1 elif 
(uniqed_multi_un["1"]==2 and uniqed_multi_un["2"]==2): unmap_unmap+=1 else: print "ERRR3 ", line aaaa current_qname=qname seq={"1":[],"2":[]} flag=np.binary_repr(line.flag,12) tags=dict(line.get_tags()) NH=-1 if "NH" not in tags else tags["NH"] mpd=flag[-3]=="0" pmpd=flag[-4]=="0" first=flag[-7]=="1" second=flag[-8]=="1" if not (first ^ second): print "ERRR1 ", line aaaa if (not mpd) and NH>0: print "ERRR1 ", line aaaa fs="1" if first else "2" seq[fs].append([flag,NH,mpd,pmpd]) cnts+=1 with open(statfile, 'wb') as csvfile: spamwriter = csv.writer(csvfile, delimiter='\t', quotechar='|', quoting=csv.QUOTE_MINIMAL) spamwriter.writerow(["uniqmap_uniqmap", "uniqmap_multimap", "multimap_multimap", "uniqmap_unmap", "multimap_unmap", "unmap_unmap","total","cnts"]) spamwriter.writerow([uniqmap_uniqmap, uniqmap_multimap, multimap_multimap, uniqmap_unmap, multimap_unmap, unmap_unmap, sum([uniqmap_uniqmap, uniqmap_multimap, multimap_multimap, uniqmap_unmap, multimap_unmap, unmap_unmap]),cnts]) def find_matchstats(bamfile,matchstatfile): sam_file = pysam.Samfile(bamfile, "rb") match_stats={} for line in sam_file: if line.cigar: codes={} for k,v in line.cigar: if k not in codes: codes[k]=0 codes[k]+=v for k,v in codes.iteritems(): if k not in match_stats: match_stats[k]={} if v not in match_stats[k]: match_stats[k][v]=0 match_stats[k][v]+=1 pickle.dump(match_stats,open(matchstatfile,"w")) def find_NMstats(bamfile,NMstatfile): sam_file = pysam.Samfile(bamfile, "rb") NM_stats={} for line in sam_file: unmapped=(line.flag/4)%2==1 if unmapped: continue tags=dict(line.tags) if "NM" in tags: nm=tags["NM"] if nm not in NM_stats: NM_stats[nm]=0 NM_stats[nm]+=1 elif "nM" in tags: nm=tags["nM"] if nm not in NM_stats: NM_stats[nm]=0 NM_stats[nm]+=1 else: print tags aaaa pickle.dump(NM_stats,open(NMstatfile,"w")) # - # # Analysis est_junctions_reliable=pybedtools.BedTool(reliable_est_bed) all_beds={} for method,bedfile in bed_files.iteritems(): mybed=pybedtools.BedTool(bedfile) if method == 
"STAR": mybed=mybed.filter(lambda x: (int(x[2])-int(x[1]))>1).each(lambda x:[x[0],int(x[1])+1,x[2]]).saveas() elif method == "HISAT2": mybed=mybed.each(lambda x:[x[0],int(x[1])+1,x[2]]).saveas() elif method == "Tophat": mybed=mybed.each(lambda x:[x[0],int(x[1])+int(x[10].split(",")[0]),int(x[2])-int(x[10].split(",")[1])]).saveas() all_beds[method]=mybed.each(lambda x:["chr%s"%x[0],x[1],x[2]]).saveas() for method,bamfile in bam_files.iteritems(): statfile=bamfile+".mystats" if os.path.exists(bamfile): if not os.path.exists(statfile): find_stats(bamfile,statfile) for method,bamfile in bam_files.iteritems(): statfile=bamfile+".mystats_match" if os.path.exists(bamfile): if not os.path.exists(statfile): find_matchstats(bamfile,statfile) for method,bamfile in bam_files.iteritems(): statfile=bamfile+".mystats_NM" if os.path.exists(bamfile): if not os.path.exists(statfile): print sample,method find_NMstats(bamfile,statfile) def parse_my_stats(stat_file): mystats={} with open(stat_file, 'r') as csv_f: spamreader = csv.reader(csv_f, delimiter='\t', quotechar='|') cnt=0 for row in spamreader: if cnt==0: keys=row cnt=1 else: vals=row mystats={x[0]:int(x[1]) for x in zip(keys,vals)} return mystats return {} alignment_stats={} for method,bed in all_beds.iteritems(): alignment_stats[method]={} L=len(bed) L_est_reliable=len(bed.intersect(est_junctions_reliable,f=0.99,u=True,r=True)) alignment_stats[method].update({"n_junctions":L,\ "n_est_reliable":L_est_reliable,\ "r_est_reliable":round(float(L_est_reliable)/float(L),2)}) for method,bamfile in bam_files.iteritems(): statfile=bamfile+".mystats" mystats=parse_my_stats(statfile) alignment_stats[method].update(mystats) for method,bamfile in bam_files.iteritems(): statfile=bamfile+".mystats_match" mystats=pickle.load(open(statfile)) alignment_stats[method].update({"match_stats":mystats}) for method,bamfile in bam_files.iteritems(): statfile=bamfile+".mystats_NM" mystats=pickle.load(open(statfile)) 
alignment_stats[method].update({"NM":mystats}) intersect_3methods={i:{} for i in range(8)} for iii in range(8): if iii==0: continue i=iii%2 j=(iii/2)%2 k=(iii/4)%2 bed1=all_beds[methods[0]] bed2=all_beds[methods[1]] bed3=all_beds[methods[2]] if i==1: bed=bed1 elif j==1: bed=bed2 elif k==1: bed=bed3 bed=bed.intersect(bed1,f=0.99,u=True if i==1 else False,v=True if i==0 else False,r=True) bed=bed.intersect(bed2,f=0.99,u=True if j==1 else False,v=True if j==0 else False,r=True) bed=bed.intersect(bed3,f=0.99,u=True if k==1 else False,v=True if k==0 else False,r=True) L=len(bed) L_est_reliable=len(bed.intersect(est_junctions_reliable,f=0.99,u=True,r=True)) intersect_3methods[iii].update({"n_junctions":L,\ "n_est_reliable":L_est_reliable,\ "r_est_reliable":round(float(L_est_reliable)/float(L),2)}) # ## Plots # ## junction validation # + sns.set(style="white",font_scale=1.5) fig, ax = plt.subplots(figsize=(8,2)) bin_labels=["Reliable" , "Not Reliable"] A=[] B=[] res=[] labels=[] my_colors=sns.color_palette("Set1",n_colors=10) for jjj,method in enumerate(methods): A.append(alignment_stats[method]["n_junctions"]) B.append(alignment_stats[method]["n_est_reliable"]) labels.append(method) res.append(np.array(A)) res.append(np.array(B)) my_data=DataFrame(np.array(res).transpose(),index=labels,columns=bin_labels[::-1]) for ii,b in enumerate(bin_labels[::-1]): cg=sns.barplot(data=my_data,x=b,y=labels,label=b, color=my_colors[ii],ax=ax) for i,ytick in enumerate(cg.get_yticklabels()): ytick.set_fontsize(12) ax.set_xlabel("Number of Junctions") ax.set_xticks(range(0,600000,200000)) ax.set_yticks(range(len(labels))) ax.set_xticklabels(["%sk"%(x/1000) if x>0 else "0" for x in range(0,600000,200000)]) ax.set_xlim([0,500000]) ax.set_title("Validation rate of splicing junctions on dbEST",fontsize=16) sns.despine(left=True) handles, labels = ax.get_legend_handles_labels() # reverse the order ax.legend(handles[::-1], labels[::-1],bbox_to_anchor=(0.85, 0.65, 0.5, .3), loc=1,ncol=1, 
mode="expand", borderaxespad=0.,frameon=False,fontsize=14) # + sns.set(style="white",font_scale=2.2) fig, ax = plt.subplots(figsize=(10,10)) keys=["r_est","r_est_reliable"] labels=["% of EST matches","% Reliable EST matches"] index = np.arange(len(methods)) bar_width = 0.2 opacity = 0.5 my_colors=sns.color_palette("Set2",n_colors=10) v = venn3(subsets=[intersect_3methods[k]['n_junctions'] for k in range(1,8)], set_labels = ('A','B','C'),ax=ax,alpha=0.6,set_colors=my_colors[0:3]) for c in range(1,8): i=c%2 j=(c/2)%2 k=(c/4)%2 v.get_label_by_id('%d%d%d'%(i,j,k)).set_text("%d%%"%( intersect_3methods[c]['r_est_reliable']*100)) v.get_label_by_id('A').set_text('TopHat\n%s,%03d\n(%d%%)'%(alignment_stats['Tophat']['n_junctions']/1000, alignment_stats['Tophat']['n_junctions']%1000, alignment_stats['Tophat']['r_est_reliable']*100)) v.get_label_by_id('B').set_text('STAR\n%s,%03d\n(%d%%)'%(alignment_stats['STAR']['n_junctions']/1000, alignment_stats['STAR']['n_junctions']%1000, alignment_stats['STAR']['r_est_reliable']*100)) v.get_label_by_id('C').set_text('HISAT2\n%s,%03d\n(%d%%)'%(alignment_stats['HISAT2']['n_junctions']/1000, alignment_stats['HISAT2']['n_junctions']%1000, alignment_stats['HISAT2']['r_est_reliable']*100)) for labe_id in ["A","B","C"]: v.get_label_by_id(labe_id).set_fontsize(25) ax.set_title(sample,fontsize=25) for labe_id in ["A","B","C","110","101","111","011"]: v.get_patch_by_id(labe_id).set_linewidth(0) ax.legend(["Only TopHat","Only STAR","Only TopHat & STAR","Only HISAT2", "Only TopHat & HISAT2","Only STAR & HISAT2","TopHat & STAR & HISAT2"],bbox_to_anchor=(0, 1.1, 1.2, .3), loc=0,ncol=2, mode="expand", borderaxespad=0.,frameon=False) # - # ## Read mapping analysis # + sns.set(style="white",font_scale=1.2) colors=[4] nt=["A","C","G","T"] etypes=[] for i in nt: for j in nt: if i!=j: etypes.append(i+j) print etypes bin_labels=["Both pairs uniquely mapped","Both pairs multi-mapped", "One pair uniquely, one multi-mapped", "One pair uniquely mapped, one 
unmapped","One pair multi-mapped, one unmapped", "Both pairs unmapped"] keys=['uniqmap_uniqmap','multimap_multimap', 'uniqmap_multimap', 'uniqmap_unmap', 'multimap_unmap', 'unmap_unmap'] my_colors=sns.color_palette("Set3",n_colors=10) fig, axes = plt.subplots(1,3,figsize=(17,2)) ax=axes[0] res=[] labels=[] for method in methods: if method not in alignment_stats: continue if "uniqmap_uniqmap" in alignment_stats[method]: myres=[alignment_stats[method][k]/float(alignment_stats[method]["total"])*100 for k in keys][::-1] myres=[sum(myres[i:]) for i in range(len(myres))] res.append(myres) label=method labels.append(label) my_data=DataFrame(np.array(res),index=labels,columns=bin_labels) for ii,b in enumerate(bin_labels): cg=sns.barplot(data=my_data,x=b,y=labels,label=b, color=my_colors[ii],ax=ax) ax.set_xlabel("% of fragments") ax.set_xlim([0,100]) sns.despine(left=True) handles, labels = ax.get_legend_handles_labels() # reverse the order ax.legend(handles[::-1], labels,bbox_to_anchor=(-0.4, 1, 1.52, .3), loc=0,ncol=2, mode="expand", borderaxespad=0.,frameon=False,fontsize=12) plt.tight_layout() ax=axes[1] bin_labels=["1","2-3","4-6","7-10","11-20",">20"] bins=[1,3,6,10,20,1000] codes=[4] res=[] labels=[] for method in methods: if method not in alignment_stats: continue if "match_stats" not in alignment_stats[method]: continue if set(alignment_stats[method]["match_stats"].keys())&set(codes): my_res=[] for b in bins[::-1]: my_res.append(sum([v for code in set(alignment_stats[method]["match_stats"].keys())&set(codes) for k,v in alignment_stats[method]["match_stats"][code].iteritems() if ( k<=b)])/float(sum(alignment_stats[method]["NM"].values()))*100) my_res=my_res res.append(my_res) label=method labels.append(label) else: my_res=[] for b in bins: my_res.append(0) my_res=my_res res.append(my_res) label=method labels.append(label) my_data=DataFrame(np.array(res),index=labels,columns=bin_labels) for ii,b in enumerate(bin_labels): 
cg=sns.barplot(data=my_data,x=b,y=labels,label=b, color=my_colors[ii],ax=ax) ax.set_yticklabels([]) ax.set_xlabel("% of mapped fragments") sns.despine(left=True) handles, labels = ax.get_legend_handles_labels() ax.legend(handles[::-1], labels,bbox_to_anchor=(0.2, 1, .6, .3), loc=0,ncol=3, mode="expand", borderaxespad=0.,frameon=False,fontsize=12, title="Number of soft clipped bases") plt.tight_layout() ax=axes[2] bin_labels=["1","2","3-4","5-6","7-9",">9"] bins=[1,2,4,6,9,1000] res=[] labels=[] for method in methods: if method not in alignment_stats: continue if "NM" not in alignment_stats[method]: continue my_res=[] for b in bins[::-1]: my_res.append(sum([v/float(sum(alignment_stats[method]["NM"].values()))*100 for k,v in alignment_stats[method]["NM"].iteritems() if ( 0<k<=b)])) my_res=my_res res.append(my_res) label=method labels.append(label) my_data=DataFrame(np.array(res),index=labels,columns=bin_labels) for ii,b in enumerate(bin_labels): cg=sns.barplot(data=my_data,x=b,y=labels,label=b, color=my_colors[ii],ax=ax) ax.set_yticklabels([]) ax.set_xlabel("% of mapped fragments") sns.despine(left=True) handles, labels = ax.get_legend_handles_labels() # reverse the order ax.legend(handles[::-1], labels,bbox_to_anchor=(0.2, 1, 0.6, .3), loc=0,ncol=3, mode="expand", borderaxespad=0.,frameon=False,fontsize=12,title="Number of mismatches") plt.tight_layout()
analysis_scripts/alignment/.ipynb_checkpoints/RNACocktail-Alignment-Analysis-checkpoint.ipynb
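The alignment notebook above fakes a stacked horizontal bar chart by calling `sns.barplot` repeatedly over reversed cumulative sums (`myres=[sum(myres[i:]) for i in range(len(myres))]`), drawing the longest bar first so shorter bars overlay it. A stdlib-only sketch of that cumulative-sum trick, with made-up category counts (the notebook itself works in percentages of the total):

```python
# Drawing overlapping bars longest-first makes repeated barplot calls
# look like a single stacked bar. The counts below are illustrative only.
counts = [50, 30, 15, 5]   # e.g. fragments per mapping category
rev = counts[::-1]         # least-frequent category first
# Each entry becomes the sum of itself and everything after it, so the
# first bar spans the whole axis and each later bar is painted on top.
cumulative = [sum(rev[i:]) for i in range(len(rev))]
print(cumulative)  # [100, 95, 80, 50]
```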
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # Fit the SVI model with no minibatches. Only the SVI lower bound # + import autoreg import GPy import numpy as np from matplotlib import pyplot as plt from __future__ import print_function # %matplotlib inline from autoreg.benchmark import tasks # + # Function to compute root mean square error: def comp_RMSE(a,b): return np.sqrt(np.square(a-b).mean()) # - # Define class for normalization class Normalize(object): def __init__(self, data, name, norm_name): self.data_mean = data.mean(axis=0) self.data_std = data.std(axis=0) self.normalization_computed = True setattr(self, name, data) setattr(self, norm_name, (data-self.data_mean) / self.data_std ) def normalize(self, data, name, norm_name): if hasattr(self,norm_name): raise ValueError("This normalization name already exist, choose another one") setattr(self, name, data ) setattr(self, norm_name, (data-self.data_mean) / self.data_std ) def denormalize(self, data): return data*self.data_std + self.data_mean trainned_models_folder_name = "/Users/grigoral/work/code/RGP/examples/identif_trainded" task_name = 'IdentificationExample5' # task names: # Actuator, Ballbeam, Drive, Gas_furnace, Flutter, Dryer, Tank, # IdentificationExample1..5 task = getattr( tasks, task_name) task = task() task.load_data() print("Data OUT train shape: ", task.data_out_train.shape) print("Data IN train shape: ", task.data_in_train.shape) print("Data OUT test shape: ", task.data_out_test.shape) print("Data IN test shape: ", task.data_in_test.shape) # ### Normalize training and test data: # + normalize = False in_data = Normalize(task.data_in_train,'in_train','in_train_norm' ) out_data = Normalize(task.data_out_train,'out_train','out_train_norm' ) in_data.normalize(task.data_in_test, 'in_test','in_test_norm') 
out_data.normalize(task.data_out_test, 'out_test','out_test_norm') if normalize: out_train = out_data.out_train_norm #out_data.out_train in_train = in_data.in_train_norm # in_data.in_train out_test = out_data.out_test_norm #out_data.out_test in_test = in_data.in_test_norm #in_data.in_test else: out_train = out_data.out_train #out_data.out_train in_train = in_data.in_train # in_data.in_train out_test = out_data.out_test #out_data.out_test in_test = in_data.in_test #in_data.in_test print("Training OUT mean: ", out_train.mean(0)); print("Training OUT std: ", out_train.std(0)) print("") print("Test OUT mean: ", out_test.mean(0)); print("Test OUT std: ", out_test.std(0)) print("") print("Training IN mean: ", in_train.mean(0)); print("Training IN std: ", in_train.std(0)) print("") print("Test IN mean: ", in_test.mean(0)); print("Test IN std: ", in_test.std(0)) # - # ### Plot training and test data: # + # Plot training: fig1 = plt.figure(1,figsize=(20,8)) fig1.suptitle('Training data') ax1 = plt.subplot(1,2,1) ax1.plot(out_train) ax1.set_title('Data OUT training') ax2 = plt.subplot(1,2,2) ax2.plot(in_train) ax2.set_title('Data IN training') fig2 = plt.figure(2,figsize=(20,8)) fig2.suptitle('Test data') ax3 = plt.subplot(1,2,1) ax3.plot(out_test) ax3.set_title('Data OUT test') ax4 = plt.subplot(1,2,2) ax4.plot(in_test) ax4.set_title('Data IN test') del ax1, ax2, ax3, ax4 # - # ### Model definition: # + Q = 50 # 200 # Inducing points num win_in = task.win_in # 20 win_out = task.win_out # 20 use_controls = True back_cstr = False inference_method = 'svi' # 1 layer: wins = [0, win_out] # 0-th is output layer nDims = [out_train.shape[1],1] # 2 layers: # wins = [0, win_out, win_out] # nDims = [out_train.shape[1],1,1] MLP_dims = [300,200] print("Input window: ", win_in) print("Output window: ", win_out) m = autoreg.DeepAutoreg_new(wins, out_train, U=in_train, U_win=win_in, num_inducing=Q, back_cstr=back_cstr, MLP_dims=MLP_dims, nDims=nDims, init='Y', # how to initialize hidden 
states means X_variance=0.05, # how to initialize hidden states variances inference_method=inference_method, # Inference method # 1 layer: kernels=[GPy.kern.RBF(win_out,ARD=True,inv_l=True), GPy.kern.RBF(win_in + win_out,ARD=True,inv_l=True)] ) # 2 layers: #kernels=[GPy.kern.RBF(win_out,ARD=True,inv_l=True), # GPy.kern.RBF(win_out+win_out,ARD=True,inv_l=True), # GPy.kern.RBF(win_out+win_in,ARD=True,inv_l=True)]) #m = autoreg.DeepAutoreg([0,win_out],out_train, U=in_train, U_win=win_in,X_variance=0.01, # num_inducing=50) # pattern for model name: #task_name, inf_meth=?, wins=layers, Q = ?, backcstr=?,MLP_dims=?, nDims= model_file_name = '%s--inf_meth=%s--wins=%s--Q=%i--backcstr=%i--nDims=%s' % (task.name, 'reg' if inference_method is None else inference_method, str(wins), Q, back_cstr, str(nDims)) if back_cstr == True: model_file_name += '--MLP_dims=%s' % (MLP_dims,) print('Model file name: ', model_file_name) print(m) # - # ### Model initialization: # Here layer numbers are different than in initialization. 
0-th layer is the top one for i in range(m.nLayers): m.layers[i].kern.inv_l[:] = np.mean( 1./((m.layers[i].X.mean.values.max(0)-m.layers[i].X.mean.values.min(0))/np.sqrt(2.)) ) m.layers[i].likelihood.variance[:] = 0.01*out_train.var() m.layers[i].kern.variance.fix(warning=False) m.layers[i].likelihood.fix(warning=False) print(m) print(m.layer_1.kern.inv_l) print(m.layer_0.kern.inv_l) print( np.mean(1./((m.layer_1.X.mean.values.max(0)-m.layer_1.X.mean.values.min(0))/np.sqrt(2.))) ) # + # Plot initialization of hidden layer: def plot_hidden_states(fig_no, layer, layer_start_point=None, layer_end_point=None, data_start_point=None, data_end_point=None): if layer_start_point is None: layer_start_point=0; if layer_end_point is None: layer_end_point = len(layer.mean) if data_start_point is None: data_start_point=0; if data_end_point is None: layer_end_point = len(out_train) data = out_train[data_start_point:data_end_point] layer_means = layer.mean[layer_start_point:layer_end_point] layer_vars = layer.variance[layer_start_point:layer_end_point] fig4 = plt.figure(fig_no,figsize=(10,8)) ax1 = plt.subplot(1,1,1) fig4.suptitle('Hidden layer plotting') ax1.plot(out_train[data_start_point:data_end_point], label="Orig data Train_out", color = 'b') ax1.plot( layer_means, label = 'pred mean', color = 'r' ) ax1.plot( layer_means +\ 2*np.sqrt( layer_vars ), label = 'pred var', color='r', linestyle='--' ) ax1.plot( layer_means -\ 2*np.sqrt( layer_vars ), label = 'pred var', color='r', linestyle='--' ) ax1.legend(loc=4) ax1.set_title('Hidden layer vs Training data') del ax1 plot_hidden_states(5,m.layer_1.qX_0) #plot_hidden_states(6,m.layer_2.qX_0) # - # ### Model training: # + #init_runs = 50 if out_train.shape[0]<1000 else 100 init_runs = 100 print("Init runs: ", init_runs) m.optimize('bfgs',messages=1,max_iters=init_runs) for i in range(m.nLayers): m.layers[i].kern.variance.constrain_positive(warning=False) m.layers[i].likelihood.constrain_positive(warning=False) 
m.optimize('bfgs',messages=1,max_iters=10000) print(m) # - # ### Look at trained parameters # + if hasattr(m, 'layer_1'): print("Layer 1: ") print("States means (min and max), shapes: ", m.layer_1.qX_0.mean.min(), m.layer_1.qX_0.mean.max(), m.layer_1.qX_0.mean.shape) print("States variances (min and max), shapes: ", m.layer_1.qX_0.variance.min(), m.layer_1.qX_0.variance.max(), m.layer_1.qX_0.mean.shape) print("Inverse langthscales (min and max), shapes: ", m.layer_1.rbf.inv_lengthscale.min(), m.layer_1.rbf.inv_lengthscale.max(), m.layer_1.rbf.inv_lengthscale.shape ) if hasattr(m, 'layer_0'): print("") print("Layer 0 (output): ") print("Inverse langthscales (min and max), shapes: ", m.layer_0.rbf.inv_lengthscale.min(), m.layer_0.rbf.inv_lengthscale.max(), m.layer_0.rbf.inv_lengthscale.shape ) # - print(m.layer_0.rbf.inv_lengthscale) print(m.layer_1.rbf.inv_lengthscale) # ### Analyze and plot model on test data: # + # Free-run on the train data # initialize to last part of trained latent states #init_Xs = [None, m.layer_1.qX_0[0:win_out]] # init_Xs for train prediction # initialize to zeros init_Xs = None predictions_train = m.freerun(init_Xs = init_Xs, U=in_train, m_match=True) # initialize to last part of trainig latent states #init_Xs = [None, m.layer_1.qX_0[-win_out:] ] # init_Xs for test prediction #U_test = np.vstack( (in_train[-win_in:], in_test) ) # initialize to zeros init_Xs = None U_test = in_test # Free-run on the test data predictions_test = m.freerun(init_Xs = init_Xs, U=U_test, m_match=True) del init_Xs, U_test # - # Plot predictions def plot_predictions(fig_no,posterior_train, posterior_test=None, layer_no = None): """ Plots the output data along with posterior of the layer. Used for plotting the hidden states or layer_no: int or Normal posterior plot states of this layer (0-th is output). There is also some logic about compting the MSE, and aligning with actual data. 
""" if layer_no is None: #default layer_no = 1 if posterior_test is None: no_test_data = True else: no_test_data = False if isinstance(posterior_train, list): layer_in_list = len(predictions_train)-1-layer_no # standard layer no (like in printing the model) predictions_train_layer = predictions_train[layer_in_list] else: predictions_train_layer = posterior_train if not no_test_data: if isinstance(posterior_test, list): predictions_test_layer = predictions_test[layer_in_list] else: predictions_test_layer = posterior_test # Aligning the data -> # training of test data can be longer than leyer data because of the initial window. if out_train.shape[0] > predictions_train_layer.mean.shape[0]: out_train_tmp = out_train[win_out:] else: out_train_tmp = out_train if out_test.shape[0] > predictions_test_layer.mean.shape[0]: out_test_tmp = out_test[win_out:] else: out_test_tmp = out_test # Aligning the data <- if layer_no == 0: # Not anymore! Compute RMSE ignoring first output values of length "win_out" train_rmse = [comp_RMSE(predictions_train_layer.mean, out_train_tmp)] print("Train overall RMSE: ", str(train_rmse)) if not no_test_data: # Compute RMSE ignoring first output values of length "win_out" test_rmse = [comp_RMSE(predictions_test_layer.mean, out_test_tmp)] print("Test overall RMSE: ", str(test_rmse)) # Plot predictions: if not no_test_data: fig5 = plt.figure(10,figsize=(20,8)) else: fig5 = plt.figure(10,figsize=(10,8)) fig5.suptitle('Predictions on Training and Test data') if not no_test_data: ax1 = plt.subplot(1,2,1) else: ax1 = plt.subplot(1,1,1) ax1.plot(out_train_tmp, label="Train_out", color = 'b') ax1.plot( predictions_train_layer.mean, label = 'pred mean', color = 'r' ) ax1.plot( predictions_train_layer.mean +\ 2*np.sqrt( predictions_train_layer.variance ), label = 'pred var', color='r', linestyle='--' ) ax1.plot( predictions_train_layer.mean -\ 2*np.sqrt( predictions_train_layer.variance ), label = 'pred var', color='r', linestyle='--' ) ax1.legend(loc=4) 
ax1.set_title('Predictions on Train') if not no_test_data: ax2 = plt.subplot(1,2,2) ax2.plot(out_test_tmp, label="Test_out", color = 'b') ax2.plot( predictions_test_layer.mean, label = 'pred mean', color = 'r' ) #ax2.plot( predictions_test_layer.mean +\ # 2*np.sqrt( predictions_test_layer.variance ), label = 'pred var', color='r', linestyle='--' ) #ax2.plot( predictions_test_layer.mean -\ # 2*np.sqrt( predictions_test_layer.variance ), label = 'pred var', color='r', linestyle='--' ) ax2.legend(loc=4) ax2.set_title('Predictions on Test') del ax2 del ax1 plot_predictions(7,predictions_train, predictions_test , layer_no = 0) comp_RMSE(np.zeros( (len(out_train[20:]),1) ), out_train[20:] ) out_train[20:].mean(0) plot_hidden_states(8,m.layer_1.qX_0) #plot_hidden_states(9,m.layer_2.qX_0)
examples/Svi/IdentificationTraining_IE5-SVI_1.ipynb
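The free-run evaluation in the notebook above scores predictions with `comp_RMSE`. A stdlib-only restatement, with a sanity check: predicting the mean of a signal gives an RMSE equal to its population standard deviation (this is the baseline the notebook probes with `comp_RMSE(np.zeros(...), out_train[20:])` on centred data):

```python
import math

# Root-mean-square error, as in the notebook's comp_RMSE helper.
def comp_RMSE(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

y = [1.0, 3.0, 5.0, 7.0]
mean = sum(y) / len(y)          # 4.0
baseline = [mean] * len(y)      # constant mean predictor
print(comp_RMSE(baseline, y))   # sqrt(5), the population std of y
```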
// ---
// jupyter:
//   jupytext:
//     text_representation:
//       extension: .scala
//       format_name: light
//       format_version: '1.5'
//     jupytext_version: 1.14.4
//   kernelspec:
//     display_name: Scala
//     language: scala
//     name: scala
// ---

// # Scala

// ## Version

util.Properties.versionMsg

// ## Spark

import $exclude.`org.slf4j:slf4j-log4j12`, $ivy.`org.slf4j:slf4j-nop:1.7.21` // for cleaner logs
import $profile.`hadoop-2.7`
import $ivy.`org.apache.spark::spark-sql:2.3.0` // adjust spark version - spark >= 2.0

// +
import org.apache.spark._
import org.apache.spark.sql._

val spark = SparkSession.builder().master("local[*]").appName("spark")
  .config("spark.driver.memory", "8g")
  .config("spark.executor.memory", "8g")
  .config("spark.python.worker.memory", "8g")
  .getOrCreate()
// -

val df = spark.read.json("/usr/local/spark/examples/src/main/resources/people.json")

df.show

spark.stop()
workspace/scala.ipynb
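`spark.read.json` in the Scala notebook above expects JSON Lines input (one JSON object per line), which is how Spark's bundled `people.json` is laid out. A stdlib Python sketch of that per-line parsing, on an inline sample shaped like that file (the records here are illustrative, not the actual file contents):

```python
import io
import json

# spark.read.json treats each input line as one JSON record; this
# mimics that on a tiny in-memory sample. Fields may differ per line,
# which is why the resulting "schema" is a union of the keys seen.
sample = io.StringIO('{"name":"Michael"}\n{"name":"Andy","age":30}\n')
rows = [json.loads(line) for line in sample]
for row in rows:
    print(row)  # dicts standing in for DataFrame rows
```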
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import math, random

import gym
import numpy as np
import os

import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
import torch.nn.functional as F

from IPython.display import clear_output
import matplotlib.pyplot as plt
# %matplotlib inline
# -

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

USE_CUDA = torch.cuda.is_available()
Variable = lambda *args, **kwargs: autograd.Variable(*args, **kwargs).cuda() if USE_CUDA else autograd.Variable(*args, **kwargs)

# +
from collections import deque


class ReplayBuffer(object):
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        state = np.expand_dims(state, 0)
        next_state = np.expand_dims(next_state, 0)
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        state, action, reward, next_state, done = zip(*random.sample(self.buffer, batch_size))
        return np.concatenate(state), action, reward, np.concatenate(next_state), done

    def __len__(self):
        return len(self.buffer)
# -

# ## Epsilon greedy exploration

# +
epsilon_start = 1.0
epsilon_final = 0.01
epsilon_decay = 500

epsilon_by_frame = lambda frame_idx: epsilon_final + (epsilon_start - epsilon_final) * math.exp(-1. * frame_idx / epsilon_decay)
# -

plt.plot([epsilon_by_frame(i) for i in range(10000)])


def update_target(current_model, target_model):
    target_model.load_state_dict(current_model.state_dict())


# ## Computing Temporal Difference Loss

def compute_td_loss(batch_size):
    state, action, reward, next_state, done = replay_buffer.sample(batch_size)

    state = Variable(torch.FloatTensor(np.float32(state)))
    next_state = Variable(torch.FloatTensor(np.float32(next_state)))
    action = Variable(torch.LongTensor(action))
    reward = Variable(torch.FloatTensor(reward))
    done = Variable(torch.FloatTensor(done))

    q_values = current_model(state)
    next_q_values = current_model(next_state)
    next_q_state_values = target_model(next_state)

    q_value = q_values.gather(1, action.unsqueeze(1)).squeeze(1)
    next_q_value = next_q_state_values.gather(1, torch.max(next_q_values, 1)[1].unsqueeze(1)).squeeze(1)
    expected_q_value = reward + gamma * next_q_value * (1 - done)

    loss = (q_value - Variable(expected_q_value.data)).pow(2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    return loss


def plot(frame_idx, rewards, losses):
    clear_output(True)
    plt.figure(figsize=(20,5))
    plt.subplot(131)
    plt.title('frame %s. reward: %s' % (frame_idx, np.mean(rewards[-10:])))
    plt.plot(rewards)
    plt.subplot(132)
    plt.title('loss')
    plt.plot(losses)
    plt.show()


# ## Atari Environment

from wrappers import make_atari, wrap_deepmind, wrap_pytorch

env_id = "PongNoFrameskip-v4"
env = make_atari(env_id)
env = wrap_deepmind(env)
env = wrap_pytorch(env)


class CnnDQN(nn.Module):
    def __init__(self, input_shape, num_actions):
        super(CnnDQN, self).__init__()

        self.input_shape = input_shape
        self.num_actions = num_actions

        self.features = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU()
        )

        self.fc = nn.Sequential(
            nn.Linear(self.feature_size(), 512),
            nn.ReLU(),
            nn.Linear(512, self.num_actions)
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def feature_size(self):
        return self.features(autograd.Variable(torch.zeros(1, *self.input_shape))).view(1, -1).size(1)

    def act(self, state, epsilon):
        if random.random() > epsilon:
            state = Variable(torch.FloatTensor(np.float32(state)).unsqueeze(0), volatile=True)
            state = state.to(device)
            q_value = self.forward(state)
            action = q_value.max(1)[1].data[0]
        else:
            action = random.randrange(env.action_space.n)
        return action


# +
current_model = CnnDQN(env.observation_space.shape, env.action_space.n)
# current_model.load_state_dict(torch.load('current.ckpt'))
target_model = CnnDQN(env.observation_space.shape, env.action_space.n)
# target_model.load_state_dict(torch.load('target.ckpt'))

if USE_CUDA:
    current_model = current_model.cuda()
    target_model = target_model.cuda()

optimizer = optim.Adam(current_model.parameters(), lr=0.0001)

replay_initial = 10000
replay_buffer = ReplayBuffer(100000)

update_target(current_model, target_model)
# -

# ### Vertical filter training

from scipy.ndimage import sobel

# +
num_frames = 1000000
batch_size = 32
gamma = 0.99

losses = []
all_rewards = []
episode_reward = 0

ckpt_dir = './checkpoints_vertical2'
ckpt_names = []
dir_exist = os.path.exists(ckpt_dir)
if (dir_exist == 0):
    os.mkdir(ckpt_dir)

state = env.reset()
for frame_idx in range(1, num_frames + 1):
    epsilon = epsilon_by_frame(frame_idx)
    action = current_model.act(state, epsilon)

    next_state, reward, done, _ = env.step(action)
    next_state = sobel(next_state, 1)  # vertical edges
    replay_buffer.push(state, action, reward, next_state, done)

    state = next_state
    episode_reward += reward

    if done:
        state = env.reset()
        all_rewards.append(episode_reward)
        episode_reward = 0

    if len(replay_buffer) > replay_initial:
        loss = compute_td_loss(batch_size)
        losses.append(loss.item())

    if frame_idx % 1000 == 0:
        plot(frame_idx, all_rewards, losses)

    if frame_idx % 1000 == 0:
        update_target(current_model, target_model)

    # save checkpoints every 10000 frames
    if frame_idx % 10000 == 0:
        save_str = str(frame_idx) + '_DoubleDQN.ckpt'
        save_dir = os.path.join(ckpt_dir, save_str)
        ckpt_names.append(save_dir)
        torch.save(current_model.state_dict(), save_dir)
# -
DDQN-sobel-vertical.ipynb
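`compute_td_loss` in the notebook above implements the Double-DQN target: the online network (`current_model`) picks the argmax action for the next state, while the target network (`target_model`) supplies that action's value. A plain-Python restatement of that target with made-up Q-values:

```python
# Double DQN decouples action *selection* from action *evaluation* to
# curb vanilla DQN's overestimation bias. All numbers are illustrative.
gamma = 0.99
reward, done = 1.0, 0.0
online_next_q = [0.2, 0.9, 0.5]   # current_model(next_state)
target_next_q = [0.3, 0.6, 0.8]   # target_model(next_state)
a_star = online_next_q.index(max(online_next_q))  # online net selects action 1
# Bellman target: reward + gamma * Q_target(s', a*) * (1 - done)
td_target = reward + gamma * target_next_q[a_star] * (1 - done)
print(td_target)
```

Vanilla DQN would instead use `max(target_next_q)` (0.8 here), which tends to overestimate; Double DQN evaluates the online net's choice, giving 1 + 0.99 * 0.6 instead.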
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.2 64-bit # metadata: # interpreter: # hash: bda6d754138e49b2ebf8b651a77a14698e425fe7435331076ed05b1488b86230 # name: python3 # --- # + id="Q-MnjtoY_Xsc" import numpy as np import time import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision from torch.utils.data.sampler import SubsetRandomSampler import torchvision.transforms as transforms import matplotlib.pyplot as plt import torchvision.models from PIL import Image # + colab={"base_uri": "https://localhost:8080/"} id="FemEIn0OHRCv" outputId="ccc0e361-dea5-4553-a887-0f046797d406" from google.colab import drive drive.mount('/content/gdrive') # + [markdown] id="wd5LhLacoi0G" # ### Splitting Data # + colab={"base_uri": "https://localhost:8080/"} id="GVzjalQgklJe" outputId="5c5db3b5-b5d2-413c-a359-cd8e8269f1f9" # # Directory to get data from # dir = r'/content/gdrive/MyDrive/APS360/Project/SampleDataLarge/Pneumonia-Bacterial' # # Directory to store training set # train_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/bacterial_train' # # Directory to store testing set # test_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/bacterial_val' # # Directory to store validation path # valid_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/bacterial_test' # # Get the list of file paths in the directory of dataset # files = [file for file in os.listdir( # dir) if os.path.isfile(os.path.join(dir, file))] # # Input size of training set # train_count = np.round(70 / 100 * len(files)) # # Input size of testing set # test_count = np.round(15 / 100 * len(files)) # # Input size of validation set # valid_count = np.round(15 / 100 * len(files)) # # Generate random numbers of file indices # random_indices = list(random.sample(range(0, len(files)), len(files))) # 
print("len(files)", len(files)) # # train_files indices # print(random_indices) # # training files # train_file_index = random_indices[0:int(train_count) + 1] # train_file_name = [files[i] for i in train_file_index] # # testing files # test_file_index = random_indices[int( # train_count) + 1:int(train_count + test_count) + 1] # test_file_name = [files[i] for i in test_file_index] # # validation files # valid_file_index = random_indices[int(train_count + test_count) + 1:] # valid_file_name = [files[i] for i in valid_file_index] # # training files # for train in train_file_name: # file = train # shutil.copyfile(os.path.join(dir, file), os.path.join(train_dir, file)) # # test_files # for test in test_file_name: # file = test # shutil.copyfile(os.path.join(dir, file), os.path.join(test_dir, file)) # # valid_files # for valid in valid_file_name: # file = valid # shutil.copyfile(os.path.join(dir, file), os.path.join(valid_dir, file)) # + colab={"base_uri": "https://localhost:8080/"} id="SKVpc5iUk5YV" outputId="66b22e1f-4767-482a-c17e-80f8d7231f9a" # # Directory to get data from # dir = r'/content/gdrive/MyDrive/APS360/Project/SampleDataLarge/Pneumonia-Viral' # # Directory to store training set # train_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/viral_train' # # Directory to store testing set # test_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/viral_val' # # Directory to store validation path # valid_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/viral_test' # # Get the list of file paths in the directory of dataset # files = [file for file in os.listdir( # dir) if os.path.isfile(os.path.join(dir, file))] # # Input size of training set # train_count = np.round(70 / 100 * len(files)) # # Input size of testing set # test_count = np.round(15 / 100 * len(files)) # # Input size of validation set # valid_count = np.round(15 / 100 * len(files)) # # Generate random numbers of file indices # random_indices = list(random.sample(range(0, 
len(files)), len(files))) # print("len(files)", len(files)) # # train_files indices # print(random_indices) # # training files # train_file_index = random_indices[0:int(train_count) + 1] # train_file_name = [files[i] for i in train_file_index] # # testing files # test_file_index = random_indices[int( # train_count) + 1:int(train_count + test_count) + 1] # test_file_name = [files[i] for i in test_file_index] # # validation files # valid_file_index = random_indices[int(train_count + test_count) + 1:] # valid_file_name = [files[i] for i in valid_file_index] # # training files # for train in train_file_name: # file = train # shutil.copyfile(os.path.join(dir, file), os.path.join(train_dir, file)) # # test_files # for test in test_file_name: # file = test # shutil.copyfile(os.path.join(dir, file), os.path.join(test_dir, file)) # # valid_files # for valid in valid_file_name: # file = valid # shutil.copyfile(os.path.join(dir, file), os.path.join(valid_dir, file)) # + colab={"base_uri": "https://localhost:8080/"} id="fSQx1glnlEjr" outputId="8901e7d5-cdb8-4be3-e837-2f6c30eb18ab" # # Directory to get data from # dir = r'/content/gdrive/MyDrive/APS360/Project/SampleDataLarge/Normal' # # Directory to store training set # train_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/normal_train' # # Directory to store testing set # test_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/normal_val' # # Directory to store validation path # valid_dir = r'/content/gdrive/MyDrive/APS360/Project/sample_large/normal_test' # # Get the list of file paths in the directory of dataset # files = [file for file in os.listdir( # dir) if os.path.isfile(os.path.join(dir, file))] # # Input size of training set # train_count = np.round(70 / 100 * len(files)) # # Input size of testing set # test_count = np.round(15 / 100 * len(files)) # # Input size of validation set # valid_count = np.round(15 / 100 * len(files)) # # Generate random numbers of file indices # random_indices = 
list(random.sample(range(0, len(files)), len(files))) # print("len(files)", len(files)) # # train_files indices # print(random_indices) # # training files # train_file_index = random_indices[0:int(train_count) + 1] # train_file_name = [files[i] for i in train_file_index] # # testing files # test_file_index = random_indices[int( # train_count) + 1:int(train_count + test_count) + 1] # test_file_name = [files[i] for i in test_file_index] # # validation files # valid_file_index = random_indices[int(train_count + test_count) + 1:] # valid_file_name = [files[i] for i in valid_file_index] # # training files # for train in train_file_name: # file = train # shutil.copyfile(os.path.join(dir, file), os.path.join(train_dir, file)) # # test_files # for test in test_file_name: # file = test # shutil.copyfile(os.path.join(dir, file), os.path.join(test_dir, file)) # # valid_files # for valid in valid_file_name: # file = valid # shutil.copyfile(os.path.join(dir, file), os.path.join(valid_dir, file)) # + id="TqfzLLrsI_V3" # + [markdown] id="cWtejem2I3mw" # ### Load Data # + id="DSrCbXjUHuKP" def load_data(batch_size=64): # Compose allows us to have multiple transformations to occur # and resize all images to 224x224 transform_it = transforms.Compose([transforms.Resize((224,224)), transforms.ToTensor()]) # Save the paths of each of the different types of data that are located in my drive train_path = '/content/gdrive/MyDrive/APS360/Project/sample_large/train' val_path = '/content/gdrive/MyDrive/APS360/Project/sample_large/val' test_path = '/content/gdrive/MyDrive/APS360/Project/sample_large/test' # Load all of the data from my google drive train_data = torchvision.datasets.ImageFolder(train_path, transform=transform_it) val_data = torchvision.datasets.ImageFolder(val_path, transform=transform_it) test_data = torchvision.datasets.ImageFolder(test_path, transform=transform_it) return train_data, val_data, test_data # + [markdown] id="n_rxd7xpox9T" # ### CNN Models # + id="ypjup4QRfbFh" 
#Convolutional Neural Network Architecture for classifying chest xray images #Model 1 of our CNN class Xray_Classifier_MODEL1(nn.Module): def __init__(self): self.name = "Xray_Classifier_MODEL1" super(Xray_Classifier_MODEL1, self).__init__() self.conv1 = nn.Conv2d(3, 5, 5) #in_channel=3, out_channel=5, kernel_size=5 self.pool = nn.MaxPool2d(2, 2) #kernel_size=2, stride=2 self.conv2 = nn.Conv2d(5, 10, 5) #in_channel=5, out_channel=10, kernel_size=5 self.fc1 = nn.Linear(10*53*53, 30) #in_features=10*53*53, out_features=30 self.fc2 = nn.Linear(30, 4) #in_features=30, out_features=4 def forward(self, x): x = self.pool(F.relu(self.conv1(x))) #apply pooling to 1st convolution layer x = self.pool(F.relu(self.conv2(x))) #apply pooling to 2nd convolution layer x = x.view(-1, 10*53*53) x = F.relu(self.fc1(x)) x = self.fc2(x) return x # + id="JL1aCunTg6hy" #Convolutional Neural Network Architecture for classifying chest xray images #Model 2 of our CNN --> we added one more fully connected layer class Xray_Classifier_MODEL2(nn.Module): def __init__(self): self.name = "Xray_Classifier_MODEL2" super(Xray_Classifier_MODEL2, self).__init__() self.conv1 = nn.Conv2d(3, 5, 5) #in_channel=3, out_channel=5, kernel_size=5 self.pool = nn.MaxPool2d(2, 2) #kernel_size=2, stride=2 self.conv2 = nn.Conv2d(5, 10, 5) #in_channel=5, out_channel=10, kernel_size=5 self.fc1 = nn.Linear(10*53*53, 80) #in_features=10*53*53, out_features=80 self.fc2 = nn.Linear(80, 30) #in_features=80, out_features=30 self.fc3 = nn.Linear(30, 4) #in_features=30, out_features=4 def forward(self, x): x = self.pool(F.relu(self.conv1(x))) #apply pooling to 1st convolution layer x = self.pool(F.relu(self.conv2(x))) #apply pooling to 2nd convolution layer x = x.view(-1, 10*53*53) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x # + [markdown] id="ivsfkTZYH_WK" # ### **Training** # + id="ikCvYdHiHuKa" # For the model checkpoints def get_model_name(name, batch_size, learning_rate, epoch): """ Generate 
a name for the model consisting of all the hyperparameter values

    Args:
        name: Name of the model
        batch_size: Batch size used during training
        learning_rate: Learning rate used during training
        epoch: Epoch number (zero-indexed)
    Returns:
        path: A string with the hyperparameter names and values concatenated
    """
    path = "model_{0}_bs{1}_lr{2}_epoch{3}".format(name, batch_size, learning_rate, epoch)
    return path


# + id="CQImtzSgfbFi"
def get_accuracy(model, data_loader):
    correct = 0
    total = 0
    for imgs, labels in data_loader:
        #############################################
        # To enable GPU usage
        if use_cuda and torch.cuda.is_available():
            imgs = imgs.cuda()
            labels = labels.cuda()
        #############################################
        output = model(imgs)
        # select the index with the maximum prediction score
        pred = output.max(1, keepdim=True)[1]
        correct += pred.eq(labels.view_as(pred)).sum().item()
        total += imgs.shape[0]
    return correct / total


# + id="3g23OgnifbFi"
def train(model, train_data, val_data, batch_size=64, learning_rate=0.001, num_epochs=20):
    torch.manual_seed(1000)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
    val_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size, shuffle=True)

    iters, losses, train_acc, val_acc = [], [], [], []

    # training
    n = 0  # the number of iterations
    for epoch in range(num_epochs):
        for imgs, labels in iter(train_loader):
            #############################################
            # To enable GPU usage
            if use_cuda and torch.cuda.is_available():
                imgs = imgs.cuda()
                labels = labels.cuda()
            #############################################
            out = model(imgs)              # forward pass
            loss = criterion(out, labels)  # compute the total loss
            loss.backward()                # backward pass (compute parameter updates)
            optimizer.step()               # make the updates for each parameter
            optimizer.zero_grad()          # a clean-up step for PyTorch

            # save the current training information
            iters.append(n)
            losses.append(float(loss) / batch_size)              # compute *average* loss
            train_acc.append(get_accuracy(model, train_loader))  # compute training accuracy
            val_acc.append(get_accuracy(model, val_loader))      # compute validation accuracy
            n += 1

        # Print the training and validation accuracy after each epoch to observe how they change over time
        print("epoch number:", epoch + 1,
              "Training accuracy:", train_acc[-1],
              "Validation accuracy:", val_acc[-1])

        # Save the current model (checkpoint) to a file
        model_path = get_model_name(model.name, batch_size, learning_rate, epoch)
        torch.save(model.state_dict(), model_path)

    # plotting
    plt.title("Training Curve")
    plt.plot(iters, losses, label="Train")
    plt.xlabel("Iterations")
    plt.ylabel("Loss")
    plt.show()

    plt.title("Training Curve")
    plt.plot(iters, train_acc, label="Train")
    plt.plot(iters, val_acc, label="Validation")
    plt.xlabel("Iterations")
    plt.ylabel("Accuracy")
    plt.legend(loc='best')
    plt.show()

    print("Final Training Accuracy: {}".format(train_acc[-1]))
    print("Final Validation Accuracy: {}".format(val_acc[-1]))


# + colab={"base_uri": "https://localhost:8080/", "height": 893} id="ZGWaERjaHuKd" outputId="583ff1ca-6c07-4ffc-a484-503a7b4e237a"
# Use GPU if available
use_cuda = True

model_xray_1 = Xray_Classifier_MODEL1()
if use_cuda and torch.cuda.is_available():
    model_xray_1.cuda()
    print('CUDA is available! Training on GPU ...')
else:
    print('CUDA is not available. Training on CPU ...')

train_data_1, val_data_1, test_data_1 = load_data(batch_size=64)
train(model_xray_1, train_data_1, val_data_1, batch_size=64, learning_rate=0.001, num_epochs=15)

# + colab={"base_uri": "https://localhost:8080/", "height": 893} id="nCiu1v-HiND3" outputId="4786c7e9-f11e-4fa4-b909-18e3a83c8ea1"
# Use GPU if available
use_cuda = True

model_xray_2 = Xray_Classifier_MODEL2()
if use_cuda and torch.cuda.is_available():
    model_xray_2.cuda()
    print('CUDA is available! Training on GPU ...')
else:
    print('CUDA is not available. Training on CPU ...')

train_data_2, val_data_2, test_data_2 = load_data(batch_size=64)
train(model_xray_2, train_data_2, val_data_2, batch_size=64, learning_rate=0.001, num_epochs=15)

# + colab={"base_uri": "https://localhost:8080/"} id="NSyzWOBxwI52" outputId="685b37fd-cfa9-470d-ea87-798ba5f2f3fd"
test_loader_2 = torch.utils.data.DataLoader(test_data_2, batch_size=64, shuffle=True)
test_accuracy_2 = get_accuracy(model_xray_2, test_loader_2)
print('Test Accuracy:', test_accuracy_2)
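The checkpoint name produced by `get_model_name` is plain string formatting, so the naming scheme can be checked on its own, without PyTorch:

```python
def get_model_name(name, batch_size, learning_rate, epoch):
    """Build a checkpoint filename that encodes the hyperparameter values."""
    return "model_{0}_bs{1}_lr{2}_epoch{3}".format(name, batch_size, learning_rate, epoch)

# One checkpoint is saved per epoch under this name.
print(get_model_name("xray", 64, 0.001, 14))  # model_xray_bs64_lr0.001_epoch14
```

Because the epoch index is embedded in the name, earlier checkpoints are never overwritten, which makes it easy to reload the weights from the best epoch later.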
Primary Models/CNN/CNN_model.ipynb
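Stripped of the tensor machinery, the bookkeeping in the CNN notebook's `get_accuracy` (count correct predictions per mini-batch, then divide by the total number of samples) reduces to the sketch below; the toy batches are made up for illustration:

```python
def accuracy(batches):
    """batches: iterable of (predicted_labels, true_labels) pairs, one per mini-batch."""
    correct = 0
    total = 0
    for preds, labels in batches:
        correct += sum(p == y for p, y in zip(preds, labels))
        total += len(labels)
    return correct / total

# Two toy batches: 3 correct out of 4, then 1 correct out of 2 -> 4/6 overall.
print(accuracy([([0, 1, 1, 0], [0, 1, 1, 1]), ([1, 0], [1, 1])]))
```

Accumulating `correct` and `total` across batches (rather than averaging per-batch accuracies) is what keeps the result exact when the final batch is smaller than the others.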
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
from __future__ import print_function, division

# %matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import thinkplot

COLORS = ['#8dd3c7','#ffffb3','#bebada','#fb8072','#80b1d3','#fdb462',
          '#b3de69','#fccde5','#d9d9d9','#bc80bd','#ccebc5','#ffed6f']
# -

import networksimulator as ns
import game_noise as gm

from importlib import reload
reload(gm)

# ## Basic simulation

# +
# make a graph
ws1 = ns.make_ws_graph(20, 5, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di1 = gm.initialize_network_agent(ws1)

# change the payoffs
gm.change_all_payoff(ws_di1, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di1, 0.03)
gm.change_mood_regression(ws_di1, 0.005)  # 0.5% regression

# run the simulation: run_sim(network, directory, steps, mvy, mry);
# the defaults mvy = 0, mry = 0 disable mood noise and mood regression
mood_ovt = gm.run_sim(ws1, ws_di1, 100, 1, 1)  # mvy=1, mry=1: both enabled

# plot
plt.plot(mood_ovt)
plt.xlabel("time")
plt.ylabel("average mood")
plt.title("0.01, 0.005, -0.01, both noise and regression")

# +
# make a graph
ws1 = ns.make_ws_graph(20, 5, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di1 = gm.initialize_network_agent(ws1)

# change the payoffs
gm.change_all_payoff(ws_di1, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di1, 0.003)
gm.change_mood_regression(ws_di1, 0.03)

# run the simulation: noise enabled, but no regression to the mean
mood_ovt = gm.run_sim(ws1, ws_di1, 100, 1, 0)

# plot
plt.plot(mood_ovt)
plt.xlabel("time")
plt.ylabel("average mood")
plt.title("0.01, 0.005, -0.01, with random noise")

# +
# make a graph
ws1 = ns.make_ws_graph(20, 5, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di1 = gm.initialize_network_agent(ws1)

# change the payoffs
gm.change_all_payoff(ws_di1, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di1, 0.03)
gm.change_mood_regression(ws_di1, 0.03)

# run the simulation: regression to the mean enabled, but no environmental noise
mood_ovt = gm.run_sim(ws1, ws_di1, 500, 0, 1)

# plot
plt.plot(mood_ovt)
plt.xlabel("time")
plt.ylabel("average mood")
plt.title("0.01, 0.005, -0.01, with mood regression")
# -

# # Divergence among agents
# Adding noise clearly makes the system much harder to predict, but we can still check whether the agents diverge.
#
# So far it seems that noise and mood regression have less impact on agent mood than direct interaction does.
#
# Given that, mood variability (mood changes caused by random environmental factors) does not result in clear divergence. On a social level this makes sense: mood always comes from some form of interaction, and both virtual interaction and interaction in physical space can be modeled by a social network.
#
# Mood regression to the mean keeps the divergence for highly connected networks (it is not clear whether this is because more interactions dilute the effect of mood regression; however, if we assume mood regression happens as a function of time, this modeling assumption still holds).
#
# For more connected networks, on the other hand, it seems that mood regression makes the moods vary more.

# ## Same pattern preserved with mood regression

# +
# make a graph
ws1 = ns.make_ws_graph(200, 151, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di1 = gm.initialize_network_agent(ws1)

# change the payoffs
gm.change_all_payoff(ws_di1, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di1, 0.001)      # vary by (-0.5, 0.5) * 0.01
gm.change_mood_regression(ws_di1, 0.05)  # 5% regression

# run the simulation
mood_ovt = gm.run_sim(ws1, ws_di1, 5000, 1, 1)

# individual agent values
mood_value = []
for i in list(ws_di1.values()):
    mood_value.append(i.mood)

# plot
plt.plot(mood_value, 'ro')
plt.xlabel("agent index")
plt.ylabel("mood of individual agent")
plt.title("WS graph, average connection = 151, node = 200, regression + noise, 5000 iterations")

np.std(mood_value)
# -

# plot the average
plt.plot(mood_ovt)
plt.xlabel("time")
plt.ylabel("average mood")
plt.title("0.01, 0.005, -0.01, noise + mood regression")

# +
# make a graph
ws2 = ns.make_ws_graph(200, 21, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di2 = gm.initialize_network_agent(ws2)

# change the payoffs
gm.change_all_payoff(ws_di2, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di2, 0.001)      # vary by (-0.5, 0.5) * 0.01
gm.change_mood_regression(ws_di2, 0.05)  # 5% regression

# run the simulation
mood_ovt = gm.run_sim(ws2, ws_di2, 5000, 1, 1)

# individual agent values
mood_value = []
for i in list(ws_di2.values()):
    mood_value.append(i.mood)

# plot
plt.plot(mood_value, 'ro')
plt.xlabel("agent index")
plt.ylabel("mood of individual agent")
plt.title("WS graph, average connection = 21, node = 200, regression + noise, 5000 iterations")

np.std(mood_value)
# -

# plot the average
plt.plot(mood_ovt)
plt.xlabel("time")
plt.ylabel("average mood")
plt.title("0.01, 0.005, -0.01, with mood regression")

# ## Mood noise

# +
ws1 = ns.make_ws_graph(40, 33, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di1 = gm.initialize_network_agent(ws1)

# change the payoffs
gm.change_all_payoff(ws_di1, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di1, 0.01)       # vary by (-0.5, 0.5) * 0.01
gm.change_mood_regression(ws_di1, 0.05)  # 5% regression

# run the simulation: noise enabled, regression disabled
mood_ovt = gm.run_sim(ws1, ws_di1, 500, 1, 0)

# individual agent values
mood_value = []
for i in list(ws_di1.values()):
    mood_value.append(i.mood)

# plot
plt.plot(mood_value, 'ro')
plt.xlabel("agent index")
plt.ylabel("mood of individual agent")
plt.title("WS graph, average connection = 33, node = 40, noise")

np.std(mood_value)

# +
ws1 = ns.make_ws_graph(40, 3, 0.6)

# initialize a network of agents and store them in a dictionary
ws_di1 = gm.initialize_network_agent(ws1)

# change the payoffs
gm.change_all_payoff(ws_di1, 0.01, 0.005, -0.01)

# change the noise indices
gm.change_mood_noise(ws_di1, 0.01)       # vary by (-0.5, 0.5) * 0.01
gm.change_mood_regression(ws_di1, 0.05)  # 5% regression

# run the simulation: noise enabled, regression disabled
mood_ovt = gm.run_sim(ws1, ws_di1, 500, 1, 0)

# individual agent values
mood_value = []
for i in list(ws_di1.values()):
    mood_value.append(i.mood)

# plot
plt.plot(mood_value, 'ro')
plt.xlabel("agent index")
plt.ylabel("mood of individual agent")
plt.title("WS graph, average connection = 3, node = 40, noise")

np.std(mood_value)
# -

# # List of things to look at
# ## Simple
# * impact of adding random environmental noise
# * impact of adding mood regression to the mean (both linear and non-linear)
#
# ## Complex
# * impact of network connectivity
# * network
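The internals of `game_noise` are not shown in this notebook, so the exact update rules are assumptions. Based on the "0.5% regression" / "5% regression" comments above, a plausible regression-to-the-mean step pulls each mood a fixed fraction of the way toward a baseline; the function name `regress_mood` and the `baseline` parameter here are hypothetical:

```python
def regress_mood(mood, rate, baseline=0.0):
    """Pull the mood a fixed fraction (rate) of the way back toward the baseline."""
    return mood + rate * (baseline - mood)

mood = 1.0
for _ in range(100):
    mood = regress_mood(mood, 0.05)  # 5% regression per step, as in the notebook cells
print(mood)  # decays geometrically as 0.95**100, roughly 0.006
```

Under this rule an undisturbed mood decays geometrically toward the baseline; whether this matches `game_noise` exactly would need to be checked against its source.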
Cooperative Behavior with noise.ipynb
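Similarly, the comment "vary (-0.5, 0.5) * 0.01" in the simulation notebook above suggests the environmental mood noise is a uniform draw scaled by the agent's noise index. This is an inferred sketch, not `game_noise`'s actual code:

```python
import random

def noisy_mood(mood, noise_index, rng=random):
    """Perturb the mood by a uniform draw in (-0.5, 0.5) scaled by the noise index."""
    return mood + rng.uniform(-0.5, 0.5) * noise_index

# With noise_index = 0.01 the perturbation stays within +/- 0.005 of the current mood.
m = noisy_mood(0.0, 0.01, random.Random(0))
print(abs(m) <= 0.005)  # True
```

Because the draw is zero-mean, this kind of noise jitters individual moods without biasing the network average, consistent with the notebook's observation that noise alone produces no clear divergence.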
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Objective
# * Gather the data using the Twitter API.
# * Store and maintain a PostgreSQL database.
# * Create a live dashboard that pulls from the database and analyzes the sentiment of each tweet.
#
# ## Background Information
# * Cyberpunk 2077 is an upcoming action role-playing video game developed by CD Projekt. With such a large social media following, users are voicing their wishes and concerns to CD Projekt. In this project, we will monitor the tweets collected and parse the sentiment of each one.
#
# ## Process:
# * Data Gathering
# * PostgreSQL database
# * Dashboard
#
# ## Table of Contents:
# * Part I: Data Gathering
#   * Gathering
#   * PostgreSQL
# * Part II: Dashboard

# +
# Import packages
from datetime import datetime
from dash.dependencies import Input, Output
from io import BytesIO
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
from tweepy import API
from tweepy import OAuthHandler
from tweepy import Stream
from urllib3.exceptions import ProtocolError
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from wordcloud import WordCloud

import base64
import dash
import dash_table
import dash_core_components as dcc
import dash_html_components as html
import matplotlib.pyplot as plt
import pandas as pd
import plotly.graph_objects as go
import psycopg2
import tweepy
# -

# # PART I - Data Gathering

# First, we'll pull the data from the Twitter API using tweepy.

# +
# Twitter API credentials
consumer_key = "Enter"
consumer_secret = "Enter"
access_key = "Enter"
access_secret = "Enter"

auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)

# Pull tweets from today's date; we'll start with 330 tweets.
tweets = tweepy.Cursor(api.search,
                       q = ['#Cyberpunk'],
                       since = datetime.now().strftime('%Y-%m-%d'),
                       lang = "en",
                       tweet_mode = 'extended').items(330)

# Store the tweets in a list
tweet_list = [tweet for tweet in tweets]

# Create a dataframe to store the contents of the tweets
tweet_text = []
for x in tweet_list:
    tweet_text.append(x.full_text)

df_tweets = pd.DataFrame(data = tweet_text, columns = ['tweet'])
# -

# Next, we'll store the dataframe in our PostgreSQL database.

# +
# PostgreSQL

## Connect to our database
DATABASE_URL = "Enter"
engine = create_engine(DATABASE_URL)
conn = psycopg2.connect(DATABASE_URL, sslmode = 'require')
cursor = conn.cursor()

## Drop any existing table with the tweets name
cursor.execute('DROP TABLE IF EXISTS tweets')
conn.commit()

## Store all the tweets from our dataframe in our database
df_tweets.to_sql('tweets', con = engine)
# -

# # PART II - Dashboard

# +
################################ DASHBOARD ##################################
# Functions / Variables that need to be assigned for the dashboard

## Pull the data from the database
### Set up the connection
DATABASE_URL = "Enter"
conn = psycopg2.connect(DATABASE_URL, sslmode = 'require')

### Store it in our dataframe df
df = pd.read_sql('select * from tweets', con = conn, index_col = 'index')

### Reindex the values (we will use these for our twitter feed):
### ten contiguous blocks of 30 tweets each
df_1t = df[0:30].reset_index()
df_2t = df[30:60].reset_index()
df_3t = df[60:90].reset_index()
df_4t = df[90:120].reset_index()
df_5t = df[120:150].reset_index()
df_6t = df[150:180].reset_index()
df_7t = df[180:210].reset_index()
df_8t = df[210:240].reset_index()
df_9t = df[240:270].reset_index()
df_10t = df[270:300].reset_index()

## Dataframe that will contain all the contents and sentiment of the tweets.
total_tweets_df = pd.DataFrame(columns = ['Tweets', 'Sentiment'])

## Vader sentiment analyzer
analyser = SentimentIntensityAnalyzer()

## Interval tracker for live updating
def get_value(df, n_intervals):
    """Return the tweet text at the given interval, for live updates."""
    text = df['tweet'][n_intervals]
    return text

## Vader sentiment scores
def sentiment_analyzer_scores(sentence):
    """Return the sentiment polarity scores from VADER."""
    score = analyser.polarity_scores(sentence)
    return score

## Sentiment class definition
def sentiment_logic(text):
    """Map a VADER compound score to a sentiment class."""
    if text >= 0.05:
        result = 'Positive'
    elif text <= -0.05:
        result = 'Negative'
    else:
        result = 'Neutral'
    return result

#--------------------- Datatable
def datatable_asset(df):
    """Create a datatable which is used to return the tweets and sentiment."""
    datatable = dash_table.DataTable(
        id = 'typing_formatting_1',
        data = df.to_dict('records'),
        columns = [
            {'id': 'Tweets', 'name': 'Tweet', 'type': 'text'},
            {'id': 'Sentiment', 'name': 'Sentiment', 'type': 'text'},
        ],

        # Highlight cells based on the sentiment analysis results
        style_data_conditional = [
            {"if": {"column_id": "Sentiment", "filter_query": "{Sentiment} = Positive"},
             "backgroundColor": "#a6f1a6", 'color': 'black'},
            {"if": {"column_id": "Sentiment", "filter_query": "{Sentiment} = Negative"},
             "backgroundColor": "#ff0000", 'color': 'black'},
            {"if": {"column_id": "Sentiment", "filter_query": "{Sentiment} = Neutral"},
             "backgroundColor": "#e0e0e0", 'color': 'black'},

            # Fix column widths
            {'if': {'column_id': 'Tweets'}, 'width': '90%'},
            {'if': {'column_id': 'Sentiment'}, 'width': '10%'},
        ],

        # Formatting the data/header cells
        style_cell = {'backgroundColor': '#f7f7f7',
                      'font-family': 'helvetica',
                      'fontColor': '#000000',
                      'fontSize': 24,
                      'textAlign': 'left'},
        style_data = {'border': '1px solid #00a8ff',
                      'font-size': 24,
                      'font-family': 'helvetica',
                      'whiteSpace': 'normal'},
        style_header = {'border': '1px solid #00a8ff',
                        'font-size': 28,
                        'font-family': 'helvetica',
                        'textAlign': 'center',
                        'fontWeight': 'bold'},
        css = [{
            'selector': '.dash-spreadsheet td div',
            'rule': '''
                line-height: 35px;
                max-height: 70px; min-height: 70px; height: 70px;
                display: block;
                overflow-y: hidden;
            '''
        }],
        tooltip_data = [{
            column: {'value': str(value), 'type': 'markdown'}
            for column, value in row.items()
        } for row in df.to_dict('records')],
        tooltip_duration = None,
        editable = True,
        page_size = 10,
        filter_action = "native",
        sort_action = "native",
        sort_mode = "multi",
        column_selectable = "single",
        row_selectable = "multi",
        row_deletable = True,
        selected_columns = [],
        selected_rows = [],
        page_action = "native",
    )
    return datatable

## Sentiment pie graphs
def piegraph(df):
    """Return the pie graphs used for sentiment classification in the twitter feed."""
    fig = go.Figure(data = [
        go.Pie(labels = list(df.keys()),
               values = list(df.values()),
               textinfo = 'label+percent',
               insidetextorientation = 'radial')
    ])
    fig.update_layout(paper_bgcolor = '#eaeaea',
                      height = 550,
                      width = 550,
                      font_size = 20,
                      uniformtext_minsize = 20,
                      uniformtext_mode = 'hide',
                      hoverlabel = dict(font_size = 24))
    return fig

## Word cloud
def plot_word_cloud(text):
    """Create and plot a wordcloud."""
    # The regex expression is used to eliminate all non-English letters
    regex_expression = r"[a-zA-Z]+"

    # Word cloud
    wc = WordCloud(width = 800,
                   height = 600,
                   max_words = 10000,
                   relative_scaling = 0,
                   background_color = '#f7f7f7',
                   contour_color = "black",
                   regexp = regex_expression,
                   random_state = 2,
                   colormap = 'gnuplot2',
                   collocations = False,
                   ).generate(text)
    wc_img = wc.to_image()
    with BytesIO() as buffer:
        wc_img.save(buffer, 'png')
        final_img = base64.b64encode(buffer.getvalue()).decode()
    return final_img

## Sentiment Distribution
def plot_histogram(df): """Function which returns a distribution of sentiment classes""" colors = ["#a6f1a6", "#e0e0e0", "#ff0000"] fig = go.Figure(data=[ go.Histogram(x = df, marker_color = colors) ]) fig.update_xaxes(linewidth = 1, linecolor = 'black', gridcolor = 'LightPink', automargin = True, ticks = "outside", tickwidth = 2, tickcolor = 'black', ticklen = 12, title = 'Sentiment', title_font = dict(size = 22)) fig.update_yaxes(linewidth = 1, linecolor = 'black', gridcolor = 'LightPink', ticks = "outside", tickwidth = 2, tickcolor = 'black', ticklen = 12, title = 'Frequency', title_font = dict(size = 22), ) fig.update_layout( font = dict(size = 18), legend = dict( x = 1, y = 1, traceorder = "normal", font = dict( family = "sans-serif", size = 18, color = "black" ), bgcolor = "#f7f7f7", bordercolor = "#f7f7f7", borderwidth = 1 ), plot_bgcolor = "#f7f7f7", paper_bgcolor = "#f7f7f7", width = 900, height = 600, hoverlabel = dict( font_size = 24, font_family = "Rockwell"), xaxis = {'categoryorder':'category descending'} ) return fig ## CSS applied to the dashboard external_css = [ # Normalize the CSS "https://cdnjs.cloudflare.com/ajax/libs/normalize/7.0.0/normalize.min.css", # Fonts "https://fonts.googleapis.com/css?family=Open+Sans|Roboto", "https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css", '/assets/base-styles.css', '/assets/custom-styles.css', ] ## Dash application app = dash.Dash(__name__, external_stylesheets = external_css) app.layout = html.Div([ html.Div([ html.Div( [ # Input Title of Dashboard, Include title and href link html.H2( id = "banner-title", children = [ html.A( "Cyberpunk Twitter Sentiment Analysis", href = "https://github.com/SulmanK/Cyberpunk-2077-Twitter-Sentiment-Analysis", style = {"text-decoration": "none", "color": "inherit", 'padding-left': '55rem'}, ) ] ), # Insert Github Logo with href link html.A( [ html.Img(src = app.get_asset_url("github_logo.png")) ], href = 
"https://github.com/SulmanK/Cyberpunk-2077-Twitter-Sentiment-Analysis", ), # Insert Dash logo with href link html.A( [ html.Img(src = app.get_asset_url("dash_banner.png")) ], href = "https://dash.plotly.com/", ), # Insert Cyberpunk logo with href link html.A( [ html.Img(src = app.get_asset_url("cyberpunk_logo.png")) ], href = "https://www.cyberpunk.net/us/en/", ), ], className = "row", ) ], className = "banner" ), ## Insert Project Introduction html.Div( [ dcc.Markdown( ''' Cyberpunk 2077 is an upcoming action role-playing video game developed by CD Projekt. With such a large social media following, users are voicing their wishes and concerns to CD Projekt. In this project, we will monitor tweets collected and parse through the sentiment of each. More information on the database collection and sentiment analysis tools used assumptions are in the project repository page and notebook. ''' ) ], style = {'padding': '2rem 4rem 2rem 4rem', 'border-top': '10px solid #2DDBE8', 'border-bottom': '10px solid #2DDBE8', 'fontSize' : 28, 'font-family': "Myriad Pro"} ), ## Insert Twitter Feed html.Div([ html.H2('Twitter Feed') ]), ## Storing our interval in this component every 30 seconds it updates. 
html.Div([ html.Div(id = 'tweets'), dcc.Interval( id = 'interval-component', interval = 30 * 1000, # in milliseconds n_intervals = 0, ) ]), # Insert Exploration Section (Word Cloud and Distribution Plot) html.Div([ html.H2('Exploration') ]), html.Div(id = 'Exploration') ], style = {'maxWidth': '2000px', 'height': '80vh', 'minWidth': '1500px', 'padding-left': '20px' } ) ## Callback function for updating the twitter feed cycles using the intervals @app.callback(Output('tweets', 'children'), [Input('interval-component', 'n_intervals')]) def update_tweets_feed(n): """Function which is used to update the twitter feed.""" # Retrieve the tweets first_tweet = get_value(df_1t, n) second_tweet = get_value(df_2t, n) third_tweet = get_value(df_3t, n) fourth_tweet = get_value(df_4t, n) fifth_tweet = get_value(df_5t, n) sixth_tweet = get_value(df_6t, n) seventh_tweet = get_value(df_7t, n) eighth_tweet = get_value(df_8t, n) nineth_tweet = get_value(df_9t, n) tenth_tweet = get_value(df_10t, n) # Compute the sentiment of each tweet sa_first_tweet = sentiment_analyzer_scores(first_tweet) sa_second_tweet = sentiment_analyzer_scores(second_tweet) sa_third_tweet = sentiment_analyzer_scores(third_tweet) sa_fourth_tweet = sentiment_analyzer_scores(fourth_tweet) sa_fifth_tweet = sentiment_analyzer_scores(fifth_tweet) sa_sixth_tweet = sentiment_analyzer_scores(sixth_tweet) sa_seventh_tweet = sentiment_analyzer_scores(seventh_tweet) sa_eighth_tweet = sentiment_analyzer_scores(eighth_tweet) sa_nineth_tweet = sentiment_analyzer_scores(nineth_tweet) sa_tenth_tweet = sentiment_analyzer_scores(tenth_tweet) # Return the tweet contents and a pie graph of the sentiment. 
return html.Div([ html.Div([ # First Tweet html.Div([ html.Div([ html.Pre(str(first_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '2px 2px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px', } ), html.Div([ dcc.Graph(figure = piegraph(sa_first_tweet)) ], className = 'nine columns', style = {"padding-left": "550px", } ), ], className = 'row' ), # Second Tweet html.Div([ html.Div([ html.Pre(str(second_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_second_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Third Tweet html.Div([ html.Div([ html.Pre(str(third_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_third_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Fourth Tweet html.Div([ html.Div([ html.Pre(str(fourth_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_fourth_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Fifth Tweet html.Div([ html.Div([ html.Pre(str(fifth_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': 
'22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_fifth_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Sixth Tweet html.Div([ html.Div([ html.Pre(str(sixth_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_sixth_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Seventh Tweet html.Div([ html.Div([ html.Pre(str(seventh_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_seventh_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Eighth Tweet html.Div([ html.Div([ html.Pre(str(eighth_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_eighth_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Nineth html.Div([ html.Div([ html.Pre(str(nineth_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_nineth_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), # Tenth Tweet html.Div([ html.Div([ html.Pre(str(tenth_tweet)), ], className = 'ten columns', style = { 'backgroundColor': 'white', 
'box-shadow': '3px 3px 10px #ccc', 'padding': '10px', 'padding-bottom': '25px', 'margin': '30px', 'overflowX': 'scroll', 'fontSize': '22px'} ), html.Div([ dcc.Graph(figure = piegraph(sa_tenth_tweet)) ], className = 'nine columns', style = {"padding-left": "550px"} ), ], className = 'row' ), ], style = {'overflowY': 'scroll', 'overflowX': 'hidden', 'maxHeight': '105ex', 'backgroundColor' : '#eaeaea'} ), ]) # Exploration callback functions @app.callback(Output('Exploration', 'children'), [Input('interval-component', 'n_intervals')]) def exploration(n): "Function which returns the contents of the exploration section - wordcloud and sentiment distribution" # Retrieve the tweet contents first_tweet = get_value(df_1t, n) second_tweet = get_value(df_2t, n) third_tweet = get_value(df_3t, n) fourth_tweet = get_value(df_4t, n) fifth_tweet = get_value(df_5t, n) sixth_tweet = get_value(df_6t, n) seventh_tweet = get_value(df_7t, n) eighth_tweet = get_value(df_8t, n) nineth_tweet = get_value(df_9t, n) tenth_tweet = get_value(df_10t, n) # Sentiment of each tweet sa_first_tweet = sentiment_analyzer_scores(first_tweet) sa_second_tweet = sentiment_analyzer_scores(second_tweet) sa_third_tweet = sentiment_analyzer_scores(third_tweet) sa_fourth_tweet = sentiment_analyzer_scores(fourth_tweet) sa_fifth_tweet = sentiment_analyzer_scores(fifth_tweet) sa_sixth_tweet = sentiment_analyzer_scores(sixth_tweet) sa_seventh_tweet = sentiment_analyzer_scores(seventh_tweet) sa_eighth_tweet = sentiment_analyzer_scores(eighth_tweet) sa_nineth_tweet = sentiment_analyzer_scores(nineth_tweet) sa_tenth_tweet = sentiment_analyzer_scores(tenth_tweet) # Compute the compound score for obtaining a sentiment class compound_score_first_tweet = sentiment_logic((list(sa_first_tweet.values())[list(sa_first_tweet.keys()).index('compound')] )) compound_score_second_tweet = sentiment_logic((list(sa_second_tweet.values())[list(sa_second_tweet.keys()).index('compound')] )) compound_score_third_tweet = 
sentiment_logic((list(sa_third_tweet.values())[list(sa_third_tweet.keys()).index('compound')] )) compound_score_fourth_tweet = sentiment_logic((list(sa_fourth_tweet.values())[list(sa_fourth_tweet.keys()).index('compound')] )) compound_score_fifth_tweet = sentiment_logic((list(sa_fifth_tweet.values())[list(sa_fifth_tweet.keys()).index('compound')] )) compound_score_sixth_tweet = sentiment_logic((list(sa_sixth_tweet.values())[list(sa_sixth_tweet.keys()).index('compound')] )) compound_score_seventh_tweet = sentiment_logic((list(sa_seventh_tweet.values())[list(sa_seventh_tweet.keys()).index('compound')] )) compound_score_eighth_tweet = sentiment_logic((list(sa_eighth_tweet.values())[list(sa_eighth_tweet.keys()).index('compound')] )) compound_score_nineth_tweet = sentiment_logic((list(sa_nineth_tweet.values())[list(sa_nineth_tweet.keys()).index('compound')] )) compound_score_tenth_tweet = sentiment_logic((list(sa_tenth_tweet.values())[list(sa_tenth_tweet.keys()).index('compound')] )) # Create a new temporary dataframe for the tweet contents and sentiment compound_score_list = [compound_score_first_tweet, compound_score_second_tweet, compound_score_third_tweet, compound_score_fourth_tweet, compound_score_fifth_tweet, compound_score_sixth_tweet, compound_score_seventh_tweet, compound_score_eighth_tweet, compound_score_nineth_tweet, compound_score_tenth_tweet] first_col = [first_tweet, second_tweet, third_tweet, fourth_tweet, fifth_tweet, sixth_tweet, seventh_tweet, eighth_tweet, nineth_tweet, tenth_tweet] second_col = compound_score_list tmp_df = pd.DataFrame(data = {'Tweets' : first_col, 'Sentiment' : second_col}) # total_tweets_df will be dataframe used global total_tweets_df # Append new rows from the tmp_df total_tweets_df = total_tweets_df.append(tmp_df) # Extract the contents of the tweets for the wordcloud full_string = '' for x in total_tweets_df['Tweets']: full_string += x wc = plot_word_cloud(full_string) # Initialize datatable datatable = datatable_asset(df = 
total_tweets_df) # Initialize histogram histogram = plot_histogram(total_tweets_df['Sentiment']) return html.Div([ # Datatable html.Div([ datatable ], className = 'row' ), # Wordcloud and Distribution html.Div([ html.Div([ html.Img(src="data:image/png;base64," + wc) ], className = 'one-half column' ), html.Div([ dcc.Graph(figure = histogram) ], className = 'one-half column') ], className = 'row' ) ]) if __name__ == '__main__': app.run_server(debug=False)
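The `sentiment_logic` helper above uses the conventional VADER cutoffs of ±0.05 on the compound score. The classification rule can be verified in isolation; this re-implementation mirrors the dashboard function and is not the VADER library itself:

```python
def sentiment_class(compound):
    """Map a VADER compound score to a sentiment label using the +/-0.05 cutoffs."""
    if compound >= 0.05:
        return 'Positive'
    if compound <= -0.05:
        return 'Negative'
    return 'Neutral'

print(sentiment_class(0.6), sentiment_class(0.0), sentiment_class(-0.3))
# Positive Neutral Negative
```

Note that the boundary values ±0.05 themselves classify as Positive/Negative, so every real-valued compound score falls into exactly one class.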
Cyberpunk 2077 Sentiment Analysis (Project Notebook).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="E3AzlVAZkPCo" colab_type="text" # # Tutorial 11 # # **CS3481 Fundamentals of Data Science** # # *Semester B 2019/20* # ___ # **Instructions:** # - same as [Tutorial 1](http://bit.ly/CS3481T1). # ___ # + [markdown] id="-tEsb8F0sFdP" colab_type="text" # ## Exercise 1 (submit via [uReply](https://cityu.ed2.mobi/student/mobile_index.php) section number **LM1202**) # + [markdown] id="Cq9eL_urVOKc" colab_type="text" # For this question, you will continue to use WEKA to cluster the iris2D dataset. # + [markdown] colab_type="text" id="RrVm68gDadMf" # (a) Apply the elbow method to find the optimal number $k$ of clusters. # + [markdown] id="Mz0mcl5aa5bY" colab_type="text" # ___ # **Answer:** # # + id="sBErw2gKa7Dc" colab_type="code" colab={} # modify the code below to record WSS as a function of k k_list = [1, 2, 3, 4] WSS_list = [ 0, 0, 0, 0] # + id="IzSxnDZHbgBz" colab_type="code" colab={} # plot WSS as a function of k import matplotlib.pyplot as plt plt.plot(k_list,WSS_list,'bo-') plt.xlabel('k') plt.ylabel('WSS') plt.show() # + [markdown] colab_type="text" id="xsGl6TVmad54" # The optimal $k$ is _____. # ___ # + [markdown] id="wFk7jQnlVjQW" colab_type="text" # (b) Follow the procedures below to cluster the `iris.2D` dataset. # 1. In the `cluster panel`, Select HierarchicalClusterer as the `clusterer`. # 1. Choose the number of clusters to be $3$. # 1. Select `classes to clusters evaluations` as the test option. # 1. Run the clustering algorithm. # # What is the percentage of incorrectly clustered instances? Visualize the cluster assignments and explain why the performance is better/worse than that of the $k$-means algorithm. 
# + [markdown] id="a4uZljouV066" colab_type="text" # ___ # **Answer:** # ___ # + [markdown] id="8dok_xvtxW2m" colab_type="text" # (c) Repeat the hierarchical clustering procedure but with the complete linkage algorithm by setting the `linkType` to `COMPLETE`. Is the result better now? Why? # # # + [markdown] colab_type="text" id="PpQt_zbcJJyT" # ___ # **Answer:** # ___ # + [markdown] colab_type="text" id="LMPOs44OL-Nd" # ## Exercise 2 (no submission required) # + [markdown] id="UOLid5ZIK2-V" colab_type="text" # For this question, you will cluster the following dataset by hand calculation. # # | |$Z_1$|$Z_2$| # |--|-----|-----| # |1.|-1 |1 | # |2.|-1 |0 | # |3.|0 |0 | # |4.|1 |0 | # |5.|2 |0 | # |6.|2 |1 | # + [markdown] id="nWRj9xqQWcbs" colab_type="text" # ### (a) # Apply the folllowing agglomerative clustering algorithms to generate all possible dendrograms. Explain whether the algorithm is able to find non-trivial clusters. # + [markdown] id="4VBB1cl4UJfy" colab_type="text" # (i) Single linkage method. # + [markdown] colab_type="text" id="DZY1w1SNVkQx" # ___ # **Answer:** # # + [markdown] id="wDgaIJ3Z6WGR" colab_type="text" # You may use the following code to plot a dendrogram. (See [doc](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html#scipy.cluster.hierarchy.dendrogram).) # + id="2Sy00jPtvr1R" colab_type="code" colab={} import numpy as np labels = [1,2,3,4,5,6] n = len(labels) # modify the linkage matrix Z_single = np.array([[0,1,1,0],[2,3,1,0],[4,5,1,0],[6,7,2,0],[8,9,3,0]]).astype('float') # + id="5SbvGmEbvcl7" colab_type="code" outputId="0022e7e3-a601-46d9-c439-34c00ee28ff9" colab={"base_uri": "https://localhost:8080/", "height": 493} from scipy.cluster.hierarchy import dendrogram dendrogram(Z_single,labels=labels) # + [markdown] id="GnuX1fgpvb2o" colab_type="text" # ___ # + [markdown] id="mxc8SBLwVrzc" colab_type="text" # (ii) Complete linkage method. 
# + [markdown] colab_type="text" id="Wp_BwGno6pal"
# ___
# **Answer:**
#

# + colab_type="code" id="5wnRn7Ge6pan" colab={}
# modify the linkage matrix
Z_complete = np.array([[0,1,1,0],[2,3,1,0],[4,5,1,0],[6,7,2,0],[8,9,3,0]]).astype('float')
dendrogram(Z_complete,labels=labels)

# + [markdown] colab_type="text" id="O4cRpTAM6pas"
# ___

# + [markdown] id="4lKkmRmXXZK7" colab_type="text"
# ### (b)
# Follow the steps below to use the elbow method in conjunction with the centroid-based method to compute the clustering solution.

# + [markdown] id="rbFOHBoEYX6i" colab_type="text"
# (i) Give the optimal solutions for $k=1$ to $6$ respectively. In particular, calculate the WSS in each case.

# + [markdown] colab_type="text" id="UgU1D8F1XdcA"
# ___
# **Answer:**

# + colab_type="code" id="0lvh8faUnFIn" colab={}
# modify the code below to record WSS as a function of k
k_list   = [1, 2, 3, 4, 5, 6]
WSS_list = [0, 0, 0, 0, 0, 0]

# + [markdown] id="ooyoVDzhnCR7" colab_type="text"
# ___

# + [markdown] id="biNOhkXgNbQv" colab_type="text"
# (ii) Plot the graph of WSS and find the best number $k$ of clusters using the elbow method.
#

# + [markdown] colab_type="text" id="RluyYXcCNszR"
# ___
# **Answer:**

# + id="zU1Fcz__nPwh" colab_type="code" colab={}
# plot WSS as a function of k
plt.plot(k_list,WSS_list,'bo-')
plt.xlabel('k')
plt.ylabel('WSS')
plt.show()

# + [markdown] id="uO5k2FkUnSWL" colab_type="text"
# ___

# + [markdown] id="KRDuZujANtyK" colab_type="text"
# (iii) Explain whether min-max normalization affects the optimal choice of $k$.

# + [markdown] colab_type="text" id="D8AmEs3ROkmi"
# ___
# **Answer:**
# ___

# + [markdown] id="iUx5dPQDlqIo" colab_type="text"
# ## Exercise 3 (Optional)

# + [markdown] id="rMzQWBfQQV3f" colab_type="text"
# Similar to the previous tutorial, we will generate the `iris.2D` dataset and normalize the attributes.
# + id="vP5IRxSgYiqv" colab_type="code" outputId="0aff81d1-1e32-41e9-82d5-b98546908aa5" colab={"base_uri": "https://localhost:8080/", "height": 34}
from sklearn import datasets

iris = datasets.load_iris()
X = iris['data'][:,[0,2]]
Y = iris['target']
X.shape, Y.shape # show the dimensions of the input features and target

# + [markdown] id="48jCv9jzXODP" colab_type="text"
# Apply min-max normalization to the input attributes.

# + id="L9ldoKQIWiiM" colab_type="code" outputId="f2e04887-6653-4dc0-9a1c-614f4f0ba634" colab={"base_uri": "https://localhost:8080/", "height": 34}
from sklearn.preprocessing import MinMaxScaler
import numpy as np

minmax_norm = MinMaxScaler()
X_ = minmax_norm.fit_transform(X)
np.min(X_,axis=0), np.max(X_,axis=0)

# + [markdown] id="oIiMWPkvWdVD" colab_type="text"
# We will use the elbow method to determine the best choice of $k$.

# + id="2wFxCg-MUw5M" colab_type="code" colab={}
from sklearn.cluster import KMeans

maxk = 5
k_list = range(1,maxk+1)
kmeans_list = [[]] * maxk
WSS_list = [0] * maxk
for k in k_list:
    kmeans_list[k-1] = KMeans(n_clusters=k).fit(X_)
    WSS_list[k-1] = kmeans_list[k-1].inertia_ # WSS is also called inertia

# + [markdown] id="zx8fMN0knF2E" colab_type="text"
# Plot the WSS as a function of $k$.

# + id="kuE8RRN5T86B" colab_type="code" outputId="a42cea2d-5419-4271-8807-ae3605e06222" colab={"base_uri": "https://localhost:8080/", "height": 279}
import matplotlib.pyplot as plt
plt.plot(k_list,WSS_list,'bo-')
plt.ylabel('WSS')
plt.xlabel('k')
plt.show()

# + [markdown] id="2c166b94W1ix" colab_type="text"
# Plot the clustering solutions for $k=2$ and $3$ respectively.
# + id="wafZa3s4UJuN" colab_type="code" outputId="bb00626c-7659-49cf-c634-7e9d63a94440" colab={"base_uri": "https://localhost:8080/", "height": 298} plt.figure() plt.subplot(121) plt.scatter(X[:,0],X[:,1],c=kmeans_list[1].labels_) plt.title("k=2") plt.subplot(122) plt.scatter(X[:,0],X[:,1],c=kmeans_list[2].labels_) plt.title("k=3") # + [markdown] id="7KrAmLMldsBf" colab_type="text" # **Exercise** Based on the elbow method, what is the optimal choice of $k$? Does the elbow method work? # + [markdown] id="ZJUbwLFoXooa" colab_type="text" # **Exercise** Generate a dendrogram of the iris2D dataset by following the documentation [plot agglomerative dendrogram](https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html) and [dendrogram](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html#scipy.cluster.hierarchy.dendrogram).
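The dendrogram exercise above can be sketched with `scipy.cluster.hierarchy` directly. This is a hedged starting point, assuming the same two attributes (columns 0 and 2) and the same min-max normalization used earlier in this tutorial; `method='complete'` mirrors part (c) of Exercise 1 and can be swapped for `'single'`.

```python
# linkage() builds the (n-1) x 4 merge matrix that dendrogram() expects:
# each row is [cluster_i, cluster_j, merge_distance, new_cluster_size]
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn import datasets
from sklearn.preprocessing import MinMaxScaler

X = datasets.load_iris()['data'][:, [0, 2]]   # assumed iris.2D attributes
X_ = MinMaxScaler().fit_transform(X)

Z = linkage(X_, method='complete')
dendrogram(Z, no_labels=True)  # 150 leaves make individual labels unreadable
plt.ylabel('merge distance')
plt.show()
```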
CS3481_Tutorial_11.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="JSjG64ra4aFu" # from google.colab import drive # drive.mount('/content/drive') # + id="V8-7SARDZErK" import torch.nn as nn import torch.nn.functional as F import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torchvision import torchvision.transforms as transforms from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils from matplotlib import pyplot as plt import copy from numpy import linalg as LA from tabulate import tabulate # Ignore warnings import warnings warnings.filterwarnings("ignore") # + id="acRFqJNrZErV" outputId="87a3f30c-81b7-4672-f1ab-93c14e4e2f63" colab={"base_uri": "https://localhost:8080/"} transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) # + id="FTBYzzX-fY2K" gamma = 0.001 # + id="ygZ-VSs6j-hf" outputId="73081911-03fa-4af5-d6a4-001b42581d99" colab={"base_uri": "https://localhost:8080/"} classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') foreground_classes = {'plane', 'car', 'bird'} fg_used = '012' fg1, fg2, fg3 = 0,1,2 all_classes = {'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'} background_classes = all_classes - foreground_classes background_classes # print(type(foreground_classes)) # + id="oEPWuddXzu9f" trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True) testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False) # + id="n76MSJwHzu9p" dataiter = iter(trainloader) 
true_train_background_data=[] true_train_background_label=[] true_train_foreground_data=[] true_train_foreground_label=[] batch_size=10 for i in range(5000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() true_train_background_data.append(img) true_train_background_label.append(labels[j]) else: img = images[j].tolist() true_train_foreground_data.append(img) true_train_foreground_label.append(labels[j]) true_train_foreground_data = torch.tensor(true_train_foreground_data) true_train_foreground_label = torch.tensor(true_train_foreground_label) true_train_background_data = torch.tensor(true_train_background_data) true_train_background_label = torch.tensor(true_train_background_label) # + id="NdYlcZPM2tmV" outputId="2480fd5b-cbb5-4b1f-d45d-293f490a3521" colab={"base_uri": "https://localhost:8080/"} len(true_train_foreground_data), len(true_train_foreground_label), len(true_train_background_data), len(true_train_background_label) # + id="IgyumCe_0GMa" dataiter = iter(testloader) true_test_background_data=[] true_test_background_label=[] true_test_foreground_data=[] true_test_foreground_label=[] batch_size=10 for i in range(1000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() true_test_background_data.append(img) true_test_background_label.append(labels[j]) else: img = images[j].tolist() true_test_foreground_data.append(img) true_test_foreground_label.append(labels[j]) true_test_foreground_data = torch.tensor(true_test_foreground_data) true_test_foreground_label = torch.tensor(true_test_foreground_label) true_test_background_data = torch.tensor(true_test_background_data) true_test_background_label = torch.tensor(true_test_background_label) # + id="P07QyEjZ2_tH" outputId="81d8d5bc-ea8c-41b7-c820-eac28497018e" colab={"base_uri": "https://localhost:8080/"} len(true_test_foreground_data), 
len(true_test_foreground_label), len(true_test_background_data), len(true_test_background_label) # + id="bzU_HuQnEB29" true_train = trainset.data # + id="FAR6Zt2QgMdf" train_label = trainset.targets # + id="JZ52v93i__q5" true_train_cifar_norm=[] for i in range(len(true_train)): true_train_cifar_norm.append(LA.norm(true_train[i])) # + id="TbWNZhQvAWav" outputId="ce54b5fd-763c-4ba7-bd08-e522ed8a525c" colab={"base_uri": "https://localhost:8080/"} len(true_train_cifar_norm) # + id="Klrwlq-RBSdc" def plot_hist(values): plt.hist(values, density=True, bins=200) # `density=False` would make counts plt.ylabel('NORM') plt.xlabel('Data'); # + id="w-saABjgAaFY" outputId="e53b6279-073f-4c8d-fad2-50010c713890" colab={"base_uri": "https://localhost:8080/", "height": 279} plot_hist(true_train_cifar_norm) # + id="_USgDEwbMMKY" outputId="8da59f7d-a186-4f4b-df8a-84705ac5ffe9" colab={"base_uri": "https://localhost:8080/"} true_train.shape # + id="yi-39bYIMZOd" outputId="eddd02a7-4f5b-45e7-a83d-e36fc0a5f19f" colab={"base_uri": "https://localhost:8080/"} train = np.reshape(true_train, (50000,3072)) train.shape, true_train.shape # + id="3qMpDn-xMleE" u, s, vh = LA.svd(train, full_matrices= False) # + id="4o7zUUJJNavO" outputId="62136f7d-e4cb-45ca-87aa-d64285d8b260" colab={"base_uri": "https://localhost:8080/"} u.shape , s.shape, vh.shape # + id="ZRlhUgdqSPyx" outputId="f5c43c1f-e3fb-4495-c3ba-7625180afc29" colab={"base_uri": "https://localhost:8080/"} s # + id="h31rbKmqVnZW" outputId="c9ccc55c-6e22-4a16-a8c4-0bdc95e4ef10" colab={"base_uri": "https://localhost:8080/"} vh # + id="LruQuedyVs4i" outputId="a0d75b25-97ca-4c28-9744-007b8976e54e" colab={"base_uri": "https://localhost:8080/"} dir = vh[3062:3072,:] dir # + id="m260DTW6V-Ka" u1 = dir[7,:] u2 = dir[8,:] u3 = dir[9,:] # + id="R9OuIGt4WzlK" outputId="f8369b53-2b10-470d-b3df-3353815e0edc" colab={"base_uri": "https://localhost:8080/"} u1 # + id="gswdCEwMW1-o" outputId="55558680-f675-4fac-fdc6-2f55968647aa" colab={"base_uri": 
"https://localhost:8080/"} u2 # + id="_GcGDZp7W2g6" outputId="32c4b2c0-c0cf-4694-cdf1-17a1c7c06cc3" colab={"base_uri": "https://localhost:8080/"} u3 # + id="c1ORV76hfd5u" outputId="c971a0e4-d8ad-4fec-bd8d-6aaeccb36cd0" colab={"base_uri": "https://localhost:8080/"} len(train_label) # + id="PUuW5wxpH1_C" def is_equal(x1, x2): cnt=0 for i in range(len(x1)): if(x1[i] == x2[i]): cnt+=1 return cnt # + id="A45Ln5fwgSOW" def add_noise_cifar(train, label, gamma, fg1,fg2,fg3): cnt=0 for i in range(len(label)): x = train[i] if(label[i] == fg1): train[i] = train[i] + gamma * LA.norm(train[i]) * u1 cnt+=1 if(label[i] == fg2): train[i] = train[i] + gamma * LA.norm(train[i]) * u2 cnt+=1 if(label[i] == fg3): train[i] = train[i] + gamma * LA.norm(train[i]) * u3 cnt+=1 y = train[i] print("total modified",cnt) return train # + id="QESEKIv3EW8b" outputId="d6404ae8-ecb2-4dfe-b435-04eabdd43c93" colab={"base_uri": "https://localhost:8080/", "height": 316} noise_train = np.reshape(true_train, (50000,3072)) noise_train = add_noise_cifar(noise_train, train_label, gamma , fg1,fg2,fg3) noise_train_cifar_norm=[] for i in range(len(noise_train)): noise_train_cifar_norm.append(LA.norm(noise_train[i])) plt.hist(noise_train_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() print("remain same",is_equal(noise_train_cifar_norm,true_train_cifar_norm)) # + id="Ko4htz117YVx" outputId="7e2517f4-ab3e-45df-b4ce-2895511f03d4" colab={"base_uri": "https://localhost:8080/", "height": 298} plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() # + id="UiF_g59Y7iEC" outputId="d2f573c1-4df0-44b2-bac9-600d14ecf206" colab={"base_uri": "https://localhost:8080/", "height": 298} plt.hist(noise_train_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make 
counts # plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() # + id="BQDi-wiHhZt_" outputId="86e29a37-3f57-4925-92f2-8832f01e5f39" colab={"base_uri": "https://localhost:8080/"} noise_train.shape, trainset.data.shape # + id="As5AyKIUjhgA" outputId="0b391577-a229-4729-ba1e-1d190fbb4856" colab={"base_uri": "https://localhost:8080/"} noise_train = np.reshape(noise_train, (50000,32, 32, 3)) noise_train.shape # + id="Ncd6Cbc2j1jH" trainset.data = noise_train # + id="tEhyHO5VYHG5" true_test = testset.data # + id="pNfT218kYHHF" test_label = testset.targets # + id="7Yvi0O2VYHHM" outputId="31beb2ec-697e-4b84-b648-eb956d5ff053" colab={"base_uri": "https://localhost:8080/"} true_test.shape # + id="xTNF0gS3YHHS" outputId="9390b512-3c2c-45cd-848e-2b83a6315f2e" colab={"base_uri": "https://localhost:8080/"} test = np.reshape(true_test, (10000,3072)) test.shape # + id="PRLw2cTVYHIQ" outputId="b0493787-a2de-4045-edbb-013165d6ebf7" colab={"base_uri": "https://localhost:8080/"} len(test_label) # + id="9x02rkwYoFFM" outputId="0c438889-4a76-46dd-a528-e806fe722b2f" colab={"base_uri": "https://localhost:8080/", "height": 298} true_test_cifar_norm=[] for i in range(len(test)): true_test_cifar_norm.append(LA.norm(test[i])) plt.hist(true_test_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() # + id="_EB2OcUZokQc" outputId="c28b0634-4e74-4465-ec8d-fae63ce5345d" colab={"base_uri": "https://localhost:8080/", "height": 316} noise_test = np.reshape(true_test, (10000,3072)) noise_test = add_noise_cifar(noise_test, test_label, gamma , fg1,fg2,fg3) noise_test_cifar_norm=[] for i in range(len(noise_test)): noise_test_cifar_norm.append(LA.norm(noise_test[i])) plt.hist(noise_test_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts plt.hist(true_test_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') 
plt.legend() is_equal(noise_test_cifar_norm,true_test_cifar_norm) # + id="2qA07ljGQFJ7" outputId="548f618c-1abf-4c89-931e-54dcffd59c12" colab={"base_uri": "https://localhost:8080/", "height": 298} plt.hist(true_test_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() # + id="d0VFtDFrQFKO" outputId="54ca577e-0bb1-4cfb-8c38-dab5a7e2fcc6" colab={"base_uri": "https://localhost:8080/", "height": 298} plt.hist(noise_test_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts # plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() # + id="KHElHqmrYHIX" outputId="5efcf47e-ea99-43a1-dc9f-562a85f03ca6" colab={"base_uri": "https://localhost:8080/"} noise_test.shape, testset.data.shape # + id="DY51kmksYHIb" outputId="e6ed387a-6003-499c-cfc2-546815e3bde9" colab={"base_uri": "https://localhost:8080/"} noise_test = np.reshape(noise_test, (10000,32, 32, 3)) noise_test.shape # + id="AGDb6gpjYHIe" testset.data = noise_test # + id="iLulDYL_ndvY" outputId="b6deb440-3d35-4e49-8fb3-c1a7cdbe8b21" colab={"base_uri": "https://localhost:8080/"} fg = [fg1,fg2,fg3] bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg)) fg,bg # + id="5Jk7ZzLSX-Mf" trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True) testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False) # + id="gLiZ8Y0EkGE5" dataiter = iter(trainloader) train_background_data=[] train_background_label=[] train_foreground_data=[] train_foreground_label=[] batch_size=10 for i in range(5000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() train_background_data.append(img) train_background_label.append(labels[j]) else: img = images[j].tolist() train_foreground_data.append(img) train_foreground_label.append(labels[j]) train_foreground_data = 
torch.tensor(train_foreground_data) train_foreground_label = torch.tensor(train_foreground_label) train_background_data = torch.tensor(train_background_data) train_background_label = torch.tensor(train_background_label) # + id="SRl_9E-6SLLe" dataiter = iter(testloader) test_background_data=[] test_background_label=[] test_foreground_data=[] test_foreground_label=[] batch_size=10 for i in range(1000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() test_background_data.append(img) test_background_label.append(labels[j]) else: img = images[j].tolist() test_foreground_data.append(img) test_foreground_label.append(labels[j]) test_foreground_data = torch.tensor(test_foreground_data) test_foreground_label = torch.tensor(test_foreground_label) test_background_data = torch.tensor(test_background_data) test_background_label = torch.tensor(test_background_label) # + id="seziBl0rkH0Y" def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # + id="DmxEx0N3kOxZ" outputId="dc31df4d-3ddf-4440-ac94-612320cfe763" colab={"base_uri": "https://localhost:8080/", "height": 807} img1 = torch.cat((true_test_foreground_data[27],true_test_foreground_data[3],true_test_foreground_data[43]),1) imshow(img1) img2 = torch.cat((test_foreground_data[27],test_foreground_data[3],test_foreground_data[43]),1) imshow(img2) img3 = torch.cat((img1,img2),2) imshow(img3) print(img2.size()) print(LA.norm(test_foreground_data[27]), LA.norm(true_test_foreground_data[27])) # + id="SVotKJvGnAUJ" outputId="78373f86-f89b-4491-aec1-b19a9a486cfa" colab={"base_uri": "https://localhost:8080/", "height": 1000} import random for i in range(10): random.seed(i) a = np.random.randint(0,10000) img1 = torch.cat((true_test_foreground_data[i],test_foreground_data[i]),2) imshow(img1) # + id="wo78BztGTwwL" def plot_vectors(u1,u2,u3): img = np.reshape(u1,(3,32,32)) img = img / 
2 + 0.5 # unnormalize npimg = img#.numpy() print("vector u1 norm",LA.norm(img)) plt.figure(1) plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.title("vector u1") img = np.reshape(u2,(3,32,32)) img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() print("vector u2 norm",LA.norm(img)) plt.figure(2) plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.title("vector u2") img = np.reshape(u3,(3,32,32)) img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() print("vector u3 norm",LA.norm(img)) plt.figure(3) plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.title("vector u3") plt.show() # + id="72zcYiJsTPEr" outputId="b431ecec-232a-42e3-bf05-1fd8ec45b06f" colab={"base_uri": "https://localhost:8080/", "height": 865} plot_vectors(u1,u2,u3) # + id="wFpwvWrzYJQi" class MosaicDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. 
""" self.mosaic = mosaic_list_of_images self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx], self.fore_idx[idx] # + id="DxW0w8_BXsih" def create_mosaic_img(background_data, foreground_data, foreground_label, bg_idx,fg_idx,fg,fg1): """ bg_idx : list of indexes of background_data[] to be used as background images in mosaic fg_idx : index of image to be used as foreground image from foreground data fg : at what position/index foreground image has to be stored out of 0-8 """ image_list=[] j=0 for i in range(9): if i != fg: image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor")) j+=1 else: image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor")) label = foreground_label[fg_idx] -fg1 #-7 # minus 7 because our fore ground classes are 7,8,9 but we have to store it as 0,1,2 #image_list = np.concatenate(image_list ,axis=0) image_list = torch.stack(image_list) return image_list,label # + id="jTpidLeLVyyK" def init_mosaic_creation(bg_size, fg_size, desired_num, background_data, foreground_data, foreground_label,fg1): # bg_size = 35000 # fg_size = 15000 # desired_num = 30000 mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9 mosaic_label=[] # label of mosaic image = foreground class present in that mosaic for i in range(desired_num): bg_idx = np.random.randint(0,bg_size,8) fg_idx = np.random.randint(0,fg_size) fg = np.random.randint(0,9) fore_idx.append(fg) image_list,label = create_mosaic_img(background_data, foreground_data, foreground_label ,bg_idx,fg_idx,fg, fg1) mosaic_list_of_images.append(image_list) mosaic_label.append(label) return mosaic_list_of_images, mosaic_label, fore_idx # + id="WuIMxXjgV1sB" train_mosaic_list_of_images, train_mosaic_label, train_fore_idx = 
init_mosaic_creation(bg_size = 35000, fg_size = 15000, desired_num = 30000, background_data = train_background_data, foreground_data = train_foreground_data, foreground_label = train_foreground_label, fg1 = fg1 ) # + id="jNw9xEHdYLRQ" batch = 250 msd_1 = MosaicDataset(train_mosaic_list_of_images, train_mosaic_label , train_fore_idx) train_loader_from_noise_train_mosaic_30k = DataLoader( msd_1,batch_size= batch ,shuffle=True) # + id="uy9iem2zYT-p" test_mosaic_list_of_images, test_mosaic_label, test_fore_idx = init_mosaic_creation(bg_size = 35000, fg_size = 15000, desired_num = 10000, background_data = train_background_data, foreground_data = train_foreground_data, foreground_label = train_foreground_label, fg1 = fg1 ) # + id="ek_hNOGfY_Rg" batch = 250 msd_2 = MosaicDataset(test_mosaic_list_of_images, test_mosaic_label , test_fore_idx) test_loader_from_noise_train_mosaic_30k = DataLoader( msd_2, batch_size= batch ,shuffle=True) # + id="k9Fb3xqvZXgY" test_mosaic_list_of_images_1, test_mosaic_label_1, test_fore_idx_1 = init_mosaic_creation(bg_size = 7000, fg_size = 3000, desired_num = 10000, background_data = test_background_data, foreground_data = test_foreground_data, foreground_label = test_foreground_label, fg1 = fg1 ) # + id="D491Dr2eZxXo" batch = 250 msd_3 = MosaicDataset(test_mosaic_list_of_images_1, test_mosaic_label_1 , test_fore_idx_1) test_loader_from_noise_test_mosaic_10k = DataLoader( msd_3, batch_size= batch ,shuffle=True) # + id="vfEaNoxVaTEp" test_mosaic_list_of_images_2, test_mosaic_label_2, test_fore_idx_2 = init_mosaic_creation(bg_size = 35000, fg_size = 15000, desired_num = 10000, background_data = true_train_background_data, foreground_data = true_train_foreground_data, foreground_label = true_train_foreground_label, fg1 = fg1 ) # + id="ytvVuHTgaTEu" batch = 250 msd_4 = MosaicDataset(test_mosaic_list_of_images_2, test_mosaic_label_2, test_fore_idx_2) test_loader_from_true_train_mosaic_30k = DataLoader( msd_4, batch_size= batch , shuffle=True) # + 
id="cbN6OQzxaTEy" test_mosaic_list_of_images_3, test_mosaic_label_3, test_fore_idx_3 = init_mosaic_creation(bg_size = 7000, fg_size = 3000, desired_num = 10000, background_data = true_test_background_data, foreground_data = true_test_foreground_data, foreground_label = true_test_foreground_label, fg1 = fg1 ) # + id="Mu890cyTaTE2" batch = 250 msd_5 = MosaicDataset(test_mosaic_list_of_images_3, test_mosaic_label_3, test_fore_idx_3) test_loader_from_true_train_mosaic_10k = DataLoader( msd_5, batch_size= batch ,shuffle=True) # + id="dgQ0htWqkqzo" class Module1(nn.Module): def __init__(self): super(Module1, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) self.fc4 = nn.Linear(10,1) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = self.fc4(x) return x # + id="XElkdct-kvQB" class Module2(nn.Module): def __init__(self): super(Module2, self).__init__() self.module1 = Module1().double() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) self.fc4 = nn.Linear(10,3) def forward(self,z): #z batch of list of 9 images y = torch.zeros([batch,3, 32,32], dtype=torch.float64) x = torch.zeros([batch,9],dtype=torch.float64) x = x.to("cuda") y = y.to("cuda") for i in range(9): x[:,i] = self.module1.forward(z[:,i])[:,0] x = F.softmax(x,dim=1) x1 = x[:,0] torch.mul(x1[:,None,None,None],z[:,0]) for i in range(9): x1 = x[:,i] y = y + torch.mul(x1[:,None,None,None],z[:,i]) y = y.contiguous() y1 = self.pool(F.relu(self.conv1(y))) y1 = self.pool(F.relu(self.conv2(y1))) y1 = y1.contiguous() y1 = y1.reshape(-1, 16 * 5 * 5) y1 = F.relu(self.fc1(y1)) y1 
= F.relu(self.fc2(y1)) y1 = F.relu(self.fc3(y1)) y1 = self.fc4(y1) return y1 , x, y # + id="Nus7AK1xRX7W" def training(trainloader, fore_net, epochs=600): import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(fore_net.parameters(), lr=0.01, momentum=0.9) nos_epochs = epochs for epoch in range(nos_epochs): # loop over the dataset multiple times running_loss = 0.0 cnt=0 mini_loss = [] iteration = 30000 // batch for i, data in enumerate(train_loader_from_noise_train_mosaic_30k): inputs , labels , fore_idx = data inputs, labels, fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") optimizer.zero_grad() outputs, alphas, avg_images = fore_net(inputs) _, predicted = torch.max(outputs.data, 1) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() mini = 40 if cnt % mini == mini - 1: # print every 40 mini-batches print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini)) mini_loss.append(running_loss / mini) running_loss = 0.0 cnt=cnt+1 if(np.average(mini_loss) <= 0.05): break print('Finished Training') return fore_net, epoch # + id="17GMe4WKSNji" def testing(loader, fore_net): correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in loader: inputs, labels , fore_idx = data inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") outputs, alphas, avg_images = fore_net(inputs) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): count += 1 focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and 
predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() return correct, total, focus_true_pred_true, focus_false_pred_true, focus_true_pred_false, focus_false_pred_false, argmax_more_than_half # + id="lp0cGt63YuUc" def enter_into(table, sno, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half , fg, bg, epoch = "NA"): entry = [] entry = [sno,'fg = '+ str(fg),'bg = '+str(bg), epoch, total, correct,] entry.append((100.0*correct/total)) entry.append((100 * ftpt / total)) entry.append( (100 * ffpt / total)) entry.append( ( 100 * ftpf / total)) entry.append( ( 100 * ffpf / total)) entry.append( alpha_more_half) table.append(entry) print(" ") print("="*160) print(tabulate(table, headers=['S.No.', 'fg_class','bg_class','Epoch used','total_points', 'correct','accuracy','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) ) print(" ") print("="*160) return table # + id="uS6Gq-4VfX89" def add_average_entry(table): entry =[] entry = ['Avg', "","" ,"" ,"" , "",] entry.append( np.mean(np.array(train_table)[:,6].astype(np.float)) ) entry.append( np.mean(np.array(train_table)[:,7].astype(np.float)) ) entry.append( np.mean(np.array(train_table)[:,8].astype(np.float)) ) entry.append( np.mean(np.array(train_table)[:,9].astype(np.float)) ) entry.append( np.mean(np.array(train_table)[:,10].astype(np.float)) ) entry.append( np.mean(np.array(train_table)[:,11].astype(np.float)) ) table.append(entry) print(" ") print("="*160) print(tabulate(table, headers=['S.No.', 'fg_class','bg_class','Epoch used','total_points', 'correct','accuracy','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) ) print(" ") print("="*160) return table # + id="M8ClgTOAbUQu" train_table=[] test_table1=[] test_table2=[] test_table3=[] test_table4=[] fg = [fg1,fg2,fg3] bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg)) # + id="TuIb2Y29kxWT" 
outputId="09553f63-d123-49db-c7cf-799d5d396857" colab={"base_uri": "https://localhost:8080/"} number_runs = 10 for i in range(number_runs): fore_net = Module2().double() fore_net = fore_net.to("cuda") fore_net, epoch = training(train_loader_from_noise_train_mosaic_30k, fore_net) correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(train_loader_from_noise_train_mosaic_30k, fore_net) train_table = enter_into(train_table, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg, str(epoch) ) correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_noise_train_mosaic_30k, fore_net) test_table1 = enter_into(test_table1, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half , fg, bg ) correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_noise_test_mosaic_10k, fore_net) test_table2 = enter_into(test_table2, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg ) correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_true_train_mosaic_30k, fore_net) test_table3 = enter_into(test_table3, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half , fg, bg) correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_true_train_mosaic_10k, fore_net) test_table4 = enter_into(test_table4, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg ) # + id="kloPmAalgpIz" outputId="c2f52468-cd7b-43f5-d408-f34241f41c83" colab={"base_uri": "https://localhost:8080/"} train_table = add_average_entry(train_table) # + id="00KPkU7EhPJj" outputId="1558750e-b993-409c-f678-885d43fe2b6a" colab={"base_uri": "https://localhost:8080/"} test_table1 = add_average_entry(test_table1) # + id="pW_kUqi3hR6u" outputId="6ba2848b-af2b-46de-ed70-4a1d61e92155" colab={"base_uri": "https://localhost:8080/"} test_table2 = add_average_entry(test_table2) # + id="_ZlV6qErhUUL" outputId="48ec21d5-92d7-4816-c7cf-6912795c6979" colab={"base_uri": 
"https://localhost:8080/"} test_table3 = add_average_entry(test_table3) # + id="BOvl6fUChV5j" outputId="be182319-ad2b-4452-d0f5-ef433731662b" colab={"base_uri": "https://localhost:8080/"} test_table4 = add_average_entry(test_table4) # + id="nkyMi1VBpq9a" # torch.save(fore_net.state_dict(),"/content/drive/My Drive/Research/mosaic_from_CIFAR_involving_bottop_eigen_vectors/fore_net_epoch"+str(epoch)+"_fg_used"+str(fg_used)+".pt")
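The heart of `Module2.forward` above is a soft-attention average: `Module1` scores each of the 9 mosaic patches, a softmax turns the scores into weights, and the patches are blended into one averaged image before classification. A minimal NumPy sketch of that step, with toy tensors standing in for the actual networks:

```python
import numpy as np

# toy stand-ins: one mosaic of 9 small 3x8x8 patches, and one scalar
# relevance score per patch (the role Module1 plays in the model above)
rng = np.random.default_rng(0)
z = rng.normal(size=(9, 3, 8, 8))
scores = rng.normal(size=9)

# softmax over the 9 patch scores -> attention weights alpha
e = np.exp(scores - scores.max())
alphas = e / e.sum()

# weighted average image: y = sum_i alpha_i * patch_i
y = (alphas[:, None, None, None] * z).sum(axis=0)

print(y.shape)  # (3, 8, 8): one blended image per mosaic
```

The `alpha_more_half` statistic in `testing` then simply asks how often the largest of these weights exceeds 0.5, i.e. how decisively the attention focuses on a single patch.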
1_mosaic_data_attention_experiments/11_mosaic_from_CIFAR_involving_direction/using_least_variance_direction/extra/extra codes when seed not set/multiple runs for fg012_without avg/fg_012_multiple_runs_gamma_0_001.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + id="NsTy1Z27g-lB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2d7b73b1-7282-4e18-a6c9-0bad7da6dfbd"
# Find the first Armstrong (narcissistic) number at or above 1042000:
# a number that equals the sum of its digits, each raised to the power of
# its digit count.
for num in range(1042000, 702648265):
    d = len(str(num))  # number of digits in num
    total = 0
    n = num
    while n > 0:
        total += (n % 10) ** d
        n //= 10
    if num == total:
        print(total)
        break

# + id="sXgfrHcphEmp" colab_type="code" colab={}
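The search above looks for Armstrong (narcissistic) numbers: numbers equal to the sum of their digits, each raised to the number of digits. The same check can be packaged as a small helper; the 2,000,000 cap here is only to keep the scan short, since the first hit lies well below it.

```python
def is_armstrong(num: int) -> bool:
    """True if num equals the sum of its digits raised to the digit count."""
    digits = str(num)
    d = len(digits)
    return num == sum(int(ch) ** d for ch in digits)

# first narcissistic number at or above the loop's starting point
print(next(n for n in range(1042000, 2000000) if is_armstrong(n)))  # → 1741725
```

Check: 1^7 + 7^7 + 4^7 + 1^7 + 7^7 + 2^7 + 5^7 = 1 + 823543 + 16384 + 1 + 823543 + 128 + 78125 = 1741725.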
day4.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AliceInHunterland/2019_IT/blob/master/MelGAN_STFT_Demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="mT1B4yaVfR-V" # ## Install # + id="-bXow5e8fRdx" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="029f9329-970d-4127-e8f6-5612fd73f7f5" import os # !git clone https://github.com/TensorSpeech/TensorFlowTTS os.chdir("TensorFlowTTS") # !pip install . os.chdir("..") import sys sys.path.append("TensorFlowTTS/") # + [markdown] id="2ZLYVJ2kfxAz" # ## Download pretrained feature generation model # # You can select one of the three models below. Please only run the selected model cells. # # + [markdown] id="26RoI5Yzf4e3" # ### (a) Tacotron-2 # + id="RxxlcbAjfjRb" colab={"base_uri": "https://localhost:8080/"} outputId="34b0df15-f9fc-4857-820c-21e9476f3f07" print("Downloading Tacotron2 model...") # !gdown --id {"12jvEO1VqFo1ocrgY9GUHF_kVcLn3QaGW"} -O tacotron2-120k.h5 # !gdown --id {"1OI86hkN1YCpHBsIKnkELNbSho5Pj-pPY"} -O tacotron2_config.yml # + [markdown] id="ssZlA2eOhMYW" # ### (b) FastSpeech # + id="qWujjzCug7vQ" colab={"base_uri": "https://localhost:8080/"} outputId="e18a88b4-6732-4f85-cc77-67ade2598409" print("Downloading FastSpeech model...") # !gdown --id {"1T5GOE_M27zJlCAjnanpOS9HBPUcdE9sB"} -O fastspeech-150k.h5 # !gdown --id {"1TnkL2-rIZ6N-n4z4oHp3X2wIpxiFwu2H"} -O fastspeech_config.yml # + [markdown] id="9qoGI89Q_ctL" # ### (c) FastSpeech2 # + id="dGyQeQs__f4e" colab={"base_uri": "https://localhost:8080/"} outputId="345d6b10-0a24-462f-8acf-655b6a7b300b" print("Downloading FastSpeech2 model...") # !gdown --id {"1EhMD20uAFlKsii1lMnlkrsenVTFKM0ld"} -O
fastspeech2-150k.h5 # !gdown --id {"1wnbIgjTI2iUsCyVJ37ar9CS8-aEjVEee"} -O fastspeech2_config.yml # + [markdown] id="26gvDECihpqP" # ## Download pretrained Vocoder MelGAN + STFT Loss # # + id="x6QsaE-uh6gf" colab={"base_uri": "https://localhost:8080/"} outputId="2c505441-58d8-4125-da58-d27c68945372" print("Downloading MelGAN-STFT model...") # !gdown --id {"1WB5iQbk9qB-Y-wO8BU6S2TnRiu4VU5ys"} -O melgan.stft-2M.h5 # !gdown --id {"1OqdrcHJvtXwNasEZP7KXZwtGUDXMKNkg"} -O melgan.stft_config.yml # + [markdown] id="3D9cOkKni_Pu" # ## Load Model # + id="UKWOHNekjoX_" colab={"base_uri": "https://localhost:8080/"} outputId="e10d89ec-dc77-491d-c735-c5aab9ebe9fd" import tensorflow as tf import yaml import numpy as np import matplotlib.pyplot as plt import IPython.display as ipd from tensorflow_tts.inference import TFAutoModel from tensorflow_tts.inference import AutoConfig from tensorflow_tts.inference import AutoProcessor # + [markdown] id="EYzz_JkjjbSb" # ### (a) Tacotron 2 # + id="8LUmgY69inlf" tacotron2_config = AutoConfig.from_pretrained('TensorFlowTTS/examples/tacotron2/conf/tacotron2.v1.yaml') tacotron2 = TFAutoModel.from_pretrained( config=tacotron2_config, pretrained_path="tacotron2-120k.h5", name="tacotron2" ) # + [markdown] id="88Dnvb83kJeX" # ### (b) FastSpeech # + id="ML_v-BM3kGwr" fastspeech_config = AutoConfig.from_pretrained('TensorFlowTTS/examples/fastspeech/conf/fastspeech.v1.yaml') fastspeech = TFAutoModel.from_pretrained( config=fastspeech_config, pretrained_path="fastspeech-150k.h5", name="fastspeech" ) # + [markdown] id="78pHAU5LAB8i" # ### (c) FastSpeech2 # + id="mlqrwBh0AFKD" fastspeech2_config = AutoConfig.from_pretrained('TensorFlowTTS/examples/fastspeech2/conf/fastspeech2.v1.yaml') fastspeech2 = TFAutoModel.from_pretrained( config=fastspeech2_config, pretrained_path="fastspeech2-150k.h5", name="fastspeech2" ) # + [markdown] id="pcUeu-BylIDB" # ### MelGAN STFT # + id="F11-o-QrlBzN" melgan_stft_config = 
AutoConfig.from_pretrained('TensorFlowTTS/examples/melgan_stft/conf/melgan_stft.v1.yaml') melgan_stft = TFAutoModel.from_pretrained( config=melgan_stft_config, pretrained_path="melgan.stft-2M.h5", name="melgan_stft" ) # + [markdown] id="1dWX0JNslwHg" # ## Inference # - The first inference run will be very slow, caused by @tf.function tracing. # + id="26TDBfn_05QG" colab={"base_uri": "https://localhost:8080/"} outputId="45540e95-8528-4d09-8666-6126531d9aca" print("Downloading ljspeech_mapper.json ...") # !gdown --id {"1YBaDdMlhTXxsKrH7mZwDu-2aODq5fr5e"} -O ljspeech_mapper.json # + id="5jUYfVKomNm5" processor = AutoProcessor.from_pretrained(pretrained_path="./ljspeech_mapper.json") # + id="ktHeraInlrsl" def do_synthesis(input_text, text2mel_model, vocoder_model, text2mel_name, vocoder_name): input_ids = processor.text_to_sequence(input_text) # text2mel part if text2mel_name == "TACOTRON": _, mel_outputs, stop_token_prediction, alignment_history = text2mel_model.inference( tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), tf.convert_to_tensor([len(input_ids)], tf.int32), tf.convert_to_tensor([0], dtype=tf.int32) ) elif text2mel_name == "FASTSPEECH": mel_before, mel_outputs, duration_outputs = text2mel_model.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), ) elif text2mel_name == "FASTSPEECH2": mel_before, mel_outputs, duration_outputs, _, _ = text2mel_model.inference( tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), ) else: raise ValueError("Only TACOTRON, FASTSPEECH, FASTSPEECH2 are supported on text2mel_name") # vocoder part if vocoder_name ==
"MELGAN" or vocoder_name == "MELGAN-STFT": audio = vocoder_model(mel_outputs)[0, :, 0] elif vocoder_name == "MB-MELGAN": audio = vocoder_model(mel_outputs)[0, :, 0] else: raise ValueError("Only MELGAN, MELGAN-STFT and MB_MELGAN are supported on vocoder_name") if text2mel_name == "TACOTRON": return mel_outputs.numpy(), alignment_history.numpy(), audio.numpy() else: return mel_outputs.numpy(), audio.numpy() def visualize_attention(alignment_history): import matplotlib.pyplot as plt fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111) ax.set_title(f'Alignment steps') im = ax.imshow( alignment_history, aspect='auto', origin='lower', interpolation='none') fig.colorbar(im, ax=ax) xlabel = 'Decoder timestep' plt.xlabel(xlabel) plt.ylabel('Encoder timestep') plt.tight_layout() plt.show() plt.close() def visualize_mel_spectrogram(mels): mels = tf.reshape(mels, [-1, 80]).numpy() fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-after-Spectrogram') im = ax1.imshow(np.rot90(mels), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() plt.close() # + [markdown] id="OJ_Dzeq3QlTm" # ### Speech Example # + id="4uv_QngUmFbK" input_text = "Hello, it's me. I was wondering if after all these years you'd like to meet to go over everything. 
They say that time's supposed to heal ya But I ain't done much healing" # + id="wiCe8nU3qJa6" # setup window for tacotron2 if you want to try it tacotron2.setup_window(win_front=10, win_back=10) # + [markdown] id="4EojaL8UpWv7" # ### (a) Tacotron2 + MELGAN-STFT # + id="OYsCG-10pH31" colab={"base_uri": "https://localhost:8080/", "height": 659} outputId="189cd660-8cfc-4db8-b47c-7c44c967116a" feature_gen_model = "TACOTRON" # or "FASTSPEECH" or "FASTSPEECH2" mels, alignment_history, audios = do_synthesis(input_text, tacotron2, melgan_stft, feature_gen_model, "MELGAN-STFT") visualize_attention(alignment_history[0]) visualize_mel_spectrogram(mels[0]) ipd.Audio(audios, rate=22050)
MelGAN_STFT_Demo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Strings # Strings are ordered, text-based data, represented by enclosing the text in single, double, or triple quotes. String0 = '<NAME> is beautiful' String1 = "<NAME> is beautiful" String2 = '''<NAME> is beautiful''' print String0 , type(String0) print String1, type(String1) print String2, type(String2) # String indexing and slicing are similar to lists, which were explained in detail earlier. print String0[4] print String0[4:] # ### Built-in Functions # **find( )** function returns the index value of the given data that is to be found in the string. If it is not found it returns **-1**. Remember not to confuse the returned -1 with a reverse-indexing value. print String0.find('al') print String0.find('am') # The index value returned is the index of the first element in the input data. print String0[7] # One can also tell the **find( )** function between which index values it has to search. print String0.find('j',1) print String0.find('j',1,3) # **capitalize( )** is used to capitalize the first element in the string. String3 = 'observe the first letter in this sentence.' print String3.capitalize() # **center( )** is used to center-align the string by specifying the field width. String0.center(70) # One can also fill the left-out spaces with any other character. String0.center(70,'-') # **zfill( )** is used for zero padding by specifying the field width. String0.zfill(30) # **expandtabs( )** allows you to change the spacing of the tab character '\t', which is by default set to 8 spaces.
s = 'h\te\tl\tl\to' print s print s.expandtabs(1) print s.expandtabs() # **index( )** works the same way as the **find( )** function; the only difference is that **find( )** returns '-1' when the input element is not found in the string, while **index( )** throws a ValueError print String0.index('Taj') print String0.index('Mahal',0) # **endswith( )** function is used to check if the given string ends with the particular char given as input. print String0.endswith('y') # The start and stop index values can also be specified. print String0.endswith('l',0) print String0.endswith('M',0,5) # **count( )** function counts the occurrences of a char in the given string. The start and the stop index can also be specified or left blank. (These are implicit arguments, which will be dealt with in functions) print String0.count('a',0) print String0.count('a',5,10) # **join( )** function is used to add a char in between the elements of the input string. 'a'.join('*_-') # '*_-' is the input string and char 'a' is added in between each element # **join( )** function can also be used to convert a list into a string. a = list(String0) print a b = ''.join(a) print b # Before converting it into a string, the **join( )** function can be used to insert any char in between the list elements. c = '/'.join(a)[18:] print c # **split( )** function is used to convert a string back to a list. Think of it as the opposite of the **join( )** function. d = c.split('/') print d # In the **split( )** function one can also specify the number of times you want to split the string, or the number of elements the new returned list should contain. The number of elements is always one more than the specified number; this is because the string is split the number of times specified. e = c.split('/',3) print e print len(e) # **lower( )** converts any capital letter to a small letter. print String0 print String0.lower() # **upper( )** converts any small letter to a capital letter.
String0.upper() # **replace( )** function replaces an element with another element. String0.replace('<NAME>','Bengaluru') # **strip( )** function is used to delete unwanted elements from the right and left ends of a string. f = ' hello ' # If no char is specified then it will delete all the spaces present on the right- and left-hand sides of the data. f.strip() # When a char is specified, the **strip( )** function deletes that char if it is present at the two ends of the specified string. f = ' ***----hello---******* ' f.strip('*') # The asterisks should have been deleted but were not. This is because there is a space on both the right- and left-hand sides. So in the **strip( )** function, the characters need to be given in the specific order in which they are present. print f.strip(' *') print f.strip(' *-') # **lstrip( )** and **rstrip( )** functions have the same functionality as the strip function; the only difference is that **lstrip( )** deletes only towards the left side and **rstrip( )** towards the right. print f.lstrip(' *') print f.rstrip(' *') # ## Dictionaries # Dictionaries are used more like a database, because here you can index a particular sequence with your own user-defined string. # To define a dictionary, equate a variable to { } or dict() d0 = {} d1 = dict() print type(d0), type(d1) # A dictionary works somewhat like a list but with the added capability of assigning its own index style. d0['One'] = 1 d0['OneTwo'] = 12 print d0 # That is what a dictionary looks like. Now you are able to access '1' by the index value set at 'One' print d0['One'] # Two lists which are related can be merged to form a dictionary. names = ['One', 'Two', 'Three', 'Four', 'Five'] numbers = [1, 2, 3, 4, 5] # **zip( )** function is used to combine two lists d2 = zip(names,numbers) print d2 # The two lists are combined to form a single list, and each element is clubbed with its respective element from the other list inside a tuple.
Tuples are used because that is what is assigned and the values should not change. # # Further, to convert the above into a dictionary, the **dict( )** function is used. a1 = dict(d2) print a1 # ### Built-in Functions # **clear( )** function is used to erase the entire database that was created. a1.clear() print a1 # A dictionary can also be built using loops. for i in range(len(names)): a1[names[i]] = numbers[i] print a1 # **values( )** function returns a list with all the assigned values in the dictionary. a1.values() # **keys( )** function returns all the indices, or keys, to which the values were assigned. a1.keys() # **items( )** returns a list containing both keys and values, with each pair inside a tuple. This is the same as the result that was obtained when the zip function was used. a1.items() # **pop( )** function removes the element at the given key, and the removed element can be assigned to a new variable. But remember, only the value is returned and not the key, because the key is just an index value. a2 = a1.pop('Four') print a1 print a2
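One caveat about **pop( )**: called with a missing key it raises a KeyError. Both **pop( )** and **get( )** accept a default value to avoid that. A minimal sketch (a fresh example dictionary, written with Python 3 style print calls rather than the notebook's Python 2 statements):

```python
a1 = {'One': 1, 'Two': 2, 'Three': 3}

# get() looks a key up and falls back to a default instead of raising KeyError
print(a1.get('Two'))       # → 2
print(a1.get('Ten', 0))    # → 0

# pop() also accepts a default, so removing a missing key is not an error
removed = a1.pop('Three', None)
print(removed)             # → 3
print(a1.pop('Ten', 'absent'))  # → absent
print(a1)                  # → {'One': 1, 'Two': 2}
```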
.ipynb_checkpoints/04 - Strings and Dictionaries-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="KU5so-4hURax" executionInfo={"status": "ok", "timestamp": 1621622064005, "user_tz": -480, "elapsed": 1315, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} outputId="8ac67bbe-6cd4-4259-f662-3ec229627add" # link colab to google drive directory where this project data is placed from google.colab import drive drive.mount('/content/gdrive', force_remount=True) # + id="sLiADzRYUDtm" executionInfo={"status": "ok", "timestamp": 1621622064006, "user_tz": -480, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} # Need to set project path here !! projectpath = "/content/gdrive/MyDrive/GraphAttnProject/SpanTree [with start node]_[walklen=3]_[p=1,q=1]_[num_walks=50]/NIPS_Submission/" # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="0yY7zMECTaI5" executionInfo={"status": "ok", "timestamp": 1621622064006, "user_tz": -480, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} outputId="46e48ead-2c2a-4816-8742-a9b55711ebaa" import os os.chdir(projectpath) os.getcwd() # + id="Y-Av32fU_sDO" executionInfo={"status": "ok", "timestamp": 1621622077787, "user_tz": -480, "elapsed": 11966, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} #"graph_data/GKAT_walks_dict_train.pkl" #train_walks_dict = pickle.load(open(f"graph_data/GKAT_walks_dict_train.pkl", 'rb')) #val_walks_dict = pickle.load(open(f'graph_data/GKAT_walks_dict_val.pkl', 'rb')) # + colab={"base_uri": "https://localhost:8080/"} id="QXmSMWTFNRun" executionInfo={"status": "ok", "timestamp": 1621622184141, "user_tz": -480, "elapsed": 5653, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": 
"10980112682964758327"}} outputId="f7ec405a-854e-44e7-951c-814f107e2304" #train_walks_dict = generate_trunc_walks_dict(train_walks_dict, 6) #val_walks_dict = generate_trunc_walks_dict(val_walks_dict, 6) # + id="V0ap97m2Nf48" executionInfo={"status": "ok", "timestamp": 1621622192293, "user_tz": -480, "elapsed": 5108, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} # save random walk lists as pickle file #a_file = open("graph_data/GKAT_walks_dict_train6.pkl", "wb") #pickle.dump(train_walks_dict, a_file) #a_file.close() #a_file = open("graph_data/GKAT_walks_dict_val6.pkl", "wb") #pickle.dump(val_walks_dict, a_file) #a_file.close() # + [markdown] id="AgcEAvJ__tdn" # ### load data # + colab={"base_uri": "https://localhost:8080/"} id="3fbmt9j0RbK_" executionInfo={"status": "ok", "timestamp": 1621621977453, "user_tz": -480, "elapsed": 2445, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} outputId="59558e2f-a6e8-4c7d-8d85-bb4b45e3aadc" from tqdm.notebook import tqdm, trange import networkx as nx import pickle import numpy as np import tensorflow as tf import torch print(tf.__version__) # + id="f4QSzVFWMnuT" train_freq_mat = pickle.load(open(f'graph_data/GKAT_freq_mats_train_len={walk_len}.pkl', 'rb')) val_freq_mat = pickle.load(open(f'graph_data/GKAT_freq_mats_val_len={walk_len}.pkl', 'rb')) # + colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["618b011f65b64750a85e3419582304dd", "41927a0fe2034613875b7ed93328625e", "ec08107c453b4ae4ad2ad4fad299fc88", "149c5ca45205437699d4895c121d5ee8", "cbc57b8f8c104fe59e0dca5eca0c32d1", "30adaf0be9b74c81b491efc3e19c73ec", "899410c3ad5d419983d8add5ecf4ee9e", "9e7cf948a6b54699918c8c65f05c64ac"]} id="bh1AkKncTaV9" outputId="22c3f6ab-d2a6-43df-d4a7-e9108c178551" # load train and validation graph data, with 1535 samples for training, and 512 samples for validation # each graph contains 50 nodes num_nodes = 50 num_train = 1535 num_val = 
512 # load all train and validation graphs train_graphs = pickle.load(open(f'graph_data/train_graphs.pkl', 'rb')) val_graphs = pickle.load(open(f'graph_data/val_graphs.pkl', 'rb')) # load all labels train_labels = np.load('graph_data/train_labels.npy') val_labels = np.load('graph_data/val_labels.npy') # + id="PF3BrlsXxCZ5" # save random walk lists as pickle file a_file = open("graph_data/train_graphs.pkl", "wb") pickle.dump(train_graphs, a_file) a_file.close() a_file = open("graph_data/val_graphs.pkl", "wb") pickle.dump(val_graphs, a_file) a_file.close() # + id="77VeEQGI_m5A" # + [markdown] id="go5KHSJO_nQP" # ### generate random walks # + id="lhsmf_HoTadp" # generate random walks for GKAT from deepwalk import OnlyWalk path_length = 10 num_random_walk= 50 def generate_walks_GKAT(graphs, num_random_walk, path_length, stopping_prob = 0.0, p=1, q=1, ignore_start = False): walks = [] print('Start generating GWK masking') print("walk length = ", path_length) print("number of random walks = ", num_random_walk) for i in tqdm(range(len(graphs))): graph = (graphs[i]) n2v = OnlyWalk.Node2vec_onlywalk(graph = graph, path_length=path_length, num_paths=num_random_walk, p=p, q=q, stop_prob = stopping_prob, with_freq_mat = True) walks.append(n2v.walker.walks_dict) return walks # + id="DtJjrMJQVJis" # start random walks GKAT_walks_train = generate_walks_GKAT(train_graphs, path_length = path_length, num_random_walk = num_random_walk, stopping_prob = stopping_prob, p = p, q= q, ignore_start = ignore_start) GKAT_walks_val = generate_walks_GKAT(val_graphs, path_length = path_length, num_random_walk = num_random_walk, stopping_prob = stopping_prob, p = p, q= q, ignore_start = ignore_start) # + id="QFXufVf2q004" # save random walk lists as pickle file a_file = open("graph_data/GKAT_walks_dict_train.pkl", "wb") pickle.dump(GKAT_walks_train, a_file) a_file.close() a_file = open("graph_data/GKAT_walks_dict_val.pkl", "wb") pickle.dump(GKAT_walks_val, a_file) a_file.close() # + 
id="SpkwMMXHvnaG" # + [markdown] id="jDtXR9uYvoSx" # ### generate random walk frequency matrix and GKAT dot product kernel # + id="whzP3wcivk9w" executionInfo={"status": "ok", "timestamp": 1621622077789, "user_tz": -480, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} def generate_trunc_walks_dict(walks, trunc_len): print("generating walks with length = ", trunc_len) num_random_walk = len(walks[0][0]) num_nodes = len(walks[0]) trunc_walks = [] num_graphs = len(walks) for i in range(num_graphs): g_dict = {} for j in range(num_nodes): walklist = [] for k in range(num_random_walk): walklist.append(walks[i][j][k][:trunc_len]) g_dict[j] = walklist trunc_walks.append(g_dict) return trunc_walks # + id="LAZM0smWz5Uz" executionInfo={"status": "ok", "timestamp": 1621600844646, "user_tz": -480, "elapsed": 1247, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} def generate_frequency_matrix_and_masking_GKAT(walks_dict): num_graphs = len(walks_dict) num_random_walk = len(walks_dict[0][0]) num_nodes = len(walks_dict[0]) walk_length = len(walks_dict[0][0][0]) freq_mat_list = [] dot_kernel_list = [] for graph in tqdm(walks_dict): freq_mat = np.zeros([num_nodes, num_nodes]) for key in graph: for i in range(num_random_walk): for j in range(walk_length): freq_mat[int(key),int(graph[key][i][j])] +=1 freq_mat /= num_random_walk dot_prod = np.matmul(freq_mat, np.transpose(freq_mat)) # divide the dot_prod kernel by the norm of the kernel deno = np.matmul(np.diagonal(dot_prod)[:, None], np.transpose(np.diagonal(dot_prod)[:, None])) dot_kernel = dot_prod / np.sqrt(deno) #np.diagonal(dot_prod)[:, None] freq_mat_list.append(freq_mat* num_random_walk) dot_kernel_list.append(dot_kernel) return freq_mat_list, dot_kernel_list # + colab={"base_uri": "https://localhost:8080/", "height": 545, "referenced_widgets": ["b2441d152ad94b9d8564a57cb151c11c", "4844848ea856400b8898ca303a2cd3af", 
"fe506fb005d44a42bc0be585564964a3", "4994b8e4af334a8e84c8f5c1e13a6ee7", "ca5b5ca8fdd44a3faf2b95f9e813b3d8", "<KEY>", "914ae43e2f9d424da6fec69062c88a95", "7be3cc7f1e4140cbacac9be8ebf11d80", "33054a94284a4cd794fe9544eb9e1a5a", "684da1f30e434fb3ba3c5aa4d9771435", "ccda7c9f1f084c23ab2315b4a25794d1", "<KEY>", "a2fb4a57b80c4e6bba851fabb8f94ed9", "<KEY>", "<KEY>", "c6660f4253a8473d96a76f68def898a6", "<KEY>", "<KEY>", "e7bea678f10e4148bd73fd179767f1ca", "<KEY>", "cd864d918abc470a9ca3d33551dd7119", "<KEY>", "a7807397f5f24035a7266f4c1fe33606", "31473f64041545eb9ae25b4ade1f59dd", "<KEY>", "<KEY>", "<KEY>", "aad08c14c6b247e9b537724c9a53ccc6", "df38cd9ae7554230992fad6d8a7c7380", "b419d2f8fa564e54ad3c7f98944e0026", "ea6b314a647d41b3806d538629d35457", "<KEY>", "2bc5433528e6436a9cf1419a83723b17", "eca271cd5db64d8da14b604e128f43ff", "<KEY>", "a4eac2e274ca457793bcead69dfc56d1", "<KEY>", "6b3e902ca4674c50982841c39ed2f462", "<KEY>", "<KEY>", "75f3ba5320e44233b5db8e918686e25d", "c71afac042e243d99c3492ff6770ca7c", "fa23f077405649079dc2a11e4576c2a4", "9f2b7c1e8daf4168a206e53e2116d7db", "e3557f8c6bd342b89c7ed0f03404e74d", "cecfa08772664cd691e41cdf904a1346", "<KEY>", "<KEY>", "34f443c065d7472babcac8adce0eb0ca", "be4dc675f8844a2c84547de70a0fca20", "b3e48dcd7e8e4f39bcc7ba4dab065e8b", "7e5dc7e1f5194417ae2337e625f6876f", "0ba7dbd1839944aea618e0c1a25e1571", "473cda2391fb4c288abee8d38945bc02", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "2e840b8aef2a4a709cb07a46dacfd3e7", "4ab615edc47f4ed19e5db03c418bcda9", "<KEY>", "b4e50b77a9744bca91910b052f9e4d39", "b3e8d191e14f4a31b6564fd0052fabda", "<KEY>"]} id="RfxH5szQvtPj" executionInfo={"status": "ok", "timestamp": 1621601274021, "user_tz": -480, "elapsed": 394445, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10980112682964758327"}} outputId="a02ac0f7-2a69-49c7-c587-c5086a8ba894" # generate random walk frequency matrix and GKAT dot product kernel with different random walk lengths # the generated data are saved also as pickle files for trunc_len 
in range(2, 11): trunc_walks_train = generate_trunc_walks_dict(GKAT_walks_train, trunc_len) trunc_walks_val = generate_trunc_walks_dict(GKAT_walks_val, trunc_len) freq_mat_list, dot_kernel_list = generate_frequency_matrix_and_masking_GKAT(trunc_walks_train) a_file = open(f"graph_data/GKAT_freq_mats_train_len={trunc_len}.pkl", "wb") pickle.dump(freq_mat_list, a_file) a_file.close() a_file = open(f"graph_data/GKAT_dot_kernels_train_len={trunc_len}.pkl", "wb") pickle.dump(dot_kernel_list, a_file) a_file.close() freq_mat_list, dot_kernel_list = generate_frequency_matrix_and_masking_GKAT(trunc_walks_val) a_file = open(f"graph_data/GKAT_freq_mats_val_len={trunc_len}.pkl", "wb") pickle.dump(freq_mat_list, a_file) a_file.close() a_file = open(f"graph_data/GKAT_dot_kernels_val_len={trunc_len}.pkl", "wb") pickle.dump(dot_kernel_list, a_file) a_file.close() # + id="CiysVGUhTakh" # + id="ts2D0xMxTar_" # + id="16gycyQwFWDt"
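The normalization inside `generate_frequency_matrix_and_masking_GKAT` divides `freq_mat @ freq_mat.T` by the square root of the outer product of its diagonal, a cosine-style normalization. A self-contained sketch using a random stand-in matrix (not the notebook's walk data) showing that the resulting kernel has a unit diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
freq_mat = rng.random((5, 5))  # stand-in for a 5-node walk-frequency matrix

dot_prod = freq_mat @ freq_mat.T
# divide entry (i, j) by sqrt(d_i * d_j), where d is the diagonal of dot_prod
deno = np.diagonal(dot_prod)[:, None] @ np.diagonal(dot_prod)[None, :]
dot_kernel = dot_prod / np.sqrt(deno)

# cosine normalization: the diagonal is exactly 1; with nonnegative entries,
# off-diagonal values lie in (0, 1] by Cauchy-Schwarz
print(np.allclose(np.diagonal(dot_kernel), 1.0))  # → True
```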
Synthetic_InducedCycle/Generate_GKAT_masking_and_walks.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Week 1 Homework and/or In-Class Activity # ## Working with jupyter notebooks and Python fundamentals # ### In the next cell, do the following: # 1. Create a markdown cell # 2. Create a level 1 header with the text "Week 1" as the header text # 3. Create a level 2 header with the text "Learning Jupyter Notebooks and Basic Python" # 4. Create a level 3 header with your name as the text # 5. Run the cell # ### In the next cell, do the following: # 1. Create a comment in a code cell with the text "this is python" as the comment # 2. Write the python code to print the statement "Hello, buddy!" # ### In the next cell, do the following: # 1. Make sure you can see line numbers in the left margin of the cell # >- There are several ways you can do this but I'll leave it to you to find one # 2. Write a comment on line 1 that has the text "Learning some Python fundamentals" # 3. On line 2, write the code to print the string "Hello" # 4. On line 4, start a multi-line comment then include the following text on lines 5-7 # >- Line 5 text: this is the first line of a multi-line paragraph # >- Line 6 text: this is the second line of a multi-line paragraph # >- Line 7 text: this is the third line of a multi-line paragraph # 5. On line 8, close the multi-line comment # 6. Run the cell # ### In the next cell, do the following: # 1. Print the following to the screen # >- A blank character # >- The text: There's a snake in my boot! # ### In the next cell, do the following: # 1. Create a variable, `firstName`, and assign it the value "John" # ### In the next cell, do the following: # 1. On line 1, create a variable, `Sport`, and assign it the value "baseball" # 2. On line 2, call the `print()` function and pass the variable `Sport` to it # 3.
On line 3, call the `print()` function and pass the variable `sport` to it # ##### Type the error code you received from the previous code cell in this markdown cell, use a bullet point # ### In the next cell: # 1. On line 1, define a variable, `num1`, and assign it the integer value 5 # 2. On line 2, define a variable, `num2`, and assign it the float value of 8.4 # 3. On line 3, print `num1` and `num2` in the same print function # ### In the next cell: # 1. On line 1, type a comment with the text: "Storing strings into variables" # 2. On line 2, create a variable, `name` and assign it the value "<NAME>" # 3. On line 3, create a variable, `favNum` and assign it the value '9' # 4. Print `name` and `favNum` using the same call to the print function # ### In the next cell: # 1. On line 1, create a variable, `result`, that is assigned the sum of `num1` and `num2` using variable names # 2. Print `result` # ### In the next cell: # 1. Type, result += 1 # 2. Print `result` # 3. Type, result \*= num1 # 4. Print `result` # ##### Now, go back to the prior code cell and write a comment on each line explaining what the code was doing # >- What is the current value of result? # ### In the next cell: # 1. Define a variable, `name`, and assign it the string value "John" # 2. Print `name` # 3. Assign the variable, `name`, the string value "Sam" # 4. Print `name` # ##### What is the current value of the `name` variable? # >- In the next cell write a multi-line comment explaining your answer.
Week 1/Week1_HW_InClassActivity_JupyterPython_Basics.ipynb