# Ray Serve - Model Serving Challenges

© 2019-2022, Anyscale. All Rights Reserved

## The Challenges of Model Serving

Model development happens in a data science research environment. There are many challenges, such as feature engineering, model selection, and missing or messy data, yet there are tools at the data scientists' disposal. By contrast, model deployment to production faces an entirely different set of challenges and requires different tools. We must bridge the divide as much as possible. So what are some of the challenges of model serving?

<img src="https://images.ctfassets.net/xjan103pcp94/6IcTIir1U1WBJdSbdygQ08/70ceeb0e4f5c8b72b7007c61cb19eed8/WhereRayServeFitsIn.png" width="70%" height="40%">

### 1. It Should Be Framework Agnostic

First, model serving frameworks must be able to serve models from popular frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or even arbitrary Python functions. Even within the same organization, it is common to use several machine learning frameworks in order to get the best model.

Second, machine learning models are typically surrounded by (or work in conjunction with) lots of application or business logic. For example, some model serving is implemented as a RESTful service to which scoring requests are made. Often this is too restrictive: additional processing, such as fetching data from an online feature store to augment the request, may be desired as part of the scoring process, and the performance overhead of remote calls may be suboptimal.

### 2. Pure Python or Pythonic

In general, model serving should be intuitive for developers and simple to configure and run. Hence, it is desirable to use pure Python and to avoid verbose configurations in YAML files or other formats. Data scientists and engineers use Python and Python-based ML frameworks to develop their machine learning models, so they should also be able to use Python to deploy their machine learning applications.
This need is growing more critical as online learning applications combine training and serving in the same application.

### 3. Simple and Scalable

Model serving must be simple to scale on demand across many machines. It must also be easy to upgrade models dynamically, over time. Achieving production uptime and performance requirements is essential for success.

### 4. DevOps/MLOps Integrations

Model serving deployments need to integrate with existing "DevOps" CI/CD practices for controlled, audited, and predictable releases. Patterns like [Canary Deployment](https://martinfowler.com/bliki/CanaryRelease.html) are particularly useful for testing the efficacy of a new model before replacing existing models, just as this pattern is useful for other software deployments.

### 5. Flexible Deployment Patterns

There are unique deployment patterns, too. For example, it should be easy to deploy a forest of models, to split traffic to different instances, and to score data in batches for greater efficiency.

See also this [Ray blog post](https://medium.com/distributed-computing-with-ray/the-simplest-way-to-serve-your-nlp-model-in-production-with-pure-python-d42b6a97ad55) on the challenges of model serving and the way Ray Serve addresses them. It also provides an example of starting with a simple model, then deploying a more sophisticated model into the running application. Along the same lines, the blog post [Serving ML Models in Production: Common Patterns](https://www.anyscale.com/blog/serving-ml-models-in-production-common-patterns) discusses common deployment patterns for model serving and how you can implement them with Ray Serve. Additionally, listen to the webinar [Building a scalable ML model serving API with Ray Serve](https://www.anyscale.com/events/2021/09/09/building-a-scalable-ml-model-serving-api-with-ray-serve); this introductory webinar highlights how Ray Serve makes it easy to deploy, operate, and scale a machine learning API.
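As an aside, the canary pattern mentioned above can be illustrated without any serving framework at all: route a small, fixed fraction of traffic to the new model and the rest to the old one, then compare their outputs. Nothing below is Ray Serve API; the router helper, the model stubs, and the 10% split are illustrative assumptions.

```python
import random

def make_canary_router(old_model, new_model, canary_fraction=0.1, seed=None):
    """Return a callable that routes ~canary_fraction of requests to new_model."""
    rng = random.Random(seed)

    def route(request):
        # A coin flip per request decides which model serves it.
        model = new_model if rng.random() < canary_fraction else old_model
        return model(request)

    return route

# Hypothetical "models": plain functions that tag their output with a version label.
old = lambda x: ("old", x)
new = lambda x: ("new", x)

router = make_canary_router(old, new, canary_fraction=0.1, seed=42)
results = [router(i)[0] for i in range(1000)]
```

With 1000 requests, roughly 10% land on the new model; in a real rollout the comparison of the two result streams is what tells you whether the canary is safe to promote.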
<img src="images/PatternsMLProduction.png" width="70%" height="40%">

## Why Ray Serve?

[Ray Serve](https://docs.ray.io/en/latest/serve/index.html) is a scalable, framework-agnostic, and Python-first model serving library built on [Ray](https://ray.io).

<img src="images/ray_serve_overview.png" width="70%" height="40%">

For users, Ray Serve offers these benefits:

* **Framework Agnostic**: You can use the same toolkit to serve everything from deep learning models built with [PyTorch](https://docs.ray.io/en/latest/serve/tutorials/pytorch.html#serve-pytorch-tutorial), [TensorFlow](https://docs.ray.io/en/latest/serve/tutorials/tensorflow.html#serve-tensorflow-tutorial), or [Keras](https://docs.ray.io/en/latest/serve/tutorials/tensorflow.html#serve-tensorflow-tutorial), to [scikit-learn](https://docs.ray.io/en/latest/serve/tutorials/sklearn.html#serve-sklearn-tutorial) models, to arbitrary business logic.
* **Python First**: Configure your model serving with pure Python code. No YAML or JSON configurations required.

Since Serve is built on Ray, it also allows you to scale to many machines, in your datacenter or in cloud environments, and it allows you to leverage all of the other Ray frameworks.

## Ray Serve Architecture and Components

<img src="images/architecture.png" height="40%" width="70%">

There are three kinds of actors that are created to make up a Serve instance:

**Controller**: A global actor unique to each Serve instance that manages the control plane. The Controller is responsible for creating, updating, and destroying other actors. Serve API calls like creating or getting a deployment make remote calls to the Controller.

**Router**: There is one router per node. Each router is a Uvicorn HTTP server that accepts incoming requests, forwards them to replicas, and responds once they are completed.

**Worker Replica**: Worker replicas actually execute the code in response to a request. For example, they may contain an instantiation of an ML model.
Each replica processes individual requests from the routers (they may be batched by the replica using `@serve.batch`; see the [batching docs](https://docs.ray.io/en/latest/serve/ml-models.html#serve-batching)). For more details, see the [key concepts](https://docs.ray.io/en/latest/serve/index.html) and [architecture](https://docs.ray.io/en/latest/serve/architecture.html) documentation.

### Lifetime of a Request

When an HTTP request is sent to the router, the following things happen:

* The HTTP request is received and parsed.
* The correct deployment associated with the HTTP URL path is looked up. The request is placed on a queue.
* For each request in a deployment queue, an available replica is looked up and the request is sent to it. If there are no available replicas (there are more than `max_concurrent_queries` requests outstanding), the request is left in the queue until an outstanding request is finished.

Each replica maintains a queue of requests and executes one at a time, possibly using asyncio to process them concurrently. If the handler (the function for the deployment or `__call__`) is async, the replica will not wait for the handler to run; otherwise, the replica will block until the handler returns.

## Two Simple Ray Serve Examples

We'll explore a more detailed example in the next lesson, where we actually serve ML models. Here we explore how simple deployments are with Ray Serve! We will first use a function that does "scoring," sufficient for _stateless_ scenarios, then use a class, which enables _stateful_ scenarios.

<img src="images/func_class_deployment.png" width="80%" height="50%">

But first, initialize Ray as before:

```
import ray
from ray import serve
import requests  # for making web requests
```

Now we initialize Ray Serve itself. Note that we did not have to start a Ray cluster explicitly. If one is not running, `serve.start()` will automatically launch a Ray cluster; otherwise, it will connect to an existing instance.
```
serve.start()
```

Next, define our stateless function for processing requests. Let's define a simple function that will be served by Ray. As with Ray tasks, we can decorate this function with `@serve.deployment`, meaning it is going to be deployed on Ray Serve as a function to which we can send HTTP requests. It takes in a `request`, extracts the request parameter with key "name," and returns an echoed string. This simple example illustrates that Ray Serve can also serve plain Python functions.

### Create a Python function deployment

```
@serve.deployment
def hello(request):
    name = request.query_params["name"]
    return f"Hello {name}!"
```

Use the `<func_name>.deploy()` method to deploy it on Ray Serve.

### Deploy a Python function for serving

```
hello.deploy()
```

### Send some requests to our Python function

```
for i in range(10):
    response = requests.get(f"http://127.0.0.1:8000/hello?name=request_{i}").text
    print(f'{i:2d}: {response}')
```

You should see `Hello request_N!` in the output. Now let's serve another "model" in the same Ray Serve instance:

```
from random import random

import starlette
from starlette.requests import Request

@serve.deployment
class SimpleModel:
    def __init__(self):
        self.weight = 0.5
        self.bias = 1
        self.prediction = 0.0

    def __call__(self, starlette_request):
        if isinstance(starlette_request, starlette.requests.Request):
            data = starlette_request.query_params['data']
        else:
            # Request came via a ServeHandle API method call.
            data = starlette_request
        self.prediction = float(data) * self.weight * random() + self.bias
        return {"prediction": self.prediction}

SimpleModel.deploy()
```

### Send some requests to our Model

```
url = "http://127.0.0.1:8000/SimpleModel"
for i in range(5):
    print(f"prediction : {requests.get(url, params={'data': random()}).text}")
```

### List Deployments

```
serve.list_deployments()
serve.shutdown()
```

## Exercise - Try Adding More Examples

Here are some things you can try:

1. Add a function, deploy, and send requests.
2. Add a class, deploy, and send requests.
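The queueing behavior described under "Lifetime of a Request" — a replica holding requests until a concurrency slot frees up — can be sketched with plain asyncio, using a semaphore as a stand-in for the `max_concurrent_queries` limit. This is an illustration, not Ray Serve code; the class name and timings are made up.

```python
import asyncio

class ToyReplica:
    """Processes at most max_concurrent requests at once; extras wait in line."""

    def __init__(self, max_concurrent=2):
        self._slots = asyncio.Semaphore(max_concurrent)
        self._in_flight = 0
        self.max_in_flight = 0  # high-water mark, to verify the limit held

    async def handle(self, request_id):
        async with self._slots:              # blocks while all slots are taken
            self._in_flight += 1
            self.max_in_flight = max(self.max_in_flight, self._in_flight)
            await asyncio.sleep(0.01)        # stand-in for model inference
            self._in_flight -= 1
            return f"done-{request_id}"

async def main():
    replica = ToyReplica(max_concurrent=2)
    # Eight "requests" arrive at once; only two run concurrently.
    results = await asyncio.gather(*(replica.handle(i) for i in range(8)))
    return replica.max_in_flight, results

max_in_flight, results = asyncio.run(main())
```

All eight requests complete, but the high-water mark never exceeds the configured limit — the same back-pressure a replica's request queue provides.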
# Home Depot Product Search Relevance

The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground-truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.

## GraphLab Create

This notebook uses the GraphLab Create machine learning Python module. You need a personal license to run this code.

```
import graphlab as gl
```

### Load data from CSV files

```
train = gl.SFrame.read_csv("../data/train.csv")
test = gl.SFrame.read_csv("../data/test.csv")
desc = gl.SFrame.read_csv("../data/product_descriptions.csv")
```

### Data merging

```
# merge train with description
train = train.join(desc, on = 'product_uid', how = 'left')
# merge test with description
test = test.join(desc, on = 'product_uid', how = 'left')
```

### Let's explore some data

Let's examine 3 different queries and products:

* the first from the training set
* one from somewhere in the middle of the training set
* the last one from the training set

```
first_doc = train[0]
first_doc
```

The search term **'angle bracket'** is not contained in the body. **'angle'** would be after stemming; however, **'bracket'** is not.
```
middle_doc = train[37033]
middle_doc
```

Only **'wood'** from the search term is present.

```
last_doc = train[-1]
last_doc
```

**'sheer'** and **'courtain'** are present, and that's all.

### How many search terms are not present in the description and title for relevance-3 documents

Documents ranked 3 are the most relevant searches, but how many search queries don't include the searched terms in the description and the title?

```
train['search_term_word_count'] = gl.text_analytics.count_words(train['search_term'])
ranked3doc = train[train['relevance'] == 3]
print ranked3doc.head()
len(ranked3doc)

words_search = gl.text_analytics.tokenize(ranked3doc['search_term'], to_lower = True)
words_description = gl.text_analytics.tokenize(ranked3doc['product_description'], to_lower = True)
words_title = gl.text_analytics.tokenize(ranked3doc['product_title'], to_lower = True)

wordsdiff_desc = []
wordsdiff_title = []
puid = []
search_term = []
ws_count = []
ws_count_used_desc = []
ws_count_used_title = []

for item in xrange(len(ranked3doc)):
    ws = words_search[item]
    pd = words_description[item]
    pt = words_title[item]
    diff = set(ws) - set(pd)    # search-term words missing from the description
    wordsdiff_desc.append(diff)
    diff2 = set(ws) - set(pt)   # search-term words missing from the title
    wordsdiff_title.append(diff2)
    puid.append(ranked3doc[item]['product_uid'])
    search_term.append(ranked3doc[item]['search_term'])
    ws_count.append(len(ws))
    ws_count_used_desc.append(len(ws) - len(diff))
    ws_count_used_title.append(len(ws) - len(diff2))

differences = gl.SFrame({"puid" : puid, "search term": search_term,
                         "diff desc" : wordsdiff_desc, "diff title" : wordsdiff_title,
                         "ws count" : ws_count,
                         "ws count used desc" : ws_count_used_desc,
                         "ws count used title" : ws_count_used_title})
differences.sort(['ws count used desc', 'ws count used title'])

print "No terms used in description : " + str(len(differences[differences['ws count used desc'] == 0]))
print "No terms used in title : " + str(len(differences[differences['ws count used title'] == 0]))
print "No terms used in description and title : " + str(len(differences[(differences['ws count used desc'] == 0) & (differences['ws count used title'] == 0)]))

import matplotlib.pyplot as plt
%matplotlib inline
```

### TF-IDF with linear regression

```
train_search_tfidf = gl.text_analytics.tf_idf(train['search_term_word_count'])
train['search_tfidf'] = train_search_tfidf
train['product_desc_word_count'] = gl.text_analytics.count_words(train['product_description'])
train_desc_tfidf = gl.text_analytics.tf_idf(train['product_desc_word_count'])
train['desc_tfidf'] = train_desc_tfidf
train['product_title_word_count'] = gl.text_analytics.count_words(train['product_title'])
train_title_tfidf = gl.text_analytics.tf_idf(train['product_title_word_count'])
train['title_tfidf'] = train_title_tfidf

train['distance'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'], x['desc_tfidf']))
train['distance2'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'], x['title_tfidf']))

model1 = gl.linear_regression.create(train, target = 'relevance',
                                     features = ['distance', 'distance2'],
                                     validation_set = None)

# let's take a look at the weights before we plot
model1.get("coefficients")

test['search_term_word_count'] = gl.text_analytics.count_words(test['search_term'])
test_search_tfidf = gl.text_analytics.tf_idf(test['search_term_word_count'])
test['search_tfidf'] = test_search_tfidf
test['product_desc_word_count'] = gl.text_analytics.count_words(test['product_description'])
test_desc_tfidf = gl.text_analytics.tf_idf(test['product_desc_word_count'])
test['desc_tfidf'] = test_desc_tfidf
test['product_title_word_count'] = gl.text_analytics.count_words(test['product_title'])
test_title_tfidf = gl.text_analytics.tf_idf(test['product_title_word_count'])
test['title_tfidf'] = test_title_tfidf

test['distance'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'], x['desc_tfidf']))
test['distance2'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'], x['title_tfidf']))

'''
predictions_test = model1.predict(test)
test_errors = predictions_test - test['relevance']
RSS_test = sum(test_errors * test_errors)
print RSS_test
'''

output = model1.predict(test)  # predicted relevance scores for the test set

submission = gl.SFrame(test['id'])
submission.add_column(output)
submission.rename({'X1': 'id', 'X2': 'relevance'})
submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: str(x['relevance']))
submission.export_csv('../data/submission.csv', quote_level = 3)
#gl.canvas.set_target('ipynb')
```
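Since GraphLab Create is proprietary (and Python 2-only), the tf-idf-plus-cosine-distance feature used above can be sketched in dependency-free Python. The three toy token lists below are invented for illustration; only the math mirrors the pipeline.

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}                 # inverse document frequency
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine_distance(a, b):
    """1 - cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

# Made-up search term, product description, and an unrelated title.
search = "angle bracket".split()
desc   = "metal angle bracket for shelving".split()
title  = "wood sheet".split()

vecs = tf_idf([search, desc, title])
d_desc  = cosine_distance(vecs[0], vecs[1])   # shares terms -> distance below 1
d_title = cosine_distance(vecs[0], vecs[2])   # no shared terms -> distance 1
```

A query sharing terms with a document lands at a smaller distance, which is exactly the signal the `distance`/`distance2` features feed into the regression.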
<h1>Data Interpretation and Storytelling</h1>
<p>Authored and presented by <a href="https://www.iwi.unibe.ch/ueber_uns/personen/prof_dr_khobzi_hamid/index_ger.html">Prof. Dr. Hamid Khobzi</a> from the University of Bern for PyLadies, Amsterdam on 30.09.2020.</p>
<p>The link to the data resource: <a href="https://archive.ics.uci.edu/ml/datasets/Online+Retail+II">Click Here.</a></p>

<h2>Business Problem</h2>
<p>In this exercise, the focus is on a business case from a real online retail store based in the UK. As the CIO of the company has declared to the data science team, the goal of the project is to find hidden patterns and insights in the data that can help the business with better targeting strategies.</p>
<p>You, as a member of the data science team, are provided with a dataset that contains the purchase transactions of the retail store. The first challenge is to conduct exploratory data analysis using data transformation and visualization. To serve this purpose, the data science team has decided to use a particular conceptual framework, namely RFM, to conduct the analysis and interpret the data. RFM Analysis is a conceptually-based analysis that sheds light on the behavioral characteristics of customers/users, and is open to interpretation in different contexts. RFM stands for:</p>
<ul>
<li><b>(R) Recency:</b> The interval between the purchase and the time of analysis.</li>
<li><b>(F) Frequency:</b> The number of purchases within a certain period.</li>
<li><b>(M) Monetary:</b> The amount of money spent during a certain period.</li>
</ul>
<p>The second challenge is to generate information, in the form of visualizations and tables, that can contribute to suitable storytelling.</p>
<p>With all that being said, let's begin the analysis!</p>

<h2>Reading Data</h2>
<p>We need to load the pandas library for data wrangling. Pandas is a Python library that allows us to work with data structures and perform various operations on them.
Afterward, we should use a pandas method to read the data for further processing.</p>

```
import pandas as pd
import zipfile
ZippedFile = zipfile.ZipFile('online_retail_v2.0.zip')
org_data = pd.read_csv(ZippedFile.open('online_retail_v2.0.csv'))
```

<p>Let's take a look at the data by showing the first 3 records.</p>

```
org_data.head(3)
```

<p>Question: What part of this data could be useful for generating insights?</p>

<h2>Data Cleansing</h2>
<p>Since the data was stored as a CSV file, the values need to be converted from strings to their natural types. For instance:</p>
<ul>
<li>"Quantity" should be converted to an integer number.</li>
<li>"Price" should be converted to a float number.</li>
<li>"InvoiceDate" should be converted to a datetime format.</li>
</ul>

```
org_data['Quantity'] = org_data['Quantity'].astype('int64') # converting Quantity to an integer number
org_data['InvoiceDate'] = pd.to_datetime(org_data['InvoiceDate'], format='%Y-%m-%d %H:%M:%S') # converting InvoiceDate to a datetime format
```

<h3>Exercise:</h3>
<p>Now, you may try to convert the attribute "Price" to the data type float.</p>

```
org_data['Price'] = org_data['Price'].astype('float') # converting Price to a float number
```

<p>In the next step, to make sure there are no missing values in the data, we exclude any record that contains a missing value for any of the data attributes.</p>

```
org_data = org_data.dropna(how='any', axis='rows')
```

<p>Now, we need to inspect the data for any existing problems. For this purpose, we can draw a box plot for the variables to check their distribution, outliers, or any other issues. A box plot is a very useful visualization technique for inspecting the data and finding potential problems.
Now, we must load the pyplot API, which is a MATLAB-like plotting framework, to be able to visualize the data using box plots.</p>

```
import matplotlib.pyplot as plt
plt.boxplot(org_data['Quantity'])
plt.ylabel('Order Quantity')
```

<h3>Exercise:</h3>
<p>Now, you may try to visualize the attribute "Price" by using a box plot.</p>

```
plt.boxplot(org_data['Price'])
plt.ylabel('Product Price')
```

<p>We can observe two major problems for both variables. First, both variables contain negative values, while order quantity and product price cannot be negative or zero. Second, there are outliers in the data that are better excluded from the analysis.</p>

```
CleanData = org_data[org_data.Quantity>0] # keeping data records with a value higher than 0 for Quantity
CleanData = CleanData[CleanData.Price>0] # keeping data records with a value higher than 0 for Price
CleanData = CleanData[CleanData.Quantity < org_data['Quantity'].quantile(0.9)] # erasing (trimming) outliers based on Quantity
CleanData = CleanData[CleanData.Price < org_data["Price"].quantile(0.9)] # erasing (trimming) outliers based on Price
```

<h2>Data Transformation for RFM Analysis</h2>
<p>Now that the data is cleansed, it is time to prepare the data for RFM analysis. Accordingly, we need to transform our data. In other words, we need to generate new attributes in the data based on the existing attributes.</p>
<p>First, we need to generate a new attribute that shows how many days ago the transaction was performed. So, we need to generate this new attribute based on "InvoiceDate." In the first step, we only keep the date and store it in a new column called "Date." Then, we calculate the interval between the transaction date and the date of the last transaction in the data in terms of days.</p>
<p>Second, we need to generate another attribute that shows the amount of sales for each transaction. We already have the order quantity and the product price in the data.
So, we need to multiply those two values and store the outcome in a new attribute called "Sales."</p>

```
# storing the transaction date in a new attribute
CleanData['Date'] = CleanData['InvoiceDate'].dt.date
# calculating and storing the interval between the transaction date and the day of the last transaction in a new attribute
CleanData['PurchaseInterval'] = (max(CleanData.Date) - CleanData['Date']).dt.days
```

<h3>Exercise:</h3>
<p>Now, you may try to generate Sales from the existing attributes, i.e., order quantity and product price.</p>

```
# calculating and storing the sales amount for each transaction in a new attribute
CleanData['Sales'] = CleanData.Quantity * CleanData.Price
```

<p>All right! Let's take a look at the data.</p>

```
CleanData.head(3)
```

<p>Now, we need to aggregate the data based on customers and find out when they made their last transaction, how many transactions they have performed, and how much they have spent in the online retail store over the whole period. Then, we store these newly generated attributes for further analysis.</p>

```
RFM_Data = CleanData.groupby('Customer ID').agg(Recency=('PurchaseInterval','min'), Frequency=('Sales','count'), Monetary=('Sales','sum'))
```

<h3>Exercise:</h3>
<p>Now, you may try to inspect the first three rows of the newly generated dataset of RFM attributes.</p>

```
RFM_Data.head(3)
```

<p>Question: How do we interpret the numbers in the above table?</p>
<p>Let's look at the first row.</p>
<p>This customer ordered one or more products from the retail store 2 days ago. This shows that this customer is likely to keep coming back to the store to shop. Also, they have shopped a total of 140 times. This also supports the assumption that this customer shops at the retail store regularly. Moreover, this customer has spent a total of more than 2700 pounds.
On the other hand, the customer in the second row has only shopped one time, and that was 248 days ago, spending only 17 pounds. A simple comparison shows that customer "12347.0" is more loyal to the retail store than customer "12348.0".</p>

<h2>Visualizing RFM Attributes</h2>
<p>Now, let's find some patterns based on the RFM attributes by visualizing the data. We can start with a scatter matrix, which shows us the distribution of all data attributes and their plausible correlations.</p>

```
pd.plotting.scatter_matrix(RFM_Data, alpha=0.2)
```

<p>Among the above visualizations, the one on the top left is likely to give us some interesting insights. Let's look at it closely.</p>
<p>That visualization is a histogram. So, we need to draw a histogram for the Recency attribute.</p>

```
plt.hist(RFM_Data['Recency'], bins=15, edgecolor = 'black', alpha=0.8)
plt.xlabel('Number of Days since the Last Purchase')
plt.ylabel('Number of Customers')
```

<h3>Exercise:</h3>
<p>How can we interpret the information presented in this plot?</p>
<p>Is there a way we can make this figure more informative and interesting?</p>

```
plt.hist(RFM_Data['Recency'], bins=15, edgecolor = 'black', alpha=0.8)
plt.axvline(RFM_Data['Recency'].median(), color='red', linestyle='dashed', linewidth=1.5)
plt.text(60, 1200, 'Median', color='red', fontsize=15)
plt.xlabel('Number of Days since the Last Purchase')
plt.ylabel('Number of Customers')
plt.savefig('RecencyHistogram.png', dpi=300)
```

<p>This plot is actually a useful piece of information that could be part of the report. In other words, we can tell a good story based on this plot, and that will help front-line decision makers design better targeting strategies.</p>

<h3>Exercise</h3>
<p>Now that you are familiar with RFM as a conceptual framework for analyzing data, you may redo the analysis on this data but with a focus on countries instead of customers. In the above example, we generated RFM parameters for each customer.
This time, we want to generate those parameters for each country. Let's see whether you can find interesting insights.</p>

```
RFM_Data_Countries = CleanData.groupby('Country').agg(Recency=('PurchaseInterval','min'), Frequency=('Sales','count'), Monetary=('Sales','sum'))
```

<p>All right! Let's take a look at the newly generated data for RFM attributes.</p>

```
RFM_Data_Countries.head(3)
```

<p>Question: How can we interpret the RFM attributes for different countries?</p>
<p>Let's look at Austria and Bahrain:</p>
<p>Austrian customers ordered one or more products from the retail store 1 day ago. Also, they have shopped a total of 370 times, with a total value of more than 5500 pounds. On the other hand, Bahraini customers have only shopped 10 times, and their last purchase was 204 days ago. Bahraini customers have generated a total revenue of nearly 220 pounds. A simple comparison shows that Austria is a more important market for the retail store compared to Bahrain.</p>
<p>Now, let's rank the countries based on the Monetary and Frequency attributes and look at the top 10 of them. Perhaps a bar plot could be a good option to show such information.</p>

```
Sorted_Countries_data = RFM_Data_Countries.sort_values(['Monetary','Frequency'], ascending=[False,False])
F_values_2 = Sorted_Countries_data.iloc[:10,1]
M_values_2 = Sorted_Countries_data.iloc[:10,2]

import numpy as np
bar_width = 0.4
r1 = np.arange(len(F_values_2))
r2 = [x + bar_width for x in r1]
plt.bar(r1, M_values_2, width = bar_width, color = 'blue', edgecolor = 'black', capsize=7, label='Monetary')
plt.bar(r2, F_values_2, width = bar_width, color = 'cyan', edgecolor = 'black', capsize=7, label='Frequency')
plt.xticks([r + bar_width for r in range(len(F_values_2))], Sorted_Countries_data.index[:10], rotation='70')
plt.legend()
plt.show()
```

<p>Since the retail store is based in the UK, a big part of the observations is related to the UK. Presenting this chart as part of the story is not really adequate.
The chart is dominated by one country, and the information presented for the other countries is useless.</p>
<p>Let's suppose we want to focus on the international market. That could be a better story to tell. Thus, we repeat the above visualization after excluding the UK from the analysis.</p>

<h3>Exercise:</h3>
<p>How can we change the presentation of information in the above plot for the international market?</p>
<p>The first step is to keep only the foreign countries. You may try to do that now.</p>

```
Sorted_Countries_data = Sorted_Countries_data.iloc[1:,] # erasing the record related to the United Kingdom
```

<p>Now, we should calculate the Monetary ratio for each foreign country and prepare the information for visualization.</p>

```
Sorted_Countries_data['MonetaryRatio'] = round(Sorted_Countries_data['Monetary']/sum(Sorted_Countries_data['Monetary'])*100,2) # calculating the Monetary ratio for each foreign country
Sorted_Countries_data['Country'] = Sorted_Countries_data.index # creating a new column with country names
Sorted_Countries_data.head(5)

PieData = Sorted_Countries_data.iloc[:5,[3,4]]
OthersRatio = round(100 - sum(Sorted_Countries_data.head(5)['MonetaryRatio']),2)
PieData = PieData.append({'Country':'Others','MonetaryRatio': OthersRatio}, ignore_index=True)
```

<p>Now, let's draw a pie chart to present the top 5 countries, along with the others, with their market shares.</p>

```
figureObject, axesObject = plt.subplots()
axesObject.pie(PieData['MonetaryRatio'],
               labels=PieData['Country'], # shows the country names next to each slice
               autopct='%1.2f') # shows the percentages on each slice
figureObject.savefig('PieChart.png', dpi=300)
```

<p>As we can see in the above pie chart, Germany is the largest international market, followed by France and Ireland. Such information could be useful for developing marketing strategies. This pie chart is a good addition to the story.
By presenting this pie chart to the front-line decision makers, we can help them improve their targeting strategies.</p>
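The whole RFM aggregation above can be condensed into a few lines on synthetic data, which is handy as a sanity check when the real dataset is not at hand. The three toy transactions below are invented for illustration.

```python
import pandas as pd

# Made-up transactions: customer A bought twice (2 and 10 days ago), B once (248 days ago).
toy = pd.DataFrame({
    'Customer ID': ['A', 'A', 'B'],
    'PurchaseInterval': [2, 10, 248],  # days before the last transaction in the data
    'Sales': [100.0, 50.0, 17.0],      # quantity * price, as computed above
})

# Same named aggregation as in the notebook: min interval, transaction count, total spend.
rfm = toy.groupby('Customer ID').agg(
    Recency=('PurchaseInterval', 'min'),
    Frequency=('Sales', 'count'),
    Monetary=('Sales', 'sum'),
)
```

Customer A comes out with Recency 2, Frequency 2, and Monetary 150.0, mirroring the "loyal vs. one-off customer" comparison made in the text.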
# Artificial Intelligence in Finance

## Interactive Neural Networks

## Tensors & Tensor Operations

```
import math
import numpy as np
import pandas as pd
from pylab import plt, mpl
np.random.seed(1)
plt.style.use('seaborn')
mpl.rcParams['savefig.dpi'] = 300
mpl.rcParams['font.family'] = 'serif'
np.set_printoptions(suppress=True)

t0 = np.array(10)
t0
t1 = np.array((2, 1))
t1
t2 = np.arange(10).reshape(5, 2)
t2
t3 = np.arange(16).reshape(2, 4, 2)
t3
t2 + 1
t2 + t2
t1
t2
np.dot(t2, t1)
t2[:, 0] * 2 + t2[:, 1] * 1
np.dot(t1, t2.T)
```

## Simple Neural Network

### Estimation

```
features = 3
samples = 5
l0 = np.random.random((samples, features))
l0
w = np.random.random((features, 1))
w
l2 = np.dot(l0, w)
l2
y = l0[:, 0] * 0.5 + l0[:, 1]
y = y.reshape(-1, 1)
y
e = l2 - y
e
mse = (e ** 2).mean()
mse
d = e * 1
d
a = 0.01
u = a * np.dot(l0.T, d)
u
w
w -= u
w
l2 = np.dot(l0, w)
e = l2 - y
mse = (e ** 2).mean()
mse
a = 0.025
w = np.random.random((features, 1))
w
steps = 800
for s in range(1, steps + 1):
    l2 = np.dot(l0, w)
    e = l2 - y
    u = a * np.dot(l0.T, e)
    w -= u
    mse = (e ** 2).mean()
    if s % 50 == 0:
        print(f'step={s:3d} | mse={mse:.5f}')
l2 - y
w
```

### Classification

```
def sigmoid(x, deriv=False):
    if deriv:
        return sigmoid(x) * (1 - sigmoid(x))
    return 1 / (1 + np.exp(-x))

x = np.linspace(-10, 10, 100)
plt.figure(figsize=(10, 6))
plt.plot(x, np.where(x > 0, 1, 0), 'y--', label='step function')
plt.plot(x, sigmoid(x), 'r', label='sigmoid')
plt.plot(x, sigmoid(x, True), '--', label='derivative')
plt.legend();

features = 4
samples = 5
l0 = np.random.randint(0, 2, (samples, features))
l0
w = np.random.random((features, 1))
w
np.dot(l0, w)
l2 = sigmoid(np.dot(l0, w))
l2
l2.round()
y = np.random.randint(0, 2, samples)
y = y.reshape(-1, 1)
y
e = l2 - y
e
mse = (e ** 2).mean()
mse
a = 0.02
d = e * sigmoid(l2, True)
d
u = a * np.dot(l0.T, d)
u
w
w -= u
w
steps = 3001
a = 0.025
w = np.random.random((features, 1))
w
for s in range(1, steps + 1):
    l2 = sigmoid(np.dot(l0, w))
    e = l2 - y
    d = e * sigmoid(l2, True)
    u = a * np.dot(l0.T, d)
    w -= u
    mse = (e ** 2).mean()
    if s % 200 == 0:
        print(f'step={s:4d} | mse={mse:.4f}')
l2
l2.round()
l2.round() == y
w
```

## Learning &mdash; One Hidden Layer

### Estimation

```
features = 5
samples = 5
l0 = np.random.random((samples, features))
l0
np.linalg.matrix_rank(l0)
units = 3
w0 = np.random.random((features, units))
w0
l1 = np.dot(l0, w0)
l1
w1 = np.random.random((units, 1))
w1
l2 = np.dot(l1, w1)
l2
y = np.random.random((samples, 1))
y
e2 = l2 - y
e2
mse = (e2 ** 2).mean()
mse
d2 = e2 * 1
d2
a = 0.05
u2 = a * np.dot(l1.T, d2)
u2
w1
w1 -= u2
w1
e1 = np.dot(d2, w1.T)
d1 = e1 * 1
u1 = a * np.dot(l0.T, d1)
w0 -= u1
w0
a = 0.015
steps = 5000
for s in range(1, steps + 1):
    l1 = np.dot(l0, w0)
    l2 = np.dot(l1, w1)
    e2 = l2 - y
    u2 = a * np.dot(l1.T, e2)
    w1 -= u2
    e1 = np.dot(e2, w1.T)
    u1 = a * np.dot(l0.T, e1)
    w0 -= u1
    mse = (e2 ** 2).mean()
    if s % 750 == 0:
        print(f'step={s:5d} | mse={mse:.6f}')
l2
y
(l2 - y)
```

### Classification

```
features = 5
samples = 10
units = 10
np.random.seed(200)
l0 = np.random.randint(0, 2, (samples, features))
w0 = np.random.random((features, units))
w1 = np.random.random((units, 1))
y = np.random.randint(0, 2, (samples, 1))
l0
y
a = 0.1
steps = 20000
for s in range(1, steps + 1):
    l1 = sigmoid(np.dot(l0, w0))
    l2 = sigmoid(np.dot(l1, w1))
    e2 = l2 - y
    d2 = e2 * sigmoid(l2, True)
    u2 = a * np.dot(l1.T, d2)
    w1 -= u2
    e1 = np.dot(d2, w1.T)
    d1 = e1 * sigmoid(l1, True)
    u1 = a * np.dot(l0.T, d1)
    w0 -= u1
    mse = (e2 ** 2).mean()
    if s % 2000 == 0:
        print(f'step={s:5d} | mse={mse:.5f}')
l2.round()
acc = l2.round() == y
acc
sum(acc) / len(acc)
```
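As a compact recap, the single-layer estimation loop above can be wrapped in one function so the moving parts are explicit. The function name `fit_linear_layer`, the seed, and the use of `numpy.random.default_rng` are our own illustrative choices; the update rule is the same full-batch gradient step used in the cells above.

```python
import numpy as np

def fit_linear_layer(l0, y, a=0.025, steps=800, seed=1):
    """Plain full-batch gradient descent on w for predictions l0 @ w."""
    rng = np.random.default_rng(seed)
    w = rng.random((l0.shape[1], 1))       # random initial weights
    history = []                           # mse per step, to watch convergence
    for _ in range(steps):
        e = l0 @ w - y                     # per-sample prediction errors
        history.append(float((e ** 2).mean()))
        w -= a * (l0.T @ e)                # same update as in the cells above
    return w, history

# Same kind of synthetic setup as above: the target is an exact linear function.
rng = np.random.default_rng(1)
l0 = rng.random((5, 3))
y = (l0[:, 0] * 0.5 + l0[:, 1]).reshape(-1, 1)
w, history = fit_linear_layer(l0, y)
```

Since the learning rate is well below the stability threshold for this problem, the recorded MSE decreases from the first step to the last.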
# Exercise 5 - Categorizing Facial Expressions

## Modularize generateTrials

Now that you have a working version of the basic expression-categorization study (a fixed debugThis2.py), let's split the trial-generation part from the rest of the experimental script:

1. Place the code and functions related to generating trials into a separate file `generateTrials.py`.
1. Edit the generateTrials.py code so that instead of returning `trials`, it writes the trial info to a CSV file called trials.csv, in which each row contains all the information needed for the current trial, separated by commas. The first row of this file should contain a column header:
>isMatch,emotionPrompt,shownActor,shownCategory,targetFaceImage
1. Inside the main script, import your trial-generation function like so:
```Python
from generateTrials import *
```
(your generateTrials.py file should be in the same directory as your main experiment .py file)
1. In your main experiment script, call `generateTrials()`. This should have the effect of creating trials.csv.
1. Now let's read trials.csv into a list of dictionaries using [this importTrials function](http://sapir.psych.wisc.edu/programming_for_psychologists/notebooks/Psychopy_reference.html#Importing-a-trial-list)

You should now have a trialList that you can access like so:
```Python
trialList = importTrials('trialList.txt')
for curTrial in trialList:
    curTrial['isMatch'] #contains 1/0 depending on whether the current trial is a match or mismatch
```
Why did we go through the trouble of writing to and reading from a file? To have an extra record of the trial list to which a particular subject was exposed and to double-check that the distributions of different conditions are what we want.

## Prompt for the subject code

Let's add the capability to collect the subject's code. Pop up a box (as you did in [Exercise 2](http://sapir.psych.wisc.edu/programming_for_psychologists/notebooks/Exercise2-names.html)) prompting the experimenter to provide a subject code.
Pass these values to `generateTrials()` and have generateTrials use them to create not a generically named trials.csv, but a trials file specific to this participant, e.g., if the subject code is `ec_101`, the trials file should be `ec_101_trials.csv`

<div class="alert alert-block alert-info"> You'll need to modify your importTrials call so that you're including the subject name there as well instead of the generic trialList.txt. Also remember to add gui to the `from psychopy import...` statement at the start of the file</div>

## Create an output file

Now let's create an output file containing our data! Have your main script write to a subjCode_data.csv file. Each line should correspond to a trial and contain the following information, in this order:

Independent Variables:
* Subject Code
* isMatch (0/1)
* emotionPrompt ('Happy','Angry', or 'Sad')
* shownActor (must be one of actors.keys())
* shownCategory ('Happy','Angry', or 'Sad')
* targetFaceImage (the filename of the face being shown, e.g., 005wN_90_60.jpg)

Dependent Variables:
* accuracy (1 for correct/0 for incorrect)
* Reaction time (in milliseconds)

# Exercise 5b: A more interesting face categorization study

Here's a more interesting version of the expression-categorization experiment. The code below will show a prompt (Happy, Angry, or Sad) as before, and then show you three faces (one of which displays the prompted expression). The user should respond with the 1 key if the prompted expression is on the left, the 2 key if it's in the middle, and the 3 key if it's on the right.
<div class="alert alert-block alert-info"> Note that the key codes are strings '1', '2', '3' not integers 1, 2, 3 </div>

```
import random
import sys
import numpy as np
from psychopy import visual, core, event

categories = {'Happy':'F', 'Angry':'W', 'Sad':'T'}
actors = ['001m', '001w', '002m', '002w', '003m', '003w', '004m', '004w', '005m', '005w']
suffix = '_90_60.jpg'
positions = {'left':(-190,0), 'middle':(0,0), 'right':(190,0)}
responseMapping = {'left':'1','middle':'2','right':'3'}

def randomButNot(l,toExclude,num):
    chosen = random.sample(l,num)
    while toExclude in chosen:
        chosen = random.sample(l,num)
    return chosen

def generateTrials(numTrials):
    trials=[]
    for i in range(numTrials):
        targetCategory = random.choice(list(categories.keys()))
        distractorCategories = randomButNot(list(categories.keys()),targetCategory,2)
        actorsToShow = np.random.choice(actors,3)
        #this is the random.choice() function from the numpy library which samples
        #with replacement. cf. random.sample() samples WITHOUT replacement
        targetLocation = random.choice(list(positions.keys()))
        trials.append({
            'emotionPrompt':targetCategory,
            'targetImage':actorsToShow[0]+categories[targetCategory]+suffix,
            'distractorImage1': actorsToShow[1]+categories[distractorCategories[0]]+suffix,
            'distractorImage2': actorsToShow[2]+categories[distractorCategories[1]]+suffix,
            'targetLocation': targetLocation
            })
    return trials

trials = generateTrials(40)

win = visual.Window([1024,700],color="black", units='pix')
prompt = visual.TextStim(win=win,text='',color="white",height=60)
correctFeedback = visual.TextStim(win=win,text='CORRECT',color="green",height=60)
incorrectFeedback = visual.TextStim(win=win,text='ERROR',color="red",height=60)
pic1 = visual.ImageStim(win=win, mask=None,interpolate=True)
pic2 = visual.ImageStim(win=win, mask=None,interpolate=True)
pic3 = visual.ImageStim(win=win, mask=None,interpolate=True)

for curTrial in trials:
    win.flip()
    core.wait(.25)
    prompt.setText(curTrial['emotionPrompt'])
    prompt.draw()
    win.flip()
    core.wait(.5)
    win.flip()
    core.wait(.1)
    pic1.setImage('faces/'+curTrial['targetImage'])
    pic2.setImage('faces/'+curTrial['distractorImage1'])
    pic3.setImage('faces/'+curTrial['distractorImage2'])
    pic1.setPos(positions[curTrial['targetLocation']])
    distractorPositions = randomButNot(list(positions.keys()),curTrial['targetLocation'],2)
    pic2.setPos(positions[distractorPositions[0]])
    pic3.setPos(positions[distractorPositions[1]])
    pic1.draw()
    pic2.draw()
    pic3.draw()
    win.flip()
    response = event.waitKeys(keyList=list(responseMapping.values()))[0]
    print(response, responseMapping[curTrial['targetLocation']])
    if response==responseMapping[curTrial['targetLocation']]:
        correctFeedback.draw()
    else:
        incorrectFeedback.draw()
    win.flip() #show the feedback
    core.wait(.5)
```

## Modularize the generateTrials code

Begin by modularizing the generateTrials() code as in Exercise 5a.

## Effect of spatial grouping?

Notice how we're displaying the faces in a horizontal orientation. This keeps the mouths and eyes nicely aligned, which may help with comparing faces. Let's see if there's an effect of this by intermixing trials with the three faces horizontally oriented as in the code above, and trials that are vertically oriented. To get you started, you'll want to update your positions dictionary to this:

```Python
positions = {
    'vertical': {'bottom':(0,-190), 'middle':(0,0), 'top':(0,190)},
    'horizontal': {'left':(-190,0), 'middle':(0,0), 'right':(190,0)}
}
```

You'll then want to introduce a position factor `positionType` which is 'vertical' or 'horizontal' (`positions.keys()`) and based on whether a given trial is 'vertical' or 'horizontal' you'll want to:

1. Access the appropriate positions for setting where your pictures appear. E.g., if your current position type is stored in `curPositionType`, use `positions[curPositionType]` to access the dictionary containing the possible positions.
2.
Set the location of the matching face by using e.g., `random.choice(list(positions[curPositionType].keys()))`

## Use a mouse for responding

It becomes awkward to use a keyboard for responding in a task like this, so let's use a mouse for responding. See [here](http://sapir.psych.wisc.edu/programming_for_psychologists/notebooks/Psychopy_reference.html#How-do-I-have-people-respond-with-a-mouse?) for sample mouse code. You'll want to display the three faces until a person clicks on one of them.

## Create an output file

Now let's create an output file containing our data! Have your main script write to a subjCode_data.csv file. Each line should correspond to a trial and contain the following information, precisely in this order:

Independent Variables:
* subject code (the unique subject code for the subject being run)
* position type ('vertical' or 'horizontal')
* emotion prompt (the string 'Happy', 'Angry' or 'Sad')
* targetActor (must be one of the entries in `actors`)
* distractor1Actor (must be one of the entries in `actors`)
* distractor2Actor (must be one of the entries in `actors`)
* distractorEmotion1 (the string 'Happy', 'Angry' or 'Sad')
* distractorEmotion2 (the string 'Happy', 'Angry' or 'Sad')
* targetImage (the filename corresponding to the correct response, e.g., '005wN_90_60.jpg')
* distractorImage1 (the filename corresponding to the first distractor)
* distractorImage2 (the filename corresponding to the second distractor)
* targetLocation (the location of the target: bottom/middle/top/left/right)

Dependent Variables:
* isRight (1 if response is correct; 0 if incorrect)
* emotionChosen (the chosen emotion (Happy/Angry/Sad); should equal emotionPrompt if the response is correct)
* RT (Reaction time in milliseconds from when the faces appeared to the mouseclick on one of them)

You may run into some trouble figuring out the emotion of the face that the participant clicked on.
If you set your `generateTrials()` function correctly, the following code will work:

```Python
isRight = int(pic1.contains(response))
if isRight:
    correctFeedback.draw()
    emotionChosen = curTrial['emotionPrompt']
else:
    incorrectFeedback.draw()
    if pic2.contains(response):
        emotionChosen = curTrial['distractorEmotion1']
    elif pic3.contains(response):
        emotionChosen = curTrial['distractorEmotion2']
```

## Run yourself on the task!

Please run yourself on this task to produce 100 trials of data. It should take <10 mins. Please take care to have your output file be precisely in the above-mentioned format so that we can combine data from everyone in the class. Here is a [sample output file](http://sapir.psych.wisc.edu/classMaterials/psych711/sample_data.csv).

To check that your data is in the correct format, do the following:

At the terminal, run `cat sample_data.csv your_data.csv > data_format_test.csv` (replacing your_data.csv with the name of your output file, and including the appropriate path to sample_data.csv if it's not in the same directory as your data)

Load the data into R. In R: `dat <- read.csv('data_format_test.csv')`

Look at the summary: `summary(dat)`

## Bonus: Actor + positionType

Let's cross actor gender and positionType to see if e.g., having faces lined up horizontally helps especially when they're all of the same gender. You want to have the following trial distribution:

```
horizontal (50%). Of these:
    same-gender(Male) - 25%
    same-gender(Female) - 25%
    different-gender - 50%
vertical (50%). Of these:
    same-gender(Male) - 25%
    same-gender(Female) - 25%
    different-gender - 50%
```

To cross factors, you can use for-loops, but a more compact way is to use the [`itertools` package](https://docs.python.org/2/library/itertools.html).
So you might want to do something like this:

```
from itertools import product

positions = {
    'vertical': {'bottom':(0,-190), 'middle':(0,0), 'top':(0,190)},
    'horizontal': {'left':(-190,0), 'middle':(0,0), 'right':(190,0)}
}
genderMix = {'same-gender':['male','female'], 'diff-gender':[]}
trialTypes = list(product(positions.keys(), genderMix.keys()))
print(trialTypes)
```
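For the output file described above, here is a minimal sketch of the writing side using Python's built-in `csv` module. The column order follows the list in the exercise; the subject code and the per-trial values are placeholders, and in the real script the data row would be written inside the trial loop:

```python
import csv

header = ['subjCode', 'positionType', 'emotionPrompt', 'targetActor',
          'distractor1Actor', 'distractor2Actor', 'distractorEmotion1',
          'distractorEmotion2', 'targetImage', 'distractorImage1',
          'distractorImage2', 'targetLocation', 'isRight', 'emotionChosen', 'RT']

subjCode = 'ec_101'  # placeholder subject code
with open(subjCode + '_data.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    # one row per trial; these values are placeholders
    writer.writerow([subjCode, 'horizontal', 'Happy', '001m', '002w', '003m',
                     'Angry', 'Sad', '001mF_90_60.jpg', '002wW_90_60.jpg',
                     '003mT_90_60.jpg', 'left', 1, 'Happy', 734])
```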
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo">

# Parallelize code with `dask.delayed`

In this section we parallelize for-loop style code with Dask Delayed. This approach is more flexible and more manual than automatic approaches like Dask Dataframe. It is commonly useful for parallelizing existing codebases, or for building tools like Dask Dataframe. To see a real-world application you may wish to read about [Credit Modeling with Dask](https://blog.dask.org/2018/02/09/credit-models-with-dask). This will also help us to develop an understanding of Dask in general.

In this notebook we start with toy examples to build understanding. Then we end with two examples that build a dataframe and a machine learning algorithm.

Objectives:

1. Use Dask delayed to build custom task graphs
2. Build an intuition for computational task scheduling generally
3. Learn how tools like Dask Dataframe and Dask-ML work internally

## Basics

First let's make some toy functions, `inc` and `add`, that sleep for a while to simulate work. We'll then time running these functions normally. In the next section we'll parallelize this code.

```
from time import sleep

def inc(x):
    sleep(1)
    return x + 1

def add(x, y):
    sleep(1)
    return x + y
```

We time the execution of this normal code using the `%%time` magic, which is a special function of the Jupyter Notebook.

```
%%time
# This takes three seconds to run because we call each
# function sequentially, one after the other
x = inc(1)
y = inc(2)
z = add(x, y)
```

### Parallelize with the `dask.delayed` decorator

Those two increment calls *could* be called in parallel because they are independent of one another. We'll transform the `inc` and `add` functions using the `dask.delayed` function. When we call the delayed version, passing the arguments exactly as before, the original function isn't actually called yet - which is why the cell execution finishes very quickly.
Instead, a *delayed* object is made, which keeps track of the function to call and the arguments to pass to it. ``` import dask %%time # This executes actual code sequentially # x = inc(1) # y = inc(2) # z = add(x, y) # This runs lazily, all it does is build a graph x = dask.delayed(inc)(1) y = dask.delayed(inc)(2) z = dask.delayed(add)(x, y) ``` This ran immediately, since nothing has really happened yet. To get the result, call `compute`. Notice that this runs faster than the original code. ``` %%time # This actually runs our computation using a local thread pool z.compute() ``` ## What just happened? The `z` object is a lazy `Delayed` object. This object holds everything we need to compute the final result, including references to all of the functions that are required and their inputs and relationship to one-another. We can evaluate the result with `.compute()` as above or we can visualize the task graph for this value with `.visualize()`. ``` z # Look at the task graph for `z` z.visualize() ``` Notice that this includes the names of the functions from before, and the logical flow of the outputs of the `inc` functions to the inputs of `add`. ### Some questions to consider: - Why did we go from 3s to 2s? Why weren't we able to parallelize down to 1s? - What would have happened if the inc and add functions didn't include the `sleep(1)`? Would Dask still be able to speed up this code? - What if we have multiple outputs or also want to get access to x or y? ## Exercise: Parallelize a for loop `for` loops are one of the most common things that we want to parallelize. Use `dask.delayed` on `inc` and `sum` to parallelize the computation below: ``` data = [1, 2, 3, 4, 5, 6, 7, 8] %%time # Sequential code results = [] for x in data: y = inc(x) results.append(y) total = sum(results) total %%time # Your parallel code here... 
%load solutions/02-delayed-loop.py
```

How do the graph visualizations compare with the given solution, compared to a version with the `sum` function used directly rather than wrapped with `delayed`? Can you explain the latter version? You might find the result of the following expression illuminating:

```python
delayed(inc)(1) + delayed(inc)(2)
```

## Start a Dask Cluster on Kubernetes

The code above ran in threads in your local Jupyter notebook. We'll now start up a Dask cluster on Kubernetes.

*Note: if you still have a cluster running in your other notebook you may want to close that cluster, or restart your notebook. Otherwise your dashboard will point to the previous cluster.*

```
cluster.close()
```

```
from dask_kubernetes import KubeCluster

cluster = KubeCluster(n_workers=10)
cluster

from dask.distributed import Client

client = Client(cluster)
```

## Exercise: Parallelizing for-loop code with control flow

Often we want to delay only *some* functions, running a few of them immediately. This is especially helpful when those functions are fast and help us to determine what other slower functions we should call. This decision, to delay or not to delay, is usually where we need to be thoughtful when using `dask.delayed`.

In the example below we iterate through a list of inputs. If that input is even then we want to call `inc`. If the input is odd then we want to call `double`. This `is_even` decision to call `inc` or `double` has to be made immediately (not lazily) in order for our graph-building Python code to proceed.

```
def inc(x):
    sleep(1)
    return x + 1

def double(x):
    sleep(1)
    return 2 * x

def is_even(x):
    return not x % 2

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

%%time
# Sequential code
results = []
for x in data:
    if is_even(x):
        y = double(x)
    else:
        y = inc(x)
    results.append(y)

total = sum(results)
print(total)

%%time
# Your parallel code here...
# TODO: parallelize the sequential code above using dask.delayed # You will need to delay some functions, but not all %load solutions/02-delayed-control-flow.py %time total.compute() total.visualize() ``` ### Some questions to consider: - What are other examples of control flow where we can't use delayed? - What would have happened if we had delayed the evaluation of `is_even(x)` in the example above? - What are your thoughts on delaying `sum`? This function is both computational but also fast to run. ## Rebuild Dataframe algorithms manually In the last notebook we used Dask Dataframe to load CSV data from the cloud and then perform some basic analyses. In these examples Dask dataframe automatically built our parallel algorithms for us. In this section we'll do that same work, but now we'll use Dask delayed to construct these algorithms manually. In practice you don't have to do this because Dask dataframe already exists, but doing it once, manually can help you understand both how Dask dataframe works, and how to parallelize your own code. To make things a bit faster we've also decided to store data in the Parquet format. We'll use Dask delayed along with Arrow to read this data in many small parts, convert those parts to Pandas dataframes, and then do a groupby-aggregation. ### Inspect Parquet data on the cloud Just like our CSV files, we also have many parquet files for the same data ``` import gcsfs gcs = gcsfs.GCSFileSystem() filenames = sorted(gcs.glob('anaconda-public-data/nyc-taxi/nyc.parquet/part.*.parquet')) filenames[:5] ``` We can read that data using the [gcsfs](https://gcsfs.readthedocs.io/en/latest/) library to access data on Google Cloud Storage, and either [PyArrow](https://arrow.apache.org/docs/python/) or [Fastparquet](https://fastparquet.readthedocs.io/en/latest/) to read those bytes. Here we'll use Arrow to turn one Parquet file into one Pandas dataframe. 
```
import pyarrow.parquet as pq

fn = filenames[0]

with gcs.open(fn) as f:
    pf = pq.ParquetFile(f)  # Arrow ParquetFile
    table = pf.read()       # Arrow Table
    df = table.to_pandas()  # Pandas DataFrame

df
```

### Build aggregation piece by piece

We want to compute the following operation on all of our data:

```python
import dask.dataframe as dd

df = dd.read_parquet('gcs://anaconda-public-data/nyc-taxi/nyc.parquet/part.*.parquet')
df.passenger_count.mean().compute()
```

This actually works, but let's pretend that it didn't, and let's build this up, chunk by chunk, file by file. We do this for you sequentially below with a for loop:

```
sums = []
counts = []

def filename_to_dataframe(fn):
    gcs = gcsfs.GCSFileSystem()
    with gcs.open(fn) as f:
        pf = pq.ParquetFile(f)  # Arrow ParquetFile
        table = pf.read()       # Arrow Table
        df = table.to_pandas()  # Pandas DataFrame
    return df

for fn in filenames[:3]:
    # Read in parquet file to Pandas Dataframe
    df = filename_to_dataframe(fn)

    # Sum and count of passenger_count for this file
    total = df.passenger_count.sum()
    count = df.passenger_count.count()

    # Save the intermediates
    sums.append(total)
    counts.append(count)

# Combine intermediates to get the overall mean passenger count
total_sums = sum(sums)
total_counts = sum(counts)
mean = total_sums / total_counts
mean
```

### Parallelize the code above

Use `dask.delayed` to parallelize the code above. Some extra things you will need to know:

1. Methods and attribute access on delayed objects work automatically, so if you have a delayed object you can perform normal arithmetic, slicing, and method calls on it and it will produce the correct delayed calls.

```python
x = dask.delayed(np.arange)(10)
y = (x + 1)[::2].sum()  # everything here was delayed
```

So your goal is to parallelize the code above (which has been copied below) using `dask.delayed`. You may also want to visualize a bit of the computation to see if you're doing it correctly.
```
%%time
# copied sequential code

sums = []
counts = []

def filename_to_dataframe(fn):
    gcs = gcsfs.GCSFileSystem()
    with gcs.open(fn) as f:
        pf = pq.ParquetFile(f)  # Arrow ParquetFile
        table = pf.read()       # Arrow Table
        df = table.to_pandas()  # Pandas DataFrame
    return df

for fn in filenames[:3]:
    # Read in parquet file to Pandas Dataframe
    df = filename_to_dataframe(fn)

    # Sum and count of passenger_count for this file
    total = df.passenger_count.sum()
    count = df.passenger_count.count()

    # Save the intermediates
    sums.append(total)
    counts.append(count)

# Combine intermediates to get the overall mean passenger count
total_sums = sum(sums)
total_counts = sum(counts)
mean = total_sums / total_counts
mean
```

If you load the solution, add `%%time` to the top of the cell to measure the running time.

```
%load solutions/02-delayed-dataframe.py
```

### Cleanup

Shut down the running cluster.

```
cluster.close()
```
# Generate noised synthetic data ``` import numpy as np import pandas as pd import librosa import librosa.display import math import matplotlib.pyplot as plt from scipy import signal from pathlib import Path import torch from torch.utils.data import DataLoader, Dataset COMP_NAME = "g2net-gravitational-wave-detection" INPUT_PATH = Path(f"/mnt/storage_dimm2/kaggle_data/{COMP_NAME}/") OUTPUT_PATH = Path(f"/mnt/storage_dimm2/kaggle_output/{COMP_NAME}/") gw_paths = list((INPUT_PATH / "gw_sim").glob("*.npy")) len(gw_paths) gw = np.load(gw_paths[0]) gw.shape plt.plot(gw[0, -50:], gw[1, -50:]); def prepare_gw(sig, sr=2048, error_ms=5): if len(sig) < 4096: pad = 4096 - len(sig) sig = np.pad(sig, (0, pad)) # Time lags https://arxiv.org/abs/1706.04191 # https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/251934#1387136 # Start the GW from Hanford if np.random.rand() > 0.5: hanford_delay = 0 + np.random.normal(scale=error_ms) livingston_delay = 10 + np.random.normal(scale=error_ms) virgo_delay = 26 + np.random.normal(scale=error_ms) # Start the GW from Virgo else: hanford_delay = 27 + np.random.normal(scale=error_ms) livingston_delay = 26 + np.random.normal(scale=error_ms) virgo_delay = 0 + np.random.normal(scale=error_ms) random_pad = np.random.randint(30, int(4096 * 0.85)) hanford_pad = int(sr * hanford_delay / 1000) + random_pad livingston_pad = int(sr * livingston_delay / 1000) + random_pad virgo_pad = int(sr * virgo_delay / 1000) + random_pad hanford_sig = np.pad(sig, (0, hanford_pad))[-4096:] livingston_sig = np.pad(sig, (0, livingston_pad))[-4096:] virgo_sig = np.pad(sig, (0, virgo_pad))[-4096:] return np.stack([hanford_sig, livingston_sig, virgo_sig]) gw_test = prepare_gw(gw[1]) plt.plot(gw_test.T); def apply_bandpass(x, lf=25, hf=1000, order=4, sr=2048): # sos = signal.butter(order, [lf * 2.0 / sr, hf * 2.0 / sr], btype="bandpass", output="sos") sos = signal.butter(order, [lf, hf], btype="bandpass", output="sos", fs=sr) normalization = np.sqrt((hf - 
lf) / (sr / 2)) return signal.sosfiltfilt(sos, x) / normalization def make_spec(sig, hop_length=64, sr=2048): sig = apply_bandpass(sig) d1 = sig * signal.tukey(len(sig), 0.2) # d1 = sig C = np.abs( librosa.cqt( d1 / np.max(d1), sr=sr, hop_length=hop_length, fmin=8, filter_scale=0.8, bins_per_octave=12, ) ) print(C.min(), C.max()) fig, ax = plt.subplots(figsize=(10, 10)) img = librosa.display.specshow( C, sr=sr * 2, hop_length=hop_length, bins_per_octave=12, ax=ax ) ax.set_title("Constant-Q power spectrum") make_spec(gw_test[2]) ``` # Combine with noise ``` df = pd.read_csv(INPUT_PATH / "training_labels.csv").query("target == 0") print(df.shape) df.head(10) def load_file(id_, folder="train"): path = INPUT_PATH / folder / id_[0] / id_[1] / id_[2] / f"{id_}.npy" waves = np.load(path) # return waves / np.max(np.abs(waves), axis=1).reshape(3, 1) return waves # / np.max(np.abs(waves)) noise = load_file("00001f4945") gw_synthetic = prepare_gw(gw[1]) synthetic = 0.5 * gw_synthetic + noise # make_spec(noise[0]) make_spec(synthetic[0]) ``` # PyTorch dataset ``` class GWSyntheticDataset(Dataset): def __init__( self, df, tukey_alpha=0.2, bp_lf=25, bp_hf=500, bp_order=4, whiten=False, folder="train", channel_shuffle=False, **kwargs, ): self.df = df.query("target == 0").reset_index(drop=True) self.folder = folder self.window = torch.tensor(signal.tukey(4096, tukey_alpha)) self.lf = bp_lf self.hf = bp_hf self.order = bp_order self.gw_paths = list((INPUT_PATH / "gw_sim").glob("*.npy")) def load_file(self, id_): path = INPUT_PATH / self.folder / id_[0] / id_[1] / id_[2] / f"{id_}.npy" waves = np.load(path) return waves def prepare_gw(self, sig, sr=2048, error_ms=5): if len(sig) < 4096: pad = 4096 - len(sig) sig = np.pad(sig, (0, pad)) # Time lags https://arxiv.org/abs/1706.04191 # https://www.kaggle.com/c/g2net-gravitational-wave-detection/discussion/251934#1387136 # Start the GW from Hanford if np.random.rand() > 0.5: hanford_delay = 0 + np.random.normal(scale=error_ms) 
livingston_delay = 10 + np.random.normal(scale=error_ms) virgo_delay = 26 + np.random.normal(scale=error_ms) # Start the GW from Virgo else: hanford_delay = 27 + np.random.normal(scale=error_ms) livingston_delay = 26 + np.random.normal(scale=error_ms) virgo_delay = 0 + np.random.normal(scale=error_ms) random_pad = np.random.randint(30, int(4096 * 0.85)) hanford_pad = int(sr * hanford_delay / 1000) + random_pad livingston_pad = int(sr * livingston_delay / 1000) + random_pad virgo_pad = int(sr * virgo_delay / 1000) + random_pad hanford_sig = np.pad(sig, (0, hanford_pad))[-4096:] livingston_sig = np.pad(sig, (0, livingston_pad))[-4096:] virgo_sig = np.pad(sig, (0, virgo_pad))[-4096:] return np.stack([hanford_sig, livingston_sig, virgo_sig]) def __len__(self): return len(self.df) def __getitem__(self, index): data = self.load_file(self.df.loc[index, "id"]) data = torch.tensor(data, dtype=torch.float32) scale = torch.abs(data).max() if np.random.rand() > 0.5: target = torch.tensor([1], dtype=torch.float32) gw = np.load(np.random.choice(self.gw_paths))[1] data_clean = torch.tensor(self.prepare_gw(gw), dtype=torch.float32) amplitude = np.random.uniform(low=0.1, high=0.5) print(amplitude) data_clean *= amplitude # Random amplitude data += data_clean data_clean /= scale else: target = torch.tensor([0], dtype=torch.float32) data_clean = torch.normal(mean=0, std=1e-10, size=data.shape) data /= scale data *= self.window # data = biquad_bandpass_filter(data, self.lf, self.hf, 2048) return data, data_clean, target ds = GWSyntheticDataset(df) x = ds[0] print(x[2]) make_spec(x[0][0].numpy()) if x[2] > 0: make_spec(x[1][0].numpy()) a = torch.randn(size=(4, 3, 2, 2)) target = torch.tensor([1, 0, 1, 0], dtype=torch.float).view(-1, 1) a[~target.flatten().bool()] = 0 a ```
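As a quick sanity check on the millisecond-to-sample arithmetic inside `prepare_gw`, here is a standalone sketch (`ms_to_samples` is a helper introduced here for illustration, not part of the notebook):

```python
import numpy as np

def ms_to_samples(delay_ms, sr=2048):
    # one millisecond corresponds to sr / 1000 samples
    return int(sr * delay_ms / 1000)

print(ms_to_samples(10))  # Hanford -> Livingston lag: 20 samples
print(ms_to_samples(26))  # Hanford -> Virgo lag: 53 samples

# Right-padding and re-cropping to 4096 samples, as prepare_gw does,
# shifts the signal earlier in the window by the pad length:
sig = np.zeros(4096)
sig[-1] = 1.0  # impulse at the last sample
shifted = np.pad(sig, (0, ms_to_samples(10)))[-4096:]
print(np.argmax(shifted))  # 4095 - 20 = 4075
```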
# Workshop 10: Linear Algebra Refresher A matrix is a collection of numbers in a grid of dimensions $m$ by $n$. This means the grid has $m$ rows and $n$ columns. Example: $$A = \begin{pmatrix} 3 & 1 \\ 4 & -0.5 \\ \pi/2 & 0 \\ \end{pmatrix} $$ This matrix has $m=3$ rows and $n=2$ columns. A single element of the matrix $A$ is denoted as $A_{ij}$ where $i$ denotes the row of the element and $j$ denotes the column. For example $A_{12}$ has the value 1 above. ## Addition & Subtraction Let us define addition and subtraction on matrices: $$\begin{pmatrix} 3 & 1 \\ 4 & -0.5 \\ \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} = \begin{pmatrix} 4 & 1 \\ 4 & 0.5 \\ \end{pmatrix} $$ Symbolically, if the first matrix is $A$, the second matrix is $B$, and the third matrix is $C$, then the element $C_{ij} = A_{ij} + B_{ij}$. This means that you cannot add or subtract matrices with different dimensions (for example, you cannot add a $3\times 2$ matrix to a $5\times 5$ matrix). ## Scalar Multiplication We multiply matrices and vectors by scalars as follows. Let $$A = \begin{pmatrix} 3 & 1 \\ 4 & -0.5 \\ \end{pmatrix}$$ Then $5A$ is $$5A = \begin{pmatrix} 5\times 3 & 5\times 1 \\ 5\times 4 & 5 \times (-0.5) \\ \end{pmatrix}=\begin{pmatrix} 15 & 5 \\ 20 & -2.5 \\ \end{pmatrix}$$ and for a vector, if $$v = \begin{pmatrix} 1 \\ -1 \\ \end{pmatrix}$$ then $5v$ is $$5v = \begin{pmatrix} 5 \\ -5 \\ \end{pmatrix}$$ ## Matrix Multiplication The definition of multiplication is not element-wise like it is for addition above. Instead, the formula for the product $C$ of two matrices $A$ and $B$ is $$C_{ij} = \sum_k A_{ik}B_{kj}$$ Let us put this abstract formula to use. If $$A = \begin{pmatrix} 3 & 1 \\ 4 & -0.5 \\ \end{pmatrix}, B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} $$ then the product denoted $AB$ is another $2\times 2$ matrix $C$. Let us compute one element $C_{11}$ using the formula above. 
$$C_{11} = A_{11}B_{11} + A_{12}B_{21} = (3)(0) + (1)(1) = 1$$

Test your understanding by computing the other 3 elements. You should ultimately find that

$$C = \begin{pmatrix} 1 & 3 \\ -0.5 & 4 \\ \end{pmatrix}$$

You can also multiply a matrix by a vector this way. Multiplying an $m \times n$ matrix by a vector of $n$ elements returns another vector of $m$ elements. For example, if

$$A = \begin{pmatrix} 1 & 0 \\ 2 & 1 \\ 0 & -1 \\ \end{pmatrix}, v = \begin{pmatrix} 1 \\ -1 \\ \end{pmatrix}$$

then

$$Av = \begin{pmatrix} 1 \\ 1 \\ 1 \\ \end{pmatrix}$$

This means the following:

1. If matrix $A$ has dimensions $m \times n$ and matrix $B$ has dimensions $o \times p$ and $n\neq o$, they cannot be multiplied together. If $v$ is a vector of length $o\neq n$, $A$ cannot be multiplied by $v$.
1. Generally, $AB \neq BA$. Suppose $A$ has dimensions $m \times n$ and $B$ has dimensions $n \times p$ and $p\neq m$. Then $AB$ can be calculated and the final result has dimensions $m\times p$, but $BA$ cannot be calculated.

**For the remainder of the document, we will only discuss square matrices, meaning the number of rows $m$ equals the number of columns $n$.**

### The identity matrix

The identity matrix is the matrix $I$ such that for an $m\times m$ matrix $A$, $AI = A$. The identity is then an $m\times m$ matrix where

$$I_{ii} = 1, I_{ij} = 0 \text{ for } i\neq j$$

For example, for $m=3$, the identity matrix is

$$I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$

(Check for yourself, using the definition of multiplication above, that it really does satisfy $AI = A$ for any $m \times m$ matrix $A$)

### The inverse of a matrix

Now that we have the identity, we can define the inverse. The inverse of a matrix $A$, denoted $A^{-1}$, is the matrix such that

$$A^{-1}A = I$$

Note that in general, $(A^{-1})_{ij}\neq 1/A_{ij}$. Instead, computing the inverse is actually a fairly difficult thing to do and, most importantly, *the inverse does not always exist*.
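These definitions are easy to verify numerically. Here is a standalone NumPy check of the worked $2\times 2$ product, the identity, and the inverse (not part of the workshop code):

```python
import numpy as np

A = np.array([[3.0, 1.0], [4.0, -0.5]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

C = A @ B  # matrix product: C_ij = sum_k A_ik B_kj
print(C)   # matches the matrix C computed above

I = np.eye(2)
print(np.allclose(A @ I, A))  # AI = A

Ainv = np.linalg.inv(A)
print(np.allclose(Ainv @ A, I))  # A^{-1} A = I, so this particular A is not singular
```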
When the inverse does not exist, we say the matrix is **singular**. One way to check whether a matrix is **singular** is by computing something called the **determinant**. The determinant can be defined in many ways. Here we will define it through a mysterious, recursive formula, working up from small matrices.

First start with a $1\times 1$ matrix $A$:

$$A = (3)$$

The determinant of $A$, $\det(A)$, is just equal to its one value: $\det(A) = 3$.

Now let us move up to a $2\times 2$ matrix. The determinant of a $2\times 2$ matrix

$$A = \begin{pmatrix} a & b \\ c & d\\ \end{pmatrix}$$

is given by

$$\det(A) = ad - bc$$

Now let us move up to a $3\times 3$ matrix $A$:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{pmatrix}$$

The determinant of this matrix can be found using the so-called "cofactor expansion":

$$\det(A) = a_{11} (a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32}-a_{22}a_{31})$$

This looks complicated, but let me show you what is going on inside it. Look at the first term, which begins with $a_{11}$ (row 1, column 1). In this term, $a_{11}$ is multiplied by $a_{22}a_{33} - a_{23}a_{32}$. If you look carefully, this is the determinant of the $2\times 2$ matrix

$$\begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \\ \end{pmatrix}$$

which is the matrix you would get from $A$ if you removed the first row and the first column from $A$.

Look at the second term in the cofactor expansion, which begins with $a_{12}$ (row 1, column 2). In this term, $a_{12}$ is multiplied by $a_{21}a_{33} - a_{23}a_{31}$. This is the determinant of the $2 \times 2$ matrix

$$\begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \\ \end{pmatrix}$$

which is the matrix you would get from $A$ if you removed the first row and the second column from $A$.

So each term in the expansion is defined in terms of the determinants of smaller matrices.
As a result, it is generally challenging to compute determinants by hand for anything larger than a $3 \times 3$ matrix, but you could write a *recursive function* in Python to do it...if you want.

Now that we have a method, if an unpleasant one, to calculate the determinant, here is one thing it can tell us: if $\det(A) = 0$, $A$ does not have an inverse (it is singular).

### The linear systems problem

Suppose we have two lines

$$y = 3x + 1$$
$$y = -2x + 2$$

At what point $(x,y)$ do they intersect? We can solve this problem using the ideas above. Rearrange those two equations to put all of the variables $(x,y)$ on one side:

$$-3x + y = 1$$
$$2x + y = 2$$

Using the definition of matrix multiplication above, verify that this is the same as writing

$$\begin{pmatrix} -3 & 1 \\ 2 & 1 \\ \end{pmatrix}\begin{pmatrix} x \\ y \\ \end{pmatrix}= \begin{pmatrix} 1 \\ 2 \\ \end{pmatrix} $$

Let

$$A=\begin{pmatrix} -3 & 1 \\ 2 & 1 \\ \end{pmatrix}, v=\begin{pmatrix} x \\ y \\ \end{pmatrix}, b= \begin{pmatrix} 1 \\ 2 \\ \end{pmatrix} $$

This problem is the matrix equation

$$Av = b$$

and we are trying to solve for $v$. Here's one way we can do it. Assume that $A^{-1}$ exists. Then multiply both the LHS and the RHS of the equation by $A^{-1}$ on the left:

$$A^{-1}A v = A^{-1} b$$

$A^{-1}A = I$ by definition, so

$$v = A^{-1} b$$

So to find the intersection, invert $A$ and multiply by $b$! Here is a snippet of code implementing all of this:

```
import numpy as np

# Set up the system Av = b
A = np.array([[-3, 1], [2, 1]])
b = np.array([[1], [2]])

# Solve by explicit inversion, as in the derivation above
v = np.linalg.inv(A) @ b

print("x = %.2f" % v[0, 0])
print("y = %.2f" % v[1, 0])
```

Check that the values of $x$ and $y$ obtained above really do correspond to the intersection of the two lines.

What happens if you try to calculate the intersection of two lines which have the same slope? This corresponds to another possibility we have overlooked--the inverse of $A$ will not exist, or, in the language above, $A$ has zero determinant.
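To see this numerically, take two parallel lines, say $y = 3x + 1$ and $y = 3x + 2$ (the specific intercepts are arbitrary; any two distinct parallel lines will do). The resulting matrix has zero determinant, and numpy refuses to invert it:

```
import numpy as np

# Parallel lines y = 3x + 1 and y = 3x + 2, rearranged as before
A = np.array([[-3, 1], [-3, 1]])

print(np.linalg.det(A))  # 0.0 -- singular

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("Inversion failed:", err)
```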
If you have studied linear algebra, you should recognize that when we make the slopes of the two lines the same, the rows of $A$ become linearly dependent on each other.

### The eigenvalue problem ("diagonalization")

Probably the most famous problem in all of science is the eigenvalue problem. It is ubiquitous across fields and is the defining problem of quantum mechanics. Here we change the problem slightly from before: for a given $A$, we look for scalar values $\lambda$ and vectors $v$ that satisfy the matrix equation

$$A v = \lambda v$$

Let us look at a simple example:

$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix}$$

The eigenvectors of this matrix are

$$v_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \\ \end{pmatrix}\text{ and } v_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \\ \end{pmatrix}$$

If you multiply $A$ by $v_1$ you get $Av_1 = v_1$, and if you multiply $A$ by $v_2$ you get $Av_2 = -v_2$ (check these for yourself by doing the multiplications). This means that the eigenvalue $\lambda_1$ corresponding to the eigenvector $v_1$ is $\lambda_1 = 1$, and the eigenvalue $\lambda_2$ corresponding to the eigenvector $v_2$ is $\lambda_2 = -1$.

One geometric intuition for an eigenvector is that it is a vector whose direction is not changed by the action of $A$: only its length is rescaled by a factor of $\lambda$.

An $m \times m$ matrix can have at most $m$ linearly independent eigenvectors (and consequently at most $m$ unique eigenvalues). It can also have $m$ eigenvectors but fewer than $m$ distinct eigenvalues--that is, multiple eigenvectors may share the same eigenvalue (in physics, this is called "degeneracy").
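numpy can find these eigenpairs for us. For the symmetric matrix above, `np.linalg.eigh` returns the eigenvalues in ascending order along with the normalized eigenvectors as columns:

```
import numpy as np

A = np.array([[0., 1.], [1., 0.]])
eigvals, eigvecs = np.linalg.eigh(A)  # eigh is for symmetric/Hermitian matrices

print(eigvals)  # [-1.  1.]

# Each column of eigvecs satisfies A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # True, then True again
```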
This problem is referred to as "diagonalization" because once you have found all of the eigenvectors $v_i$ and their eigenvalues $\lambda_i$, you can decompose $A$ as

$$A = U D U^\dagger$$

where

$$ D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \lambda_{m-1} & 0 \\ 0 & 0 & \cdots & 0 & \lambda_m \\ \end{pmatrix}$$

and the columns of $U$ are formed by the eigenvectors:

$$U = \begin{pmatrix} v_1 & v_2 & \cdots & v_{m-1} & v_m \end{pmatrix}$$

(remember each $v_i$ is a column of $m$ elements), and $U^\dagger$ denotes the conjugate transpose of $U$. $D$ is a "diagonal" matrix because it has non-zero elements only along its diagonal, so this process is called "diagonalization".

It turns out that the determinant of a matrix, defined above, is also equal to the product of its eigenvalues:

$$\det(A) = \lambda_1 \times \lambda_2 \times \dots \times \lambda_m$$

So when $\det(A) = 0$ (the matrix is *singular*), at least one eigenvalue is equal to zero.

## Final remarks

There are literally tomes and tomes about different algorithms to solve the two problems listed above, because inverting a matrix and directly diagonalizing a matrix are usually computationally expensive. Here we have only discussed the mathematical aspects, but if you use an existing linear algebra package to solve one of these problems, most of the time it will do something more sophisticated than what is described above. The reason people have invested so much into improving algorithms for these problems is that they show up in nearly every field of study, so if you have not studied linear algebra, I encourage you to do so.
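As a closing sanity check, both claims above--the decomposition $A = UDU^\dagger$ and $\det(A) = \lambda_1 \times \dots \times \lambda_m$--can be verified numerically on the $2\times 2$ example from the previous section (for a real symmetric matrix, $U^\dagger$ is just the transpose $U^T$):

```
import numpy as np

A = np.array([[0., 1.], [1., 0.]])
eigvals, U = np.linalg.eigh(A)  # eigenvalues and orthonormal eigenvectors
D = np.diag(eigvals)

print(np.allclose(U @ D @ U.T, A))                     # True: A = U D U^T
print(np.isclose(np.linalg.det(A), np.prod(eigvals)))  # True: det equals the product of eigenvalues
```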
```
import pandas as pd
import scipy as sp
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from statistics import mean
```

#### Cross-Domain Learnings

##### According to the research paper, using Logistic Regression

```
df1 = pd.read_csv('Drugcom.csv', sep=',')
df1.head()

# Boolean mask with conditions; use isin when several conditions are queried at once
conditions = ['Birth Control', 'Depression', 'Pain', 'Anxiety', 'Diabetes, Type 2']

def model_data(condition):
    df2 = df1[df1['condition'] == condition]
    X_train, X_test, y_train, y_test = train_test_split(df2['review'], df2['Rating_Model'], test_size=0.33, random_state=42)
    return X_train, X_test, y_train, y_test

X_BC_train, X_BC_test, y_BC_train, y_BC_test = model_data('Birth Control')
X_D_train, X_D_test, y_D_train, y_D_test = model_data('Depression')
X_P_train, X_P_test, y_P_train, y_P_test = model_data('Pain')
X_A_train, X_A_test, y_A_train, y_A_test = model_data('Anxiety')
X_Dia_train, X_Dia_test, y_Dia_train, y_Dia_test = model_data('Diabetes, Type 2')
```

### Creating Inputs for 25 Model Variations

```
train_str_Inputs = [['X_BC_train', 'y_BC_train'], ['X_D_train', 'y_D_train'], ['X_P_train', 'y_P_train'], ['X_A_train', 'y_A_train'], ['X_Dia_train', 'y_Dia_train']]
test_str_Inputs = [['X_BC_test', 'y_BC_test'], ['X_D_test', 'y_D_test'], ['X_P_test', 'y_P_test'], ['X_A_test', 'y_A_test'], ['X_Dia_test', 'y_Dia_test']]
string_combination = [t + j for j in test_str_Inputs for t in train_str_Inputs]

train_var_Inputs = [(X_BC_train, y_BC_train), (X_D_train, y_D_train), (X_P_train, y_P_train), (X_A_train, y_A_train), (X_Dia_train, y_Dia_train)]
test_var_Inputs = [(X_BC_test, y_BC_test),
(X_D_test, y_D_test), (X_P_test, y_P_test), (X_A_test, y_A_test), (X_Dia_test, y_Dia_test)]
var_combination = [t + j for j in test_var_Inputs for t in train_var_Inputs]

Classifiers = [
    LogisticRegression(),
]

v = CountVectorizer(analyzer="word", ngram_range=(1, 2))

dicti = {}
dicti['train'] = []
dicti['test'] = []
dicti['accuracy'] = []

for i, (X_train, y_train, X_test, y_test) in zip(string_combination, var_combination):
    print('-' * 100)
    print(i)
    print('-' * 100)
    train_features = v.fit_transform(X_train)
    test_features = v.transform(X_test)
    dense_features = train_features.toarray()
    dense_test = test_features.toarray()
    for classifier in Classifiers:
        try:
            fit = classifier.fit(train_features, y_train)
            pred = fit.predict(test_features)
        except Exception:
            fit = classifier.fit(dense_features, y_train)
            pred = fit.predict(dense_test)
        accuracy = accuracy_score(pred, y_test)
        dicti['train'].append(i[0][2:4])
        dicti['test'].append(i[3][2:4])
        dicti['accuracy'].append(float(accuracy))
        print('Accuracy of ' + classifier.__class__.__name__ + ' is ' + str(accuracy))

# To reduce computation, CountVectorizer(min_df=...) can prune the vocabulary, starting with the rarest terms

# Build a DataFrame out of the dictionary
cross_domain = pd.DataFrame.from_dict(dicti)
cross_domain['test'].unique()

# Replace the two-letter abbreviations with the full condition names
abbreviation = ['BC', 'D_', 'P_', 'A_', 'Di']
cross_domain.replace(to_replace=abbreviation, value=conditions, inplace=True)

cross_data_logistic = cross_domain.pivot_table(values='accuracy', index='train', columns='test')
cross_data_logistic

cross_data_logistic_s = cross_data_logistic.reindex(index=['Depression', 'Anxiety', 'Pain', 'Birth Control', 'Diabetes, Type 2'], columns=['Depression', 'Anxiety', 'Pain', 'Birth Control', 'Diabetes, Type 2'])

# Calculate the mean over all columns -> you can change the axis (0 - vertical, 1 - horizontal)
cross_data_logistic_s['|Train| Avg. accuracy |'] = cross_data_logistic_s[cross_data_logistic_s.columns[::-1]].mean(axis=1)
cross_data_logistic_s.loc['|Test| Avg. accuracy |'] = cross_data_logistic_s[cross_data_logistic_s.columns[::-1]].mean(axis=0)
cross_data_logistic_s
```
## Introduction

The transformers library is an open-source, community-based repository to train, use and share models based on the Transformer architecture [(Vaswani & al., 2017)](https://arxiv.org/abs/1706.03762) such as Bert [(Devlin & al., 2018)](https://arxiv.org/abs/1810.04805), Roberta [(Liu & al., 2019)](https://arxiv.org/abs/1907.11692), GPT2 [(Radford & al., 2019)](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), XLNet [(Yang & al., 2019)](https://arxiv.org/abs/1906.08237), etc.

Along with the models, the library contains multiple variations of each of them for a large variety of downstream tasks like **Named Entity Recognition (NER)**, **Sentiment Analysis**, **Language Modeling**, **Question Answering** and so on.

## Before Transformer

Back in 2017, most people using neural networks for Natural Language Processing relied on sequential processing of the input through a [Recurrent Neural Network (RNN)](https://en.wikipedia.org/wiki/Recurrent_neural_network).

![rnn](http://colah.github.io/posts/2015-09-NN-Types-FP/img/RNN-general.png)

RNNs performed well on a large variety of tasks involving sequential dependency over the input sequence. However, this sequentially-dependent process had issues modeling very long-range dependencies and was not well suited for the kind of hardware we're currently leveraging, due to poor parallelization capabilities.

Some extensions were provided by the academic community, such as the Bidirectional RNN ([Schuster & Paliwal., 1997](https://www.researchgate.net/publication/3316656_Bidirectional_recurrent_neural_networks), [Graves & al., 2005](https://mediatum.ub.tum.de/doc/1290195/file.pdf)), which can be seen as a concatenation of two sequential processes, one going forward, the other going backward over the sequence input.
![birnn](https://miro.medium.com/max/764/1*6QnPUSv_t9BY9Fv8_aLb-Q.png)

There was also the Attention mechanism, which introduced a good improvement over "raw" RNNs by giving a learned, weighted importance to each element in the sequence, allowing the model to focus on important elements.

![attention_rnn](https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2017/08/Example-of-Attention.png)

## Then comes the Transformer

The Transformers era originally started from the work of [(Vaswani & al., 2017)](https://arxiv.org/abs/1706.03762), who demonstrated its superiority over the [Recurrent Neural Network (RNN)](https://en.wikipedia.org/wiki/Recurrent_neural_network) on translation tasks, but it quickly extended to almost all the tasks RNNs were state-of-the-art at at that time.

One advantage of the Transformer over its RNN counterpart was its non-sequential attention model. Remember, RNNs had to iterate over each element of the input sequence one by one and carry an "updatable state" between each hop. With the Transformer, the model is able to look at every position in the sequence, at the same time, in one operation.

For a deep dive into the Transformer architecture, [The Annotated Transformer](https://nlp.seas.harvard.edu/2018/04/03/attention.html#encoder-and-decoder-stacks) will walk you through all the details of the paper.

![transformer-encoder-decoder](https://nlp.seas.harvard.edu/images/the-annotated-transformer_14_0.png)

## Getting started with transformers

For the rest of this notebook, we will use the [BERT (Devlin & al., 2018)](https://arxiv.org/abs/1810.04805) architecture, as it's the simplest and there is plenty of content about it on the internet, so it will be easy to dig deeper into this architecture if you want to.

The transformers library allows you to benefit from large, pretrained language models without requiring a huge and costly computational infrastructure.
Most of the state-of-the-art models are provided directly by their authors and made available in the library in PyTorch and TensorFlow in a transparent and interchangeable way.

```
!pip install transformers
!pip install --upgrade tensorflow

import torch
from transformers import AutoModel, AutoTokenizer, BertTokenizer

torch.set_grad_enabled(False)

# Store the model we want to use
MODEL_NAME = "bert-base-cased"

# We need to create the model and tokenizer
model = AutoModel.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
```

With only the above two lines of code, you're ready to use a BERT pre-trained model. The tokenizer will allow us to map a raw textual input to a sequence of integers representing our textual input in a way the model can manipulate.

```
# Tokens come from a process that splits the input into sub-entities with interesting linguistic properties.
tokens = tokenizer.tokenize("This is an input example")
print("Tokens: {}".format(tokens))

# This is not sufficient for the model, as it requires integers as input,
# so let's convert the tokens to ids.
tokens_ids = tokenizer.convert_tokens_to_ids(tokens)
print("Tokens id: {}".format(tokens_ids))

# Add the required special tokens
tokens_ids = tokenizer.build_inputs_with_special_tokens(tokens_ids)

# We need to convert to a Deep Learning framework specific format, let's use PyTorch for now.
tokens_pt = torch.tensor([tokens_ids])
print("Tokens PyTorch: {}".format(tokens_pt))

# Now we're ready to go through BERT with our input
outputs, pooled = model(tokens_pt)
print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
```

As you can see, BERT outputs two tensors:
- One with the generated representation for every token in the input `(1, NB_TOKENS, REPRESENTATION_SIZE)`
- One with an aggregated representation for the whole input `(1, REPRESENTATION_SIZE)`

The first, token-based, representation can be leveraged if your task requires keeping the sequence representation and you want to operate at the token level. This is particularly useful for Named Entity Recognition and Question Answering.

The second, aggregated, representation is especially useful if you need to extract the overall context of the sequence and don't require fine-grained, token-level detail. This is the case for Sentiment Analysis of the sequence or Information Retrieval.

The code you saw in the previous section introduced all the steps required to do a simple model invocation. For more day-to-day usage, transformers provides higher-level methods which will make your NLP journey easier. Let's improve our previous example:

```
# tokens = tokenizer.tokenize("This is an input example")
# tokens_ids = tokenizer.convert_tokens_to_ids(tokens)
# tokens_pt = torch.tensor([tokens_ids])

# This code can be factored into one line as follows
tokens_pt2 = tokenizer("This is an input example", return_tensors="pt")

for key, value in tokens_pt2.items():
    print("{}:\n\t{}".format(key, value))

outputs2, pooled2 = model(**tokens_pt2)
print("Difference with previous code: ({}, {})".format((outputs2 - outputs).sum(), (pooled2 - pooled).sum()))
```

As you can see above, calling the tokenizer provides a convenient way to generate all the required parameters that will go through the model.
Moreover, you might have noticed it generated some additional tensors:

- token_type_ids: This tensor maps every token to its corresponding segment (see below).
- attention_mask: This tensor is used to "mask" padded values in a batch of sequences with different lengths (see below).

```
# Single segment input
single_seg_input = tokenizer("This is a sample input")

# Multiple segment input
multi_seg_input = tokenizer("This is segment A", "This is segment B")

print("Single segment token (str): {}".format(tokenizer.convert_ids_to_tokens(single_seg_input['input_ids'])))
print("Single segment token (int): {}".format(single_seg_input['input_ids']))
print("Single segment type       : {}".format(single_seg_input['token_type_ids']))

# Segments are concatenated in the input to the model
print()
print("Multi segment token (str): {}".format(tokenizer.convert_ids_to_tokens(multi_seg_input['input_ids'])))
print("Multi segment token (int): {}".format(multi_seg_input['input_ids']))
print("Multi segment type       : {}".format(multi_seg_input['token_type_ids']))

# Padding highlight
tokens = tokenizer(
    ["This is a sample", "This is another longer sample text"],
    padding=True  # First sentence will have some PADDED tokens to match second sequence length
)

for i in range(2):
    print("Tokens (int)      : {}".format(tokens['input_ids'][i]))
    print("Tokens (str)      : {}".format([tokenizer.convert_ids_to_tokens(s) for s in tokens['input_ids'][i]]))
    print("Tokens (attn_mask): {}".format(tokens['attention_mask'][i]))
    print()
```

## Frameworks interoperability

One of the most powerful features of transformers is its ability to seamlessly move between PyTorch and TensorFlow without pain for the user. We provide some convenient methods to load TensorFlow pretrained weights inside a PyTorch model, and the opposite as well.
```
from transformers import TFBertModel, BertModel

# Let's load a BERT model for TensorFlow and PyTorch
model_tf = TFBertModel.from_pretrained('bert-base-cased')
model_pt = BertModel.from_pretrained('bert-base-cased')

# transformers generates a ready-to-use dictionary with all the required parameters for the specific framework.
input_tf = tokenizer("This is a sample input", return_tensors="tf")
input_pt = tokenizer("This is a sample input", return_tensors="pt")

# Let's compare the outputs
output_tf, output_pt = model_tf(input_tf), model_pt(**input_pt)

# The models output 2 values (the representation for each token, and the pooled representation of the input sentence)
# Here we compare the output differences between PyTorch and TensorFlow.
for name, o_tf, o_pt in zip(["output", "pooled"], output_tf, output_pt):
    print("{} differences: {:.5}".format(name, (o_tf.numpy() - o_pt.numpy()).sum()))
```

## Want it lighter? Faster? Let's talk distillation!

One of the main concerns when using these Transformer-based models is the computational power they require. All over this notebook we use the BERT model, as it can be run on common machines, but that's not the case for all of the models.

For example, Google released a few months ago **T5**, an Encoder/Decoder architecture based on the Transformer and available in `transformers` with up to 11 billion parameters. Microsoft also recently entered the game with **Turing-NLG** using 17 billion parameters. These kinds of models require tens of gigabytes just to store their weights, plus a tremendous compute infrastructure to run, which makes them impractical for most people!
![transformers-parameters](https://lh5.googleusercontent.com/NRdXzEcgZV3ooykjIaTm9uvbr9QnSjDQHHAHb2kk_Lm9lIF0AhS-PJdXGzpcBDztax922XAp386hyNmWZYsZC1lUN2r4Ip5p9v-PHO19-jevRGg4iQFxgv5Olq4DWaqSA_8ptep7)

With the goal of making Transformer-based NLP accessible to everyone, we at @huggingface developed models that take advantage of a training process called **Distillation**, which allows us to drastically reduce the resources needed to run such models with almost zero drop in performance.

Going over the whole Distillation process is out of the scope of this notebook, but if you want more information on the subject you may refer to [this Medium article written by my colleague Victor SANH, author of the DistilBERT paper](https://medium.com/huggingface/distilbert-8cf3380435b5); you might also want to have a look directly at the paper [(Sanh & al., 2019)](https://arxiv.org/abs/1910.01108).

Of course, in `transformers` we have distilled some models and made them available directly in the library!

```
from transformers import DistilBertModel

bert_distil = DistilBertModel.from_pretrained('distilbert-base-cased')
input_pt = tokenizer(
    'This is a sample input to demonstrate performance of distilled models especially inference time',
    return_tensors="pt"
)

%time _ = bert_distil(input_pt['input_ids'])
%time _ = model_pt(input_pt['input_ids'])
```

## Community provided models

Last but not least, earlier in this notebook we introduced Hugging Face `transformers` as a repository for the NLP community to exchange pretrained models. We wanted to highlight this feature and all the possibilities it offers for the end-user.

To leverage community pretrained models, just provide the organisation name and the name of the model to `from_pretrained` and it will do all the magic for you!

We currently have more than 50 models provided by the community, and more are added every day; don't hesitate to give it a try!
```
# Let's load German BERT from the Bavarian State Library
de_bert = BertModel.from_pretrained("dbmdz/bert-base-german-cased")
de_tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

de_input = de_tokenizer(
    "Hugging Face ist eine französische Firma mit Sitz in New-York.",
    return_tensors="pt"
)
print("Tokens (int)      : {}".format(de_input['input_ids'].tolist()[0]))
print("Tokens (str)      : {}".format([de_tokenizer.convert_ids_to_tokens(s) for s in de_input['input_ids'].tolist()[0]]))
print("Tokens (attn_mask): {}".format(de_input['attention_mask'].tolist()[0]))
print()

output_de, pooled_de = de_bert(**de_input)
print("Token wise output: {}, Pooled output: {}".format(output_de.shape, pooled_de.shape))
```
# Azure ML and IoT Edge Ensure we have a consistent version of the Azure ML SDK. ``` import sys ! {sys.executable} -m pip install -q --upgrade azureml-sdk[notebooks,automl,contrib]==1.5.0 from azureml.core.model import Model from azureml.core.environment import Environment import warnings warnings.filterwarnings('ignore') import logging logger = logging.getLogger() logger.setLevel(logging.CRITICAL) # Check core SDK version number import azureml.core from azureml.core import Workspace print("SDK version:", azureml.core.VERSION) ``` ## 1: Specify parameters Fill in the parameters below. If you already have IoT Hub or Azure ML workspace, then enter their information here. Otherwise, the parameter names will be used in provisioning new services. ``` # Provide the same experiment suffix used in the PyTorch training notebook. Replace *** my_nickname = *** # Provide your Azure subscription ID to provision your services subscription_id = "" # Provide your Azure ML service resource group and workspace name # If you don't have a workspace, pick a name to create a new one resource_group_name_aml = "" aml_workspace_name = "" # DO NOT CHANGE THESE VALUES for this tutorial # Enter the resource group in Azure where you want to provision the resources # or where IoT Hub exists resource_group_name_iot = "iot-aicamp-" + my_nickname # Enter Azure region where your IoT services will be provisioned, for example "eastus2" azure_region = "eastus2" # Enter your Azure IoT Hub name # If you don't have an IoT Hub, pick a name to make a new one iot_hub_name = "iothub-aicamp-" + my_nickname # Enter your IoT Edge device ID # If you don't have an IoT Edge device registered, pick a name to create a new one # This is NOT the name of your VM, but it's just an entry in your IoT Hub, so you can pick any name iot_device_id = "edge-vm-device" # Enter a name for the IoT Edge VM edge_vm_name = "edge-vm-" + my_nickname # This is the name of the AML module you deploy to the device module_name = 
"machinelearningmodule" ``` The login command below will trigger interactive login. Follow the directions printed below. ``` ! sudo az login # Just in case this is the command to update the Azure CLI # ! curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash # Load the IoT extension for Azure CLI if needed - may not work in the notebook so try in terminal if not ! sudo az extension add --name azure-cli-iot-ext ! sudo az account set --subscription $subscription_id ! sudo az group create --name $resource_group_name_iot --location $azure_region ``` ## 2: Provision IoT Hub If you already have provisioned these resources, then skip this section and go Section 3. ### 2.1 Provision an Edge VM **IMPORTANT NOTE**: Before you proceed, you must perform a one-time task to accept the terms of the data science virtual machine on your Azure subscription. You can do this by visiting [Configure Programmatic Deployment](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/LegalTermsSkuProgrammaticAccessBlade/legalTermsSkuProgrammaticAccessData/%7B%22product%22%3A%7B%22publisherId%22%3A%22microsoft_iot_edge%22%2C%22offerId%22%3A%22iot_edge_vm_ubuntu%22%2C%22planId%22%3A%22ubuntu_1604_edgeruntimeonly%22%7D%7D) ``` ! sudo az vm create --resource-group $resource_group_name_iot --name $edge_vm_name --image microsoft_iot_edge:iot_edge_vm_ubuntu:ubuntu_1604_edgeruntimeonly:latest --admin-username azureuser --generate-ssh-keys ``` If you want to SSH into this VM after setup, use the publicIpAddress with the command: `ssh azureuser@{publicIpAddress}`. To open up ports for SSH issues see <a href="https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-ubuntuvm#next-steps" target="_blank">this resource</a>. ### 2.2: Provision IoT Hub If you get an error because there's already one free hub in your subscription, change the SKU to S1. 
If you get an error that the IoT Hub name isn't available, it means that someone else already has a hub with that name, so try a different name.

```
! sudo az iot hub create --resource-group $resource_group_name_iot --name $iot_hub_name --sku F1
```

### 2.3 Register an IoT Edge device

```
# Register an IoT Edge device (create a new entry in the IoT Hub)
! sudo az iot hub device-identity create --hub-name $iot_hub_name --device-id $iot_device_id --edge-enabled
```

## 3: Load resources

Load the Azure ML workspace and get the IoT Edge device connection string from your IoT Hub.

### 3.1 Load the Azure ML workspace

```
# Initialize a workspace object from persisted configuration
from azureml.core import Workspace

ws = Workspace.from_config(path="config.json")
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
```

### 3.2: Get the Azure IoT Edge device connection string

Set the Edge connection string on the device. Instructions can be found here: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux#configure-the-security-daemon.

```
# Get the connection string that you will need to enter in the IoT Edge device
! sudo az iot hub device-identity show-connection-string --device-id $iot_device_id --hub-name $iot_hub_name
```

Replace the `HostName=...` in the following variable with the entire connection string from above.

```
# Secret!!! Don't check in to source control
conn_str = "HostName=..."

# Don't modify this part
set_cmd = "/etc/iotedge/configedge.sh '"+conn_str+"'"
print(set_cmd)

!
sudo az vm run-command invoke -g $resource_group_name_iot -n $edge_vm_name --command-id RunShellScript --script "$set_cmd"
```

## 4: PyTorch Classification Model

We've already:
- Trained the model
- Created the scoring script
- Deployed it as a service to Azure Container Instance

### 4.1 Get registered model

```
model = Model(ws, 'behavior-pytorch-'+my_nickname, version=1)
```

### 4.2 Create Docker Image

Specify the required packages for the image.

```
# This specifies the dependencies to include in the environment
from azureml.core.conda_dependencies import CondaDependencies

myenv = CondaDependencies.create(pip_packages=['azureml-defaults==1.5.0', 'torch==1.3.0', 'torchvision==0.4.1', 'Pillow==6.2.1'])

with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())

print(myenv.serialize_to_string())
```

You can add tags and descriptions to images. An image can also contain multiple models.

```
from azureml.core.image import Image, ContainerImage

image_config = ContainerImage.image_configuration(runtime="python",
                                                  execution_script="pytorch_score_iot.py",
                                                  conda_file="myenv.yml",
                                                  tags={'area': "iot", 'type': "classification", "framework": "pytorch"},
                                                  description="IoT Edge PyTorch classification model for suspicious behavior; Pillow<7")

image = Image.create(name="suspiciousbehaviorclass",
                     # this is the model object
                     models=[model],
                     image_config=image_config,
                     workspace=ws)
```

Note that the following command can take a few minutes.

```
image.wait_for_creation(show_output=True)
```

List images by tag and find the detailed build log for debugging.

```
for i in Image.list(workspace=ws, tags=["area"]):
    print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))
```

## 5: Deploy container to Azure IoT Edge device

Create a deployment.json file that contains the modules you want to deploy to the device and the routes.
Then push this file to the IoT Hub, which will then send it to the IoT Edge device. The IoT Edge agent will then pull the Docker images and run them.

```
# Getting your container details
container_reg = ws.get_details()["containerRegistry"]
reg_name = container_reg.split("/")[-1]
container_url = "\"" + image.image_location + "\","
subscription_id = ws.subscription_id
print('{}'.format(image.image_location))
print('{}'.format(reg_name))
print('{}'.format(subscription_id))

from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry

client = ContainerRegistryManagementClient(ws._auth, subscription_id)
result = client.registries.list_credentials(resource_group_name_aml, reg_name, custom_headers=None, raw=False)
username = result.username
password = result.passwords[0].value
```

The file modified below is a standard IoT Edge manifest file. This is how IoT Edge and IoT Hub know which modules to deploy down to the device (which in this case is an Azure VM running the IoT Edge Runtime).

```
file = open('iot-deployment-template.json')
contents = file.read()
contents = contents.replace('__MODULE_NAME', module_name)
contents = contents.replace('__REGISTRY_NAME', reg_name)
contents = contents.replace('__REGISTRY_USER_NAME', username)
contents = contents.replace('__REGISTRY_PASSWORD', password)
contents = contents.replace('__REGISTRY_IMAGE_LOCATION', image.image_location)

with open('./deployment.json', 'wt', encoding='utf-8') as output_file:
    output_file.write(contents)
```

The following command will tell IoT Hub to deploy the modules from images in the ACR.

```
# Push the deployment JSON to the IoT Hub
! sudo az iot edge set-modules --device-id $iot_device_id --hub-name $iot_hub_name --content deployment.json
```

## Congratulations!

You made it to the end of the tutorial!
You can monitor messages from your edge device to your IoT Hub with VS Code and the [Azure IoT Hub Toolkit](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension.

After installing the extension in VS Code, log in to your Azure account (View -> Command Palette -> "Azure: Sign in to Azure Cloud"). Select the IoT Hub via: View -> Command Palette -> "Azure IoT Hub: Set IoT Hub Connection String".

<img width="50%" src="../../assets/iot_edge_select_iot_hub_conn_str.png">

Monitor the built-in endpoint by right-clicking on the device under the Azure IoT Hub (expand this in the lower left corner) and selecting "Start Monitoring Built-in Event Endpoint" (this will monitor all messages from any device module to IoT Hub).

<img width="50%" src="../../assets/iot_edge_monitor_vscode.png">

If selecting "Start Monitoring Built-in Event Endpoint" in VS Code, the output should look like:

```
[IoTHubMonitor] Start monitoring message arrived in built-in endpoint for device [edge-vm-device] ...
[IoTHubMonitor] Created partition receiver [0] for consumerGroup [$Default]
[IoTHubMonitor] Created partition receiver [1] for consumerGroup [$Default]
[IoTHubMonitor] [7:59:45 PM] Message received from [edge-vm-device/machinelearningmodule]:
{
  "body": {
    "label": "suspicious",
    "probability": "0.5001148",
    "filename": "Walk3frame0002.jpg"
  },
  "applicationProperties": {
    "AzureMLResponse": "OK"
  }
}
[IoTHubMonitor] [7:59:45 PM] Message received from [edge-vm-device/machinelearningmodule]:
{
  "body": {
    "label": "suspicious",
    "probability": "0.5",
    "filename": "Browse_WhileWaiting2frame0000.jpg"
  },
  "applicationProperties": {
    "AzureMLResponse": "OK"
  }
}
```
```
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import NearestNeighbors
from sklearn.datasets import make_swiss_roll, make_s_curve
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as p3

plt.style.use('ggplot')
%matplotlib inline
```

This notebook walks through the steps of the Laplacian Eigenmaps (LE) algorithm. It is a step-by-step walkthrough of the algorithm, and towards the end of the notebook I will work on some commonly known speed-up attempts.

```
seed = 123
rng = np.random.seed(123)

n_samples = 1500
noise = 0.1
random_state = seed

data, color = make_swiss_roll(n_samples=n_samples, noise=noise, random_state=random_state)
data, color = make_s_curve(n_samples=n_samples, noise=noise, random_state=random_state)
```

\begin{align} g &= \int_a^b f(x)dx \label{eq1} \\ a &= b + c \label{eq2} \end{align}

See (\ref{eq1})

```
fig = plt.figure()
ax = p3.Axes3D(fig)
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c=color, cmap=plt.cm.Spectral)
ax.set_title("Original Data")
plt.show()
```

### Laplacian Eigenmaps (Sklearn)

```
%%time
n_components = 2
affinity = 'nearest_neighbors'
n_neighbors = 10
n_jobs = -1
random_state = 123
eigen_solver = 'arpack'

# initialize le model
le_model = SpectralEmbedding(
    n_components=n_components,
    affinity=affinity,
    n_neighbors=n_neighbors,
    n_jobs=n_jobs,
    random_state=random_state,
    eigen_solver=eigen_solver
)

# fit and transform data
embedding = le_model.fit_transform(data)

fig, ax = plt.subplots(figsize=(10, 6))
ax.spy(le_model.affinity_matrix_, markersize=1.0)
plt.show()

print(embedding.shape)

fig, ax = plt.subplots()
ax.scatter(embedding[:, 0], embedding[:, 1], c=color)
ax.set_title('Projected Data')
plt.show()
```

## Adjacency Matrix Construction

### Nearest Neighbours Search

```
# some baseline parameters
n_neighbors = 10
algorithm = 'brute'
metric = 'euclidean'
p = 2
n_jobs = -1

# initialize nn model
nn_model = NearestNeighbors(
    n_neighbors=n_neighbors,
    metric=metric,
    algorithm=algorithm,
    p=p,
    n_jobs=n_jobs
)

# fit nn model to data
nn_model.fit(data)

# grab distances and indices
dists, indices = nn_model.kneighbors(
    data,
    n_neighbors=n_neighbors,
    return_distance=True
)
```

### Weighted Distances

```
# Heat kernel
def heat_kernel(distances, length_scale=None):
    if length_scale is None:
        length_scale = 1.0
        # length_scale = np.sqrt(distances.shape[1] / 2.0)
        # length_scale = 1.0 / distances.shape[1]
    # return np.exp(- length_scale * distances**2)
    return np.exp(- distances**2 / length_scale)

# transform distances with heat kernel
dists = heat_kernel(dists)
```

### Construct Graph

```
from scipy.sparse import csr_matrix

# Construct sparse KNN Graph
n_samples = data.shape[0]
indptr = np.arange(0, n_samples * n_neighbors + 1, n_neighbors)
adjacency_matrix = csr_matrix(
    (dists.ravel(), indices.ravel(), indptr),
    shape=(n_samples, n_samples)
)

# ensure that it is symmetric
adjacency_matrix = 0.5 * (adjacency_matrix + adjacency_matrix.T)
```

#### Peek at Adjacency Matrix

```
fig, ax = plt.subplots()
ax.spy(adjacency_matrix, markersize=1.0)
ax.set_title('Adjacency Matrix', pad=15.0)
plt.show()
```

## Laplacian Matrix

Some notes about some different Laplacian matrices:

**Unnormalized Graph Laplacian** $$L=D-W$$

**Symmetric Normalized Graph Laplacian** $$L_{Sym}=D^{-\frac{1}{2}}LD^{-\frac{1}{2}}$$

**Random Walk Normalized Laplacian** $$L_{rw}=D^{-1}L$$

**Random Walk Transition Matrix** $$L_{rwt}=D^{-1}W$$

### Laplacian and Degree Matrix

```
from scipy.sparse import identity, spdiags

def graph_laplacian(adjacency_matrix, graph_type='normalized', return_diag=True):
    n_samples = adjacency_matrix.shape[0]

    # Get degree vector
    degree = np.array(adjacency_matrix.sum(axis=1)).squeeze()

    # Create sparse matrix for degree
    degree_mat = spdiags(
        degree,
        diags=0,
        m=n_samples,
        n=n_samples
    )

    if graph_type in ['unnormalized']:
        # L = D - W
        laplacian = degree_mat - adjacency_matrix
    elif graph_type in ['normalized']:
        # L_sym = I - D^{-1/2} W D^{-1/2}
        norm = spdiags(1.0 / np.sqrt(degree), diags=0, m=n_samples, n=n_samples)
        laplacian = identity(n_samples) - norm @ adjacency_matrix @ norm
    else:
        raise ValueError(f'Unrecognized graph_type: {graph_type}')

    return laplacian, degree_mat
%%time
laplacian, degree = graph_laplacian(adjacency_matrix, graph_type='unnormalized')

# print(degree)
print(degree.shape)
print(laplacian.diagonal().shape)

laplacian, degree = graph_laplacian(adjacency_matrix, graph_type='normalized', return_diag=True)
print(laplacian.shape, degree.shape)

fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].spy(laplacian, markersize=1.0)
ax[0].set_title('Laplacian Matrix', pad=15.0)
ax[1].spy(degree, markersize=1.0)
ax[1].set_title('Degree Matrix', pad=15.0)
plt.show()
```

## Eigenvalue Decomposition

| Algorithm | Laplacian Equation | Generalized Eigenvalue | Standard Eigenvalue |
|:----------------------:|-------------------------|:----------------------:|---------------------|
| Unnormalized Laplacian | $$L=D-W$$ | | |
| Normalized Laplacian | $$L_{Sym}=D^{-1/2}LD^{-1/2}$$ | 2.75 | |
| Random Walk | | 2.03 | |
| ReNormalized | | 0.64 | |
| Geometric | | | |

### Generalized Eigenvalue Solver

```
%%time
from scipy.sparse.linalg import eigsh

# Flip the sign so that the shift-invert solver targets the smallest eigenvalues
laplacian_sol = -1 * laplacian

n_components = 2
solver = 'LM'   # largest magnitude of the shift-inverted operator
sigma = 1.0
eigen_tol = 0.0
v0 = np.random.uniform(-1, 1, laplacian.shape[0])

eigenvalues, eigenvectors = eigsh(
    laplacian_sol,
    k=n_components,
    which=solver,
    sigma=sigma,
    tol=eigen_tol,
    v0=v0
)

# Transform eigenvectors
embedding = eigenvectors.T[n_components::-1]

print(eigenvalues.shape, eigenvectors.shape)

fig, ax = plt.subplots()
ax.scatter(embedding[0, :], embedding[1, :], c=color)
ax.set_title('Projected Data')
plt.show()
```
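As a quick sanity check on the Laplacian definitions above: for any undirected graph, the rows of the unnormalized Laplacian L = D - W sum to zero, so the constant vector lies in its null space and the smallest eigenvalue of both L and L_sym is 0. A minimal self-contained sketch, using a tiny hand-built path graph rather than the swiss-roll data above:

```python
import numpy as np

# A tiny undirected path graph on 4 nodes, as a dense adjacency matrix
W = np.array([
    [0., 1., 0., 0.],
    [1., 0., 1., 0.],
    [0., 1., 0., 1.],
    [0., 0., 1., 0.],
])

degree = W.sum(axis=1)
D = np.diag(degree)

# Unnormalized Laplacian: L = D - W
L = D - W

# Symmetric normalized Laplacian: L_sym = I - D^{-1/2} W D^{-1/2}
D_inv_sqrt = np.diag(1.0 / np.sqrt(degree))
L_sym = np.eye(4) - D_inv_sqrt @ W @ D_inv_sqrt

# Rows of L sum to zero, so the constant vector is in its null space
print(L @ np.ones(4))  # -> [0. 0. 0. 0.]

# The smallest eigenvalue of both Laplacians is (numerically) 0
print(np.linalg.eigvalsh(L)[0])      # ~ 0
print(np.linalg.eigvalsh(L_sym)[0])  # ~ 0
```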
# Goal

* test dataset construction & evaluation for Struo2

# Var

```
samps_dir = '/ebio/abt3_projects/software/dev/struo2/tests/samples/'
work_dir = '/ebio/abt3_projects/software/dev/struo2/tests/data/'
meta_file = '/ebio/abt3_projects/databases_no-backup/GTDB/release95/Struo/'
meta_file = file.path(meta_file, 'metadata_1per-GTDB-Spec_gte50comp-lt5cont_wtaxID_wPath.tsv')
```

# Init

```
library(dplyr)
library(tidyr)
library(ggplot2)
library(LeyLabRMisc)
library(uuid)

df.dims()
```

# Test samples

```
# reading in GTDB metadata file used for Struo
meta = read.delim(meta_file, sep='\t')
meta
```

## n50_Pseudomonas

```
# selecting subset
meta_n50 = meta %>%
    filter(grepl('Pseudomonas', gtdb_taxonomy)) %>%
    sample_n(50)
meta_n50

outF = file.path(samps_dir, 'GTDBr95_n50_Pseudomonas.tsv')
write_table(meta_n50,
outF)
```

## n10

```
# selecting subset
meta_n10 = meta %>%
    sample_n(10)
meta_n10

outF = file.path(samps_dir, 'GTDBr95_n10.tsv')
write_table(meta_n10, outF)
```

## n5

```
# selecting subset
meta_n5 = meta %>%
    anti_join(meta_n10, c('accession')) %>%
    sample_n(5)
meta_n5

outF = file.path(samps_dir, 'GTDBr95_n5.tsv')
write_table(meta_n5, outF)
```

# Gene list

* A list of genes to add to an existing gene database
* Columns required in gene metadata table:

```
'seq_uuid', 'seq_orig_name',
'domain', 'phylum', 'class', 'order', 'family', 'genus', 'species', 'taxid',
'genome_name', 'genome_length_bp'
```

```
F = file.path(work_dir, 'clusters_rep-seqs_tmp.txt')
meta = read.delim(F, sep='\t', header=FALSE)
meta

meta_f = meta %>%
    rename('seq_orig_name' = V1) %>%
    mutate(seq_uuid = gsub('-', '', UUIDgenerate())) %>%
    mutate(domain = '',
           phylum = '',
           class = '',
           order = '',
           family = '',
           genus = gsub('(.+)\\.s__.+', '\\1', V3),
           species = gsub('.+\\.(s__.+)', '\\1', V3),
           taxid = gsub('.+__taxID(.+)', '\\1', V3),
           genome_name = '',
           genome_length_bp = '') %>%
    mutate(species = gsub('__taxID.+', '', species),
           taxid = ifelse(grepl('^[0-9]+$', taxid), taxid, '')) %>%
    dplyr::select(seq_uuid, seq_orig_name,
                  domain, phylum, class, order, family, genus, species, taxid,
                  genome_name, genome_length_bp)
meta_f

# writing
outF = file.path(work_dir, 'clusters_rep-seqs.txt')
write_table(meta_f, outF)
```
# PyBo's Tokenizer

### 0. Running the tokenizer

```
from pybo import BoTokenizer
```

Instantiate the tokenizer with the 'POS' profile (see [profile documentation](this.file)):

```
tokenizer = BoTokenizer('POS')
```

Given a random text in the Tibetan language,

```
input_str = '༆ ཤི་བཀྲ་ཤིས་ tr བདེ་་ལེ གས། བཀྲ་ཤིས་བདེ་ལེགས་༡༢༣ཀཀ། མཐའི་རྒྱ་མཚོར་གནས་པའི་ཉས་ཆུ་འཐུང་།། །།མཁའ།'
```

let's see what information can be derived from it.

```
tokens = tokenizer.tokenize(input_str)
print(f'The output is a {type(tokens)}.\nThe constituting elements are {type(tokens[0])}s.')
```

Tokenizing without separating affixed particles is also possible:

```
not_split = tokenizer.tokenize(input_str, split_affixes=False)
```

### 1. A first look

#### Non-Tibetan tokens

First thing, I see there is non-Tibetan content in the middle of the input string. Let's see how I can detect it.

```
for n, token in enumerate(tokens):
    if token.type == 'non-bo':
        content = token.content
        print(f'"{content}", token number {n+1}, is not Tibetan.')
        start = token.start
        length = token.len
        print(f'this starts at the {start}th character in the input and spans {length} characters')
```

#### Tokens that are not words

Is there any Tibetan punctuation?

```
for n, token in enumerate(tokens):
    if token.type == 'punct':
        content = token.content
        print(f'"{content}", token number {n+1}, is a punctuation token.')
```

How are the Tibetan digits treated?

```
for n, token in enumerate(tokens):
    if token.type == 'num':
        content = token.content
        print(f'"{content}", token number {n+1}, is a numeral.')
```

#### Splitting affixed particles or not:

```
print(f'splitting them: {tokens[11].content}, {tokens[12].content}')
print(f'keeping them together: {not_split[11].content}')
```

### 2.
The attributes of tokens

Strictly speaking, a token is a word that has been correctly extracted from the input string, but our Token objects carry much more information waiting to be exploited by NLP treatments:

#### Token.content – the unmodified content straight from the input string

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t "{token.content}"')
```

#### Token.type – the basic types of tokens

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t{token.type}\t("{token.content}")')
```

- syl: contains valid Tibetan syllables
- num: Tibetan numerals
- punct: Tibetan punctuation
- non-bo: non-Tibetan content

#### Token.pos – Part of Speech

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t{token.pos}\t\t("{token.content}")')
```

- NOUN: Tibetan noun
- VERB: Tibetan verb
- PART: case particle (affixed or not)
- oov: Tibetan word for which no POS was found
- non-word: a sequence of Tibetan letters that does not appear in our list of words
- punct: Tibetan punctuation
- num: Tibetan numerals
- non-bo: non-Tibetan characters (spaces have a special treatment)

#### Token.tag – Token.pos augmented with morphological information on affixed particles

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t{token.tag}\t\t("{token.content}")')
```

- la: the ladon (ལ་དོན་) particle was affixed to the previous token
- gi: the dreldra (འབྲེལ་སྒྲ་) particle was affixed
- gis: the jedra (བྱེད་སྒྲ་) particle was affixed

note: The runic character "ᛃ" is used as a separator because we assume it won't ever appear besides Tibetan text.

#### Token.lemma – The current word in its canonical form

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t"{token.lemma}"\t\t("{token.content}")')
```

Only some tokens have content in this attribute; the others have an empty string.
Token 13 is a ladon (ལ་དོན་) particle that is affixed, so its lemma is the canonical form of this case particle: "ལ་". The same goes for tokens 15 and 17. The final འ is reconstructed where necessary (token 12).

When we have a lemma for a given word in our list, we provide it, as for token 5; otherwise, we chose to give the normalized version of the content, as in token 10.

#### Token.cleaned_content – the normalized form of Token.content

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t"{token.cleaned_content}"\t\t("{token.content}")')
```

1. The different Unicode spaces and tabs are removed,
2. Non-breaking tseks are replaced with regular tseks,
3. Tseks are added at the end of every syllable (not at the end of every token).

See for example in token 5 that the double tsek is reduced and that a tsek is added at the end of the second syllable. On the other hand, tokens 12, 14 and 16 don't end with a tsek since their last syllable ends in the following token.

note: as of now, the normalization of punctuation is not implemented.

#### Token.unaffixed_word – Token.cleaned_content augmented with the འ reinsertion

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    print(f'{n+1}.\t"{token.unaffixed_word}"\t\t("{token.content}")')
```

When tokens contain an affixed particle, the unaffixed form is reconstructed. འ is reinserted in token 12, but not in tokens 14, 16 or 18.
This also works when we choose not to separate affixed particles from their host word:

```
for n, token in enumerate(not_split):
    print(f'{n+1}.\t"{token.unaffixed_word}"\t\t("{token.content}")')
```

#### Token.affix & Token.affixed – Host word and its affixed particle

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    if token.affix:  # boolean value: True
        print(f'{n+1}.\tAffix\t\t("{token.content}")')
    elif token.affixed:
        print(f'{n+1}.\tHost\t\t("{token.content}")')
    else:
        print(f'{n+1}.\t\t\t("{token.content}")')
```

#### Token.aa_word – Signals words that end with འ

```
print(f'"{input_str}"\n')
for n, token in enumerate(tokens):
    if token.aa_word:  # boolean value: True
        print(f'{n+1}.\tTrue\t\t("{token.content}")')
    else:
        print(f'{n+1}.\t\t\t("{token.content}")')
```

note: This is currently not detected in words not containing affixed particles, such as token 23.

#### Token.syls – Individual syllables of every token

```
print(f'"{input_str}"\n')
for n, token in enumerate(not_split):
    print(f'{n+1}.\t{token.syls}\t\t\t\t("{token.content}")')
```

Tokens containing no syllable have "None" as the value of this attribute. For the others, every syllable is represented as a list of indices. The indices are relative to the beginning of the current token (the Token.start attribute). Each index corresponds to a letter of the syllable (spaces and tseks are omitted).
Here is how we can make use of them to get a cleaned syllable using this attribute and the original string (input_str):

```
for n, token in enumerate(not_split):
    if token.syls:
        syls_in_list_of_chars = []
        for s in token.syls:
            syls_in_list_of_chars.append([input_str[token.start + a] for a in s])
        syls_in_list = [''.join(a) for a in syls_in_list_of_chars]
        clean_content = '་'.join(syls_in_list) + '་'
        print(f'{n+1}.\t{clean_content}\t\t<- {syls_in_list}\t\t<- {syls_in_list_of_chars}')
    else:
        print(f'{n+1}.')
```

#### Token.char_types – General categorization of characters

```
for n, token in enumerate(tokens):
    print(f'{n+1}.', end=' ')
    for m, t in enumerate(token.char_types):
        print(f"'{token.content[m]}':{t}", end=', ')
    print()
```
# Basic plotting with Bokeh

> This chapter provides an introduction to basic plotting with Bokeh. You will create your first plots, learn about different data formats Bokeh understands, and make visual customizations for selections and mouse hovering. This is the Summary of lecture "Interactive Data Visualization with Bokeh", via datacamp.

- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp, Visualization]
- image:

<link href="https://cdn.pydata.org/bokeh/release/bokeh-0.12.13.min.css" rel="stylesheet" type="text/css">
<script src="https://cdn.pydata.org/bokeh/release/bokeh-0.12.13.min.js"></script>

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from bokeh.io import output_file, show
from bokeh.plotting import figure
from IPython.display import HTML

plt.rcParams['figure.figsize'] = (10, 5)
```

## Plotting with glyphs

### A simple scatter plot

In this example, you're going to make a scatter plot of female literacy vs fertility using data from the [European Environmental Agency](http://www.eea.europa.eu/data-and-maps/figures/correlation-between-fertility-and-female-education). This dataset highlights that countries with low female literacy have high birthrates.

The x-axis data has been loaded for you as fertility and the y-axis data has been loaded as female_literacy. Your job is to create a figure, assign x-axis and y-axis labels, and plot female_literacy vs fertility using the circle glyph.

After you have created the figure, in this exercise and the ones to follow, play around with it! Explore the different options available to you on the tab to the right, such as "Pan", "Box Zoom", and "Wheel Zoom". You can click on the question mark sign for more details on any of these tools.

Note: You may have to scroll down to view the lower portion of the figure.
```
eea = pd.read_csv('./dataset/literacy_birth_rate.csv')
fertility = eea['fertility'].tolist()
female_literacy = eea['female literacy']

# Create the figure: p
p = figure(x_axis_label='fertility (children per woman)', y_axis_label='female_literacy (% population)')

# Add a circle glyph to the figure p
p.circle(x=fertility, y=female_literacy)

# Call the output_file() function and specify the name of the file
output_file('./html/fert_lit.html')
show(p)

HTML(filename="./html/fert_lit.html")
```

### A scatter plot with different shapes

By calling multiple glyph functions on the same figure object, we can overlay multiple data sets in the same figure. In this exercise, you will plot female literacy vs fertility for two different regions, Africa and Latin America.

Each set of x and y data has been loaded separately for you as `fertility_africa`, `female_literacy_africa`, `fertility_latinamerica`, and `female_literacy_latinamerica`. Your job is to plot the Latin America data with the `circle()` glyph, and the Africa data with the `x()` glyph.
```
fertility_africa = eea[eea['Continent'] == 'AF']['fertility'].astype(float).tolist()
fertility_latinamerica = eea[eea['Continent'] == 'LAT']['fertility'].astype(float).tolist()
female_literacy_africa = eea[eea['Continent'] == 'AF']['female literacy'].astype(float).tolist()
female_literacy_latinamerica = eea[eea['Continent'] == 'LAT']['female literacy'].astype(float).tolist()

# Create the figure: p
p = figure(x_axis_label='fertility (children per woman)', y_axis_label='female_literacy (% population)')

# Add a circle glyph to the figure p
p.circle(x=fertility_latinamerica, y=female_literacy_latinamerica)

# Add an x glyph to the figure p
p.x(x=fertility_africa, y=female_literacy_africa)

# Specify the name of the file
output_file('./html/fert_lit_separate.html')
show(p)

HTML(filename="./html/fert_lit_separate.html")
```

### Customizing your scatter plots

The three most important arguments to customize scatter glyphs are `color`, `size`, and `alpha`. Bokeh accepts colors as hexadecimal strings, tuples of RGB values between 0 and 255, and any of the [147 CSS color names](http://www.colors.commutercreative.com/grid/). Size values are supplied in screen space units with 100 meaning the size of the entire figure. The `alpha` parameter controls transparency. It takes in floating point numbers between 0.0, meaning completely transparent, and 1.0, meaning completely opaque.
```
# Create the figure: p
p = figure(x_axis_label='fertility (children per woman)', y_axis_label='female_literacy (% population)')

# Add a blue circle glyph to the figure p
p.circle(fertility_latinamerica, female_literacy_latinamerica, color='blue', alpha=0.8, size=10)

# Add a red circle glyph to the figure p
p.circle(fertility_africa, female_literacy_africa, color='red', alpha=0.8, size=10)

# Specify the name of the file
output_file('./html/fert_lit_separate_colors.html')

# Display the plot
show(p)

HTML(filename='./html/fert_lit_separate_colors.html')
```

## Additional glyphs

- Patches
    - Useful for showing geographic regions
    - Data given as "list of lists"

### Lines

We can draw lines on Bokeh plots with the `line()` glyph function. In this exercise, you'll plot the daily adjusted closing price of Apple Inc.'s stock (AAPL) from 2000 to 2013.

The data points are provided for you as lists. `date` is a list of datetime objects to plot on the x-axis and `price` is a list of prices to plot on the y-axis. Since we are plotting dates on the x-axis, you must add `x_axis_type='datetime'` when creating the figure object.

```
aapl = pd.read_csv('./dataset/aapl.csv', index_col=0)
date = pd.to_datetime(aapl['date']).tolist()
price = aapl['price'].tolist()

# Create a figure with x_axis_type='datetime': p
p = figure(x_axis_type='datetime', x_axis_label='Date', y_axis_label='US Dollars')

# Plot date along the x axis and price along the y axis
p.line(date, price)

# Specify the name of the output file and show the result
output_file('./html/line.html')
show(p)

HTML(filename='./html/line.html')
```

### Lines and markers

Lines and markers can be combined by plotting them separately using the same data points. In this exercise, you'll plot a line and circle glyph for the AAPL stock prices. Further, you'll adjust the `fill_color` keyword argument of the `circle()` glyph function while leaving the line_color at the default value.
```
# Create a figure with x_axis_type='datetime': p
p = figure(x_axis_type='datetime', x_axis_label='Date', y_axis_label='US Dollars')

# Plot date along the x-axis and price along the y-axis
p.line(date, price)

# With date on the x-axis and price on the y-axis, add a white circle glyph of size 4
p.circle(date, price, fill_color='white', size=4)

# Specify the name of the output file and show the result
output_file('./html/line2.html')
show(p)

HTML(filename='./html/line2.html')
```

### Patches

In Bokeh, extended geometrical shapes can be plotted by using the `patches()` glyph function. The patches glyph takes as input a list-of-lists collection of numeric values specifying the vertices in x and y directions of each distinct patch to plot.

In this exercise, you will plot the state borders of Arizona, Colorado, New Mexico and Utah. The latitude and longitude vertices for each state have been prepared as lists.

```
az = pd.read_csv('./dataset/az.csv')
co = pd.read_csv('./dataset/co.csv')
nm = pd.read_csv('./dataset/nm.csv')
ut = pd.read_csv('./dataset/ut.csv')

az_lats, az_lons = az['lats'].tolist(), az['lons'].tolist()
co_lats, co_lons = co['lats'].tolist(), co['lons'].tolist()
nm_lats, nm_lons = nm['lats'].tolist(), nm['lons'].tolist()
ut_lats, ut_lons = ut['lats'].tolist(), ut['lons'].tolist()

p = figure()

# Create a list of az_lons, co_lons, nm_lons and ut_lons: x
x = [az_lons, co_lons, nm_lons, ut_lons]

# Create a list of az_lats, co_lats, nm_lats, ut_lats: y
y = [az_lats, co_lats, nm_lats, ut_lats]

# Add patches to figure p with line_color=white for x and y
p.patches(x, y, line_color='white')

# Specify the name of the output file and show the result
output_file('./html/four_corners.html')
show(p)

HTML(filename='./html/four_corners.html')
```

## Data formats

- Column Data Source
    - Common fundamental data structure for Bokeh
    - Maps string column names to sequences of data
    - Often created automatically for you
    - Can be shared between glyphs to link selections
    - Extra
columns can be used with hover tooltips

### Plotting data from NumPy arrays

In the previous exercises, you made plots using data stored in lists. You learned that Bokeh can plot both numbers and datetime objects.

In this exercise, you'll generate NumPy arrays using `np.linspace()` and `np.cos()` and plot them using the circle glyph.

`np.linspace()` is a function that returns an array of evenly spaced numbers over a specified interval. For example, `np.linspace(0, 10, 5)` returns an array of 5 evenly spaced samples calculated over the interval [0, 10]. `np.cos(x)` calculates the element-wise cosine of some array `x`. For more information on NumPy functions, you can refer to the [NumPy User Guide](https://docs.scipy.org/doc/numpy/user/index.html#user) and [NumPy Reference](https://docs.scipy.org/doc/numpy/reference/index.html).

```
# Create array using np.linspace: x
x = np.linspace(0, 5, 100)

# Create array using np.cos: y
y = np.cos(x)

# Add circles at x and y
p = figure()
p.circle(x, y)

# Specify the name of the output file and show the result
output_file('./html/numpy.html')
show(p)

HTML(filename='./html/numpy.html')
```

### Plotting data from Pandas DataFrames

You can create Bokeh plots from Pandas DataFrames by passing column selections to the glyph functions. Bokeh can plot floating point numbers, integers, and datetime data types. In this example, you will read a CSV file containing information on 392 automobiles manufactured in the US, Europe and Asia from 1970 to 1982.

Your job is to plot miles-per-gallon (`mpg`) vs horsepower (`hp`) by passing Pandas column selections into the `p.circle()` function. Additionally, each glyph will be colored according to values in the `color` column.
```
df = pd.read_csv('./dataset/auto-mpg.csv')

# Create the figure: p
p = figure(x_axis_label='HP', y_axis_label='MPG')

# Plot mpg vs hp by color
p.circle(x=df['hp'], y=df['mpg'], color=df['color'], size=10)

# Specify the name of the output file and show the result
output_file('./html/auto_df.html')
show(p)

HTML(filename='./html/auto_df.html')
```

### The Bokeh ColumnDataSource (continued)

You can create a `ColumnDataSource` object directly from a Pandas DataFrame by passing the DataFrame to the class initializer.

In this exercise, we have imported pandas as `pd` and read in a data set containing all Olympic medals awarded in the 100 meter sprint from 1896 to 2012. A `color` column has been added indicating the CSS colorname we wish to use in the plot for every data point.

Your job is to import the `ColumnDataSource` class, create a new `ColumnDataSource` object from the DataFrame `df`, and plot circle glyphs with `'Year'` on the x-axis and `'Time'` on the y-axis. Color each glyph by the `color` column.

```
df = pd.read_csv('./dataset/sprint.csv')

from bokeh.plotting import ColumnDataSource

# Create the figure: p
p = figure(x_axis_label='Year', y_axis_label='Time')

# Create a ColumnDataSource from df: source
source = ColumnDataSource(df)

# Add circle glyphs to the figure p
p.circle(x='Year', y='Time', source=source, color='color', size=8)

# Specify the name of the output file and show the result
output_file('./html/sprint.html')
show(p)

HTML('./html/sprint.html')
```

## Customizing glyphs

### Selection and non-selection glyphs

In this exercise, you're going to add the box_select tool to a figure and change the selected and non-selected circle glyph properties so that selected glyphs are red and non-selected glyphs are transparent blue. You'll use the `ColumnDataSource` object of the Olympic Sprint dataset you made in the last exercise. It is provided to you with the name source.
After you have created the figure, be sure to experiment with the Box Select tool you added! As in previous exercises, you may have to scroll down to view the lower portion of the figure.

> Note: `ColumnDataSource` can handle only one dataframe in the current doc, so it needs to be re-assigned every time.

```
# Create a ColumnDataSource from df: source
source = ColumnDataSource(df)

# Create a figure with the "box_select" tool: p
p = figure(x_axis_label='Year', y_axis_label='Time', tools='box_select')

# Add circle glyphs to the figure p with the selected and non-selected properties
p.circle(x='Year', y='Time', source=source, selection_color='red', nonselection_alpha=0.1)

# Specify the name of the output file and show the result
output_file('./html/selection_glyph.html')
show(p)

HTML('./html/selection_glyph.html')
```

### Hover glyphs

Now let's practice using and customizing the hover tool. In this exercise, you're going to plot the blood glucose levels for an unknown patient. The blood glucose levels were recorded every 5 minutes on October 7th starting at 3 minutes past midnight. The date and time of each measurement are provided to you as `x` and the blood glucose levels in mg/dL are provided as `y`.

Your job is to add a circle glyph that will appear red when the mouse is hovered near the data points. You will also add a customized hover tool object to the plot.

When you're done, play around with the hover tool you just created! Notice how the points where your mouse hovers over turn red.
```
df = pd.read_csv('./dataset/glucose.csv')
x = pd.to_datetime(df['datetime'])
y = df['glucose']

from bokeh.models import HoverTool

# Create figure
p = figure(x_axis_label='date', y_axis_label='glucose levels (mg/dL)')

# Add circle glyphs to figure p
p.circle(x, y, size=10, fill_color='grey', alpha=0.1, line_color=None,
         hover_fill_color='firebrick', hover_alpha=0.5, hover_line_color='white')

# Create a HoverTool: hover
hover = HoverTool(tooltips=None, mode='vline')

# Add the hover tool to the figure p
p.add_tools(hover)

# Specify the name of the output file and show the result
output_file('./html/hover_glyph.html')
show(p)

HTML('./html/hover_glyph.html')
```

### Colormapping

The final glyph customization we'll practice is using the `CategoricalColorMapper` to color each glyph by a categorical property. Here, you're going to use the automobile dataset to plot miles-per-gallon vs weight and color each circle glyph by the region where the automobile was manufactured.

The `origin` column will be used in the ColorMapper to color automobiles manufactured in the US as blue, Europe as red and Asia as green.

```
df = pd.read_csv('./dataset/auto-mpg.csv')

from bokeh.models import CategoricalColorMapper

# Create a figure: p
p = figure(x_axis_label='Weight', y_axis_label='MPG')

# Convert df to a ColumnDataSource: source
source = ColumnDataSource(df)

# Make a CategoricalColorMapper object: color_mapper
color_mapper = CategoricalColorMapper(factors=['Europe', 'Asia', 'US'],
                                      palette=['red', 'green', 'blue'])

# Add a circle glyph to the figure p
p.circle('weight', 'mpg', source=source,
         color=dict(field='origin', transform=color_mapper),
         legend_field='origin')

# Specify the name of the output file and show the result
output_file('./html/colormap.html')
show(p)

HTML('./html/colormap.html')
```
# Propensity to Buy

Company XYZ is into creating productivity apps on the cloud. Their apps are quite popular across the industry spectrum - large enterprises, small and medium companies, and startups all use them. A big challenge for their sales team is knowing whether a product is ready to be bought by a customer. The products can take anywhere from 3 months to a year to be created/updated. Given the current state of a product, the sales team wants to know if customers will be ready to buy.

They have anonymized data from various apps - and know whether customers have bought the product or not. Can you help the enterprise sales team in this initiative?

# 1. Frame

The first step is to convert the business problem into an analytics problem.

The sales team wants to know if a customer will buy the product, given its current development stage. This is a **propensity to buy** model. It is a classification problem, and the preferred output is the propensity of the customer to buy the product.

# 2. Acquire

The IT team has provided the data in a csv format.
The file has the following fields:

`still_in_beta` - Is the product still in beta
`bugs_solved_3_months` - Number of bugs solved in the last 3 months
`bugs_solved_6_months` - Number of bugs solved in the last 6 months
`bugs_solved_9_months` - Number of bugs solved in the last 9 months
`num_test_accounts_internal` - Number of test accounts internal teams have
`time_needed_to_ship` - Time needed to ship the product
`num_test_accounts_external` - Number of customers who have a test account
`min_installations_per_account` - Minimum number of installations a customer needs to purchase
`num_prod_installations` - Current number of installations that are in production
`ready_for_enterprise` - Is the product ready for large enterprises
`perf_dev_index` - The development performance index
`perf_qa_index` - The QA performance index
`sev1_issues_outstanding` - Number of severity 1 bugs outstanding
`potential_prod_issue` - Is there a possibility of a production issue
`ready_for_startups` - Is the product ready for startups
`ready_for_smb` - Is the product ready for small and medium businesses
`sales_Q1` - Sales of product in the last quarter
`sales_Q2` - Sales of product 2 quarters ago
`sales_Q3` - Sales of product 3 quarters ago
`sales_Q4` - Sales of product 4 quarters ago
`saas_offering_available` - Is a SaaS offering available
`customer_bought` - Did the customer buy the product

**Load the required libraries**

```
#code here
```

**Load the data**

```
#code here
#train = pd.read_csv
```

# 3. Refine

```
# View the first few rows

# What are the columns

# What are the column types?

# How many observations are there?

# View summary of the raw data

# Check for missing values. If they exist, treat them
```

# 4. Explore

```
# Single variate analysis

# histogram of target variable

# Bi-variate analysis
```

# 5. Transform

```
# encode the categorical variables
```

# 6. Model

```
# Create train-test dataset

# Build decision tree model - depth 2

# Find accuracy of model

# Visualize decision tree

# Build decision tree model - depth none

# find accuracy of model

# Build random forest model

# Find accuracy of model

# Bonus: Do cross-validation
```
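The modeling steps above can be sketched with scikit-learn. The frame below is synthetic stand-in data (the real anonymized columns and file are not shown here), so the skeleton is concrete but the numbers mean nothing:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the anonymized app data (assumption: real data has more columns)
rng = np.random.RandomState(0)
train = pd.DataFrame({
    'bugs_solved_3_months': rng.randint(0, 100, 500),
    'sev1_issues_outstanding': rng.randint(0, 10, 500),
    'still_in_beta': rng.randint(0, 2, 500),
})
train['customer_bought'] = (train['bugs_solved_3_months'] > 50).astype(int)

X = train.drop(columns='customer_bought')
y = train['customer_bought']

# Create train-test dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Build decision tree model - depth 2 - and find its accuracy
tree = DecisionTreeClassifier(max_depth=2, random_state=42).fit(X_train, y_train)
acc_tree = tree.score(X_test, y_test)

# Build random forest model; predict_proba gives the propensity to buy
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
acc_forest = forest.score(X_test, y_test)
propensity = forest.predict_proba(X_test)[:, 1]  # probability of customer_bought == 1

# Bonus: cross-validation
cv_scores = cross_val_score(forest, X, y, cv=5)
```

The `predict_proba` column is the deliverable the framing step asked for: a propensity score per customer rather than a hard yes/no label.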
# EventVestor: Dividend Announcements

In this notebook, we'll take a look at EventVestor's *Cash Dividend Announcement* dataset, available on the [Quantopian Store](https://www.quantopian.com/store). This dataset spans January 01, 2007 through the current day, and documents cash dividend announcements, including special dividends.

## Notebook Contents

There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.

- <a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
- <a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.

### Free samples and limits

One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.

There is a *free* version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.

To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase access to the full set.

With preamble in place, let's get started:

<a id='interactive'></a>
#Interactive Overview
### Accessing the data with Blaze and Interactive on Research

Partner datasets are available on Quantopian Research through an API service known as [Blaze](http://blaze.pydata.org). Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.

Blaze provides an important function for accessing these datasets.
Some of these sets are many millions of records. Bringing that data directly into Quantopian Research is just not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.

It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then use Pandas for further computation, manipulation and visualization.

Helpful links:
* [Query building for Blaze](http://blaze.readthedocs.io/en/latest/queries.html)
* [Pandas-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-pandas.html)
* [SQL-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-sql.html).

Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:

> `from odo import odo`
> `odo(expr, pandas.DataFrame)`

###To see how this data can be used in your algorithm, search for the `Pipeline Overview` section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>

```
# import the dataset
# from quantopian.interactive.data.eventvestor import dividends as dataset
# or if you want to import the free dataset, use:
from quantopian.interactive.data.eventvestor import dividends_free as dataset

# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd

# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape

# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()

# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
```

Let's go over the columns:
- **event_id**: the unique identifier for this event.
- **asof_date**: EventVestor's timestamp of event capture.
- **trade_date**: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- **symbol**: stock ticker symbol of the affected company.
- **event_type**: this should always be *Dividend*.
- **event_headline**: a brief description of the event.
- **event_phase**: the inclusion of this field is likely an error on the part of the data vendor. We're currently attempting to resolve this.
- **div_type**: dividend type. Values include *no change, increase, decrease, initiation, defer, suspend, omission, stock, special*. Note *QoQ* = quarter-on-quarter.
- **div_amount**: dividend payment amount in local currency.
- **div_currency**: dividend payment currency code. Values include *$, BRL, CAD, CHF, EUR, GBP, JPY*.
- **div_ex_date**: ex-dividend date.
- **div_record_date**: dividend payment record date.
- **div_pay_date**: dividend payment date.
- **event_rating**: this is always 1. The meaning of this is uncertain.
- **timestamp**: this is our timestamp on when we registered the data.
- **sid**: the equity's unique identifier. Use this instead of the symbol.

We've done much of the data processing for you. Fields like `timestamp` and `sid` are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the `sid` across all our equity databases.

We can select columns and rows with ease. Below, we'll fetch all fifty-cent dividends.

```
fiftyc = dataset[(dataset.div_amount==0.5) & (dataset['div_currency']=='$')]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
fiftyc.sort('timestamp')
```

Finally, suppose we want a DataFrame of that data, but we only want the sid, timestamp, and div_type:

```
fifty_df = odo(fiftyc, pd.DataFrame)
reduced = fifty_df[['sid','div_type','timestamp']]
# When printed: pandas DataFrames display the head(30) and tail(30) rows, and truncate the middle.
reduced
```

<a id='pipeline'></a>
#Pipeline Overview

### Accessing the data in your algorithms & research

The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:

Import the data set here
> `from quantopian.pipeline.data.eventvestor import (`
> `DividendsByExDate,`
> `DividendsByPayDate,`
> `DividendsByAnnouncement`
> `)`

Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
> `pipe.add(DividendsByExDate.next_date.latest, 'next_dividends')`

```
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume

# Import the datasets available
from quantopian.pipeline.data.eventvestor import (
    DividendsByExDate,
    DividendsByPayDate,
    DividendsByAnnouncementDate,
)

from quantopian.pipeline.factors.eventvestor import (
    BusinessDaysSincePreviousExDate,
    BusinessDaysUntilNextExDate,
    BusinessDaysSincePreviousPayDate,
    BusinessDaysUntilNextPayDate,
    BusinessDaysSinceDividendAnnouncement,
)
```

Now that we've imported the data, let's take a look at which fields are available for each dataset.

You'll find the dataset, the available fields, and the datatypes for each of those fields.
``` print "Here are the list of available fields per dataset:" print "---------------------------------------------------\n" def _print_fields(dataset): print "Dataset: %s\n" % dataset.__name__ print "Fields:" for field in list(dataset.columns): print "%s - %s" % (field.name, field.dtype) print "\n" for data in (DividendsByExDate, DividendsByPayDate, DividendsByAnnouncementDate): _print_fields(data) print "---------------------------------------------------\n" ``` Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline. This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread: https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters ``` # Let's see what this data looks like when we run it through Pipeline # This is constructed the same way as you would in the backtester. For more information # on using Pipeline in Research view this thread: # https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters pipe = Pipeline() pipe.add(DividendsByExDate.next_date.latest, 'next_ex_date') pipe.add(DividendsByExDate.previous_date.latest, 'prev_ex_date') pipe.add(DividendsByExDate.next_amount.latest, 'next_amount') pipe.add(DividendsByExDate.previous_amount.latest, 'prev_amount') pipe.add(DividendsByExDate.next_currency.latest, 'next_currency') pipe.add(DividendsByExDate.previous_currency.latest, 'prev_currency') pipe.add(DividendsByExDate.next_type.latest, 'next_type') pipe.add(DividendsByExDate.previous_type.latest, 'prev_type') # Setting some basic liquidity strings (just for good habit) dollar_volume = AverageDollarVolume(window_length=20) top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000 pipe.set_screen(top_1000_most_liquid & DividendsByExDate.previous_amount.latest.notnan()) # The show_graph() method of pipeline objects produces a 
# graph to show how it is being calculated.
pipe.show_graph(format='png')

# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
```

Taking what we've seen from above, let's see how we'd move that into the backtester.

```
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output

# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume

# Import the datasets available
from quantopian.pipeline.data.eventvestor import (
    DividendsByExDate,
    DividendsByPayDate,
    DividendsByAnnouncementDate,
)

from quantopian.pipeline.factors.eventvestor import (
    BusinessDaysSincePreviousExDate,
    BusinessDaysUntilNextExDate,
    BusinessDaysSinceDividendAnnouncement,
)

def make_pipeline():
    # Create our pipeline
    pipe = Pipeline()

    # Screen out penny stocks and low liquidity securities.
    dollar_volume = AverageDollarVolume(window_length=20)
    is_liquid = dollar_volume.rank(ascending=False) < 1000

    # Create the mask that we will use for our percentile methods.
base_universe = (is_liquid) # Add pipeline factors pipe.add(DividendsByExDate.next_date.latest, 'next_ex_date') pipe.add(DividendsByExDate.previous_date.latest, 'prev_ex_date') pipe.add(DividendsByExDate.next_amount.latest, 'next_amount') pipe.add(DividendsByExDate.previous_amount.latest, 'prev_amount') pipe.add(DividendsByExDate.next_currency.latest, 'next_currency') pipe.add(DividendsByExDate.previous_currency.latest, 'prev_currency') pipe.add(DividendsByExDate.next_type.latest, 'next_type') pipe.add(DividendsByExDate.previous_type.latest, 'prev_type') pipe.add(BusinessDaysUntilNextExDate(), 'business_days') # Set our pipeline screens pipe.set_screen(is_liquid) return pipe def initialize(context): attach_pipeline(make_pipeline(), "pipeline") def before_trading_start(context, data): results = pipeline_output('pipeline') ``` Now you can take that and begin to use it as a building block for your algorithms, for more examples on how to do that you can visit our <a href='https://www.quantopian.com/posts/pipeline-factor-library-for-data'>data pipeline factor library</a>
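The `run_pipeline`/`pipeline_output` calls above return a pandas DataFrame indexed by (date, security). The kind of filtering you would then do on it can be sketched with a mock frame; the column names below mirror the pipeline above, but the index values and numbers are invented for illustration:

```python
import pandas as pd

# Mock of a pipeline_output() result: a (date, sid) MultiIndex with the
# dividend columns added above. All values here are made up.
index = pd.MultiIndex.from_product(
    [pd.to_datetime(['2013-11-01', '2013-11-04']), ['AAPL', 'MSFT']],
    names=['date', 'sid'])
results = pd.DataFrame({
    'next_amount': [0.50, 0.23, 0.50, 0.23],
    'business_days': [3, 12, 1, 10],   # BusinessDaysUntilNextExDate
}, index=index)

# e.g. keep only securities whose ex-date is within the next 5 business days
upcoming = results[results['business_days'] <= 5]

# and count the affected securities per day
sids_per_day = upcoming.groupby(level='date').size()
```

In a live algorithm this filtering would typically happen in `before_trading_start`, on the frame returned by `pipeline_output('pipeline')`.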
``` %load_ext autoreload %autoreload 2 %cd /home/aditya/git/RCNN_Pneumonia %env PROJECT_PATH /home/aditya/git/RCNN_Pneumonia from utils.envs import * import random import tensorflow as tf import pandas as pd import numpy as np from model.dataset import PneumoniaDataset from mrcnn.model import log from mrcnn import utils from model.config import InferenceConfig from mrcnn import visualize from tqdm import tqdm_notebook as tqdm from mrcnn import model as modellib from utils.vis import get_ax train_dataset = PneumoniaDataset(data_dir, 'train') dev_dataset = PneumoniaDataset(data_dir, 'dev') val_dataset = PneumoniaDataset(data_dir, 'val') test_dataset = PneumoniaDataset(data_dir, 'test') train_dataset.prepare() dev_dataset.prepare() val_dataset.prepare() test_dataset.prepare() config = InferenceConfig() # Create model in inference mode model = modellib.MaskRCNN(mode="inference", model_dir=logs_dir, config=config) n_epoch = '21' weights_path = '/home/aditya/git/RCNN_Pneumonia/logs/pneumonia20181026T1746/mask_rcnn_pneumonia_00{}.h5'.format(n_epoch) print("Loading weights ", weights_path) model.load_weights(weights_path, by_name=True) result_95 = [] result_97 = [] result_99 = [] for idx in tqdm(test_dataset.image_ids): img = test_dataset.load_image(idx) patient_id = test_dataset.image_info[idx]['patient_id'] prediction = model.detect([img]) rois = prediction[0]['rois'] score = prediction[0]['scores'] predictionString = '' for i in range(len(score)): if score[i] > 0.95: x1, y1, x2, y2 = int(rois[i][1]), int(rois[i][0]), int(rois[i][3]), int(rois[i][2]) x, y, width, height = x1, y1, x2-x1, y2-y1 predictionString = predictionString + '{} {} {} {} {} '.format(score[i], x, y, width, height) result_95.append({ 'patientId' : patient_id, 'predictionString' : predictionString.strip() }) predictionString = '' for i in range(len(score)): if score[i] > 0.97: x1, y1, x2, y2 = int(rois[i][1]), int(rois[i][0]), int(rois[i][3]), int(rois[i][2]) x, y, width, height = x1, y1, x2-x1, y2-y1 
predictionString = predictionString + '{} {} {} {} {} '.format(score[i], x, y, width, height) result_97.append({ 'patientId' : patient_id, 'predictionString' : predictionString.strip() }) predictionString = '' for i in range(len(score)): if score[i] > 0.99: x1, y1, x2, y2 = int(rois[i][1]), int(rois[i][0]), int(rois[i][3]), int(rois[i][2]) x, y, width, height = x1, y1, x2-x1, y2-y1 predictionString = predictionString + '{} {} {} {} {} '.format(score[i], x, y, width, height) result_99.append({ 'patientId' : patient_id, 'predictionString' : predictionString.strip() }) iteration = 7 result_95_df = pd.DataFrame(result_95) result_97_df = pd.DataFrame(result_97) result_99_df = pd.DataFrame(result_99) result_95_path = os.path.join(output_path, 'result_{}_{}_{}.csv').format(iteration, n_epoch, 0.95) result_97_path = os.path.join(output_path, 'result_{}_{}_{}.csv').format(iteration, n_epoch, 0.97) result_99_path = os.path.join(output_path, 'result_{}_{}_{}.csv').format(iteration, n_epoch, 0.99) result_95_df.to_csv(result_95_path, index=False) result_97_df.to_csv(result_97_path, index=False) result_99_df.to_csv(result_99_path, index=False) ```
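The three near-identical threshold loops above can be collapsed into one helper. This is a sketch (not the notebook's actual code) that reproduces the same `score x y width height` submission-string format from Mask R-CNN's `[y1, x1, y2, x2]` boxes:

```python
import numpy as np

def format_predictions(rois, scores, threshold):
    """Build the 'score x y width height ...' submission string for one image.

    rois are [y1, x1, y2, x2] boxes, as returned by Mask R-CNN's detect().
    """
    parts = []
    for box, score in zip(rois, scores):
        if score > threshold:
            y1, x1, y2, x2 = (int(v) for v in box)
            x, y, width, height = x1, y1, x2 - x1, y2 - y1
            parts.append('{} {} {} {} {}'.format(score, x, y, width, height))
    return ' '.join(parts)

# One pass per image over three thresholds, instead of three copied loops
rois = np.array([[10, 20, 110, 220], [5, 5, 50, 60]])
scores = np.array([0.98, 0.96])
results = {t: format_predictions(rois, scores, t) for t in (0.95, 0.97, 0.99)}
```

With this helper, the per-image loop only needs to call `format_predictions` once per threshold and append the result to the corresponding list.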
```
from nbdev import *
# default_exp text_norm
```

# Text Normalization

> Functions used for TTS Dataset Preparation

```
#export
import re
from typing import Tuple
from razdel import tokenize

#hide
from fastcore.test import *
from nbdev.showdoc import *
```

## Functions for Pipeline

```
#export
def collapse_whitespace(text: str) -> str:
    "Replace multiple various whitespaces with a single space, strip leading and trailing spaces."
    return re.sub(r'[\s\ufeff\u200b\u2060]+', ' ', text).strip()

test_eq(collapse_whitespace(
    chr(int("0xfeff", 16)) +  # zero width no-break space
    chr(int("0x200b", 16)) +  # zero width space
    chr(int("0x202f", 16)) +  # narrow no-break space
    chr(int("0x2060", 16)) +  # word joiner
    chr(int("0x3000", 16)) +  # ideographic space
    chr(int("0xa0", 16)) +    # no-break space
    "\t\n 1 2 3 4 5 \t\r\n"), "1 2 3 4 5")

#export
def lowercase(text: str) -> str:
    "Convert `text` to lower case."
    return text.lower()

test_eq(lowercase('ПрИвеТ, ЧуВАК!'), 'привет, чувак!')

#export
def check_no_numbers(text: str) -> list:
    "Return a list of digit groups, or an empty list if none are found."
    return re.findall(r'(\d+)', text)

test_eq(check_no_numbers('Цифры есть 1 12 13.4'), ['1', '12', '13', '4'])
test_eq(check_no_numbers('Цифр нет'), [])

#export
_specials = [(re.compile(f'{x[0]}'), x[1]) for x in [
    (r'\(?\d\d[:.]\d\d\)?', ''),  # timestamps
    (r'!\.{1,}', '!'),   # !. -> !
    (r'\?\.{1,}', '?'),  # ?. -> ?
    (r'\/', ''),
    (r'[\*\_]', ''),
    (r'[\(\)]', '')
]]

#export
def remove_specials(text: str, purge_digits: bool=False) -> str:
    "Replace character sequences predefined in `_specials`."
    for regex, replacement in _specials:
        text = re.sub(regex, replacement, text)
    if purge_digits:
        text = re.sub(r'\d', '', text)
    return text

#export
def purge_dots(text, purgedots=False):
    "If `purgedots`, `...`|`…` will be purged. Else replaced with `.`"
    text = re.sub(r'\s(…)', ' ', text)
    replacement = '' if purgedots else '.'
    text = re.sub(r'…', replacement, text)
    text = re.sub(r'\.{3}', replacement, text)
    text = re.sub(r'\.{2}', '', text)  # pause .. removed
    return text

test_eq(purge_dots("Word..."), 'Word.')
test_eq(purge_dots("Word…"), 'Word.')
test_eq(purge_dots("Word...", purgedots=True), 'Word')
test_eq(purge_dots("Word…", purgedots=True), 'Word')
test_eq(purge_dots(" …Word"), ' Word')
test_eq(purge_dots("Word.."), 'Word')
test_eq(purge_dots('Многоточие... Многоточие… … …Многоточие'),
        'Многоточие. Многоточие. Многоточие')

test_eq(remove_specials('Скобки у аббревиатур (вайфай) удаляем.'),
        'Скобки у аббревиатур вайфай удаляем.')
test_eq(remove_specials('Метки времени 01:12 или 01.01, (01:12) или (01.01) удаляем.'),
        'Метки времени или , или удаляем.')
test_eq(remove_specials('Ой!. Ага?. / Стоп.'), 'Ой! Ага? Стоп.')
test_eq(remove_specials('*США* _Френсис_'), 'США Френсис')

#export
_abbreviations = [(re.compile(f'\\b{x[0]}', re.IGNORECASE), x[1]) for x in [
    (r'т\.е\.', 'то есть'),
    (r'т\.к\.', 'так как'),
    (r'и т\.д\.', 'и так далее.'),
    (r'и т\.п\.', 'и тому подобное.')
]]

#export
def expand_abbreviations(text: str) -> str:
    "Expand the abbreviations defined in `_abbreviations`."
    for regex, replacement in _abbreviations:
        text = re.sub(regex, replacement, text)
    return text

test_eq(
    expand_abbreviations('Привет Джон, т.е. Иван. Т.к. русский. И т.д. И т.п.'),
    'Привет Джон, то есть Иван. так как русский. и так далее. и тому подобное.')

#export
def unify_dash_hyphen(text: str) -> str:
    "Unify dash and hyphen symbols -- replace with en dash or hyphen, separate with space."
text = re.sub('[\u2212\u2012\u2014]', '\u2013', text) # replace minus sign, figure dash, em dash with en dash text = re.sub('[\u2010\u2011]', '\u002d', text) # hyphen, non-breaking hyphen text = re.sub('\s*?(\u2013)\s*?',' \g<1> ',text) return text test_eq(unify_dash_hyphen( chr(int("2212",16))+ # minus sign chr(int("2012",16))+ # figure dash chr(int("2010",16))+ # hyphen chr(int("2011",16))),# non-breaking hyphen (" "+chr(int("2013",16))+" ")*2+chr(int("2d",16))*2) test_eq(unify_dash_hyphen('Я '+chr(int("2013",16))+ 'Джейми Кейлер'),'Я – Джейми Кейлер') test_eq(unify_dash_hyphen('Я' +chr(int("2013",16))+ 'Джейми Кейлер'),'Я – Джейми Кейлер') test_eq(collapse_whitespace(unify_dash_hyphen('Я' +chr(int("2013",16))+' Джейми Кейлер')),'Я – Джейми Кейлер') #export def rm_quot_marks(text: str) -> str: """Remove quotation marks from `text`.""" # \u0022\u0027\u00ab\u00bb\u2018\u2019\u201a\u201b\u201c\u201d\u201e\u201f\u2039\u203a\u276e\u276f\u275b\u275c\u275d\u275e\u275f\u2760\u2e42\u301d\u301e\u301f return re.sub(r'["\'«»‘’‚‛“”„‟‹›❮❯❛❜❝❞❟❠]','',text) test_eq(rm_quot_marks('"\'«»‘’‚‛“”„‟‹›❮❯❛❜❝❞❟❠'),'') ``` ### Test Text Strings Equality ``` #export def texts_equal(text1: str, text2: str, ignore_e: bool = True, verbose = False)\ -> Tuple[bool, str, str]: """Check if `text1` equals `text2`. 
Optionally ignore diff between `е` and `ё`.""" is_equal = 1 text1, text2 = text1.replace('-',' ').strip(), text2.replace('-',' ').strip() if len(text1) != len(text2): if verbose: print("Not equal length") return False, text1, text2 words1 = [_.text for _ in list(tokenize(text1))] words2 = [_.text for _ in list(tokenize(text2))] wc1, wc2 = len(words1), len(words2) if wc1 != wc2: if verbose: print(f"Not equal words count: {wc1} != {wc2}") return False, text1, text2 text1, text2 = "", "" # Per word comparison, assuming wc1 == wc2 for i in range(len(words1)): letters1 = [char for char in words1[i]] letters2 = [char for char in words2[i]] if words1[i] != words2[i]: is_equal -= 1 for j in range(min(len(letters1), len(letters2))): if letters1[j] == letters2[j]: continue else: if ignore_e and letters1[j] in ['е', 'ё'] and letters2[j] in ['е', 'ё']: if verbose: print('е != ё -- норм') is_equal += 1 elif letters1[j] in ['-', ' '] and letters2[j] in ['-', ' ']: is_equal += 1 else: letters1[j] = letters1[j].upper() letters2[j] = letters2[j].upper() is_equal -= 1 words1[i], words2[i] = ''.join(letters1), ''.join(letters2) text1 = text1 + " " + words1[i] text2 = text2 + " " + words2[i] return is_equal == 1, text1[1:], text2[1:] texts_equal("что-ли а", "что-то и", verbose=False) texts_equal("что-ли а", "что-то и", verbose=True) test_eq(texts_equal("1234", "12345", verbose = False), (False, "1234", "12345")) #test_stdout(lambda: test_eq(texts_equal("1234", "12345", verbose = True), False), "Not equal length") test_eq(texts_equal("все", "всё", ignore_e = True, verbose = False), (True, "все", "всё")) test_eq(texts_equal("все", "всё", ignore_e = False, verbose = False), (False, "всЕ", "всЁ")) #test_stdout(lambda: texts_equal("все", "всё", ignore_e = False, verbose = True), "всЕ != всЁ") test_eq(texts_equal("слово ещё одно", "слово ещё одно"), (True,"слово ещё одно", "слово ещё одно")) #hide # test_stdout(lambda: texts_equal("слово ещё одно", "слово ещё одна"), # "однО != однА") # 
test_stdout(lambda: texts_equal("слово ещё одно", "слово ещё одно лишнее"), # "Not equal length\nNot equal words count: 3 != 4") ``` ## Pipelines ``` #export def basic_cleaner(text: str) -> str: "Basic pipeline: lowercase and collapse whitespaces." text = lowercase(text) text = collapse_whitespace(text) return text test_eq(basic_cleaner( 'Привет Джон, т.е. Иван, т.к. русский. И т.д. и т.п.'), 'привет джон, т.е. иван, т.к. русский. и т.д. и т.п.') #export def russian_cleaner(text, purge_digits=True, _purge_dots=False): "Pipeline for cleaning Russian text." text = expand_abbreviations(text) text = remove_specials(text, purge_digits=purge_digits) text = purge_dots(text, purgedots=_purge_dots) text = unify_dash_hyphen(text) text = rm_quot_marks(text) text = collapse_whitespace(text) return text #export def russian_cleaner2(text, purge_digits=True, _purge_dots=False): "Pipeline for cleaning and lowercase Russian text." return russian_cleaner(lowercase(text), purge_digits, _purge_dots) test_eq(russian_cleaner( 'Привет «Джон», т.е. Иван, т.к. русский... И т.д. и т.п. Ой!. Ага?. / "Стоп"..'), 'Привет Джон, то есть Иван, так как русский. и так далее. и тому подобное. Ой! Ага? 
Стоп')
```

## Sentences Tokenizer

```
from razdel import sentenize

fname = '/home/condor/git/cyrillica/b-ish.txt'
with open(fname) as f:
    text = f.read()
text

text = russian_cleaner(text)
text

for s in sentenize(russian_cleaner(text)):
    print(s.text)
```

### Set of characters in the original text

```
print(f'Char\tDec\tHex\tPrintable?')
for i,c in enumerate(sorted(set(text))):
    print(f'{c}\t{ord(c)}\t{hex(ord(c))}\t{c.isprintable()}')
```

### Set of characters in the cleaned text

```
print(f'Char\tDec\tHex\tPrintable?')
for i,c in enumerate(sorted(set(russian_cleaner2(text)))):
    print(f'{c}\t{ord(c)}\t{hex(ord(c))}\t{c.isprintable()}')
```

### Set of the removed/replaced characters

```
print(f'Char\tDec\tHex\tPrintable?')
for i,c in enumerate(sorted(
        set(text).difference(set(russian_cleaner2(text))))):
    print(f'{c}\t{ord(c)}\t{hex(ord(c))}\t{c.isprintable()}')

assert check_no_numbers(russian_cleaner(text)) == []

text = '''Восклицательное предложение! А это какое? Инициалы -- не повод разрывать.
Правда, А.С. Пушкин? -- Разумеется, голубчик. (Скобки оставляем.)'''

for _ in sentenize(text):
    print(_.text)

#hide
from nbdev.export import notebook2script
notebook2script()
```
``` import numpy as np import matplotlib.pyplot as plt import librosa from librosa.display import specshow from IPython.display import display, Audio from res.plot import set_default x, sampling_rate = librosa.load('./dataset/SaReGa.wav') # Load the audio file T = x.size/sampling_rate # Length of audio dt = 1/sampling_rate # Time step per sample t = np.r_[0:T:dt] # Time axis set_default(figsize=(12,4)) plt.plot(t, x) plt.xlim([0, T]) plt.show() # Plot a zoomed in Portion from 1s range_ = range(1*sampling_rate,1*sampling_rate+1000) plt.plot(t[range_], x[range_]) plt.show() # Plot a zoomed in Portion from 3s range_ = range(3*sampling_rate,3*sampling_rate+1000) plt.plot(t[range_], x[range_]) plt.show() Audio(x, rate=sampling_rate) X = librosa.stft(x) # Compute spectrogram X_dB = librosa.amplitude_to_db(np.abs(X)) fig, ax = plt.subplots(2,1, sharex=True, figsize=(16,8)) plt.xlim([0, T]) ax[0].plot(t, x) ax[0].set_ylabel('amplitude') specshow(X_dB, sr=sampling_rate, x_axis='time', y_axis='hz', ax=ax[1]) ax[1].set_ylim(top=1000) ax[1].set_ylabel('frequency [Hz]') plt.suptitle('Audio signal x(t) and its spectrogram X(t)', size=16) plt.show() # # generate tones (G#3 Scale - Common Female Scale) # Sa = 207.65 # Re = 233.08 # Ga = 261.63 # Ma = 277.18 # Pa = 311.13 # Dh = 349.23 # Ni = 392.00 # SA = 415.30 # # generate tones (C#3 Scale - Common Male Scale) # Sa = 138.59 # Re = 155.56 # Ga = 174.61 # Ma = 185.00 # Pa = 207.65 # Dh = 233.08 # Ni = 261.63 # SA = 277.18 # generate tones (C#4 - Scale Piano C#3) Sa = 277.18 Re = 311.13 Ga = 349.23 Ma = 369.99 Pa = 415.30 Dh = 466.16 Ni = 523.25 SA = 554.37 TT = 0.1 #s filter length tt = np.r_[0:TT:dt] a = 0.1 A = { 'Sa': a*np.sin(2 * np.pi * Sa * tt), 'Re': a*np.sin(2 * np.pi * Re * tt), 'Ga': a*np.sin(2 * np.pi * Ga * tt), 'Ma': a*np.sin(2 * np.pi * Ma * tt), 'Pa': a*np.sin(2 * np.pi * Pa * tt), 'Dh': a*np.sin(2 * np.pi * Dh * tt), 'Ni': a*np.sin(2 * np.pi * Ni * tt), 'SA': a*np.sin(2 * np.pi * SA * tt), } xx = 
np.concatenate([a[1] for a in A.items()]) XX = librosa.stft(xx) XX_dB = librosa.amplitude_to_db(np.abs(XX)) # spectrogram for filters specshow(XX_dB, sr=sampling_rate, x_axis='time', y_axis='hz') plt.ylim(bottom=150, top=1000) plt.plot() display(Audio(xx, rate=sampling_rate)) display(Audio(x, rate=sampling_rate)) ``` # Convolutions ``` fig, axs = plt.subplots(8, 1, sharex=True, figsize=(16,22)) plt.xlim([0, T]) convs = [] for i, a in enumerate(A.items()): # Compute the convolution convs.append(np.convolve(x, a[1], mode='same')) axs[i].set_title(rf'$x(t) \star {a[0]}(t)$') axs[i].plot(t, convs[-1]) axs[i].set_ylabel('amplitude [/]') plt.xlabel('time [s]') plt.show() # let's listen to these convolutions! for c in convs: display(Audio(c, rate=sampling_rate)) ```
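Convolving the recording with each short sine "filter" acts as a crude band-pass around that note's frequency, which is why each convolution above is loud only where the recording matches that note. A self-contained sketch of the effect, using a synthetic signal rather than the SaReGa recording:

```python
import numpy as np

sampling_rate = 22050
dt = 1 / sampling_rate
t = np.r_[0:1:dt]  # 1 second of audio

# Synthetic "recording": first half is Sa (277.18 Hz), second half is Pa (415.30 Hz)
x = np.sin(2 * np.pi * 277.18 * t)
x[t >= 0.5] = np.sin(2 * np.pi * 415.30 * t[t >= 0.5])

# 0.1 s sine filter at Sa's frequency, built the same way as in the notebook
tt = np.r_[0:0.1:dt]
sa_filter = 0.1 * np.sin(2 * np.pi * 277.18 * tt)

conv = np.convolve(x, sa_filter, mode='same')

# The convolution's energy should concentrate where Sa is actually playing
# (regions are kept away from the 0.5 s transition to avoid edge effects)
first_half_energy = np.sum(conv[t < 0.45] ** 2)
second_half_energy = np.sum(conv[t > 0.60] ** 2)
```

A 0.1 s filter has a frequency resolution of roughly 10 Hz, so the 415 Hz half of the signal falls far outside the filter's passband and comes out strongly attenuated.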
# matplotlib: plotting package ![Salinity](http://pong.tamu.edu/~kthyng/movies/txla_plots/salt/2004-07-30T00.png) http://kristenthyng.com/gallery/txla_salinity.html Data and model results can be abstract if we can't see how they look. Also, it is easy to get too removed from calculations and end up with answers that don't make sense. A straight-forward way to investigate information and to have a reality check is by plotting it up. Here we will cover the basics for making a variety of commonly-used plots. matplotlib provides a [gallery](http://matplotlib.org/gallery.html) of plot examples, as described by text and shown as plots. This is really helpful for finding what you want to do when you don't know how to describe it, and to get ideas for what possibilities are out there. To produce figures inline in Jupyter notebooks, you need to run the command `%matplotlib inline`. ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np ``` A quick plot without any setup: ``` x = np.linspace(0, 10) plt.plot(x, x) ``` You can subsequently alter the plot with commands like `plt.xlabel('xlabel')` which act on the active figure, but you can only reference one figure at a time when they are not named. So, problems with this: * Less control * Harder to make changes later * Alters any figure you may already have open So, don't screw over future you! Set up your plot properly from the beginning # A. Figure overview A figure in matplotlib has several basic pieces, as shown in the following image. Note that `axes` refers to the area within a figure that is used for plotting and `axis` refers to a single x- and y-axis. ![fig](http://matplotlib.org/_images/fig_map.png) http://matplotlib.org/faq/usage_faq.html#parts-of-a-figure ## Figure and axes setup Steps for setting up a figure: 1. Open a figure, save the object to a variable, and size it as desired. 2. Add axes to the figure. Axes are the objects in which data is actually plotted. 3. 
Add labels to clearly explain the plot, such as axis labels and a title. 4. Plot! Most basically, use the `plot` command to plot lines and markers. Here is a good way to set up a general figure so that you can easily work with it: ``` # Step 1 fig = plt.figure(figsize=(8, 3)) # figure size is given as a (width, height) tuple # Step 2 ax = fig.add_subplot(111) # # Step 3 ax.set_xlabel('x axis [units]') ax.set_ylabel('y axis [units]') ax.set_title('Title') x = np.random.rand(10) y = np.random.rand(10) # Step 4 ax.plot(x, y, 's') ``` ## Useful commands and keyword arguments These commands and keyword arguments should be frequently used to customize and properly label your figures. Command syntax shown is common usage, not all available options. ### labels and text `ax.set_xlabel(xlabel, fontsize, color)`, `ax.set_ylabel(ylabel, fontsize, color)`: Label the x and y axis with strings xlabel and ylabel, respectively. This is where you should state what is being plotted, and also give units. `ax.set_title(Title, fontsize, color)`: Label the top of the axes with a title describing the plot. [`fig.suptitle(Suptitle, fontsize, color)`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.suptitle): Label the overall figure, above any subplot titles. [`ax.text(x, y, text, color, transform=ax.transAxes)`](http://matplotlib.org/api/text_api.html#matplotlib.text.Text): Write text in your axes. The text will appear at location (x,y) in data coordinates — often it is easier to input the location in units of the axes itself (from 0 to 1), which is done by setting transform=ax.transAxes. The text is input as a string and `color` controls the color of the text. ### [subplot](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot) `fig.add_subplot(nrows, ncols, plot_number)` Above, we showed an example of adding a single axes to the figure, with the command `fig.add_subplot(111)`. This command can be used to add multiple axes to the figure instead of a single one. 
These subplots can divide up the available space in the figure only in a simple way, which is enough for most cases. An example with 1 row and 2 columns of axes is shown, with the `plot_number` increasing from 1 left to right along each row and then continuing on the next row, up to the number of axes (2, in this case).

```
# subplot example
fig = plt.figure(figsize=(16, 3))

ax1 = fig.add_subplot(1, 2, 1)  # 1st subplot
ax1.set_xlabel('x axis 1', fontsize=24, color='r')
ax1.set_ylabel('y axis 1')
ax1.set_title('Title 1')

ax2 = fig.add_subplot(1, 2, 2)  # 2nd subplot
ax2.set_xlabel('x axis 2', fontsize=24, color='orange')
ax2.set_ylabel('y axis 2')
ax2.set_title('Title 2')
ax2.set_xlim(0, 0.5)
ax2.set_ylim(0, 2)

fig.suptitle('Overall title', fontsize=18, color=(0.3, 0.1, 0.8, 0.5))
fig.tight_layout()  # helper function to clean up plot
```

### [subplots](http://matplotlib.org/examples/pylab_examples/subplots_demo.html)

`fig, axes = plt.subplots(nrows, ncols)`

If we want to use many subplots, it is more concise to save the axes to an array so that we can loop through them. This function allows us to have subplots with shared x, y, or both axes, which then shares the x and y limits and the ticks and tick labels.

An example with 3 rows and 4 columns of axes is shown. We loop through the axes instead of listing each out separately. We demonstrate the ability to share both the x and y axes.

```
fig, axes = plt.subplots(3, 4, sharex=True, sharey=True)

# loop through axes
for i, ax in enumerate(axes.flat):
    ax.set_title('plot_number=' + str(i+1))  # add 1 to plot_number since it starts counting at 1

# another way to access individual subplots
axes[1, 1].plot(np.random.rand(100)*50)

fig.tight_layout()  # use to make plot look nicer
```

### axes layout

[`fig.tight_layout()`](http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure.tight_layout): convenience function to automatically improve spacing between subplots (already used above).
[`fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)`](http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure.subplots_adjust): Any of the keywords listed may be used to override the default values. In order, these adjust the left, bottom, right, and top edges of the subplots, and the width and height of the space between subplots. These values can be altered graphically when using a GUI version of a plot in IPython.

### axis control

`ax.set_xlim(xmin, xmax)`, `ax.set_ylim(ymin, ymax)`: Set the x and y axis limits to xmin, xmax and ymin, ymax, respectively.

#### [axis](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.axis)

The axis values by default encompass the plotted data comfortably, usually giving some space at the edges, depending on the numbers. However, this can be modified. Common usages:

`axis('equal')`: sets x and y limits so that the increments on both axes are equal lengths.

`axis('tight')`: sets axis limits to just encompass the data, with no extra space on the ends.

```
# axis examples
x = np.linspace(0, 10)

fig = plt.figure(figsize=(16, 3))

# No adjustments to axis; depending on version this gives same result as `axis('tight')`
ax1 = fig.add_subplot(1, 3, 1)
ax1.plot(x, x)
ax1.set_title('Default axis')

# Using axis('equal')
ax2 = fig.add_subplot(1, 3, 2)
ax2.plot(x, x)
ax2.axis('equal')
ax2.set_title("axis('equal')")

# Using axis('tight')
ax3 = fig.add_subplot(1, 3, 3)
ax3.plot(x, x)
ax3.axis('tight')
ax3.set_title("axis('tight')")
```

---

### *Exercise*

> Create figures with multiple axes two ways: with `add_subplot` and with `subplots`. Plot something simple, then label the x and y axis, and try changing the limits of the x and y axis with `xlim`, `ylim` and `axis`, and adjusting the overall character with `tight_layout` and `subplots_adjust`. Change the fontsize and color of your labels.
> Help the other students at your table: if you are finished, you should be looking around to see who could use some help. Be prepared to show off your results!

---

### Plotting with `datetime` objects

If you want to plot with time, use `datetime` objects to hold the time/dates. Then when you plot, things will work out nicely. In fact, in the following example, the plotted dates will be formatted correctly whether or not you plot with the special function [`plot_date()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot_date). But, to have them be readable, we need to rotate the labels and choose the date formatting we prefer.

The following is a bunch of code to read in the data from an earlier homework assignment. We end up with a dictionary of time series.

```
### Only uncomment the following lines if using Google Colab to run this notebook

### uncomment the two lines below here
#from google.colab import drive
#drive.mount('/content/gdrive')
# you'll need to click the link and authorize the notebook to access your Google drive
# then paste the authorization code into the box here; you'll need to use Ctrl+V to paste.

### now go to the file explorer on the right hand side of your Colab notebook and navigate to the folder where your notebook is located
### the path shown below is the path on my Google drive; yours may differ
### uncomment the three lines below here
#your_directory_path = '/content/gdrive/MyDrive/python4geosciences/materials/Module 4'
#import os
#os.chdir(your_directory_path)

from datetime import datetime

missing_value = -999

# Copied some code from class materials
f = open('../data/02_GPS.dat')
f.seek(0)  # This sets the pointer back to the beginning of the file. This allows us to run this
# block of code many times without reopening the file each time.
gps = {}  # initialize dictionary
gps['datetimes'] = []  # initialize datetimes list
gps['speed'] = []  # initialize speed list
gps['dir'] = []

for line in f.readlines():  # iterate over each line in the file. Each line is a string.

    data = line.split('\t')  # split the line of text into words, each separated by tabs, to get full drifter name

    if data[0] == 'Trackpoint':  # We only want to consider lines that begin with 'Trackpoint', as these hold the data
        #print(data)
        datetimeinfo = data[2].split()

        date = datetimeinfo[0].split('/')
        month = int(date[0])
        day = int(date[1])
        year = int(date[2])

        time = datetimeinfo[1].split(':')
        hour = int(time[0])
        mins = int(time[1])
        sec = int(time[2])

        if datetimeinfo[2] == 'PM' and hour != 12:  # don't shift "12 PM", which is already hour 12
            hour = hour + 12

        # create the datetime object for this line and append onto our list
        gps['datetimes'].append(datetime(year, month, day, hour, mins, sec))

        # now deal with speed
        if len(data) > 8:  # this catches the one weird line with fewer tabs, which has no speed
            speed = data[7].split(' ')[0]
            deg = data[8].split('\u00b0')[0]  #crashes in Python3 on my computer; for some reason puts angstroms in there
            #deg = data[8].split('\u00b0')[0][:-1]  #can use this if angstroms appears in text
            if speed == '':
                # there aren't values available for many entries;
                # append to both lists so they stay the same length and can be masked together
                gps['speed'].append(missing_value)
                gps['dir'].append(missing_value)
            else:  # it is a string containing a number
                gps['speed'].append(float(speed))
                gps['dir'].append(float(deg))
        else:
            gps['speed'].append(missing_value)
            gps['dir'].append(missing_value)  # there aren't values available for many entries
    else:
        continue

# Need to mask out missing values
imissing = np.asarray(gps['speed'][:]) == -999
gps['speed'] = np.ma.masked_where(imissing, gps['speed'])
gps['dir'] = np.ma.masked_where(imissing, gps['dir'])  # these are missing at the same times as speed since they're related

import matplotlib.dates

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(gps['datetimes'], gps['speed'], 'ro', ms=5)

# labels
# ax.set_xlabel('Times')
ax.set_ylabel('Speed [mph]')

# Fix the formatting of the dates since the default is ugly
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%B %d, %H:%M'))
plt.xticks(rotation=45);  # rotating usually important for dates
```

### ticks, ticklabels, spines

Spines are the lines making up the x and y axis — both top/bottom and left/right. Ticks are the locations of the little marks (ticklines) along the axis, and ticklabels are the text. You can control each of these independently in a couple of ways, including the following and with `tick_params`.

```
x = np.linspace(0, 4)

fig = plt.figure(figsize=(2, 3))
ax = fig.add_subplot(111)
ax.plot(x, x)

# turn off right and top spines:
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)

# Change color of left spine
ax.spines['left'].set_color('r')

# But the ticks on the top and right axis are still there!
# That's because they are controlled separately.
ax.xaxis.set_ticks_position('bottom')  # turns off top tick marks

ax.set_yticklabels('')  # Turn off y tick labels (the text)

ax.set_xticks([1, 2, 3]);
xticklabels = ['1st', '2nd', '3rd']
ax.set_xticklabels(xticklabels)
```

### Removing offset from axis

By default, matplotlib will format an axis with a relative offset when the axis values are large compared to the range they span. Sometimes this is helpful and sometimes it is annoying. We can turn this off with `ax.get_xaxis().get_major_formatter().set_useOffset(False)`.

```
fig = plt.figure(figsize=(8,4))

ax1 = fig.add_subplot(211)
ax1.plot([10000, 10001, 10002], [1, 2, 3])
ax1.set_title('default matplotlib x-axis')

ax2 = fig.add_subplot(212)
ax2.plot([10000, 10001, 10002], [1, 2, 3])
ax2.set_title('turned off default offset')
ax2.get_xaxis().get_major_formatter().set_useOffset(False)

fig.tight_layout()
```

---

### *Exercise*

> Using the data read in above from `02_GPS.dat`, plot the drifter direction vs. the datetimes at which the data were taken. Label the plot and make it look nice and readable.
> Then remove the right and top spines, ticks, and ticklabels so that we are left with just the left and bottom indicators.

---

### Legends

[`ax.legend([possible sequence of strings], loc)`](http://matplotlib.org/api/legend_api.html#matplotlib.legend.Legend) where loc tells where in the axes to place the legend:

```
'best' : 0, (only implemented for axes legends)
'upper right' : 1,
'upper left' : 2,
'lower left' : 3,
'lower right' : 4,
'right' : 5,
'center left' : 6,
'center right' : 7,
'lower center' : 8,
'upper center' : 9,
'center' : 10,
```

A legend or key for a plot can be produced by matplotlib by either labeling plots as they are plotted and then calling the legend command, or by plotting and then labeling them in order within the legend command.

```
x = np.linspace(0, 10)

fig = plt.figure()

# Labeled plots as they were plotted
ax1 = fig.add_subplot(121)
ax1.plot(x, x, label='y=x')
ax1.plot(x, x**2, '.', label='y=x$^2$')
ax1.legend()
ax1.set_title('labeled entries,\n default legend loc')

# Chose specific location for legend and labeled plots in the legend in order of plotting
ax2 = fig.add_subplot(122)
ax2.plot(x, x)
ax2.plot(x, x**2, '.')
ax2.legend(('y=x', 'y=x$^2$'), loc='center left')
ax2.set_title('labeled entries in legend,\n center left legend loc')
```

### Plotting inputs

#### [colors](http://matplotlib.org/api/colors_api.html#module-matplotlib.colors)

A handful of colors are available by a single letter code:

- b: blue
- g: green
- r: red
- c: cyan
- m: magenta
- y: yellow
- k: black
- w: white

and many more are available by (html) name:

![color chart](http://i.stack.imgur.com/k2VzI.png)

http://stackoverflow.com/questions/22408237/named-colors-in-matplotlib

Other inputs to matplotlib possible:

* Gray scale: a string with a float in it between 0 (black) and 1 (white)
* Hex: '#eeefff'
* RGB tuple in the range [0,1]: [0.1, 0.2, 0.3]

#### [line styles and markers](http://matplotlib.org/1.3.1/examples/pylab_examples/line_styles.html)

There are several line
styles and many markers available for plotting. You can plot a marker alone, a line alone, or a combination of the two for even more options. Here are some examples:

![line styles and markers](http://matplotlib.org/1.3.1/_images/line_styles.png)

http://matplotlib.org/1.3.1/examples/pylab_examples/line_styles.html

#### options

- `markersize` or `ms`: how large the markers are, if used
- `markeredgecolor` or `mec`: color of marker edge, can be None
- `markerfacecolor` or `mfc`: color of marker face, can be None
- `linewidth` or `lw`: width of line if using a linestyle
- `color`: color of line or marker
- `alpha`: transparency of lines/markers, from 0 (transparent) to 1 (opaque)

#### Plotting usage examples

Many usage examples for `plot` can be found with `plt.plot?`

Can use some shortcuts for simple options:

```
x = np.linspace(0, 10, 100)
y = np.random.rand(100)

fig, axes = plt.subplots(3, 1, sharex=True)
axes[0].plot(x, y, 'k-')  # Plots with a black line
axes[1].plot(x, y, 'ro-.')  # Plots red circles connected by a dash-dot line
axes[2].plot(x, y, 'k*', x, x, 'r')  # can plot more than one line with one call without keywords
```

For more control, need to use keyword arguments. For example, unless you want to use one of the single-letter color names, you need to use the keyword `color` to choose your color.

```
fig, axes = plt.subplots(3, 1, figsize=(10, 6))

# Large light blue squares (defined by hex) with red edges
axes[0].plot(x, y, color='#eeefff', ms=10, marker='s', mec='r', mfc='k')

axes[1].plot(x, y, color='#eeefff', ms=10, marker='s', mec='r', mfc='k')
axes[1].plot(x, y**2, color='palevioletred', linewidth=10, alpha=0.5)  # Thick, half-transparent line

axes[2].plot(x, y, '-.', color='0.6', lw=3)  # grayscale line

fig.subplots_adjust(hspace=0.3)
```

---

### *Exercise*

> The following don't work. Why?
>
>     plt.plot(x, y, 'r*b:')
>     plt.plot(x, y, 'r*', ms=20, x, y, 'b:')

---

## Example

Let's plot something from the CTD data we used in numpy.
```
data = np.loadtxt('../data/CTD.txt', comments='*')

# set up figure
fig = plt.figure()
ax = fig.add_subplot(111)

# plot data
ax.plot(data[:,0], data[:,1], 'g-^', lw=4, ms=15, alpha=0.5)
ax.invert_yaxis()  # since depth below sea level

# labels
ax.set_xlabel("Pressure [db]", fontsize=20)
ax.set_ylabel("Depth [m]", fontsize=20)

# change font size of tick labels
ax.tick_params(labelsize=20)
```

---

### *Exercise*

> Use the CTD data set from the previous example and compare two sets of temperature data: the temperature and the potential temperature. They are related but different measures, but they have similar values that we'd like to compare. Plot both of them vs. depth, making sure that you distinguish the two lines, label your plot appropriately, and use a legend.

---

### Get current plot

These can be used if you didn't properly set up your figure in the first place and want to be able to reference the current plot.

[`plt.gca()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.gca): Gets the current Axes instance.

[`plt.gcf()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.gcf): Gets the current Figure instance.

### Labels with math

Sometimes we want to be able to use $\LaTeX$ to write math in axis labels, especially for units. We can do this in matplotlib! We just have to use proper $\LaTeX$ notation for writing the math, and put an 'r' in front of the string, though often the 'r' doesn't appear to be needed.

```
fig = plt.figure(figsize=(2, 1))
ax = fig.add_subplot(111)
ax.set_xlabel('Speed [m s$^{-1}$]')
ax.set_ylabel('Volume [m$^{3}$]')
```

### [savefig](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig)

`fig.savefig(fname, dpi=None, bbox_inches='tight')`

This is how you can save the figure that you have made. Input a file name with file extension you want, fname, and the dots per inch if you want something higher than the default.
`bbox_inches='tight'` trims the white space around the figure; alternatively a number can be input to pad the figure if it cuts too close, but the 'tight' option works most of the time.

### *Good design principles*

We all need to spend a bit of extra time to make really good plots instead of just-ok plots. This is worth the time because a clearer figure improves understanding and communication. [Edward Tufte](http://www.edwardtufte.com/tufte/) has books about how to think about design principles. A fundamental part of his thinking is to show the data without extra junk that doesn't add to the plot — he calls this extra plotting stuff "chartjunk". For example, if you have a plot with gridlines, make sure the gridlines are visible but not dominating the actual data.

Some guidelines to follow:

* Always label all axes, with units if appropriate;
* Make all text large enough to be easily seen by whatever type of viewer will be seeing your image — fontsize needs to be larger for a presentation than for a paper — this includes the ticklabels!;
* Make all lines thick enough to be easily seen;
* Make all lines/markers distinct in both color and line/marker style so that the legend is correct even in grayscale;
* Use colors that are complementary!
The default choices aren't usually so pretty (though `matplotlib` is having a style defaults update soon);
* Don't forget to pay attention to edge lines on markers and bars, the font style, and other details;
* You can update your own defaults with the [matplotlibrc file](http://matplotlib.org/users/customizing.html) or you can use other people's, like from the [seaborn](http://stanford.edu/~mwaskom/software/seaborn/) (statistical data visualization) package:

![Seaborn Example](http://stanford.edu/~mwaskom/software/seaborn/_images/hexbin_marginals.png)

http://stanford.edu/~mwaskom/software/seaborn/examples/hexbin_marginals.html

```
x = np.linspace(0, 10)

fig = plt.figure(figsize=(12, 6))

ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(x, x, x, x**2)
ax1.grid(True, color='r', linestyle='-', linewidth=2)
ax1.set_title('Where is the data?\nHow would this look in black and white?')
ax1.legend(('x=x', 'x=x$^2$'))

ax2 = fig.add_subplot(1, 2, 2)
ax2.plot(x, x, '0.3', linewidth=3)
ax2.plot(x, x**2, '0.6', linewidth=4, linestyle='--')
ax2.grid(True, color='k', linestyle=':', linewidth=0.25)
ax2.set_title('Data displayed prominently and clearly', fontsize=14)
ax2.set_xlabel('xlabel [units]', fontsize=14)
ax2.set_ylabel('ylabel [units]', fontsize=14)
plt.xticks(fontsize=14);
plt.yticks(fontsize=14);
ax2.legend(('x=x', 'x=x$^2$'), loc='best')
```

# C. Histogram

A histogram shows how many instances of data are in a particular bin in your data set. This is more of a mathematical thing, but it doesn't mean much without a plot. We are using the matplotlib `hist` function, but note that there are several options available for 1D or 2D analysis, depending on how much control you want (matplotlib's `hist2d`, and NumPy's `histogram`, `histogram2d`, `histogramdd`).
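Before plotting, it can help to see the numbers a histogram is built from. This quick sketch uses NumPy's `np.histogram`, which performs the same binning that `hist` draws (the skewed sample here mimics the data used in the next example):

```
import numpy as np

rv = np.random.rand(10000)**3  # skewed sample

counts, bin_edges = np.histogram(rv, bins=10)

# one count per bin; the edges array has one extra element
print(counts.shape, bin_edges.shape)  # (10,) (11,)
print(counts.sum())  # 10000: every data point falls in exactly one bin
```

Passing the same `bins` argument to `ax.hist` draws these counts as bars.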
```
import scipy.stats

# rv = scipy.stats.cosine.rvs(size=1000)
rv = np.random.rand(10000)**3

fig = plt.figure(figsize=(12,6))

ax1 = fig.add_subplot(1, 2, 1)
ax1.hist(rv)
ax1.set_xlabel('PDF')
ax1.set_ylabel('Number of data points')

ax2 = fig.add_subplot(1, 2, 2)
ax2.hist(rv, bins=50, color='darkcyan', lw=0.1)
ax2.set_xlabel('PDF', fontsize=14)
ax2.set_ylabel('Number of data points', fontsize=14)
plt.xticks(fontsize=14);
plt.yticks(fontsize=14);
ax2.set_title('A little bit of extra work\ncan make your plot easier to read', fontsize=14)

fig.tight_layout()
```

# D. 2D plots

A fundamental part of 2 dimensional plots is how to input the (x, y) coordinates, and this mainly depends on whether we have *structured* or *unstructured* data. A 2D array of values that correspond to (x,y) points that increase monotonically is structured. Unstructured data cannot be easily put into an array because the corresponding (x,y) points do not need to be consistent in each dimension. The function `scatter` can be used to plot unstructured data, whereas `pcolor`/`pcolormesh` and `contour`/`contourf` require structured data. The following examples illustrate these differences.

## scatter (unstructured data)

Scatter plots are good for plotting x, y, z triplets when they are in triplet form (lists of x, y, z coordinates) which may be randomly ordered, instead of ordered arrays. In fact, you may use 4 sets of data as well (e.g., x, y, z, t). We can really capture 4 sets of information together in a `scatter` plot:

* x vs. y with markers (just like we can do with `plot`)
* x vs y with marker color representing z
* x vs.
y with marker color and marker size representing two more sets of data

```
# from http://matplotlib.org/examples/shapes_and_collections/scatter_demo.html
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = x+y**2
area = np.pi * (15 * np.random.rand(N))**2  # 0 to 15 point radiuses

fig, axes = plt.subplots(1, 3, sharey=True, figsize=(14,6))

axes[0].scatter(x, y, s=100, alpha=0.5)
axes[0].set_title('2 fields of data:\nJust like `plot`')

axes[1].scatter(x, y, s=100, c=colors, alpha=0.5)
axes[1].set_title('3 fields of data')

# the mappable is how the colorbar knows how to set up the range of data given the colormap
mappable = axes[2].scatter(x, y, s=area, c=colors, alpha=0.3)
axes[2].set_title('4 fields of data')
cb = fig.colorbar(mappable)
cb.set_label('values')
```

---

### *Exercise*

> Plot 4 columns of data from '../data/CTD.txt' together using the `scatter` function. Make sure all markers are visible, that everything is properly labeled, that you use a colorbar, etc.

---

## Arrays of Coordinates

To plot structured data, you need arrays of coordinates that store the locations, typically (x, y), of the data you have. You will sometimes be given this information, but sometimes you need to make some changes. Sometimes you have the data itself and need to create the arrays to represent the coordinates, which you can do with `np.meshgrid`. If you are starting with unstructured data and want to change it to be structured, you can interpolate. We will build up the same example for both of these concepts.

### `meshgrid`

`meshgrid` converts from an x and a y vector of coordinates to arrays of x and y coordinates. The vectors are copied across the array. This image from Microsoft Excel shows how to think about it.

![example](https://i.stack.imgur.com/8Mbig.png)

Once you have these arrays, you can use some of the following plotting techniques. Let's create synthetic coordinates and data using random numbers.
```
x = np.random.rand(1000)  # x coordinate
y = np.random.rand(1000)  # y coordinate
z = np.sin((x**2 + y**2)*5.0)  # data values at the x, y locations

plt.scatter(x, y, c=z, s=100, cmap='viridis')
```

We want to move from this unstructured data to structured data, which can improve visualization and allow us to make more calculations. To do this, we need to create an array of coordinates from our sporadically-located x and y coordinate data. We need to set up coordinate arrays for x and y that cover the full range of the coordinates.

```
# set up coordinate arrays for x and y that cover the full range of the
# x and y coordinates
xi = np.linspace(x.min(), x.max(), 501)
yi = np.linspace(y.min(), y.max(), 501)
```

Then, we can change from the vector to the array, like in the image. These X and Y coordinate arrays are what we'll be able to plot with and perform calculations on.

```
# X and Y are these arrays
X, Y = np.meshgrid(xi, yi)  # uniform grid
```

### Interpolation

Now we can interpolate our $z$ values onto the new coordinate arrays, X and Y. We cover one method of 2d interpolation here, but you can find many more details in the notebook "ST_interpolation.ipynb". We will use [`griddata`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html) to do this two dimensional interpolation, which is not part of `numpy`. This way is not particularly fast because it cannot take advantage of structured (predictably-spaced) data. However, it generally works and is pretty straightforward.

```
Z = scipy.interpolate.griddata(pts, z, xy, method='cubic', fill_value=0)
```

where `pts` is Nx2 and contains the coordinates for the input data, `z` are the values at `pts`, `xy` are the Mx2 coordinates where you want to interpolate to, and `Z` are the values of the function `z` at the `xy` locations. `method` can be 'linear', 'nearest', or 'cubic', and `fill_value` fills in outside of the points.

The data that we want to interpolate is put first into `griddata`.
The first input, `pts`, is Nx2 and contains the coordinates for the input data. Here is how we can get the original data coordinates into the proper setup:

```
pts = np.vstack((x,y)).T  # combine x and y to get Nx2 array
```

Next we need the coordinate locations of where we want to interpolate the data to, which are of shape Mx2. `griddata` allows us to interpolate to unstructured coordinates as well as structured coordinates, so it cannot assume you are inputting structured arrays. Therefore, we put them in as coordinates, like so:

```
xy = np.vstack((X.flat, Y.flat)).T
```

Now we can make our call to `griddata` to run the interpolation, and then reshape the resulting $Z$ output into an array to match the coordinates.

```
import scipy.interpolate

Z = scipy.interpolate.griddata(pts, z, xy, method='cubic', fill_value=0)

# reconstitute the output to structured array so we can plot it with pcolormesh
Z.shape = X.shape

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
axes[0].scatter(x, y, c=z, s=100, cmap='viridis')
axes[0].set_title('Unstructured original data')
axes[1].pcolormesh(X, Y, Z, cmap='viridis')
axes[1].set_title('Interpolated data,\nnow structured')
```

## `quiver(x, y, u, v)`

The `quiver` command allows us to plot arrows. Typically they are used to show the direction of flow. They can be pretty difficult to work with because of the number of parameters you can tweak, but they are very useful for showing movement from your data (see plot example at top of notebook).

```
# http://matplotlib.org/examples/pylab_examples/quiver_demo.html
X, Y = np.meshgrid(np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2))
U = np.cos(X)
V = np.sin(Y)

fig = plt.figure()
ax = fig.add_subplot(111)
Q = ax.quiver(X, Y, U, V)
ax.set_ylim(0, 8)
qk = ax.quiverkey(Q, 0.5, 0.95, 5, r'5 m s$^{-1}$', labelpos='W')
```

## `pcolor/pcolormesh(X, Y, C, cmap=colormap, vmin=data_min, vmax=data_max)`

`pcolor` and `pcolormesh` are very similar and plot arrays.
`X` and `Y` are coordinate locations of the data array `C`. The `cmap` keyword argument will take in a colormap instance (some strings are allowed here too), and `vmin`, `vmax` set the minimum and maximum values represented in the plot. A few notes:

* The `X` and `Y` locations associated with the `C` values are assumed to be at the corners of the block of information represented by an element in `C`; so `X` and `Y` should have one more element in both the x and y directions.
    - if `X` and `Y` are given with the same shape as `C`, the final row and column in `C` will be ignored.
* `pcolormesh` is an alternative to `pcolor` that is much faster due to the differences in their basic setup — basically we can just always use `pcolormesh` and not worry about the differences.

Here we have an example of a `pcolormesh` plot. It shows the sea surface height in the northwest Gulf of Mexico at a single time, as calculated by a numerical model run in the oceanography department. Red colors represent where the sea level is above a vertical reference level, maybe mean sea level, and blue colors represent where the sea level is below the reference level. Note that the land can be seen as a dark shade of blue, and the extent of the colorbar has been controlled in the call to `pcolormesh`.

```
d = np.load('../data/model.npz')
#import cmocean.cm as cmo

fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(1, 1, 1)
mappable = ax.pcolormesh(d['lon'], d['lat'], np.nan_to_num(d['ssh'][0], nan=-9999999),
                         cmap=plt.get_cmap('RdBu_r'), vmin=-0.4, vmax=0.4, shading='auto')  # diverging red-blue colormap
cb = fig.colorbar(mappable, extend='min')
cb.set_label('Sea surface height [m]')
```

---

### *Exercise*

> Mask the sea surface height model output so that land – currently shown as dark blue – is not colored.
>
> What do `vmin` and `vmax` in the `pcolormesh` command control? Change their values and see the result. What values *should* be used in this call and why?
> What does `extend` control in the colorbar call?

---

## `contour/contourf(X, Y, Z, N or V, colors, cmap, levels)`

The setup for `contour` and `contourf` is similar: both take in structured x, y arrays with Z information to plot in two dimensions. However, while `pcolor` plots boxes of data, `contourf` groups together data with similar values for plotting. `contour` plots isolines, which have equal values along the lines (like contours on a terrain map). Also, `contour` and `contourf` assume that the input x, y locations give the location of the datapoint itself, as opposed to the edges of a box in `pcolor`.

Some useful optional inputs:

* N: the number of bins of data to use (default 10)
* V: a sequence of values at which to plot
* colors: a string or tuple of color inputs to use for the contours
* cmap: a colormap instance to use instead of the colors
* levels: where you can specify the actual levels to use in the contours

There is an option to input `vmin` and `vmax` like in `pcolor` but this typically doesn't work as intended — it's better to use `levels` instead.

Note that you can also use `contour` in conjunction with `clabel` to label in the plot the value of the contours.
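As a minimal, self-contained sketch of that labeling (the paraboloid data and figure size here are made up for illustration), `ax.clabel` writes each isoline's value directly on the lines returned by `ax.contour`:

```
import numpy as np
import matplotlib.pyplot as plt

# a simple paraboloid, so the isolines are concentric circles
x = np.linspace(-2, 2, 100)
X, Y = np.meshgrid(x, x)
Z = X**2 + Y**2

fig = plt.figure(figsize=(4, 4))
ax = fig.add_subplot(111)
cs = ax.contour(X, Y, Z, levels=[1, 2, 3])  # isolines at chosen values
labels = ax.clabel(cs, fmt='%1.0f')  # write each isoline's value on the line
```

The `fmt` string controls how the numbers are printed on the lines.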
```
fig = plt.figure(figsize=(20,8))

ax = fig.add_subplot(1, 2, 1)
ax.contour(d['lon'], d['lat'], d['ssh'][0], cmap=plt.get_cmap('RdBu_r'))  # diverging red-blue colormap
ax.set_title('contour', fontsize=20)

ax2 = fig.add_subplot(1, 2, 2)
mappable = ax2.contourf(d['lon'], d['lat'], d['ssh'][0], cmap=plt.get_cmap('RdBu_r'))
ax2.set_yticks([])
ax2.set_title('contourf', fontsize=20)

# Make room for the colorbar on the right side, and scoot the subplots closer together
fig.subplots_adjust(right=0.88, wspace=0.05)

# Add an axes for the colorbar so that spacing looks right
cax = fig.add_axes([0.9, 0.15, 0.02, 0.7])
cb = fig.colorbar(mappable, extend='min', cax=cax)
cb.set_label('Sea surface height [m]')
```

---

### *Exercise*

> What happens as you increase the number of contours used in the `contourf` plot? Also, try inputting different sequences of contour values to plot (the `V` keyword argument).
>
> Can you layer `contour` lines over `contourf` plots?
>
> Make the tick labels larger.
>
> Note that the min and max values of the colorbar are not equal, which is skewing the colors in the colorbar so that negative values only appear as white. How can you fix this so that the data is being properly presented to the viewer? What is the proper way to present it?

---

## colorbar

When you plot multiple subplots that have the same colormap, you need to individually set them up to properly show the range of colors in the colorbar.

- For `pcolor/pcolormesh` this means setting the `vmin/vmax` for each subplot to the same values.
- For `contourf` this means setting the `levels` keyword to be the same for all subplots.

Then you need to give the colorbar call a `mappable` instance to know what range of colors to provide. You can get a `mappable` instance by setting the call to `pcolor/pcolormesh/contourf` to a variable.
```
Z = np.random.rand(5, 10)

fig = plt.figure(figsize=(10,8))

ax1 = fig.add_subplot(2, 3, 1)
ax1.pcolormesh(Z, cmap='viridis')
ax1.set_ylabel('This row is incorrect', fontsize=16)

ax2 = fig.add_subplot(2, 3, 2)
ax2.pcolormesh(Z*2, cmap='viridis')

ax3 = fig.add_subplot(2, 3, 3)
mappable = ax3.pcolormesh(Z*5, cmap='viridis')  # we choose some pcolormesh call to set the mappable variable
fig.colorbar(mappable)

# for this row of plots, we will set the max and min data values properly
# dmin = Z.min()  # min over the three plots
# dmax = (Z*5).max()  # max over the three plots
dmin = 0
dmax = 5

ax4 = fig.add_subplot(2, 3, 4)
ax4.pcolormesh(Z, cmap='viridis', vmin=dmin, vmax=dmax)
ax4.set_ylabel('This row is correct', fontsize=16)

ax5 = fig.add_subplot(2, 3, 5)
mappable = ax5.pcolormesh(Z*2, cmap='viridis', vmin=dmin, vmax=dmax)

ax6 = fig.add_subplot(2, 3, 6)
ax6.pcolormesh(Z*5, cmap='viridis', vmin=dmin, vmax=dmax)
fig.colorbar(mappable)
```

***Why are these two rows different, even though the same things are being plotted?***

## Colormaps

You may have used a rainbow-based colormap in your work, or seen other people use it. There are many online tirades against jet, some of which are linked to [here](https://matplotlib.org/cmocean/#why-jet-is-a-bad-colormap-and-how-to-choose-better). Here is a [presentation](https://www.dropbox.com/s/yu9pe54z77zlirp/Fall_AGU.key?dl=0) about colormaps.

### Good colormaps to use:

#### Use for sequential data (no 0 or critical value):

Both of these sets are ok, though the first set has better perceptual properties.
![](http://matplotlib.org/_images/colormaps_reference_00.png)
![](http://matplotlib.org/_images/colormaps_reference_01.png)

#### Use for diverging data (data that diverges away from a critical value, often 0):

![](http://matplotlib.org/_images/colormaps_reference_03.png)

http://matplotlib.org/examples/color/colormaps_reference.html

Reference: http://matplotlib.org/users/colormaps.html

#### `cmocean`

There is a set of colormaps available through [`cmocean`](http://matplotlib.org/cmocean/) and a [paper](http://tos.org/oceanography/assets/docs/29-3_thyng.pdf) about how to choose good colormaps.

I'll expect you to make good choices for your colormaps — this means choosing a sequential colormap for sequential data, and diverging for diverging data. Since we know from science that the `jet` rainbow colormap is generally not a good way to represent your data, choose something else to use.

![](http://matplotlib.org/cmocean/_images/index-1.png)

# E. Advanced

## Define your own axes

### Overlaid axes

![](http://matplotlib.org/_images/demo_axes_hbox_divider.png)

http://matplotlib.org/examples/axes_grid/demo_axes_hbox_divider.html

### Complementary axes

![](http://matplotlib.org/_images/scatter_hist1.png)

http://matplotlib.org/examples/pylab_examples/scatter_hist.html

## Complex tiling: `axes_grid1`

![Example 1](http://matplotlib.org/_images/demo_axes_rgb_00.png)

http://matplotlib.org/examples/axes_grid/demo_axes_rgb.html

![Example 2](http://matplotlib.org/_images/simple_axesgrid2.png)

http://matplotlib.org/examples/axes_grid/simple_axesgrid2.html

## Magnification: `inset_axes`

![Example of inset_axes](http://matplotlib.org/_images/inset_locator_demo2.png)

http://matplotlib.org/examples/axes_grid/inset_locator_demo2.html

# F. Other plotting packages

Note that there are many other plotting packages to explore. Here is a brief list:

* 3D (in matplotlib)
* Bokeh
* mpld3
* mayavi

# Bonus! Now with Nobel Prize!
Matplotlib, good colormaps, and strong design principles were used in a 2015 LIGO paper! ![LIGO paper](https://pbs.twimg.com/media/Ca8jlVIWcAUmeP8.png:large)
# Exercise 6

## Import packages

```
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
```

## Task 1 (3 points)
Implement the training loop for one training epoch. An epoch trains on the whole training dataset once.

```
def train(model, use_cuda, train_loader, optimizer, epoch, log_interval):
    """ Train one epoch
    model -- the neural network
    use_cuda -- true if GPU should be used
    train_loader -- data loader
    optimizer -- network optimizer
    epoch -- number of current epoch
    log_interval -- number of training steps between logs
    """
    # TODO: set the model to train mode
    model.train()

    # TODO: enumerate over the dataloader to get mini batches
    # of images and ground truth labels
    # HINT: the builtin python function enumerate() also gives you indices
    for i, (inputs, labels) in enumerate(train_loader):
        # move the batch to the GPU if requested (otherwise use_cuda would go unused)
        if use_cuda:
            inputs = inputs.cuda()
            labels = labels.cuda()

        # TODO: set the optimizer's gradients to zero
        optimizer.zero_grad()

        # TODO: run the network
        outputs = model(inputs)

        # TODO: compute negative log likelihood loss
        loss = F.nll_loss(outputs, labels)

        # TODO: do backpropagation
        loss.backward()

        # TODO: optimize
        optimizer.step()

        # TODO: print current loss for every nth ("log_interval"th) iteration
        if i % log_interval == 0:
            print("Loss for epoch {} in iteration {}: {}".format(epoch, i, loss))
```

We already implemented the validation function for you (this is essentially validate() from the last exercise)

```
def validate(model, use_cuda, test_loader):
    """ Compute test metrics
    model -- the neural network
    use_cuda -- true if GPU should be used
    test_loader -- data loader
    """
    # create a 10x10 grid of subplots
    _, axis = plt.subplots(10, 10)

    # set model to evaluation mode
    model.eval()

    test_loss = 0
    correct = 0
    plotted = 0

    # disable gradients globally
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(test_loader):
            # for each batch
            if use_cuda:
                # transfer to GPU
                data = data.cuda()
                target = target.cuda()

            # run network and compute metrics
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            img_correct = pred.eq(target.view_as(pred))
            correct += pred.eq(target.view_as(pred)).sum().item()

            # plot the first 100 images
            img_idx = 0
            data = data.cpu().numpy()
            while plotted < 100 and img_idx < data.shape[0]:
                # compute position of ith image in the grid
                y = plotted % 10
                x = plotted // 10

                # convert image tensor to numpy array and normalize to [0, 1]
                img = data[img_idx, 0]
                img = (img - np.min(img)) / (np.max(img) - np.min(img))

                # make wrongly predicted images red
                img = np.stack([img] * 3, 2)
                if img_correct[img_idx] == 0:
                    img[:, :, 1:] = 0.0

                # disable axis and show image
                axis[y][x].axis('off')
                axis[y][x].imshow(img)

                # show the predicted class next to each image
                axis[y][x].text(30, 25, pred[img_idx].item())

                plotted += 1
                img_idx += 1

    test_loss /= len(test_loader.dataset)

    # show results
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
    plt.show()
```

## Task 2 (4 points)
Implement a five-layer fully connected neural network. The dimensions (without batch size) should change like this:
784->200->100->60->30->10

Use log softmax to compute the class predictions. Run the code at the end of the notebook to train and validate your implementation.

### Task 2.1
* sigmoid non-linear activation function
* note that the last layer does not need an activation function!

### Task 2.2
* add a new class "FCNet2"
* replace sigmoid with ReLU

### Task 2.3
* add a new class "FCNet3"
* add batch normalization to the first and third layers (note the difference between 1D/2D/3D versions)

**NOTE:** The performance should improve slightly with each step.
However, due to the random weight initialization applied by PyTorch, your results may vary a bit between trainings. ``` class FCNet1(nn.Module): """ Fully Connected Neural Network Five fully connected layers with sigmoid non-linearity Dimensions 784->200->100->60->30->10 """ def __init__(self): super(FCNet1, self).__init__() # TODO: initialize network layers # HINT: take a look at "torch.nn" (imported as "nn") self.fc1 = nn.Linear(784, 200) self.fc2 = nn.Linear(200, 100) self.fc3 = nn.Linear(100, 60) self.fc4 = nn.Linear(60, 30) self.fc5 = nn.Linear(30, 10) def forward(self, x): # TODO: reshape batch of images to batch of 1D vectors x = torch.flatten(x, 1) # TODO: run network layers x = self.fc1(x) x = torch.sigmoid(x) x = self.fc2(x) x = torch.sigmoid(x) x = self.fc3(x) x = torch.sigmoid(x) x = self.fc4(x) x = torch.sigmoid(x) x = self.fc5(x) # TODO: compute log softmax over the output # HINT: take a look at "torch.nn.functional" (imported as "F") output = F.log_softmax(x, dim=1) return output class FCNet2(nn.Module): def __init__(self): super(FCNet2, self).__init__() # TODO: initialize network layers # HINT: take a look at "torch.nn" (imported as "nn") self.fc1 = nn.Linear(784, 200) self.fc2 = nn.Linear(200, 100) self.fc3 = nn.Linear(100, 60) self.fc4 = nn.Linear(60, 30) self.fc5 = nn.Linear(30, 10) def forward(self, x): # TODO: reshape batch of images to batch of 1D vectors x = torch.flatten(x, 1) # TODO: run network layers x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.relu(x) x = self.fc3(x) x = F.relu(x) x = self.fc4(x) x = F.relu(x) x = self.fc5(x) # TODO: compute log softmax over the output # HINT: take a look at "torch.nn.functional" (imported as "F") output = F.log_softmax(x, dim=1) return output class FCNet3(nn.Module): def __init__(self): super(FCNet3, self).__init__() # TODO: initialize network layers # HINT: take a look at "torch.nn" (imported as "nn") self.fc1 = nn.Linear(784, 200) self.fc2 = nn.Linear(200, 100) self.fc3 = nn.Linear(100, 60) 
        self.fc4 = nn.Linear(60, 30)
        self.fc5 = nn.Linear(30, 10)
        self.bn1 = nn.BatchNorm1d(784)
        self.bn2 = nn.BatchNorm1d(100)

    def forward(self, x):
        # TODO: reshape batch of images to batch of 1D vectors
        x = torch.flatten(x, 1)

        # TODO: run network layers
        x = self.bn1(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.bn2(x)
        x = self.fc3(x)
        x = F.relu(x)
        x = self.fc4(x)
        x = F.relu(x)
        x = self.fc5(x)

        # TODO: compute log softmax over the output
        # HINT: take a look at "torch.nn.functional" (imported as "F")
        output = F.log_softmax(x, dim=1)
        return output
```

## Task 3 (3 points)
Implement a convolutional neural network, consisting of two convolutional and two fully connected layers. This time, the dimensions (without batch size) should change like this:
1x28x28->32x26x26->64x12x12->128->10

### Task 3.1
* two convolutional layers (kernel size 3)
* two fully-connected layers
* ReLU activation function

### Task 3.2
* add batch normalization to first convolutional and first fully connected layer

### Task 3.3
* use max pooling instead of stride to reduce the dimensions to 64x12x12

```
class ConvNet1(nn.Module):
    """ Convolutional Neural Network
    Two convolutional layers and two fully connected layers
    Dimensions: 1x28x28->32x26x26->64x12x12->128->10
    """
    def __init__(self):
        super(ConvNet1, self).__init__()
        # TODO: initialize network layers
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 64, 3, 2)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        # TODO: run convolutional layers
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)

        # TODO: reshape batch of images to batch of 1D vectors
        x = torch.flatten(x, 1)

        # TODO: run fully connected layers
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        # no activation after the last layer: it feeds log_softmax directly

        # TODO: compute log softmax over the output
        output = F.log_softmax(x, dim=1)
        return output


class ConvNet2(nn.Module):
    def __init__(self):
        super(ConvNet2, self).__init__()
        # TODO: initialize network layers
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 64, 3, 2)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)
        self.bn1 = nn.BatchNorm2d(1)
        self.bn2 = nn.BatchNorm1d(9216)

    def forward(self, x):
        # TODO: run convolutional layers
        x = self.bn1(x)
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)

        # TODO: reshape batch of images to batch of 1D vectors
        x = torch.flatten(x, 1)

        # TODO: run fully connected layers
        x = self.bn2(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        # no activation after the last layer: it feeds log_softmax directly

        # TODO: compute log softmax over the output
        output = F.log_softmax(x, dim=1)
        return output


class ConvNet3(nn.Module):
    def __init__(self):
        super(ConvNet3, self).__init__()
        # TODO: initialize network layers
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)
        self.bn1 = nn.BatchNorm2d(1)
        self.bn2 = nn.BatchNorm1d(9216)
        self.pool = nn.MaxPool2d((2, 2))

    def forward(self, x):
        # TODO: run convolutional layers
        x = self.bn1(x)
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.pool(x)

        # TODO: reshape batch of images to batch of 1D vectors
        x = torch.flatten(x, 1)

        # TODO: run fully connected layers
        x = self.bn2(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        # no activation after the last layer: it feeds log_softmax directly

        # TODO: compute log softmax over the output
        output = F.log_softmax(x, dim=1)
        return output


# hyper parameters
batch_size = 64
test_batch_size = 1000
epochs = 10
lr = 1.0
gamma = 0.7
log_interval = 100

# use GPU if available
use_cuda = torch.cuda.is_available()
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}

# initialize data loaders
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True, **kwargs)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=True, **kwargs) model = FCNet2() if use_cuda: model = model.cuda() # initialize optimizer and scheduler optimizer = optim.Adadelta(model.parameters(), lr=lr) scheduler = StepLR(optimizer, step_size=1, gamma=gamma) for epoch in range(1, epochs + 1): # train one epoch train(model, use_cuda, train_loader, optimizer, epoch, log_interval) # run on test dataset validate(model, use_cuda, test_loader) scheduler.step() torch.save(model.state_dict(), "models/mnist/checkpoint.pt") ```
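As a quick sanity check on the dimensions stated in Task 3 (1x28x28 -> 32x26x26 -> 64x12x12 -> 9216), here is a small hypothetical helper (not part of the exercise) that reproduces the arithmetic behind the layer sizes:

```python
# Output spatial size of a square convolution, per the usual formula
# out = (in + 2*padding - kernel) // stride + 1.

def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv layer on a square input."""
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(28, 3)            # conv1: kernel 3, stride 1 -> 26
s = conv_out(s, 3, stride=2)   # conv2: kernel 3, stride 2 -> 12
print(s, 64 * s * s)           # 12 9216  (matches nn.Linear(9216, 128))
```

The same helper explains ConvNet3: a stride-1 conv gives 24x24, and the 2x2 max pool halves it back to 12x12, so the flattened size stays 9216.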
# Converting the Parquet data format to recordIO-wrapped protobuf

---

---

## Contents

1. [Introduction](#Introduction)
1. [Optional data ingestion](#Optional-data-ingestion)
    1. [Download the data](#Download-the-data)
    1. [Convert into Parquet format](#Convert-into-Parquet-format)
1. [Data conversion](#Data-conversion)
    1. [Convert to recordIO protobuf format](#Convert-to-recordIO-protobuf-format)
    1. [Upload to S3](#Upload-to-S3)
1. [Training the linear model](#Training-the-linear-model)

## Introduction

In this notebook we illustrate how to convert a Parquet data format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm.

```
import os
import io
import re
import boto3
import pandas as pd
import numpy as np
import time
from sagemaker import get_execution_role

role = get_execution_role()
bucket = '<S3 bucket>'
prefix = 'sagemaker/DEMO-parquet'

!conda install -y -c conda-forge fastparquet scikit-learn
```

## Optional data ingestion

### Download the data

```
%%time
import pickle, gzip, numpy, urllib.request, json

# Load the dataset
urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz")
with gzip.open('mnist.pkl.gz', 'rb') as f:
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')

from fastparquet import write
from fastparquet import ParquetFile

def save_as_parquet_file(dataset, filename, label_col):
    X = dataset[0]
    y = dataset[1]
    data = pd.DataFrame(X)
    data[label_col] = y
    data.columns = data.columns.astype(str)  # Parquet expects the column names to be strings
    write(filename, data)

def read_parquet_file(filename):
    pf = ParquetFile(filename)
    return pf.to_pandas()

def features_and_target(df, label_col):
    X = df.loc[:, df.columns != label_col].values
    y = 
df[label_col].values return [X, y] ``` ### Convert into Parquet format ``` trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ``` ## Data conversion Since algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. ``` dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ``` ### Convert to recordIO protobuf format ``` import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ``` ### Upload to S3 ``` import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: 
{}'.format(s3_train_data))

boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid)
s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key)
print('uploaded validation data location: {}'.format(s3_validation_data))
```

## Training the linear model

Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.

This example takes four to six minutes to complete. The majority of the time is spent provisioning hardware and loading the algorithm container, since the dataset is small.

First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).
``` containers = {'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest', 'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest', 'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest', 'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest'} linear_job = 'DEMO-linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": containers[boto3.Session().region_name], "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ``` Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. 
``` %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ```
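The text above promises a while loop, while the cell actually blocks on boto3's `training_job_completed_or_stopped` waiter. A hand-rolled equivalent of that loop, sketched with a stand-in `get_status` callable so it can run without an AWS connection (in the notebook, `get_status` would wrap `sm.describe_training_job(...)['TrainingJobStatus']`):

```python
# Generic polling loop: call get_status() until the job reaches a terminal
# state, sleeping between polls. This mirrors what the boto3 waiter does.
import time

TERMINAL = {'Completed', 'Failed', 'Stopped'}

def wait_for_job(get_status, poll_seconds=30.0):
    """Poll until the training job reaches a terminal status; return it."""
    while True:
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)

# Simulated run: the job reports InProgress twice, then Completed.
statuses = iter(['InProgress', 'InProgress', 'Completed'])
print(wait_for_job(lambda: next(statuses), poll_seconds=0))  # Completed
```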
```
# trying some concepts in pandas

# this stores data only for one person
person = {
    "first": "Andy",
    "last": "Zed",
    "email": "andyz@gmail.com"
}

# in order to represent data for multiple people
# just make the values in the dictionary a list
people = {
    "first": ["Andy"],
    "last": ["Zed"],
    "email": ["andyz@gmail.com"]
}

# so we add some values, and the second value is the second person
people = {
    "first": ["Andy", 'Jane', 'John'],
    "last": ["Zed", 'Doe', 'Doe'],
    "email": ["andyz@gmail.com", 'JaneDoe@email.com', 'JohnDoe@email.com']
}

# let's try to access email, that is straightforward
people['email']

# for further functionality let's use dataframes and pandas
import pandas as pd

# construct the dataframe
df = pd.DataFrame(people)
df

# to access a single value from the dataframe's COLUMNS we use the same principle as a dictionary
# accessing a single column
df['first']

# let's check the type
type(df['first'])  # pandas.core.series.Series
# a Series can be thought of as a single column of data,
# while a DataFrame holds multiple rows and columns;
# in other words, a DataFrame is a container of multiple of these Series

# accessing multiple columns
# just pass a list containing the columns
df[['email', 'first']]

# if we have a lot of columns, they can be seen easily with
df.columns

# to get the rows we use loc and iloc
# iloc allows us to access rows by integer location
# the first row, which is a single row
df.iloc[0]

# to access multiple rows, pass a list of the rows
df.iloc[[0, 1]]

# to get columns for certain rows we can pass another index as a second argument to iloc
# for example 'email' is the third column, which has an index of 2
# iloc always takes integers, so pass an integer
df.iloc[[0, 1], 2]

# to access a row by label we use loc
# for this dataframe the labels are the indexes, but they can be changed to strings
# and columns can be accessed by their names
df.loc[[2, 0], ['email', 'last']]

df

# what if we want to set another identifier for the rows other than the index given by default?
# to set the email as an index to the dataframe
df.set_index('email')

# but the dataframe is not changed
df

# to change it the way we want, that is to make email an index
# doing this might be useful as it is a unique value to search for other info
# like using it in loc[] as a label
df.set_index('email', inplace=True)
df
# now it is changed, and it can be reset to the original after manipulation

# let's check the index
df.index
df.columns  # no email, as it is now an index

df.loc['andyz@gmail.com']

# to reset the dataframe
df.reset_index(inplace=True)
df  # back to original

# let's apply some filters on the dataframe
# get everyone whose last name is Doe
# first create a filter
# then pass it to the dataframe
filt = (df['last'] == 'Doe')
df[filt]

# but it is better to use the loc[] function
# same output, but we can pass another argument to it, like 'email'
df.loc[filt, 'email']

# to use and and or operators here we have to use the signs instead
# & and |
filt = (df['last'] == 'Doe') & (df['first'] == 'John')
df.loc[filt, 'email']

filt = (df['last'] == 'Doe') | (df['first'] == 'John')
df.loc[filt, 'email']

# to negate the filter
# means everything except the filter
df.loc[~filt, 'email']

df
df.columns

# let's alter some columns
# to change the names of the columns
df.columns = ['email', 'first_name', 'last_name']
df

# to make all the names of the columns uppercase
# use a list comprehension
df.columns = [i.upper() for i in df.columns]
df

# to replace some signs in the names
# like replacing the underscores with spaces
df.columns = df.columns.str.replace('_', ' ')
df

# how to change specific columns
# use a dictionary and the rename method
df.rename(columns={'EMAIL': 'email', 'FIRST NAME': 'first', 'LAST NAME': 'last'}, inplace=True)
df

# let's change values of the rows
df.loc[0, ['email', 'first']] = ['bra@email.com', 'brah']
df

# to change the case of the values
df['first'].str.upper()
df  # but the data is not changed

# to make the changes
df['first'] = df['first'].str.lower()
df

# how apply() works on a Series
# it will take a function for a single column
df['email'].apply(len)

# we can also pass our own functions to the apply method
def revr(last):
    new = ''
    for letter in last:
        new = letter + new
    return new

# don't call the function, just pass it
df['last'].apply(revr)

# to make changes to the dataframe
# df['last'] = df['last'].apply(revr)
df

# using apply() on dataframes
# apply() on a dataframe applies a function to each Series (column),
# but apply() on a Series applies a function to each of the values in the Series
df.apply(len)  # returns how many elements are in each Series (column)

# to count along the rows
df.apply(len, axis='columns')

# for one column
len(df['email'])

# to find the min of each column across the whole dataframe
# as this is not numerical, it will give us the alphabetically first value in each column
# df.apply(pd.Series.min)
df

# applymap() only works on dataframes
# it applies a function to every individual value in the dataframe
# Series objects have no applymap() method
df.applymap(len)

# make all the values lowercase
df.applymap(str.lower)

# the map() method is used on Series objects
# used to substitute an element with another value
df['first'].map({'brah': 'trevor', 'jane': 'fluffy'})
df  # but the dataframe is not changed

# to do that
df['first'] = df['first'].map({'brah': 'trevor', 'jane': 'fluffy'})
df
```
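One pandas subtlety worth knowing about the `map()` calls above: values missing from the dict become NaN, they are not passed through unchanged (so `'john'` disappears after the final assignment). A pure-Python sketch of that behaviour, with `None` standing in for NaN:

```python
# Mimic pandas Series.map(dict): every value is looked up in the mapping,
# and values without an entry come back as None (pandas would use NaN).

def series_map(values, mapping):
    """Sketch of Series.map with a dict argument."""
    return [mapping.get(v) for v in values]

print(series_map(['brah', 'jane', 'john'], {'brah': 'trevor', 'jane': 'fluffy'}))
# ['trevor', 'fluffy', None]
```

If you want unmatched values to pass through unchanged, pandas provides `Series.replace` for that instead.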
# Test Speeds from Rutgers Server vs Azure Blob

#### Build a list of frames to process using the dbcamhd.json database

```
import numpy as np
import pandas as pd
import pycamhd as camhd
dbcamhd = pd.read_json('dbcamhd.json', orient='records', lines=True)
dbcamhd.tail()

fileindex = 2064
filename = dbcamhd.filename[fileindex]
timestamp = dbcamhd.timestamp[fileindex]
frame_count = dbcamhd.frame_count[fileindex]
n_images = 4000
frame_numbers = np.linspace(750, frame_count-6000, n_images, dtype=np.int64)
filename
```

#### Create timestamps for frames

```
from datetime import datetime
timestamps = []
for i in range(len(frame_numbers)):
    timestamps.append(datetime.fromtimestamp(dbcamhd.timestamp[fileindex] + frame_numbers[i]/29.95))
timestamps[0:5]
```

#### Set up Rutgers Dask array and Xarray

```
from dask import delayed
import dask.array as da
import xarray as xr

delayed_frames = []
moov_atom = camhd.get_moov_atom(filename)
for frame_number in frame_numbers:
    delayed_frames.append(da.from_delayed(
        delayed(camhd.get_frame)(filename, frame_number, 'rgb24', moov_atom),
        shape=(1080, 1920, 3), dtype=np.uint8)[None, :, :, :])
delayed_frames[0]

ds_rutgers = xr.DataArray(da.concatenate(delayed_frames, axis=0),
                          dims=['time', 'y', 'x', 'channel'],
                          coords={'time': timestamps}
                         ).to_dataset(name='video')
ds_rutgers
```

#### Start a Dask cluster

```
from dask_kubernetes import KubeCluster
cluster = KubeCluster(n_workers=32)
cluster

from dask.distributed import Client
client = Client(cluster)
client
```

#### Compute the time-average of all images using Rutgers server and plot

```
%%time
mean_image = ds_rutgers.video.mean(dim='time').load()
mean_image.astype('i8').plot.imshow();
```

#### Create a list of Azure blobs to process

```
blob_urls = []
for frame_number in frame_numbers:
    blob_urls.append('https://camhd.blob.core.windows.net/prores/%i-%08.0f' % (timestamp, frame_number))
blob_urls[0]
```

#### Get frame from Azure function

```
import requests
def azure_get_frame(blob_url):
    blob = 
requests.get(blob_url) return camhd.decode_frame_data(blob.content, 'rgb24') test = azure_get_frame(blob_urls[0]) type(test) %matplotlib inline import matplotlib.pyplot as plt plt.imshow(test) ``` #### Set up Azure Dask array ``` delayed_frames = [] for blob_url in blob_urls: delayed_frames.append(da.from_delayed( delayed(azure_get_frame)(blob_url), shape=(1080, 1920, 3), dtype=np.uint8)[None,:,:,:]) delayed_frames[0] ds_azure = xr.DataArray(da.concatenate(delayed_frames, axis=0), dims=['time', 'y', 'x', 'channel'], coords={'time': timestamps} ).to_dataset(name='video') ds_azure ``` #### Compute the time-average of all images using Azure blob and plot ``` %%time mean_image = ds_azure.video.mean(dim='time').load() mean_image.astype('i8').plot.imshow(); ```
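Conceptually, the time-average computed above is just a running mean over frames; Dask evaluates it chunk-by-chunk across the workers. A pure-Python sketch of the same reduction, with small lists standing in for the 1080x1920x3 frame arrays:

```python
# Streaming mean over a sequence of equal-length "frames": accumulate a
# running sum one frame at a time, then divide once at the end. This is the
# reduction that ds.video.mean(dim='time') expresses lazily.

def streaming_mean(frames):
    """Mean of equal-length sequences, computed one frame at a time."""
    total = None
    count = 0
    for frame in frames:
        if total is None:
            total = [0.0] * len(frame)
        total = [t + x for t, x in zip(total, frame)]
        count += 1
    return [t / count for t in total]

print(streaming_mean([[1, 2], [3, 4], [5, 6]]))  # [3.0, 4.0]
```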
# The Autodiff Cookbook *alexbw@, mattjj@* JAX has a pretty general automatic differentiation system. In this notebook, we'll go through a whole bunch of neat autodiff ideas that you can cherry pick for your own work, starting with the basics. ``` import jax.numpy as np from jax import grad, jit, vmap from jax import random key = random.PRNGKey(0) ``` ## Gradients ### Starting with `grad` You can differentiate a function with `grad`: ``` grad_tanh = grad(np.tanh) print(grad_tanh(2.0)) ``` `grad` takes a function and returns a function. If you have a Python function `f` that evaluates the mathematical function $f$, then `grad(f)` is a Python function that evaluates the mathematical function $\nabla f$. That means `grad(f)(x)` represents the value $\nabla f(x)$. Since `grad` operates on functions, you can apply it to its own output to differentiate as many times as you like: ``` print(grad(grad(np.tanh))(2.0)) print(grad(grad(grad(np.tanh)))(2.0)) ``` Let's look at computing gradients with `grad` in a linear logistic regression model. First, the setup: ``` def sigmoid(x): return 0.5 * (np.tanh(x / 2) + 1) # Outputs probability of a label being true. def predict(W, b, inputs): return sigmoid(np.dot(inputs, W) + b) # Build a toy dataset. inputs = np.array([[0.52, 1.12, 0.77], [0.88, -1.08, 0.15], [0.52, 0.06, -1.30], [0.74, -2.49, 1.39]]) targets = np.array([True, True, False, True]) # Training loss is the negative log-likelihood of the training examples. def loss(W, b): preds = predict(W, b, inputs) label_probs = preds * targets + (1 - preds) * (1 - targets) return -np.sum(np.log(label_probs)) # Initialize random model coefficients key, W_key, b_key = random.split(key, 3) W = random.normal(W_key, (3,)) b = random.normal(b_key, ()) ``` Use the `grad` function with its `argnums` argument to differentiate a function with respect to positional arguments. 
``` # Differentiate `loss` with respect to the first positional argument: W_grad = grad(loss, argnums=0)(W, b) print('W_grad', W_grad) # Since argnums=0 is the default, this does the same thing: W_grad = grad(loss)(W, b) print('W_grad', W_grad) # But we can choose different values too, and drop the keyword: b_grad = grad(loss, 1)(W, b) print('b_grad', b_grad) # Including tuple values W_grad, b_grad = grad(loss, (0, 1))(W, b) print('W_grad', W_grad) print('b_grad', b_grad) ``` This `grad` API has a direct correspondence to the excellent notation in Spivak's classic *Calculus on Manifolds* (1965), also used in Sussman and Wisdom's [*Structure and Interpretation of Classical Mechanics*](http://mitpress.mit.edu/sites/default/files/titles/content/sicm_edition_2/book.html) (2015) and their [*Functional Differential Geometry*](https://mitpress.mit.edu/books/functional-differential-geometry) (2013). Both books are open-access. See in particular the "Prologue" section of *Functional Differential Geometry* for a defense of this notation. Essentially, when using the `argnums` argument, if `f` is a Python function for evaluating the mathematical function $f$, then the Python expression `grad(f, i)` evaluates to a Python function for evaluating $\partial_i f$. ### Differentiating with respect to nested lists, tuples, and dicts Differentiating with respect to standard Python containers just works, so use tuples, lists, and dicts (and arbitrary nesting) however you like. ``` def loss2(params_dict): preds = predict(params_dict['W'], params_dict['b'], inputs) label_probs = preds * targets + (1 - preds) * (1 - targets) return -np.sum(np.log(label_probs)) print(grad(loss2)({'W': W, 'b': b})) ``` You can [register your own container types](https://github.com/google/jax/issues/446#issuecomment-467105048) to work with not just `grad` but all the JAX transformations (`jit`, `vmap`, etc.). 
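As an aside (not part of the cookbook), the bias gradient that `grad(loss, 1)` computes for the logistic-regression loss above has a well-known closed form: for a single example it is simply `p - y`. A stdlib-only sketch, where `sigmoid` and `nll` are local stand-ins for the `predict`/`loss` defined earlier, verified against a centered finite difference:

```python
import math

def sigmoid(z):
    return 0.5 * (math.tanh(z / 2) + 1)

def nll(w, b, x, y):
    """Negative log-likelihood of one (x, y) example under logistic regression."""
    p = sigmoid(w * x + b)
    return -math.log(p if y else 1 - p)

w, b, x, y = 0.7, -0.2, 1.3, 1
analytic = sigmoid(w * x + b) - y          # closed-form dL/db = p - y
eps = 1e-6
numeric = (nll(w, b + eps, x, y) - nll(w, b - eps, x, y)) / (2 * eps)
print(abs(analytic - numeric) < 1e-6)  # True
```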
### Evaluate a function and its gradient using `value_and_grad` Another convenient function is `value_and_grad` for efficiently computing both a function's value as well as its gradient's value: ``` from jax import value_and_grad loss_value, Wb_grad = value_and_grad(loss, (0, 1))(W, b) print('loss value', loss_value) print('loss value', loss(W, b)) ``` ### Checking against numerical differences A great thing about derivatives is that they're straightforward to check with finite differences: ``` # Set a step size for finite differences calculations eps = 1e-4 # Check b_grad with scalar finite differences b_grad_numerical = (loss(W, b + eps / 2.) - loss(W, b - eps / 2.)) / eps print('b_grad_numerical', b_grad_numerical) print('b_grad_autodiff', grad(loss, 1)(W, b)) # Check W_grad with finite differences in a random direction key, subkey = random.split(key) vec = random.normal(subkey, W.shape) unitvec = vec / np.sqrt(np.vdot(vec, vec)) W_grad_numerical = (loss(W + eps / 2. * unitvec, b) - loss(W - eps / 2. * unitvec, b)) / eps print('W_dirderiv_numerical', W_grad_numerical) print('W_dirderiv_autodiff', np.vdot(grad(loss)(W, b), unitvec)) ``` JAX provides a simple convenience function that does essentially the same thing, but checks up to any order of differentiation that you like: ``` from jax.test_util import check_grads check_grads(loss, (W, b), order=2) # check up to 2nd order derivatives ``` ### Hessian-vector products with `grad`-of-`grad` One thing we can do with higher-order `grad` is build a Hessian-vector product function. (Later on we'll write an even more efficient implementation that mixes both forward- and reverse-mode, but this one will use pure reverse-mode.) A Hessian-vector product function can be useful in a [truncated Newton Conjugate-Gradient algorithm](https://en.wikipedia.org/wiki/Truncated_Newton_method) for minimizing smooth convex functions, or for studying the curvature of neural network training objectives (e.g. 
[1](https://arxiv.org/abs/1406.2572), [2](https://arxiv.org/abs/1811.07062), [3](https://arxiv.org/abs/1706.04454), [4](https://arxiv.org/abs/1802.03451)).

For a scalar-valued function $f : \mathbb{R}^n \to \mathbb{R}$, the Hessian at a point $x \in \mathbb{R}^n$ is written as $\partial^2 f(x)$. A Hessian-vector product function is then able to evaluate

$\qquad v \mapsto \partial^2 f(x) \cdot v$

for any $v \in \mathbb{R}^n$.

The trick is not to instantiate the full Hessian matrix: if $n$ is large, perhaps in the millions or billions in the context of neural networks, then that might be impossible to store.

Luckily, `grad` already gives us a way to write an efficient Hessian-vector product function. We just have to use the identity

$\qquad \partial^2 f (x) v = \partial [x \mapsto \partial f(x) \cdot v] = \partial g(x)$,

where $g(x) = \partial f(x) \cdot v$ is a new scalar-valued function that dots the gradient of $f$ at $x$ with the vector $v$. Notice that we're only ever differentiating scalar-valued functions of vector-valued arguments, which is exactly where we know `grad` is efficient.

In JAX code, we can just write this:

```
def hvp(f, x, v):
    return grad(lambda x: np.vdot(grad(f)(x), v))(x)
```

This example shows that you can freely use lexical closure, and JAX will never get perturbed or confused.

We'll check this implementation a few cells down, once we see how to compute dense Hessian matrices. We'll also write an even better version that uses both forward-mode and reverse-mode.
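As a quick sanity check (an addition, not from the original notebook), we can evaluate a Hessian-vector product on a function whose Hessian we know in closed form: for the quadratic $f(x) = \frac{1}{2} x^\mathsf{T} A x$ with $A$ symmetric, the Hessian is exactly $A$, so the result should match `A @ v`:

```python
import jax.numpy as np
from jax import grad, random

def hvp_check(f, x, v):
    # Reverse-over-reverse HVP; the trailing (x) evaluates the gradient of
    # the gradient-dot-v function at the point x.
    return grad(lambda x: np.vdot(grad(f)(x), v))(x)

k0, k1, k2 = random.split(random.PRNGKey(42), 3)
B = random.normal(k0, (5, 5))
A_sym = (B + B.T) / 2.                    # symmetrize: the Hessian of fq is A_sym

fq = lambda x: 0.5 * np.vdot(x, np.dot(A_sym, x))

xq = random.normal(k1, (5,))
vq = random.normal(k2, (5,))
print(np.allclose(hvp_check(fq, xq, vq), np.dot(A_sym, vq), atol=1e-5))  # True
```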
### Jacobians and Hessians using `jacfwd` and `jacrev` You can compute full Jacobian matrices using the `jacfwd` and `jacrev` functions: ``` from jax import jacfwd, jacrev # Isolate the function from the weight matrix to the predictions f = lambda W: predict(W, b, inputs) J = jacfwd(f)(W) print("jacfwd result, with shape", J.shape) print(J) J = jacrev(f)(W) print("jacrev result, with shape", J.shape) print(J) ``` These two functions compute the same values (up to machine numerics), but differ in their implementation: `jacfwd` uses forward-mode automatic differentiation, which is more efficient for "tall" Jacobian matrices, while `jacrev` uses reverse-mode, which is more efficient for "wide" Jacobian matrices. For matrices that are near-square, `jacfwd` probably has an edge over `jacrev`. You can also use `jacfwd` and `jacrev` with container types: ``` def predict_dict(params, inputs): return predict(params['W'], params['b'], inputs) J_dict = jacrev(predict_dict)({'W': W, 'b': b}, inputs) for k, v in J_dict.items(): print("Jacobian from {} to logits is".format(k)) print(v) ``` For more details on forward- and reverse-mode, as well as how to implement `jacfwd` and `jacrev` as efficiently as possible, read on! Using a composition of two of these functions gives us a way to compute dense Hessian matrices: ``` def hessian(f): return jacfwd(jacrev(f)) H = hessian(f)(W) print("hessian, with shape", H.shape) print(H) ``` This shape makes sense: if we start with a function $f : \mathbb{R}^n \to \mathbb{R}^m$, then at a point $x \in \mathbb{R}^n$ we expect to get the shapes * $f(x) \in \mathbb{R}^m$, the value of $f$ at $x$, * $\partial f(x) \in \mathbb{R}^{m \times n}$, the Jacobian matrix at $x$, * $\partial^2 f(x) \in \mathbb{R}^{m \times n \times n}$, the Hessian at $x$, and so on. To implement `hessian`, we could have used `jacrev(jacrev(f))` or `jacrev(jacfwd(f))` or any other composition of the two. But forward-over-reverse is typically the most efficient. 
That's because in the inner Jacobian computation we're often differentiating a function with a wide Jacobian (maybe like a loss function $f : \mathbb{R}^n \to \mathbb{R}$), while in the outer Jacobian computation we're differentiating a function with a square Jacobian (since $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$), which is where forward-mode wins out.

## How it's made: two foundational autodiff functions

### Jacobian-Vector products (JVPs, aka forward-mode autodiff)

JAX includes efficient and general implementations of both forward- and reverse-mode automatic differentiation. The familiar `grad` function is built on reverse-mode, but to explain the difference in the two modes, and when each can be useful, we need a bit of math background.

#### JVPs in math

Mathematically, given a function $f : \mathbb{R}^n \to \mathbb{R}^m$, the Jacobian of $f$ evaluated at an input point $x \in \mathbb{R}^n$, denoted $\partial f(x)$, is often thought of as a matrix in $\mathbb{R}^{m \times n}$:

$\qquad \partial f(x) \in \mathbb{R}^{m \times n}$.

But we can also think of $\partial f(x)$ as a linear map, which maps the tangent space of the domain of $f$ at the point $x$ (which is just another copy of $\mathbb{R}^n$) to the tangent space of the codomain of $f$ at the point $f(x)$ (a copy of $\mathbb{R}^m$):

$\qquad \partial f(x) : \mathbb{R}^n \to \mathbb{R}^m$.

This map is called the [pushforward map](https://en.wikipedia.org/wiki/Pushforward_(differential)) of $f$ at $x$. The Jacobian matrix is just the matrix for this linear map in a standard basis.

If we don't commit to one specific input point $x$, then we can think of the function $\partial f$ as first taking an input point and returning the Jacobian linear map at that input point:

$\qquad \partial f : \mathbb{R}^n \to \mathbb{R}^n \to \mathbb{R}^m$.
In particular, we can uncurry things so that given input point $x \in \mathbb{R}^n$ and a tangent vector $v \in \mathbb{R}^n$, we get back an output tangent vector in $\mathbb{R}^m$. We call that mapping, from $(x, v)$ pairs to output tangent vectors, the *Jacobian-vector product*, and write it as $\qquad (x, v) \mapsto \partial f(x) v$ #### JVPs in JAX code Back in Python code, JAX's `jvp` function models this transformation. Given a Python function that evaluates $f$, JAX's `jvp` is a way to get a Python function for evaluating $(x, v) \mapsto (f(x), \partial f(x) v)$. ``` from jax import jvp # Isolate the function from the weight matrix to the predictions f = lambda W: predict(W, b, inputs) key, subkey = random.split(key) v = random.normal(subkey, W.shape) # Push forward the vector `v` along `f` evaluated at `W` y, u = jvp(f, (W,), (v,)) ``` In terms of Haskell-like type signatures, we could write ```haskell jvp :: (a -> b) -> a -> T a -> (b, T b) ``` where we use `T a` to denote the type of the tangent space for `a`. In words, `jvp` takes as arguments a function of type `a -> b`, a value of type `a`, and a tangent vector value of type `T a`. It gives back a pair consisting of a value of type `b` and an output tangent vector of type `T b`. The `jvp`-transformed function is evaluated much like the original function, but paired up with each primal value of type `a` it pushes along tangent values of type `T a`. For each primitive numerical operation that the original function would have applied, the `jvp`-transformed function executes a "JVP rule" for that primitive that both evaluates the primitive on the primals and applies the primitive's JVP at those primal values. That evaluation strategy has some immediate implications about computational complexity: since we evaluate JVPs as we go, we don't need to store anything for later, and so the memory cost is independent of the depth of the computation. 
In addition, the FLOP cost of the `jvp`-transformed function is about 3x the cost of just evaluating the function (one unit of work for evaluating the original function, for example `sin(x)`; one unit for linearizing, like `cos(x)`; and one unit for applying the linearized function to a vector, like `cos_x * v`). Put another way, for a fixed primal point $x$, we can evaluate $v \mapsto \partial f(x) \cdot v$ for about the same marginal cost as evaluating $f$.

That memory complexity sounds pretty compelling! So why don't we see forward-mode very often in machine learning?

To answer that, first think about how you could use a JVP to build a full Jacobian matrix. If we apply a JVP to a one-hot tangent vector, it reveals one column of the Jacobian matrix, corresponding to the nonzero entry we fed in. So we can build a full Jacobian one column at a time, and to get each column costs about the same as one function evaluation. That will be efficient for functions with "tall" Jacobians, but inefficient for "wide" Jacobians.

If you're doing gradient-based optimization in machine learning, you probably want to minimize a loss function from parameters in $\mathbb{R}^n$ to a scalar loss value in $\mathbb{R}$. That means the Jacobian of this function is a very wide matrix: $\partial f(x) \in \mathbb{R}^{1 \times n}$, which we often identify with the gradient vector $\nabla f(x) \in \mathbb{R}^n$. Building that matrix one column at a time, with each call taking a similar number of FLOPs to evaluating the original function, sure seems inefficient! In particular, for training neural networks, where $f$ is a training loss function and $n$ can be in the millions or billions, this approach just won't scale.

To do better for functions like this, we just need to use reverse-mode.
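To make the column-at-a-time picture concrete, here is a small illustration (an addition to the notebook's text) that pushes a one-hot tangent vector through `jvp` and compares it against the corresponding column of the full Jacobian from `jacfwd`:

```python
import jax.numpy as np
from jax import jvp, jacfwd

# A toy function from R^3 to R^3 with an easy-to-read Jacobian.
g_toy = lambda x: np.array([x[0] * x[1], np.sin(x[2]), x[0] + x[2]])
x_toy = np.array([1., 2., 3.])

e1 = np.array([0., 1., 0.])              # one-hot tangent selects column 1
_, col1 = jvp(g_toy, (x_toy,), (e1,))
print(np.allclose(col1, jacfwd(g_toy)(x_toy)[:, 1]))  # True
```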
### Vector-Jacobian products (VJPs, aka reverse-mode autodiff)

Where forward-mode gives us back a function for evaluating Jacobian-vector products, which we can then use to build Jacobian matrices one column at a time, reverse-mode is a way to get back a function for evaluating vector-Jacobian products (equivalently Jacobian-transpose-vector products), which we can use to build Jacobian matrices one row at a time.

#### VJPs in math

Let's again consider a function $f : \mathbb{R}^n \to \mathbb{R}^m$. Starting from our notation for JVPs, the notation for VJPs is pretty simple:

$\qquad (x, v) \mapsto v \partial f(x)$,

where $v$ is an element of the cotangent space of $f$ at $x$ (isomorphic to another copy of $\mathbb{R}^m$). When being rigorous, we should think of $v$ as a linear map $v : \mathbb{R}^m \to \mathbb{R}$, and when we write $v \partial f(x)$ we mean function composition $v \circ \partial f(x)$, where the types work out because $\partial f(x) : \mathbb{R}^n \to \mathbb{R}^m$. But in the common case we can identify $v$ with a vector in $\mathbb{R}^m$ and use the two almost interchangeably, just like we might sometimes flip between "column vectors" and "row vectors" without much comment.

With that identification, we can alternatively think of the linear part of a VJP as the transpose (or adjoint conjugate) of the linear part of a JVP:

$\qquad (x, v) \mapsto \partial f(x)^\mathsf{T} v$.

For a given point $x$, we can write the signature as

$\qquad \partial f(x)^\mathsf{T} : \mathbb{R}^m \to \mathbb{R}^n$.

The corresponding map on cotangent spaces is often called the [pullback](https://en.wikipedia.org/wiki/Pullback_(differential_geometry)) of $f$ at $x$. The key for our purposes is that it goes from something that looks like the output of $f$ to something that looks like the input of $f$, just like we might expect from a transposed linear function.
#### VJPs in JAX code Switching from math back to Python, the JAX function `vjp` can take a Python function for evaluating $f$ and give us back a Python function for evaluating the VJP $(x, v) \mapsto (f(x), v^\mathsf{T} \partial f(x))$. ``` from jax import vjp # Isolate the function from the weight matrix to the predictions f = lambda W: predict(W, b, inputs) y, vjp_fun = vjp(f, W) key, subkey = random.split(key) u = random.normal(subkey, y.shape) # Pull back the covector `u` along `f` evaluated at `W` v = vjp_fun(u) ``` In terms of Haskell-like type signatures, we could write ```haskell vjp :: (a -> b) -> a -> (b, CT b -> CT a) ``` where we use `CT a` to denote the type for the cotangent space for `a`. In words, `vjp` takes as arguments a function of type `a -> b` and a point of type `a`, and gives back a pair consisting of a value of type `b` and a linear map of type `CT b -> CT a`. This is great because it lets us build Jacobian matrices one row at a time, and the FLOP cost for evaluating $(x, v) \mapsto (f(x), v^\mathsf{T} \partial f(x))$ is only about three times the cost of evaluating $f$. In particular, if we want the gradient of a function $f : \mathbb{R}^n \to \mathbb{R}$, we can do it in just one call. That's how `grad` is efficient for gradient-based optimization, even for objectives like neural network training loss functions on millions or billions of parameters. There's a cost, though: though the FLOPs are friendly, memory scales with the depth of the computation. Also, the implementation is traditionally more complex than that of forward-mode, though JAX has some tricks up its sleeve (that's a story for a future notebook!). For more on how reverse-mode works, see [this tutorial video from the Deep Learning Summer School in 2017](http://videolectures.net/deeplearning2017_johnson_automatic_differentiation/). 
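As a concrete illustration of that last point (an addition to the notebook's text): for a scalar-valued function, pulling back the cotangent `1.0` with `vjp` yields exactly the gradient, which is essentially how `grad` works under the hood.

```python
import jax.numpy as np
from jax import vjp, grad

h_toy = lambda x: np.sum(np.sin(x) ** 2)   # R^n -> R, a toy "loss"
xh = np.arange(3.)

_, h_vjp = vjp(h_toy, xh)
g_out, = h_vjp(1.0)                        # one pullback of the cotangent 1.0
print(np.allclose(g_out, grad(h_toy)(xh)))  # True
```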
### Hessian-vector products using both forward- and reverse-mode

In a previous section, we implemented a Hessian-vector product function just using reverse-mode:

```
def hvp(f, x, v):
    return grad(lambda x: np.vdot(grad(f)(x), v))(x)
```

That's efficient, but we can do even better and save some memory by using forward-mode together with reverse-mode.

Mathematically, given a function $f : \mathbb{R}^n \to \mathbb{R}$ to differentiate, a point $x \in \mathbb{R}^n$ at which to linearize the function, and a vector $v \in \mathbb{R}^n$, the Hessian-vector product function we want is

$(x, v) \mapsto \partial^2 f(x) v$

Consider the helper function $g : \mathbb{R}^n \to \mathbb{R}^n$ defined to be the derivative (or gradient) of $f$, namely $g(x) = \partial f(x)$. All we need is its JVP, since that will give us

$(x, v) \mapsto \partial g(x) v = \partial^2 f(x) v$.

We can translate that almost directly into code:

```
from jax import jvp, grad

# forward-over-reverse
def hvp(f, primals, tangents):
    return jvp(grad(f), primals, tangents)[1]
```

Even better, since we didn't have to call `np.dot` directly, this `hvp` function works with arrays of any shape and with arbitrary container types (like vectors stored as nested lists/dicts/tuples), and doesn't even have a dependence on `jax.numpy`.
Here's an example of how to use it: ``` def f(X): return np.sum(np.tanh(X)**2) key, subkey1, subkey2 = random.split(key, 3) X = random.normal(subkey1, (30, 40)) V = random.normal(subkey2, (30, 40)) ans1 = hvp(f, (X,), (V,)) ans2 = np.tensordot(hessian(f)(X), V, 2) print(np.allclose(ans1, ans2, 1e-4, 1e-4)) ``` Another way you might consider writing this is using reverse-over-forward: ``` # reverse-over-forward def hvp_revfwd(f, primals, tangents): g = lambda primals: jvp(f, primals, tangents)[1] return grad(g)(primals) ``` That's not quite as good, though, because forward-mode has less overhead than reverse-mode, and since the outer differentiation operator here has to differentiate a larger computation than the inner one, keeping forward-mode on the outside works best: ``` # reverse-over-reverse, only works for single arguments def hvp_revrev(f, primals, tangents): x, = primals v, = tangents return grad(lambda x: np.vdot(grad(f)(x), v))(x) print("Forward over reverse") %timeit -n10 -r3 hvp(f, (X,), (V,)) print("Reverse over forward") %timeit -n10 -r3 hvp_revfwd(f, (X,), (V,)) print("Reverse over reverse") %timeit -n10 -r3 hvp_revrev(f, (X,), (V,)) print("Naive full Hessian materialization") %timeit -n10 -r3 np.tensordot(hessian(f)(X), V, 2) ``` ## Composing VJPs, JVPs, and `vmap` ### Jacobian-Matrix and Matrix-Jacobian products Now that we have `jvp` and `vjp` transformations that give us functions to push-forward or pull-back single vectors at a time, we can use JAX's [`vmap` transformation](https://github.com/google/jax#auto-vectorization-with-vmap) to push and pull entire bases at once. In particular, we can use that to write fast matrix-Jacobian and Jacobian-matrix products. ``` # Isolate the function from the weight matrix to the predictions f = lambda W: predict(W, b, inputs) # Pull back the covectors `m_i` along `f`, evaluated at `W`, for all `i`. # First, use a list comprehension to loop over rows in the matrix M. 
def loop_mjp(f, x, M): y, vjp_fun = vjp(f, x) return np.vstack([vjp_fun(mi) for mi in M]) # Now, use vmap to build a computation that does a single fast matrix-matrix # multiply, rather than an outer loop over vector-matrix multiplies. def vmap_mjp(f, x, M): y, vjp_fun = vjp(f, x) outs, = vmap(vjp_fun)(M) return outs key = random.PRNGKey(0) num_covecs = 128 U = random.normal(key, (num_covecs,) + y.shape) loop_vs = loop_mjp(f, W, M=U) print('Non-vmapped Matrix-Jacobian product') %timeit -n10 -r3 loop_mjp(f, W, M=U) print('\nVmapped Matrix-Jacobian product') vmap_vs = vmap_mjp(f, W, M=U) %timeit -n10 -r3 vmap_mjp(f, W, M=U) assert np.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Matrix-Jacobian Products should be identical' def loop_jmp(f, x, M): # jvp immediately returns the primal and tangent values as a tuple, # so we'll compute and select the tangents in a list comprehension return np.vstack([jvp(f, (W,), (mi,))[1] for mi in M]) def vmap_jmp(f, x, M): _jvp = lambda s: jvp(f, (W,), (s,))[1] return vmap(_jvp)(M) num_vecs = 128 S = random.normal(key, (num_vecs,) + W.shape) loop_vs = loop_jmp(f, W, M=S) print('Non-vmapped Jacobian-Matrix product') %timeit -n10 -r3 loop_jmp(f, W, M=S) vmap_vs = vmap_jmp(f, W, M=S) print('\nVmapped Jacobian-Matrix product') %timeit -n10 -r3 vmap_jmp(f, W, M=S) assert np.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Jacobian-Matrix products should be identical' ``` ### The implementation of `jacfwd` and `jacrev` Now that we've seen fast Jacobian-matrix and matrix-Jacobian products, it's not hard to guess how to write `jacfwd` and `jacrev`. We just use the same technique to push-forward or pull-back an entire standard basis (isomorphic to an identity matrix) at once. ``` from jax import jacrev as builtin_jacrev def our_jacrev(f): def jacfun(x): y, vjp_fun = vjp(f, x) # Use vmap to do a matrix-Jacobian product. # Here, the matrix is the Euclidean basis, so we get all # entries in the Jacobian at once. 
        J, = vmap(vjp_fun, in_axes=0)(np.eye(len(y)))
        return J
    return jacfun

assert np.allclose(builtin_jacrev(f)(W), our_jacrev(f)(W)), 'Incorrect reverse-mode Jacobian results!'

from jax import jacfwd as builtin_jacfwd

def our_jacfwd(f):
    def jacfun(x):
        _jvp = lambda s: jvp(f, (x,), (s,))[1]
        Jt = vmap(_jvp, in_axes=1)(np.eye(len(x)))
        return np.transpose(Jt)
    return jacfun

assert np.allclose(builtin_jacfwd(f)(W), our_jacfwd(f)(W)), 'Incorrect forward-mode Jacobian results!'
```

Interestingly, [Autograd](https://github.com/hips/autograd) couldn't do this. Our [implementation of reverse-mode `jacobian` in Autograd](https://github.com/HIPS/autograd/blob/96a03f44da43cd7044c61ac945c483955deba957/autograd/differential_operators.py#L60) had to pull back one vector at a time with an outer-loop `map`. Pushing one vector at a time through the computation is much less efficient than batching it all together with `vmap`.

Another thing that Autograd couldn't do is `jit`. Notably, no matter how much Python dynamism you use in your function to be differentiated, we can always use `jit` on the linear part of the computation. For example:

```
def f(x):
    try:
        if x < 3:
            return 2 * x ** 3
        else:
            raise ValueError
    except ValueError:
        return np.pi * x

y, f_vjp = vjp(f, 4.)
print(jit(f_vjp)(1.))
```

## Complex numbers and differentiation

JAX is great at complex numbers and differentiation. To support both [holomorphic and non-holomorphic differentiation](https://en.wikipedia.org/wiki/Holomorphic_function), JAX follows [Autograd's convention](https://github.com/HIPS/autograd/blob/master/docs/tutorial.md#complex-numbers) for encoding complex derivatives. Consider a complex-to-complex function $f: \mathbb{C} \to \mathbb{C}$ that we break down into its component real-to-real functions:

```
def f(z):
    x, y = real(z), imag(z)
    return u(x, y) + v(x, y) * 1j
```

That is, we've decomposed $f(z) = u(x, y) + v(x, y) i$ where $z = x + y i$.
We define `grad(f)` to correspond to

```
def grad_f(z):
    x, y = real(z), imag(z)
    return grad(u, 0)(x, y) - grad(u, 1)(x, y) * 1j
```

In math symbols, that means we define $\partial f(z) \triangleq \partial_0 u(x, y) - \partial_1 u(x, y) i$. So we throw out $v$, ignoring the complex component function of $f$ entirely!

This convention covers three important cases:
1. If `f` evaluates a holomorphic function, then we get the usual complex derivative, since $\partial_0 u = \partial_1 v$ and $\partial_1 u = - \partial_0 v$.
2. If `f` evaluates the real-valued loss function of a complex parameter `x`, then we get a result that we can use in gradient-based optimization by taking steps in the direction of the conjugate of `grad(f)(x)`.
3. If `f` evaluates a real-to-real function, but its implementation uses complex primitives internally (some of which must be non-holomorphic, e.g. FFTs used in convolutions), then we get the same result that an implementation using only real primitives would have given.

By throwing away `v` entirely, this convention does not handle the case where `f` evaluates a non-holomorphic function and you want to evaluate all of $\partial_0 u$, $\partial_1 u$, $\partial_0 v$, and $\partial_1 v$ at once. But in that case the answer would have to contain four real values, and so there's no way to express it as a single complex number.

You should expect complex numbers to work everywhere in JAX.
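For instance (a small added check, not in the original notebook), case 1 can be verified with the holomorphic function $f(z) = z^2$, whose complex derivative is $2z$. Note that when the output is complex, JAX asks you to pass the `holomorphic=True` flag to `grad`:

```python
import jax.numpy as np
from jax import grad

f_holo = lambda z: z ** 2                   # holomorphic, with f'(z) = 2z
z0 = np.complex64(1. + 2.j)
print(grad(f_holo, holomorphic=True)(z0))   # ~(2+4j)
```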
Here's differentiating through a Cholesky decomposition of a complex matrix:

```
A = np.array([[5.,     2.+3j,    5j],
              [2.-3j,  7.,       1.+7j],
              [-5j,    1.-7j,    12.]])

def f(X):
    L = np.linalg.cholesky(X)
    return np.sum((L - np.sin(L))**2)

grad(f, holomorphic=True)(A)
```

For primitives' JVP rules, writing the primals as $z = a + bi$ and the tangents as $t = c + di$, we define the Jacobian-vector product $t \mapsto \partial f(z) \cdot t$ as

$t \mapsto \begin{matrix} \begin{bmatrix} 1 & 1 \end{bmatrix} \\ ~ \end{matrix} \begin{bmatrix} \partial_0 u(a, b) & -\partial_0 v(a, b) \\ - \partial_1 u(a, b) i & \partial_1 v(a, b) i \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix}$.

See Chapter 4 of [Dougal's PhD thesis](https://dougalmaclaurin.com/phd-thesis.pdf) for more details.

## More advanced autodiff

In this notebook, we worked through some easy, and then progressively more complicated, applications of automatic differentiation in JAX. We hope you now feel that taking derivatives in JAX is easy and powerful.

There's a whole world of other autodiff tricks and functionality out there. Topics we didn't cover, but hope to in an "Advanced Autodiff Cookbook", include:

- Gauss-Newton vector products, linearizing once
- Custom VJPs and JVPs
- Efficient derivatives at fixed points
- Estimating the trace of a Hessian using random Hessian-vector products
- Forward-mode autodiff using only reverse-mode autodiff
- Taking derivatives with respect to custom data types
- Checkpointing (binomial checkpointing for efficient reverse-mode, not model snapshotting)
- Optimizing VJPs with Jacobian pre-accumulation
https://colab.research.google.com/drive/1gt6x41vUIewXozVrrjQ1njGp2OqzarH-

```
!pip install PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
file_id = '0B-KJCaaF7elleG1RbzVPZWV4Tlk' # URL id.
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('steering_angle.zip')
!ls
!pwd
!unzip steering_angle.zip
!ls
!pwd
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import pi
import cv2
import scipy.misc
import tensorflow as tf
DATA_FOLDER = "/content/driving_dataset/"
DATA_FILE = os.path.join(DATA_FOLDER, "data.txt")

x = []
y = []

train_batch_pointer = 0
test_batch_pointer = 0

with open(DATA_FILE) as f:
    for line in f:
        image_name, angle = line.split()

        image_path = os.path.join(DATA_FOLDER, image_name)
        x.append(image_path)

        angle_radians = float(angle) * (pi / 180) # converting angle into radians
        y.append(angle_radians)
y = np.array(y)
print(str(len(x))+" "+str(len(y)))
x[2]
#x = x[1:]
print(str(len(x))+" "+str(len(y)))
split_ratio = int(len(x) * 0.8)

train_x = x[:split_ratio]
train_y = y[:split_ratio]

test_x = x[split_ratio:]
test_y = y[split_ratio:]

len(train_x), len(train_y), len(test_x), len(test_y)
fig = plt.figure(figsize = (10, 7))
plt.hist(train_y, bins = 50, histtype = "step", color='r')
plt.hist(test_y, bins = 50, histtype = "step", color='b')
plt.title("Steering Wheel angle in train and test")
plt.xlabel("Angle")
plt.ylabel("Bin count")
plt.grid('off')
plt.show()
train_x[0]
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
im = cv2.imread(train_x[30000])
plt.imshow(im)
plt.grid('off')
plt.imshow(im[100:,:,:])
plt.grid('off')
from keras.applications import vgg16
from keras.utils.vis_utils import plot_model
vgg16_model = vgg16.VGG16(include_top=False, weights='imagenet', input_shape=(156,455,3))
x = []
y = []
for i in range(10000):
    if(i%1000==0):
        print(i)
    im = cv2.imread(train_x[i])
    im = im[100:,:,:]/255
    vgg_im = vgg16_model.predict(im.reshape(1,im.shape[0],im.shape[1],3))
    x.append(vgg_im)
    y.append(train_y[i])
print(len(x),len(y))
x1 = np.array(x)
y1 = np.array(y)
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.optimizers import SGD
from keras import backend as K
np.max(x1)
model = Sequential()
model.add(Flatten(input_shape=(4,14,512)))
model.add(Dense(512, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(100, activation='linear'))
model.add(Dropout(.2))
model.add(Dense(50, activation='linear'))
model.add(Dropout(.1))
model.add(Dense(10, activation='linear'))
model.add(Dense(1, activation='linear'))
model.summary()
model.compile(loss='mean_squared_error',
              optimizer='adam')
x1.shape
x1 = x1.reshape(x1.shape[0],4,14,512)
np.max(x1)
history = model.fit(x1/11, y1, batch_size=32, epochs=10, validation_split = 0.1, verbose = 1)
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(val_loss_values) + 1)
plt.plot(epochs, history.history['loss'], 'r', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Test loss')
plt.title('Training and test loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.grid('off')
plt.show()
k=-400
model.predict(x1[k].reshape(1,4,14,512)/11)
round(y1[k],2)
im = cv2.imread(train_x[k])
plt.imshow(im)
plt.grid('off')
plt.title('Predicted angle: {}, actual angle:{}'.format(str(round(model.predict(x1[k].reshape(1,4,14,512)/11)[0][0],2)), str(round(y1[k],2))))
```
# Miscellaneous Topics

In the following we look at some miscellaneous topics on cryptography, blockchain and cryptocurrencies.

## Different Families of Hashing and Hardware

- scrypt (Litecoin, Dogecoin)
- CryptoNight (Monero)
- X11 (DASH)

## Hardware

- Central Processing Unit (CPU)
- Graphics Processing Unit (GPU)
- Application-Specific Integrated Circuit (ASIC)

<tr>
<td> <img src="" alt="empty" style="width: 50px;"/> </td>
<td> <img src="include/misc/cpu.jpg" alt="cpu" style="width: 300px;"/> </td>
<td> <img src="" alt="empty" style="width: 20px;"/> </td>
<td> <img src="include/misc/gpu.jpg" alt="gpu" style="width: 400px;"/> </td>
<td> <img src="" alt="empty" style="width: 20px;"/> </td>
<td> <img src="include/misc/asic.jpg" alt="asic" style="width: 400px;"/> </td>
<td> <img src="" alt="empty" style="width: 20px;"/> </td>
</tr>

## Mining Farms

<img src="include/misc/mining-farm.jpg" alt="mining farm" style="width: 500px;"/>

## Bitcoin Mining Power Consumption

<img src="include/misc/bitcoin_energy_1.png" alt="energy consumption" style="width: 900px;"/>

<img src="include/misc/bitcoin_energy_2.png" alt="energy consumption" style="width: 900px;"/>

source: https://digiconomist.net/bitcoin-energy-consumption

## Alternatives to PoW

- Proof of Stake (PoS)
- Proof of Burn (PoB)

# Bitcoin Improvement Proposals (BIP)

A BIP is a design document introducing new features or providing information on Bitcoin. There are three tracks; the **Standard Track** introduces new features and ideas. The proposals go through review and get accepted or rejected. Most of the BIPs are very specific to Bitcoin; however, some of the proposals are more general and can be used in different contexts.

In the following we look at and implement [BIP39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki), ***Mnemonic code for generating deterministic keys***.

A [list](https://github.com/bitcoin/bips) of all BIPs is publicly available.

## BIP 39

First, create a random number (ENT) of between 128 and 256 bits.
Calculate the checksum (the first ENT/32 bits of the SHA256 of the entropy). The number of mnemonic sentence (MS) words is as follows:

```
# CS = ENT / 32
# MS = (ENT + CS) / 11
#
# |  ENT  | CS | ENT+CS |  MS  |
# +-------+----+--------+------+
# |  128  |  4 |   132  |  12  |
# |  160  |  5 |   165  |  15  |
# |  192  |  6 |   198  |  18  |
# |  224  |  7 |   231  |  21  |
# |  256  |  8 |   264  |  24  |
```

Based on a word list, such as the [English](https://github.com/bitcoin/bips/blob/master/bip-0039/english.txt) dictionary, we output the MS words in order.

```
# SOLUTION
```
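The procedure above can be sketched in a few lines of pure Python. This is a minimal illustration rather than a full BIP39 implementation: the 2048-word `english.txt` list is not bundled here, so we stop at the 11-bit word *indices*; mapping index `i` to a word is simply `wordlist[i]`.

```python
import hashlib
import secrets

ENT = 128                                   # entropy bits (gives MS = 12 words)
entropy = secrets.token_bytes(ENT // 8)

# CS = ENT / 32: the checksum is the first CS bits of SHA256(entropy)
cs_bits = ENT // 32
checksum = hashlib.sha256(entropy).digest()

# Append the checksum bits to the entropy bits...
bits = ''.join(f'{byte:08b}' for byte in entropy)
bits += ''.join(f'{byte:08b}' for byte in checksum)[:cs_bits]

# ...and split into 11-bit groups, each indexing one of the 2048 words
indices = [int(bits[i:i + 11], 2) for i in range(0, len(bits), 11)]
print(len(indices))  # 12, matching MS = (ENT + CS) / 11 from the table above
```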
# Find hospitals closest to an incident The `network` module of the ArcGIS API for Python can be used to solve different types of network analysis operations. In this sample, we see how to find the hospital that is closest to an incident. ## Closest facility The closest facility solver provides functionality for finding out the closest locations to a particular input point. This solver would be useful in cases when you have an incident and need to find the closest facility or need to get information on the travel time and the distance to each of the facilities from an incident point for reporting purposes. ![](http://desktop.arcgis.com/en/arcmap/latest/extensions/network-analyst/GUID-96C273DB-6A24-4D42-AADA-975A33B44F3D-web.png) When finding closest facilities, you can specify how many to find and whether the direction of travel is toward or away from them. The closest facility solver displays the best routes between incidents and facilities, reports their travel costs, and returns driving directions. ### Connect to your GIS As a first step, you would need to establish a connection to your organization which could be an ArcGIS Online organization or an ArcGIS Enterprise. ``` from IPython.display import HTML import pandas as pd from arcgis.gis import GIS #connect to your GIS user_name = 'arcgis_python' password = 'P@ssword123' my_gis = GIS('https://www.arcgis.com', user_name, password) ``` ### Create a Network Layer To perform any network analysis (such as finding the closest facility, the best route between multiple stops, or service area around a facility), you would need to create a `NetworkLayer` object. In this sample, since we are solving for closest facilities, we need to create a `ClosestFacilityLayer` which is a type of `NetworkLayer`. To create any `NetworkLayer` object, you would need to provide the URL to the appropriate network analysis service. Hence, in this sample, we provide a `ClosestFacility` URL to create a `ClosestFacilityLayer` object. 
Since all ArcGIS Online organizations already have access to those routing services, you can access this URL through the `GIS` object's `helperServices` property. If you have your own ArcGIS Server-based map service with network analysis capability enabled, you would need to provide the URL for this service.

Let us start by importing the `network` module

```
import arcgis.network as network
```

Access the analysis URL from the `GIS` object

```
analysis_url = my_gis.properties.helperServices.closestFacility.url
analysis_url
```

Create a `ClosestFacilityLayer` object using this URL

```
cf_layer = network.ClosestFacilityLayer(analysis_url, gis=my_gis)
```

### Create hospitals layer

In this sample, we will be looking for the closest hospital (facility) to an incident location. Even though we are interested in finding the closest one, it would still be helpful to get the information on the distance and travel time to all of them for reference purposes.

In the code below, we need to geocode the hospitals' addresses as well as reverse geocode the incident location, which has been supplied in latitude/longitude format. To perform the geocode operations, we import the `geocoding` module of the ArcGIS API.

```
from arcgis import geocoding
```

In this sample, we geocode addresses of hospitals to create the facility layer. In your workflows, this could be any feature layer. Create a list of hospitals in Rio de Janeiro, Brazil.

```
hospitals_addresses = ['Estrada Adhemar Bebiano, 339 Del Castilho, Rio de Janeiro RJ, 21051-370, Brazil',
                       'R. José dos Reis Engenho de Dentro, Rio de Janeiro RJ, 20750-000, Brazil',
                       'R. Dezessete, s/n Maré, Rio de Janeiro RJ, 21042-010, Brazil',
                       'Rua Dr. Miguel Vieira Ferreira, 266 Ramos, Rio de Janeiro RJ, Brazil']
```

Loop through each address and geocode it. The geocode operation returns a list of matches for each address. We pick the first result, extract the coordinates from it, and construct a `Feature` object out of it.
Then we combine all the `Feature`s representing the hospitals into a `FeatureSet` object.

```
from arcgis.features import Feature, FeatureSet

hosp_feat_list = []
for address in hospitals_addresses:
    hit = geocoding.geocode(address)[0]
    hosp_feat = Feature(geometry=hit['location'], attributes=hit['attributes'])
    hosp_feat_list.append(hosp_feat)
```

Construct a `FeatureSet` using each hospital `Feature`.

```
hospitals_fset = FeatureSet(features=hosp_feat_list,
                            geometry_type='esriGeometryPoint',
                            spatial_reference={'latestWkid': 4326})
```

Let's draw our hospitals on a map

```
map1 = my_gis.map('Rio de Janeiro, Brazil')
map1
map1.draw(hospitals_fset, symbol={"type": "esriSMS", "style": "esriSMSSquare",
                                  "color": [76,115,0,255], "size": 8})
```

### Create incidents layer

Similarly, let us create the incident layer

```
incident_coords = '-43.281206,-22.865676'
reverse_geocode = geocoding.reverse_geocode({"x": incident_coords.split(',')[0],
                                             "y": incident_coords.split(',')[1]})
incident_feature = Feature(geometry=reverse_geocode['location'],
                           attributes=reverse_geocode['address'])
incident_fset = FeatureSet([incident_feature], geometry_type='esriGeometryPoint',
                           spatial_reference={'latestWkid': 4326})
```

Let us add the incident to the map

```
map1.draw(incident_fset, symbol={"type": "esriSMS", "style": "esriSMSCircle", "size": 8})
```

## Solve for closest hospital

By default, the closest facility service returns only the closest location, so we need to explicitly specify the `default_target_facility_count` parameter as well as `return_facilities`.
```
result = cf_layer.solve_closest_facility(incidents=incident_fset,
                                         facilities=hospitals_fset,
                                         default_target_facility_count=4,
                                         return_facilities=True,
                                         impedance_attribute_name='TravelTime',
                                         accumulate_attribute_names=['Kilometers', 'TravelTime'])
```

Let us inspect the result dictionary

```
result.keys()
```

Let us use the `routes` dictionary to construct line features out of the routes to display on the map

```
result['routes'].keys()
result['routes']['features'][0].keys()
```

Construct line features out of the routes that are returned.

```
line_feat_list = []
for line_dict in result['routes']['features']:
    f1 = Feature(line_dict['geometry'], line_dict['attributes'])
    line_feat_list.append(f1)

routes_fset = FeatureSet(line_feat_list,
                         geometry_type=result['routes']['geometryType'],
                         spatial_reference=result['routes']['spatialReference'])
```

Add the routes back to the map. The route to the closest hospital is in red.

```
map1.draw(routes_fset)
```

## Analyze the results in a table

Since we parsed the routes as a `FeatureSet`, we can display the attributes easily as a `pandas` `DataFrame`.

```
routes_fset.df
```

Let us add the hospital addresses and the incident address to this table and display only the relevant columns

```
df1 = routes_fset.df
df1['facility_address'] = hospitals_addresses
df1['incident_address'] = [incident_feature.attributes['Match_addr'] for i in range(len(hospitals_addresses))]
df1[['facility_address', 'incident_address', 'Total_Miles', 'Total_TravelTime']]
```

### Conclusion

Thus, using the `network` module of the ArcGIS API for Python, you can solve for the closest facilities from an incident location.
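The solver above ranks facilities by network travel time, which requires a routing service. As a quick, service-free sanity check, straight-line haversine distance can confirm which facility is geometrically nearest. This is only a sketch: the hospital coordinates below are hypothetical stand-ins for the geocoded results, and straight-line distance will generally differ from the network travel time the solver reports.

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance in kilometres between two (lon, lat) points.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# The incident from this sample, plus two hypothetical hospital locations (lon, lat).
incident = (-43.281206, -22.865676)
hospitals = {
    "hospital_a": (-43.2906, -22.8625),
    "hospital_b": (-43.2529, -22.8858),
}
closest = min(hospitals, key=lambda h: haversine_km(*incident, *hospitals[h]))
```

Note that the solver's ranking by `TravelTime` can legitimately differ from this geometric ranking when the road network is one-way, asymmetric, or congested.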
# Recursive Clustering and Summarization

Plan:
- recursively cluster collections
- create a tree of clusters (HDBSCAN does this anyway, but likely not as we want)
- cluster until max depth is reached or (better) until each leaf only has one "plausible" cluster (based on thresholds or probabilities)
- try summarizing to get the "main idea" out of each cluster
- cluster on keywords (randomize all grammar + stop words)
- topic clusters
- context: title, abstract, etc. keywords

## TODO 28 01 2022

* Run topic clustering (BERT) over cluster_tree
* Hook sentences back up to their DOI

## Recursively cluster

Based on the topic_clustering notebook, we will try Agglomerative Clustering.

```
import pandas as pd

df = pd.read_csv("downloads/25k_enriched_small.csv")
df2 = pd.read_csv("downloads/core_pos_min.csv")
df2["text"] = df2["first_sent"]
df2["source"] = "core_pubmed"
df3 = df.append(df2)
df = df3
df["oai"]
df = df.sample(frac=1)
sentences = list(df["text"])
df.to_csv("downloads/60k_core_all.csv", index=False, header=True)

!pip install metapub
#pos = df[df.label == 1]
sentences = list(df["text"])  # otherwise key error
df[df["PMID"].notna()][:100]["PMID"]

from metapub.convert import pmid2doi
pmid2doi(17030353)

def safe_pmid2doi(pmid):
    try:
        return pmid2doi(pmid)
    except:
        print(pmid, "error")
        return None

!pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

print("Encode the corpus ...
get a coffee in the meantime")
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences, batch_size=64, show_progress_bar=True, convert_to_tensor=True)

from sklearn.cluster import AgglomerativeClustering
import numpy as np

def cluster(embeddings, **kwargs):
    embeddings = embeddings.cpu()
    # Normalize the embeddings to unit length
    corpus_embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Perform agglomerative clustering
    clustering_model = AgglomerativeClustering(**kwargs)  # , affinity='cosine', linkage='average', distance_threshold=0.4
    clustering_model.fit(corpus_embeddings)
    # cluster_assignment = clustering_model.labels_
    return clustering_model

def get_clusters(clustering_model):
    clusters = {}
    for sentence_id, cluster_id in enumerate(clustering_model.labels_):
        if cluster_id not in clusters:
            clusters[cluster_id] = []
        try:
            clusters[cluster_id].append(sentences[sentence_id])
        except:
            print(sentence_id, "sentence_id")
    return clusters

sample = embeddings
cluster_model = cluster(sample, n_clusters=None, distance_threshold=1.4)
```

Cluster model attributes:

- `n_clusters_` : int — the number of clusters found by the algorithm. If `distance_threshold=None`, it will be equal to the given `n_clusters`.
- `labels_`
- `n_leaves_`
- `n_connected_components_` : the estimated number of connected components in the graph.
- `children_` : array-like of shape (n_samples-1, 2) — the children of each non-leaf node. Values less than `n_samples` correspond to leaves of the tree, which are the original samples. A node `i` greater than or equal to `n_samples` is a non-leaf node with children `children_[i - n_samples]`. Alternatively, at the i-th iteration, `children[i][0]` and `children[i][1]` are merged to form node `n_samples + i`.
- `distances_` : array-like of shape (n_nodes-1,) — distances between nodes in the corresponding place in `children_`. Only computed if `distance_threshold` is used or `compute_distances` is set to `True`.
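These attributes are easiest to see on a toy input. A minimal sketch with synthetic 2-D points (standing in for the normalized sentence embeddings above):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Two tight pairs of points, far apart from each other.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])

# With distance_threshold set, n_clusters must be None and the full merge
# tree is computed; clusters are cut where linkage distance exceeds 1.0.
toy_model = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0)
toy_model.fit(pts)
```

Here `toy_model.n_clusters_` reports the discovered number of clusters (2 for this input), and `toy_model.children_` encodes the merge tree in the format documented above.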
```
len(get_clusters(cluster_model).keys())
clusters = get_clusters(cluster_model)

def collection_to_clusters(texts, model=model, **kwargs):
    embs = model.encode(texts, batch_size=64, show_progress_bar=True, convert_to_tensor=True)
    cluster_model = cluster(embs, **kwargs)
    return get_clusters(cluster_model).values()

from tqdm import tqdm

cluster_tree = {}
lens = [len(v) for v in clusters.values()]
for i, v in tqdm(clusters.items()):
    if len(v) > 25:
        # get values, embed and sample again with lower threshold
        cluster_tree[i] = {"parent": v,
                           "children": [*collection_to_clusters(v, n_clusters=None, distance_threshold=0.7)]}

len(get_clusters(cluster_model).keys())

import json
ct = {str(k): v for k, v in cluster_tree.items()}
with open('cluster_tree_65k_tresh_1,4.json', 'w') as outfile:
    json.dump(ct, outfile)

!pip show bertopic
df.to_csv("25k_pm_enriched.csv", index=False, header=True)
df["DOI"] = df["PMID"].apply(safe_pmid2doi)
td = extract_topics(sents)

import json
with open("full_topic_triples.json", "w") as f:
    json.dump(td, f)
```

## Get two top-probability words

* get the third one as a test
* link up sentences to their topic AND the cluster
* df = sentenceID, DOI, clusterID, clusterTOPICs

```
def get_cluster_for_sent(sent, clusterdict):
    for idx, sents in clusterdict.items():
        if sent in sents:
            return idx
    return -1

n_topics = 470
cluster_model_2 = cluster(sample, n_clusters=n_topics)  # distance_threshold=1.4
clusters_2 = get_clusters(cluster_model_2)

# get cluster of sentence
df["cluster_id"] = df["text"].apply(lambda t: get_cluster_for_sent(t, clusters))
# get topic of sentence
```

## FAISS lookup

```
import faiss

emb = embeddings.cpu()
emb = emb.numpy()
d = emb.shape[1]
emb.shape, len(df)
np.array(df["emb"][1])[:].shape
x = [list(e) for e in emb]
#df["emb"] = x

def create_index(vectors):
    faiss_index = faiss.IndexFlatL2(len(vectors[0]))
    faiss_index.add(vectors)
    # print(faiss_index.ntotal)
    return faiss_index

#export
def query_index(text, embedder, target_list,
                index, with_distance=False, k=10):
    embedding = embedder.encode([text])
    distances, indices = index.search(embedding, k)
    if with_distance:
        return [(target_list[index], distances[0][i]) for i, index in enumerate(indices[0])]
    return [target_list[i] for i in indices[0]]

def stepwise_l(step, df, f):
    l = []
    for idx in range(step, len(df) + step - 1, step):
        print(idx - step, idx)  # dataframe[idx-step:idx]
        sl = df[idx - step:idx]
        batch = [f(item) for item in sl]
        l = l + batch
    return l

x = stepwise_l(1000, df["emb"], find_closest_problem)

import pandas as pd
df = pd.read_csv("downloads/25k_problems_enriched_2.csv", low_memory=False)
#df["closest_problem_id"] = x
#df["text"][[1,2,3]]
#[print(t) for t in df.text]
samp = df[:100]
df.text[[39, 53, 15682]]

import re
test_string = '[39, 53, 15682]'
[int(item) for item in list(re.findall(r'\d+', test_string))]

df["closest_problem_id"] = df["closest_problem_id"].astype(int)
df["closest_problem_id"] = df["closest_problem_id"].apply(lambda id_str: [int(item) for item in list(re.findall(r'\d+', id_str))])

df = df.drop(columns=["text_masked"])
df = df.drop(columns=["Abstract"])
df = df.drop(columns=["sentences"])
df = df.drop(columns=["emb"])
df.to_csv("downloads/enrich_problem_id_25k_small.csv", index=False, header=True)
df["closest_problems"] = df["closest_problem_id"].apply(lambda ids: df["text"][ids])
#df.to_csv("downloads/25k_problems_enriched_2.csv", index = False, header=True)
```

## Find DOIs

```
dois = stepwise_l(300, df["PMID"][:10000], safe_pmid2doi)
[doi for doi in dois if doi]
df.to_csv("downloads/25k_problems_enriched_3.csv", index=False, header=True)
df["problem_id"].map(type)
df[df["problem_id"] == 19652]
```

## Hierarchy by source text overlap

```
# take td
# if there's overlap of t1 and t2, t1 is the parent if t2 has fewer sources
# and 80%+ of t2's sources are in t1 (otherwise duplicates, or not that related)
def sources_relation(s1, s2, min_overlap=0.7):
    s1 = set(s1)
    s2 = set(s2)
    l1 = len(s1)
    l2 = len(s2)
    common =
s1.intersection(s2)  # order doesn't matter
    # cl1 = len(s1.intersect(common))
    # cl2 = len(s2.intersect(common))
    relation = "TBA"
    # if only a few sources are common and it's not all/most of one topic's sources -> different ideas
    # if many are common and one topic is much bigger, the smaller is a subtopic
    # if almost all are common and there's a 50:50 or 30:70 split, they are duplicates
    # either unrelated, duplicate, or hypernymy
    cl = len(common)
    ol2 = cl / l2
    ol1 = cl / l1
    if l1 > l2:
        ol = ol2
    else:
        ol = ol1
    if ol < min_overlap:
        # little in common (one topic is enough to decide)
        relation = "different"  # -- comparative
    if ol > min_overlap:
        # lots of overlap
        if l2 / l1 < 8 / 10:
            relation = "child"  # TODO: use permutations and then filter out duplicates / differents ... (only child left)
        elif l1 / l2 < 8 / 10:
            relation = "parent"
        else:
            # approximately similar size
            # topics of similar size with much overlap (synonyms)
            relation = "duplicates"
    return relation

from itertools import combinations
topic_pairs = combinations(td.keys(), 2)
result = [(t1, t2, sources_relation(td[t1], td[t2])) for t1, t2 in topic_pairs]
```

### Building the Tree

```
triples = [r for r in result if r[2] == "child" or r[2] == "parent"]
triples[:8], len(result), len(triples)

# build the tree
def tree_has_topic(topic, tree):
    return len([k for k in FlatterDict(tree).keys() if topic in k]) > 0

def get_leaves(topic, triples):
    node = {}
    node[topic] = {"children": []}
    # topic = node
    for t in triples:
        if (t[0] == topic and t[2] == "child") and not tree_has_topic(t[1], triples):  # or (t[1] == topic and t[2] == "parent")
            node[topic]["children"] += [get_leaves(t[1], triples)]
            # TODO: flatten and check if the key is already in
    if len(node[topic]["children"]) == 0:  # leaf
        return node
    return node

x = "reperfusion"
[t for t in triples if t[0] == x or t[1] == x]

from flatdict import FlatDict, FlatterDict
y = get_leaves("ischemia", triples)
y
triples[:20]

trees = {}
for triple in triples:
    # every entity first gets its children attached before
    # attaching to the parent
    topic = triple[0]
    if not trees.get(topic):
        trees[topic] = get_leaves(topic, triples)[topic]

trees

for vs in trees["reperfusion"].values():
    for v in vs:
        print([*v.keys()])

# default with a level of 0, and an indent of 4 characters
def write(p, depth=0, indent=4):
    if p is None:
        return
    # multiply the level by the number of indents, then by a space
    # character, which shows as that number of spaces
    print("{}{}".format(" " * (indent * depth), p))
    if p.children is not None:
        # then you do not need your print(…, end='') hack;
        # simply increase the depth
        write(p.children, depth=depth + 1, indent=indent)

def walk(node):
    """ iterate tree in pre-order depth-first search order """
    # yield node
    for child in node.children:
        print(child)
        for n in walk(child):
            yield n

def text_node(node_key, tree):
    text = node_key
    children = tree[node_key].get("children")
    if not children:
        text += "\n\t"
    for child in children:
        text += "\t -->"
        text += text_node([*child.keys()][0], child)
    return text

trees["reperfusion"]
trees.keys()

t = """"""
for key in trees.keys():
    t += text_node(key, trees)

with open('topic_trees.txt', 'w') as f:
    print(t, file=f)

walk(trees["reperfusion"])

for topic, sources in td.items():
    # every combination gets overlaps
    pass
```

## Summarization

**Tried: Google Pegasus**.
Result: Does a terrible job of keeping the important information and doesn't retain the question but guesses at a conclusion ``` torch.cuda.is_available() ``` ### Pegasus Setup ``` from transformers import PegasusForConditionalGeneration, PegasusTokenizer import torch model_name = 'google/pegasus-xsum' device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device) def summarize(sentences): batch = tokenizer(sentences, truncation=True, padding='longest', return_tensors="pt").to(device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text ``` ### T5 Setup ``` from transformers import pipeline summarizer = pipeline("summarization") ARTICLE = """ Background: Trust is a critical component of competency committees given their high-stakes decisions. Research from outside of medicine on group trust has not focused on trust in group decisions, and "group trust" has not been clearly defined. The purpose was twofold: to examine the definition of trust in the context of group decisions and to explore what factors may influence trust from the perspective of those who rely on competency committees through a proposed group trust model. Methods: The authors conducted a literature search of four online databases, seeking articles published on trust in group settings. Reviewers extracted, coded, and analyzed key data including definitions of trust and factors pertaining to group trust. Results: The authors selected 42 articles for full text review. 
Although reviewers found multiple general definitions of trust, they were unable to find a clear definition of group trust and propose the following: a group-directed willingness to accept vulnerability to actions of the members based on the expectation that members will perform a particular action important to the group, encompassing social exchange, collective perceptions, and interpersonal trust. Additionally, the authors propose a model encompassing individual level factors (trustor and trustee), interpersonal interactions, group level factors (structure and processes), and environmental factors.""" from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("snrspeaks/t5-one-line-summary") #snrspeaks/t5-one-line-summary tokenizer = AutoTokenizer.from_pretrained("snrspeaks/t5-one-line-summary") # T5 uses a max_length of 512 so we cut the article to 512 tokens. inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="pt", max_length=512, truncation=True) outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True) tokenizer.decode(outputs[0]) def summarize(text): inputs = tokenizer.encode("summarize: " + text, return_tensors="pt", max_length=512, truncation=True) outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True) return tokenizer.decode(outputs[0]) summarize("I have seen a ghost in my shed") "I have seen a ghost in my shed"[:5] ``` ### Print Results ``` for ID, cluster in get_clusters(cluster_model).items(): sentences = ".".join(cluster) print(sentences) print( "\n\n", "sum:::", summarize(sentences[:256]), "\n\n\n") ```
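The topic-hierarchy rule used earlier (overlap of the topics' source sets) can be condensed into a self-contained sketch, which makes the thresholds easy to unit-test. This restates the notebook's `sources_relation` logic rather than introducing a new algorithm; boundary cases (overlap exactly at the threshold) are handled slightly more simply here.

```python
def sources_relation(s1, s2, min_overlap=0.7):
    """Classify the relation between two topics by source-set overlap."""
    s1, s2 = set(s1), set(s2)
    # Overlap is measured relative to the smaller set, as in the notebook.
    overlap = len(s1 & s2) / min(len(s1), len(s2))
    if overlap < min_overlap:
        return "different"    # little in common
    if len(s2) / len(s1) < 0.8:
        return "child"        # s2 is a subtopic of s1
    if len(s1) / len(s2) < 0.8:
        return "parent"       # s1 is a subtopic of s2
    return "duplicates"       # similar size, mostly shared sources
```

Running it on small hand-made source sets is a quick way to sanity-check the 0.7 overlap and 0.8 size-ratio thresholds before applying the rule to the full `td` dictionary.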
# Learning ndarray

https://numpy.org/doc/stable/reference/arrays.ndarray.html

An ndarray is a (usually fixed-size) multidimensional container of items of the same type and size. The number of dimensions and items in an array is defined by its shape, a tuple of N non-negative integers that specify the size of each dimension. The type of the items in an array is specified by a separate data-type object (dtype), one of which is associated with each ndarray.

As with other container objects in Python, the contents of an ndarray can be accessed and modified by indexing or slicing the array (for example, with N integers), and via the methods and attributes of the ndarray.

Different ndarrays can share the same data, so changes made in one ndarray may be visible in another. That is, an ndarray can be a "view" of another ndarray, and the data it refers to is handled by the "base" ndarray. ndarrays can also be views of memory owned by Python strings or by objects that implement the buffer or array interfaces.

```
import numpy as np
```

## The ndarray constructor

https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray

```
class numpy.ndarray(shape, dtype=float, buffer=None, offset=0, strides=None, order=None)

Parameters (for the __new__ method; see Notes below)

shape: tuple of ints
    Shape of created array.
dtype: data-type, optional
    Any object that can be interpreted as a numpy data type.
buffer: object exposing buffer interface, optional
    Used to fill the array with data.
offset: int, optional
    Offset of array data in buffer.
strides: tuple of ints, optional
    Strides of data in memory.
order: {'C', 'F'}, optional
    Row-major (C-style) or column-major (Fortran-style) order.
``` ``` a = np.ndarray((2,3)) a ``` ## ndarray的创建方式 https://numpy.org/doc/stable/reference/routines.array-creation.html#routines-array-creation ### Ones and zeros ``` """ numpy.empty(shape, dtype=float, order='C') 依据给定形状和类型(shape[, dtype, order])返回一个新的空数组 参数:shape : 整数或者整型元组定义返回数组的形状; dtype : 数据类型,可选定义返回数组的类型。 order : {‘C’, ‘F’}, 可选规定返回数组元素在内存的存储顺序:C(C语言)-rowmajor;F(Fortran)column-major。 """ np.empty((2,3)) # numpy.empty_like(prototype, dtype=None, order='K', subok=True, shape=None) # np.empty_like创建一个新的和原来array形状一样的,但是未初始化的array a=np.array([[1.,2.,3.],[4.,5.,6.]]) np.empty_like(a) # numpy.eye(N, M=None, k=0, dtype=<class 'float'>, order='C') # 返回一个2-D数组,对角线上是1,其他地方是0。 np.eye(3, dtype=int) np.eye(3, dtype=int,k=-2) # k 对角线索引:0(默认)表示主对角线,正值表示上对角线,负值表示下对角线。 # 单位阵是一个正方形阵,主对角线上有1个单位阵。 np.identity(3) # numpy.ones(shape, dtype=None, order='C')¶ np.ones((2,3)) # numpy.ones_like(a, dtype=None, order='K', subok=True, shape=None) np.ones([1,2]) np.zeros((2,3)) np.zeros([2,3]) np.full((2, 2), 10) np.full_like([2,3], 0.1, dtype=np.double) ``` ### 从现有数据创建ndarray 重点numpy.array(object, dtype=None, copy=True, order='K', subok=False, ndmin=0) https://numpy.org/doc/stable/reference/generated/numpy.array.html#numpy.array ``` np.array([[1, 2], [3, 4]]) # 最小二维 np.array([1, 2, 3], ndmin=2) np.array(np.mat('1 2; 3 4')) # subok : 如果为True,则子类将被传递,否则返回的数组将被强制为基类数组(默认)。 np.array(np.mat('1 2; 3 4'), subok=True) # 数据类型包含多个元素 x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')]) x x['a'] # asarray np.asarray([1,2]) # asanyarray 将输入转换为ndarray,但通过ndarray子类。 np.asanyarray([1,2]) # np.recarray 是记录数组 issubclass(np.recarray, np.ndarray) a = np.array([(1.0, 2), (3.0, 4)], dtype='f4,i4').view(np.recarray) np.asarray(a) is a np.asanyarray(a) is a x = np.array([(1.0, 2), (3.0, 4)], order="F") x.flags['C_CONTIGUOUS'] x1 = np.ascontiguousarray(x, dtype=np.float32) x1 x1.flags['C_CONTIGUOUS'] # 矩阵 x = np.array([[1, 2], [3, 4]]) m = np.asmatrix(x) x[0,0] = 5 m # copy x = np.array([1, 2, 3]) y = x z = 
np.copy(x) x[0] = 10 x[0] == y[0],x[0] == z[0] # frombuffer 将缓冲区解释为一维数组。 s = b'hello world' np.frombuffer(s, dtype='S1', count=5, offset=6),np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3) # fromfile 从文本或二进制文件中的数据构造一个数组。 dt = np.dtype([('time', [('min', np.int64), ('sec', np.int64)]), ('temp', float)]) x = np.zeros((1,), dtype=dt) x['time']['min'] = 10; x['temp'] = 98.25 x import tempfile fname = tempfile.mkstemp()[1] x.tofile(fname) np.fromfile(fname, dtype=dt) np.save(fname, x) np.load(fname + '.npy') # fromfunction 通过在每个坐标上执行一个函数来构造一个数组。因此,所得数组在坐标(x,y,z)处具有值fn(x,y,z)。 np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int) # fromiter 从一个迭代对象创建一个新的一维数组。 iterable = (x*x for x in range(5)) np.fromiter(iterable, float) # fromstring 从字符串中的文本数据初始化的新一维数组。 np.fromstring('1 2', dtype=int, sep=' '),np.fromstring('1, 2', dtype=int, sep=',') # numpy.loadtxt(fname, dtype=<class 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0, encoding='bytes', max_rows=None) # https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html#numpy.loadtxt from io import StringIO # StringIO behaves like a file object c = StringIO(u"0 1\n2 3") np.loadtxt(c) d = StringIO(u"M 21 72\nF 35 58") np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'), 'formats': ('S1', 'i4', 'f4')}) c = StringIO(u"1,0,2\n3,0,4") x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True) x,y # fromregex 使用正则表达式解析从文本文件构造一个数组。 # https://numpy.org/doc/stable/reference/generated/numpy.fromregex.html#numpy.fromregex f = open('test.dat', 'w') _ = f.write("1312 foo\n1534 bar\n444 qux") f.close() regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything] output = np.fromregex('test.dat', regexp, [('num', np.int64), ('key', 'S3')]) output,output['num'] # numpy.genfromtxt(fname, dtype=<class 'float'>, comments='#', delimiter=None, skip_header=0, skip_footer=0, converters=None, missing_values=None, filling_values=None, usecols=None, 
names=None, excludelist=None, deletechars=" !#$%&'()*+, -./:;<=>?@[\]^{|}~", replace_space='_', autostrip=False, case_sensitive=True, defaultfmt='f%i', unpack=None, usemask=False, loose=True, invalid_raise=True, max_rows=None, encoding='bytes') s = StringIO(u"1,1.3,abcde") data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), ('mystring','S5')], delimiter=",") data ``` #### Creating record arrays (numpy.rec) https://numpy.org/doc/stable/reference/routines.array-creation.html#creating-record-arrays-numpy-rec numpy记录数组. ``` # 记录数组 和结构体数组类似,在元素访问方式上面有所区别 : 结构数组 students['age'] students[1]['age'] 记录数组 students.age students[1].age ==================================================================== 1.创建记录数组(numpy.rec) # 注意numpy.rec是首选别名 numpy.core.records。 core.records.array(obj [,dtype,shape,...]) #从各种各样的对象构造一个记录数组。 core.records.fromarrays(arrayList [,dtype,...]) #从(平面)数组列表创建一个记录数组 core.records.fromrecords(recList [,dtype,...]) #以文本形式从记录列表中创建一个recarray core.records.fromstring(datastring [,dtype,...])#从包含在字符串中的二进制数据创建(只读)记录数组 core.records.fromfile(fd [,dtype,shape,...]) #从二进制文件数据创建一个数组 ===================================================================== 2.记录数组 #属性访问结构化数组的字段 2.1.创建记录数组: recordarr = np.rec.array([(1,2.,'Hello'),(3,4.,"World")],dtype=[('a', 'i4'),('b', 'f4'), ('c', 'S10')]) # rec.array([(1, 2., b'Hello'), (3, 4., b'World')],dtype=[('a', '<i4'), ('b', '<f4'), ('c', 'S10')]) type(recordarr.a)# numpy.recarray ====================================================================== 2.2.数组操作: recordarr.b #array([2., 4.], dtype=float32) recordarr[1:2] # rec.array([(3, 4., 'World')],dtype=[('a', '<i4'), ('b', '<f4'), ('c', 'S10')]) recordarr[1:2].a# array([3], dtype=int32) recordarr.a[1:2]# array([3], dtype=int32) recordarr[1].c # 'World' ====================================================================== 2.2.numpy.rec.array 数组(包括结构化)转换为记录数组 arr = array([(1,2.,'Hello'),(3,4.,"World")],dtype=[('a', 'i4'), ('b', 'f4'), ('c', 'S10')]) recordarr = 
np.rec.array(arr) # 视图获得结构化数组的记录数组: recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),type=np.recarray) # rec.array([(1, 2., b'Hello'), (3, 4., b'World')],dtype=[('a', '<i4'), ('b', '<f4'), ('c', 'S10')]) recordarr = arr.view(np.recarray) #将ndarray作为类型查看np.recarray会自动转换为np.record数据类型 recordarr.dtype # dtype((numpy.record, [('a', '<i4'), ('b', '<f4'), ('c', 'S10')])) ===================================================================== 2.3.要回到纯粹的ndarray中,dtype和type都必须重置: arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) # array([(1, 2., b'Hello'), (3, 4., b'World')],dtype=[('a', '<i4'), ('b', '<f4'), ('c', 'S10')]) ===================================================================== ``` #### Creating character arrays (numpy.char) https://numpy.org/doc/stable/reference/routines.array-creation.html#creating-character-arrays-numpy-char numpy.char是numpy.core.defchararray的首选别名。 #### Numerical ranges numpy.arange([start, ]stop, [step, ]dtype=None) ``` np.arange(3,7),np.arange(3,7,2) # numpy.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0) np.linspace(2.0, 3.0, num=5),\ np.linspace(2.0, 3.0, num=5, endpoint=False),\ np.linspace(2.0, 3.0, num=5, retstep=True) # retstep 如果为真,返回(样本,步长),其中步长是样本之间的间隔。 import matplotlib.pyplot as plt N = 8 y = np.zeros(N) x1 = np.linspace(0, 10, N, endpoint=True) x2 = np.linspace(0, 10, N, endpoint=False) plt.plot(x1, y, 'o') plt.plot(x2, y + 0.5, 'o') plt.ylim([-0.5, 1]) plt.show() """ numpy.logspace(start, stop, num=50, endpoint=True, base=10.0, dtype=None, axis=0) 返回以对数刻度均匀间隔的数字。 在线性空间中,序列以base ** start (base的开始次方)开始,以base ** stop(参见下面的端点)结束。 版本1.16.0中的更改:现在支持非标量启动和停止。 """ np.logspace(2.0, 3.0, num=4),\ np.logspace(2.0, 3.0, num=4, endpoint=False),\ np.logspace(2.0, 3.0, num=4, base=2.0) """ numpy.geomspace(start, stop, num=50, endpoint=True, dtype=None, axis=0) 返回在对数尺度上均匀间隔的数字(几何级数)。这类似于logspace,但是直接指定了端点。 每个输出示例都是前一个输出的常数倍。 版本1.16.0中的更改:现在支持非标量启动和停止。 Straight line 
np.geomspace(1j, 1000j, num=4) Circle np.geomspace(-1+0j, 1+0j, num=5) """ np.geomspace(1, 1000, num=4),\ np.geomspace(1, 1000, num=3, endpoint=False),\ np.geomspace(1, 1000, num=4, endpoint=False),\ np.geomspace(1, 256, num=9) # 注意,上面可能不会产生精确整数 np.geomspace(1, 256, num=9, dtype=int),\ np.around(np.geomspace(1, 256, num=9)).astype(int) # 允许负的、递减的和复杂的输入 np.geomspace(1000, 1, num=4),\ np.geomspace(-1000, -1, num=4),\ np.geomspace(1j, 1000j, num=4),\ np.geomspace(-1+0j, 1+0j, num=5) """ np.geomspace(1j, 1000j, num=4) # 直线 np.geomspace(-1+0j, 1+0j, num=5) # 圆 """ """ 从坐标向量返回坐标矩阵。 给定一维坐标数组x1、x2、…、xn,在N-D网格上建立N-D坐标数组,对N-D标量/向量场进行向量化计算。 在1.9版本中进行了更改:允许使用1-D和0-D的情况。 """ nx, ny = (3, 2) x = np.linspace(0, 1, nx) y = np.linspace(0, 1, ny) # sparse 如果为真,则返回一个稀疏网格以保存内存。默认是假的。新版本1.7.0。 np.meshgrid(x, y),np.meshgrid(x, y, sparse=True) import matplotlib.pyplot as plt x = np.arange(-5, 5, 0.1) y = np.arange(-5, 5, 0.1) xx, yy = np.meshgrid(x, y, sparse=True) z = np.sin(xx**2 + yy**2) / (xx**2 + yy**2) h = plt.contourf(x,y,z) plt.show() """ nd_grid实例,它返回一个密集的多维“ meshgrid”。 numpy.lib.index_tricks.nd_grid的一个实例,该实例在建立索引时返回一个密集的(或充血的)网格网格, 以便每个返回的参数具有相同的形状。 输出数组的尺寸和数量等于索引尺寸的数量。 如果步长不是复数, 则停止不包括在内。 但是,如果步长是复数(例如5j),则其幅度的整数部分将被解释为指定要在起始值和终止值之间创建的点数, 其中终止值是包含端点的值。 """ np.mgrid[0:5,0:5],np.mgrid[-1:1:5j] """ nd_grid实例,它返回一个开放的多维meshgrid。 numpy.lib.index_tricks.nd_grid的一个实例,该实例在建立索引时返回一个开放的(即未充实的)网格, 因此每个返回数组的一个维数都大于1。输出数组的维数和数量相等 到索引尺寸的数量。 如果步长不是复数,则停止不包括在内。 但是,如果步长是复数(例如5j),则其幅度的整数部分将被解释为指定要在起始值和终止值之间创建的点数, 其中终止值是包含端点的值。 """ np.ogrid[-1:1:5j],np.ogrid[0:5,0:5] ``` #### 矩阵 ``` # 提取对角线或构造对角线数组 x = np.arange(9).reshape((3,3)) x np.diag(x),\ np.diag(x, k=1),\ np.diag(x, k=-1) np.diag(np.diag(x)) # 创建一个二维数组,将扁平输入作为对角线。 np.diagflat([[1,2], [3,4]]),np.diagflat([1,2], 1) # 在给定对角线处及以下且在其他位置为零的数组。 # k 数组被填充的位置和位置以下的次对角线。k = 0是主对角线,k < 0在它下面,k>在上面。默认值是0。 np.tri(3, 5, 2, dtype=int),np.tri(3, 5, -1) # 数组的下三角。 np.tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) # 数组的上三角。 
np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) """ 生成一个范德蒙矩阵。 输出矩阵的列是输入向量的幂。幂的顺序由递增的布尔参数决定。 具体地说,当递增为False时,第i输出列是输入向量按元素顺序的N - i- 1次幂。 这样一个每一行都有一个几何级数的矩阵,以亚历山大·特拉夫·范德蒙德的名字命名。 """ x = np.array([1, 2, 3, 5]) N = 3 # 列数 默认len(x) np.vander(x, N) np.column_stack([x**(N-1-i) for i in range(N)]) np.vander(x),np.vander(x, increasing=True) np.linalg.det(np.vander(x)),(5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1) # 把输入解释成一个矩阵。 x = np.array([[1, 2], [3, 4]]) m = np.asmatrix(x) x[0,0] = 5 m # 从字符串、嵌套序列或数组构建矩阵对象。 A = np.mat('1 1; 1 1') B = np.mat('2 2; 2 2') C = np.mat('3 4; 5 6') D = np.mat('7 8; 9 0') np.bmat([[A, B], [C, D]]),\ np.bmat(np.r_[np.c_[A, B], np.c_[C, D]]),\ np.bmat('A,B; C,D') ```
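The view/base relationship described at the top of this notebook can be demonstrated directly. A small sketch showing that a slice shares memory with its parent array while a copy does not:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
v = a[:, 1]          # a view: shares the underlying buffer with `a`
c = a[:, 1].copy()   # an independent copy with its own buffer
a[0, 1] = 99         # mutating `a` is visible through the view only
```

`np.shares_memory(a, v)` is a convenient way to check whether two arrays alias the same buffer; here it is `True` for the view and `False` for the copy.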
*Licensed under the MIT License.*

# Interpreting Classical Text Classification models

_**This notebook showcases how to use the interpret-text repo to implement an interpretable module using feature importances and a bag-of-words representation.**_

## Contents

1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Training](#Training)
4. [Results](#Results)

```
import sys
sys.path.append("../..")
import os

# sklearn
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

from interpret_text.experimental.classical import ClassicalTextExplainer
from notebooks.test_utils.utils_mnli import load_mnli_pandas_df

# for testing
from scrapbook.api import glue

working_dir = os.getcwd()
```

## 1. Introduction

This notebook illustrates how to use interpret-text locally to help interpret text classification, using a logistic regression baseline and bag-of-words encoding. It demonstrates the API calls needed to obtain the feature importances along with a visualization dashboard.

###### Note:
* *Although we use logistic regression, any model that follows sklearn's classifier API should be supported natively or with minimal tweaking.*
* *The interpreter supports interpretations using either coefficients associated with linear models or feature importances associated with ensemble models.*
* *The classifier relies on scipy's sparse representations to keep the dataset in memory.*

## 2. Setup

The notebook is built on features made available by [scikit-learn](https://scikit-learn.org/stable/) and [spacy](https://spacy.io/) for easier compatibility with popular toolkits.
### Configuration parameters

```
DATA_FOLDER = './temp'
TRAIN_SIZE = 0.7
TEST_SIZE = 0.3
```

### Data loading

```
df = load_mnli_pandas_df(DATA_FOLDER, "train")
df = df[df["gold_label"] == "neutral"]  # get unique sentences

# fetch documents and labels from data frame
X_str = df['sentence1']  # the document we want to analyze
ylabels = df['genre']    # the labels, or answers, we want to test against
```

### Create explainer

```
# Create explainer object that contains default glassbox classifier and explanation methods
explainer = ClassicalTextExplainer()
label_encoder = LabelEncoder()
```

## Training

###### Note: Vocabulary
* *The vocabulary is compiled from the training set. Any word that does not appear in the training data split will not appear in the vocabulary.*
* *A word must appear one or more times to be considered part of the vocabulary.*
* *However, sklearn's CountVectorizer allows the addition of a custom vocabulary as an input parameter.*

### Configure training setup

This step casts the training data and labels into the correct format:
1. Split data into train and test using a random shuffle
2. Load the desired classifier. In this case, Logistic Regression is set as default.
3. Set up grid search for hyperparameter optimization and train the model. Edit the hyperparameter range to search over as per your model.
4. Fit models to the train set

```
X_train, X_test, y_train, y_test = train_test_split(X_str, ylabels, train_size=TRAIN_SIZE, test_size=TEST_SIZE)
y_train = label_encoder.fit_transform(y_train)
y_test = label_encoder.transform(y_test)
print("X_train shape =" + str(X_train.shape))
print("y_train shape =" + str(y_train.shape))
print("X_train data structure = " + str(type(X_train)))
```

#### Model Overview

The 1-gram [Bag of Words](https://en.wikipedia.org/wiki/Bag-of-words_model) allows a 1:1 mapping from individual words to their respective frequencies in the [document-term matrix](https://en.wikipedia.org/wiki/Document-term_matrix).
```
classifier, best_params = explainer.fit(X_train, y_train)
```

## Results

###### Notes for default Logistic Regression classifier:
* *The parameters are set using cross-validation.*
* *The hyperparameters listed below were selected by searching over a larger space.*
* *These apply specifically to this instance of the logistic regression model and the mnli dataset.*
* *The 'multinomial' setup was found to be better than 'one-vs-all' across the board.*
* *The default 'liblinear' solver is not supported for the 'multinomial' model setup.*
* *For a different model or dataset, set the range as appropriate using the hyperparam_range argument in the train method.*

```
# obtain best classifier and hyper params
print("best classifier: " + str(best_params))
```

## Performance Metrics

```
mean_accuracy = classifier.score(X_test, y_test, sample_weight=None)
print("accuracy = " + str(mean_accuracy * 100) + "%")
y_pred = classifier.predict(X_test)
[precision, recall, fscore, support] = precision_recall_fscore_support(y_test, y_pred, average='macro')
```

Capture metrics for integration testing

```
glue("accuracy", mean_accuracy)
glue("precision", precision)
glue("recall", recall)
glue("f1", fscore)
print("[precision, recall, fscore, support] = " + str([precision, recall, fscore, support]))
```

## Local Importances

Local importances are the most and least important words for a single document.

```
# Enter any document or a document and label pair that needs to be interpreted
document = "I travelled to the beach. I took the train. I saw fairies, dragons and elves"
document1 = "The term construction means fabrication, erection, or installation of an affected unit."
document2 = "Demonstrating Product Reliability Indicates the Product Is Ready for Production"
document3 = "and see there\'s no secrecy to that because the bill always comes in and we know how much they pay for it"
document4 = "Had that piquant gipsy face been at the bottom of the crime, or was it 73 the baser mainspring of money?"
document5 = "No, the boy trusted me, and I shan\'t let him down."

# Obtain the top feature ids for the selected class label
explainer.preprocessor.labelEncoder = label_encoder
local_explanation = explainer.explain_local(document)
```

Alternatively, you can pass the predicted label with the document:

```
y = classifier.predict(document1)
predicted_label = label_encoder.inverse_transform(y)
local_explanation = explainer.explain_local(document1, predicted_label)

from interpret_text.experimental.widget import ExplanationDashboard
ExplanationDashboard(local_explanation)
```
# Data analysis and visualizations of the "Titanic: Machine Learning from Disaster" challenge

This notebook focuses on analysing the Titanic data set for correctly predicting the survival rate of the passengers of the Titanic using Kaggle's Titanic challenge dataset. In this notebook, we'll take a look at how the data is organized, what features represent the training data and which ones would be the most useful for a classification model to predict if a person would be likely to survive the trip or not.

The notebook is structured as follows:

1. **Loading data**
   1.1. Import libraries
   1.2. Load files
2. **Data analysis and visualization**
   2.1. Check data format
   2.2. Checking for missing values
   2.3. Detecting outliers
   2.4. Removing outliers
   2.5. Analyze numerical fields
   - Age
   - Fare
   - SibSp
   - Parch
   - Pclass
   2.6. Analyze categorical fields
   - Name
   - Sex
   - Pclass
   - Embarked
   - Ticket
   - Cabin
3. **Fill missing values and feature engineering**
   3.1. Age
   3.2. Cabin
   3.3. Embarked
   3.4. Fare
   3.5. Ticket
   3.6. New field: Name/Title
   3.7. New field: Family size
4. **Export data**
   4.1. Convert fields to categorical values
   4.2. Split data to train and test sets
   4.3. Save transformed data to .csv files

## 1. Loading data

### 1.1. Import necessary libraries

```
# Import the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style('whitegrid')
```

### 1.2. Load files

```
# Load the Titanic dataset's data files
train_df = pd.read_csv("data/train.csv")
test_df = pd.read_csv("data/test.csv")
```

## 2. Data analysis and visualization

### 2.1. Check data format

See how the data is formatted (data types, fields)

```
# check out a sample of the DataFrame
train_df.head()

# check out how the test DataFrame is organized
test_df.head()

# check the data types, number of entries and missing values present in the DataFrame
train_df.info()

# Check the test data format
test_df.info()

# Get some statistics of the training data
train_df.describe()

# Same for the test DataFrame
test_df.describe()
```

#### About the data

The training data of the dataset is formatted as the following table:

| Variable | Definition | Key |
|---|---|---|
| PassengerId | Passenger Id | |
| Survival | Survival | 0 = No, 1 = Yes |
| Pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd |
| Sex | Sex | |
| Age | Age in years | |
| SibSp | # of siblings / spouses aboard the Titanic | |
| Parch | # of parents / children aboard the Titanic | |
| Ticket | Ticket number | |
| Fare | Passenger fare | |
| Cabin | Cabin number | |
| Embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |

This dataset is rather small: the training data is composed of around 900 entry points. From these fields, only 3 have missing values (Age, Cabin and Embarked). This means that it is required to address the missing data before proceeding to fit a model with it. About half of the fields are in numeric format and the other half are categorical data (strings). Converting the categorical data is required before proceeding to fit a model with this data. To understand which fields are useful and which are not, additional analysis is required to better understand the data. Next, we proceed with some data visualizations/plotting to aid us in gathering more insights about this dataset.
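The train/test stacking used in the next cell relies on pandas' multi-level concatenation; a minimal sketch on made-up frames (the column name is hypothetical):

```python
import pandas as pd

# Toy stand-ins for train_df / test_df
a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"x": [3]})

combined = pd.concat([a, b], keys=["train", "test"])

# Selecting on the first index level recovers each original frame
print(combined.loc["train"]["x"].tolist())  # [1, 2]
print(combined.loc["test"]["x"].tolist())   # [3]
```

This is why `df.loc['train']` and `df.loc['test']` later in the notebook cleanly split the combined frame back apart.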
```
# Before proceeding any further, let's concat the train and test sets for future data analysis / processing
df = pd.concat([train_df, test_df], keys=["train", "test"])
```

### 2.2 Checking for missing values

First let's see if there exist missing values in the dataset that need to be removed/filled.

```
# Show the data format and number of non-null entries available for each column
df.info()

# Visually check the missing values in the dataset
fig, ax = plt.subplots(figsize=(12,6))
sns.heatmap(df.isnull(), yticklabels=False, cbar=False, cmap='viridis', ax=ax)
ax.set_ylabel('')
ax.set_title('Missing values in the Titanic dataset')
```

We can see that there are columns with missing values in the dataset. These include the **Age**, **Cabin**, **Embarked** and **Fare** fields. The **Survived** field is only used as the label for training a model, so its remaining missing values are attributed to the test set and don't need to be filled. Before filling these missing values, it is necessary to check for outliers in these fields as well. That will be done in the next subsection.

```
# Preemptively fill all missing values as NaNs
df = df.fillna(np.nan)

# Sum all the empty values in the dataset
df.isnull().sum()
```

### 2.3 Detecting outliers

This subsection deals with finding outliers for numerical fields. Outlier values are usually extreme events or special cases that are somewhat harmful for the generalization of learned models and are best removed in order for a model to perform better. Here, we'll use Tukey's method (IQR - Inter-Quartile Range) to detect and remove numerical values that can be considered outliers. The outlier removal process is simple: we'll search for events (values) that lie in one of the extremes of the distribution and flag them as outliers for the **Age**, **Fare**, **Parch** and **SibSp** fields.
If an index has been flagged as an outlier for more than two of these fields, it is considered an outlier and will be discarded.

```
# Histograms for the Age, Fare, Parch and SibSp fields
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(12,6))
sns.distplot(df["Age"].dropna(), ax=ax1)
ax1.set_title('Age')
ax1.set_xlabel('')
sns.distplot(df["Fare"].dropna(), ax=ax2)
ax2.set_title('Fare')
ax2.set_xlabel('')
sns.distplot(df["Parch"].dropna(), ax=ax3)
ax3.set_title('Parch')
ax3.set_xlabel('')
sns.distplot(df["SibSp"].dropna(), ax=ax4)
ax4.set_title('SibSp')
ax4.set_xlabel('')
```

We can see all distributions have their set of outliers that should be dealt with. We'll follow the recipe explained previously to select the rows of data that produce extreme events. Also, compressing skewed distributions using the log transform will help in training a model to produce a better estimate based on these values. This will be addressed later on when engineering the features that will be used to train a model.

```
from collections import Counter

def detect_outliers(dataframe, fields, n):
    """
    Detects outliers for a set of fields of a DataFrame and returns a list of
    indices of outlier rows. If an index appears at least 'n' times, it is
    flagged as an outlier and added to the output list.
    """
    outliers = []
    for field in fields:
        # compute the first and third quartile values
        Q1 = dataframe[field].quantile(0.25)
        Q3 = dataframe[field].quantile(0.75)
        # Interquartile range (IQR)
        IQR = Q3 - Q1
        # use an offset over the quantile to classify a value as an outlier
        offset = 1.5 * IQR
        # detect indexes as outliers
        field_outliers = dataframe[(dataframe[field] < Q1 - offset) | (dataframe[field] > Q3 + offset)]
        # Get a list of indices
        indices = field_outliers.index.tolist()
        outliers.extend(indices)
    # Count number of duplicate indices
    outlier_indices = Counter(outliers)
    # Filter only the duplicate indices and discard the rest
    output_outliers = list(k for k, v in outlier_indices.items() if v >= n)
    return output_outliers

outliers = detect_outliers(df.loc['train'], ["Age", "SibSp", "Parch", "Fare"], n=3)

# Show all outlier entries
df.iloc[outliers]
```

11 rows have been detected as outliers. Some of them, like rows 27, 88 and 341, have high fares, while the others have a large number of siblings in their family. Curiously, all of them embarked from the Southampton (S) port.

### 2.4 Removing outliers

Now that we've detected possible outliers in the data, let's proceed in discarding these rows so that a model trained on this data is able to generalize better and give better predictions.

```
# Select a slice view of the train set
new_train = df.loc['train'].drop(outliers, axis=0)
filtered_df = pd.concat([new_train, df.loc['test']], keys=['train', 'test'])

# old dataframe
len(df)

# new dataframe
len(filtered_df)
```

### 2.5 Analyze numerical fields

This section deals with analysing the numerical fields of the data set and gathering information about them.
```
# Plot a heatmap of all numerical fields in the dataset
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 8))
sns.heatmap(train_df[['Survived', 'Age', 'Fare', 'Parch', 'SibSp', 'Pclass']].corr(), annot=True, cmap='coolwarm', ax=ax)
```

From the correlation heatmap, we can see that **Fare**, **Age** and **Pclass** are somewhat correlated with the survival rate of a passenger. We cannot tell from this information whether the other fields have any indirect impact on the overall survival rate of a passenger, but these fields can have some correlation with subgroups of the population.

### Age

```
# Plot the age distribution for the surviving and non-surviving passengers
g = sns.FacetGrid(data=train_df, col="Survived", aspect=1.6)
g.map(sns.distplot, 'Age', bins=20)

# Super-impose the two distributions to better compare them (i.e., to see the key differences between the two)
g = sns.FacetGrid(data=train_df, hue='Survived', aspect=2.5)
g.map(sns.kdeplot, 'Age', shade=True)
g.add_legend()
```

The **Age** distribution clearly shows a trend for higher survivability of younger passengers when compared with the rest of the ages.

### Fare

```
# Plot the fare distribution of all passengers
g = sns.FacetGrid(data=train_df, hue="Survived", aspect=3)
g.map(sns.distplot, 'Fare', bins=20)
g.ax.set_title('Fare distribution')
g.add_legend()
```

The **Fare** attribute has a skewed distribution with a long tail, and this will negatively influence the ability of an algorithm to efficiently use this data for prediction. To correct for this, it would be best to compact the values using a log transformation.
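The effect of such a log transform can be sketched with NumPy on a synthetic long-tailed sample (made-up data, not the actual fares):

```python
import numpy as np

rng = np.random.default_rng(0)
fares = rng.lognormal(mean=3.0, sigma=1.0, size=1000)  # skewed, long right tail

log_fares = np.log(fares)  # all synthetic values are > 0, so np.log is safe

# The tail stretches the raw scale far more than the log scale
ratio_raw = fares.max() / np.median(fares)
ratio_log = log_fares.max() / np.median(log_fares)
print(ratio_raw, ratio_log)
```

The max-to-median ratio shrinks dramatically after the transform, which is exactly the compression we want before handing the values to a model.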
### SibSp

```
# Let's see how the passengers are distributed by number of spouses/siblings
train_df['SibSp'].value_counts()

# Plot the survival rate of each group of SibSp
g = sns.factorplot(data=train_df, x='SibSp', y='Survived', kind='bar', aspect=1.6)
g.despine(left=True)
g = g.set_ylabels("survival probability")
```

The survival rate of passengers with several spouses/siblings varies quite a lot compared with that of passengers with few spouses/siblings. This feature indicates that the family size of a passenger would be a good indicator to use to predict the survival rate.

### Parch

```
# Let's see how the passengers are distributed by number of parents/children
train_df['Parch'].value_counts()

# Plot the survival rate of each group of Parch
g = sns.factorplot(data=train_df, x='Parch', y='Survived', kind='bar', aspect=2)
g.despine(left=True)
g = g.set_ylabels("survival probability")
```

Like the **SibSp** field, the **Parch** field also indicates that the survival rate of larger families varies a lot, and that smaller families have a better chance to survive than larger families.

### Pclass

```
# Let's see how the passengers are distributed by class
train_df['Pclass'].value_counts()

fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5))

# Plot the number of passengers by class
g = sns.countplot(data=train_df, x='Pclass', ax=ax1)
g.set_ylabel('Number of passengers')
g.set_title('Number of passengers per class')

# Plot the survival rate by passenger class
g = sns.barplot(data=train_df, x='Pclass', y='Survived', ax=ax2, palette='coolwarm')
g.set_ylabel('Survival rate')
g.set_title('Survivability by class')
```

The **Pclass** field is also a strong predictor for the survivability of a passenger on the Titanic. It appears that more privileged persons were much more likely to survive than less privileged ones.
```
# Plot the number of passengers by class and gender
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10,5))
g = sns.countplot(data=train_df, x='Pclass', hue='Sex', palette='coolwarm', ax=ax1)
ax1.set_ylabel('Number of passengers')
ax1.set_title('Number of passengers by Pclass and gender')

# Plot the survival rate of passengers by class and gender
g = sns.factorplot(data=train_df, x='Pclass', y='Survived', hue='Sex', kind='bar', palette='coolwarm', aspect=2)
g.ax.set_ylabel('Survival rate')
g.ax.set_title('Survivability by Pclass and gender')
```

Again, gender is the main indicator of the survival rate of a passenger. Even males from the higher, more privileged classes have a lower survival rate than females of lower classes.

```
# Next, let's check the survival rate of the passenger's sex vs class vs age
g = sns.FacetGrid(data=train_df, col='Survived', row='Pclass', size=2.2, aspect=2, hue='Sex', palette='coolwarm')
g.map(plt.hist, "Age", alpha=0.7, bins=20)
g.add_legend()
```

This plot provides finer-grained detail of the likelihood of a passenger surviving considering his/her **Age**, **Sex** and **Pclass** attributes. For males, younger passengers are more likely to survive independently of the class they traveled in. For older passengers, the class has a significant impact on the survival rate. For females, the class they traveled in is not as important as for the male passengers, although the survival rate for passengers of the 3rd class is lower than that of the other two classes.

### 2.6 Analyze categorical fields

This subsection deals with analyzing categorical (non-numerical) fields in order to gain a better understanding of the type of data available to help separate the passengers who survived the voyage. These fields include:

- Name
- Sex
- Pclass
- Embarked
- Ticket
- Cabin

As a reminder, in this subsection only data analysis will be performed, but in Section 3.
we'll be using the insights gained here to process and clean the data to better help us predict who survived.

### Name

```
# First, let's take a peek at how the names are composed
train_df['Name'].head(10)
```

In this format, it seems that every entry will be unique from all others. However, every passenger has a title in his/her name that we could take advantage of to group them into categories. The rationale is, as seen in Section 2.5 with the **SibSp** and **Parch** fields, that the title will convey information about single or married individuals or other types of titles. For that, we need to process/split the data strings in order to fetch this information. At first glance, the names are composed of the *last* name, followed by the *title* and the *first* name. This is how we'll retrieve this information.

```
# Just for the sake of argument, let's check how many unique names there are
print('Number of total names: ', len(train_df['Name']))
print('Number of unique names: ', len(train_df['Name'].unique()))
```

As figured, every entry in **Name** is unique. Let's proceed in fetching the title from the name.

```
# Split the names by comma, remove dot characters and split the resulting string by whitespace to get the title
titles = train_df['Name'].map(lambda name: name.split(',')[1].split('.')[0])
titles.value_counts()

# Count how many unique titles exist
len(titles.value_counts())
```

17 different titles result from the **Name** field. Here we can see that the most common titles are **Mr** and **Miss**. Another interesting finding is the different kinds of titles that emerged from processing the strings, like infrequent titles such as **Don** or **Capt**. Some of these seem to belong to passengers of higher social status, which can be used to help separate subgroups in the **Pclass** field where such types of personalities might have had the privilege to escape the shipwreck.
```
# Plot the passenger count by title
train_df['Title'] = titles
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,5))
g = sns.countplot(data=train_df, x='Title', order=titles.value_counts().index, ax=ax)
ax.set_xticklabels(g.get_xticklabels(), rotation=20)
ax.set_ylabel('Number of passengers')
ax.set_title('Number of passengers by Title')

# Plot the survival rate of the passengers grouped by title
g = sns.factorplot(data=train_df, x='Title', y='Survived', kind='bar', order=titles.value_counts().index, aspect=2.5)
g.despine(left=True)
g.set_xticklabels(rotation=20)
g.set_ylabels('Survival rate')
g.ax.set_title('Survival rate by Title')
```

As expected, some groups have a much better survival rate than others. However, because there are only a few values for the less common titles, it is not possible to estimate with a comfortable degree of certainty that some titles fare better than others (although the captain title is probably the worst, since it is common for the captain to be the last one to leave the ship in cases of shipwreck).

### Sex

```
# Plot the number of passengers by gender
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5))
g = sns.countplot(data=train_df, x='Sex', ax=ax1)
g.set_ylabel('Number of passengers')
g.set_title('Number of passengers by gender')

# Plot the survival rate by gender
g = sns.barplot(data=train_df, x='Sex', y='Survived', ax=ax2, palette='coolwarm')
g.set_ylabel('Survival rate')
g.set_title('Survivability by gender')

# Compute the mean of the survival rate of each gender
train_df[["Sex","Survived"]].groupby('Sex').mean()
```

It is clear that the gender attribute influences the survival rate of a passenger on the Titanic. The **Sex** field is a key attribute for training and evaluating any statistical algorithm.
### Embarked

```
# Count the number of unique ports which passengers have embarked from
train_df['Embarked'].value_counts()

# Plot the number of passengers by embark port
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5))
g = sns.countplot(data=train_df, x='Embarked', ax=ax1)
g.set_ylabel('Number of passengers')
g.set_title('Number of passengers by embark port')

# Plot the survival rate of passengers by embark port
g = sns.barplot(data=train_df, x='Embarked', y='Survived', ax=ax2, palette='coolwarm')
g.set_ylabel('Survival rate')
g.set_title('Survivability by embark port')
```

The port where a passenger embarked appears to be related to the chance of survival. This might be due to other factors like the passenger's class, with passengers from the Queenstown (Q) port being of an upper class compared with those from the Southampton (S) embark port. Let's see if this hypothesis is valid by comparing the **Embarked** and **Pclass** fields.

```
# Plot the number of passengers by class and embark port
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(14,4))
g = sns.countplot(data=train_df, x='Embarked', hue='Pclass', ax=ax1)
g.set_ylabel('Number of passengers')
g.set_title('Number of passengers by Pclass per Embark port')

# Plot the survival rate of passengers by class and embark port
g = sns.factorplot(data=train_df, x='Embarked', y='Survived', col='Pclass', kind='bar')
g.set_ylabels('Survival rate')
```

Here we see that the bulk of the passengers came from the Southampton (S) port. Fewer passengers came from the other ports, and indeed the survival rate does vary from port to port, but it is not enough to predict a passenger's survival rate based on the passenger class and embark port alone.
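The port-vs-class hypothesis can also be checked numerically with a crosstab; a sketch on made-up rows (the real check would use `train_df` directly):

```python
import pandas as pd

# Hypothetical toy rows standing in for train_df
toy = pd.DataFrame({
    "Embarked": ["S", "S", "S", "C", "C", "Q"],
    "Pclass":   [3,   3,   1,   1,   1,   3],
})

# Row-normalised shares: fraction of each ticket class per embark port
shares = pd.crosstab(toy["Embarked"], toy["Pclass"], normalize="index")
print(shares)
```

With `normalize="index"`, each row sums to 1, so the table directly answers "what proportion of each port's passengers travelled in each class".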
### Ticket

```
# Count the number of unique ticket groups
train_df['Ticket'].value_counts()

# Too many tickets to visualize, so let's count the number of all unique tickets
len(train_df['Ticket'].unique())
```

There are many different types of tickets in the **Ticket** field (681 unique tickets to be precise). However, there appear to be some types of tickets that start with the same string, like "PC 17610" and "PC 17318" or "CA. 2343" and "CA 2144". Let's split the tickets by prefix and count how many groups exist in the data.

```
# Let's retrieve the ticket prefix of each ticket
ticket_by_prefix = train_df['Ticket'].map(lambda ticket: ticket.replace('.', '').replace('/', '').strip().split(' ')[0])
len(np.unique(ticket_by_prefix))
```

Grouping the tickets by prefix helped reduce the tickets into smaller groups. Maybe grouping them by strings vs digits may cluster the data into more meaningful groups.

```
# Group the tickets without a prefix into a single group
ticket_groups = ticket_by_prefix.map(lambda ticket: ticket if not ticket.isdigit() else 'Digits')
len(np.unique(ticket_groups))
```

This helped reduce the total number of groups to 31 unique groups. Let's see if these new groups would be a good indicator to predict the survival rate.

```
# Plot the survival rate per ticket group
train_df['ticket_groups'] = ticket_groups
g = sns.factorplot(data=train_df, x='ticket_groups', y='Survived', kind='bar', aspect=3)
g.ax.set_title('Survival rate by Ticket group')
g.set_ylabels('Survival rate')
g.set_xlabels('')
g.set_xticklabels(rotation=30)
```

The **Ticket** field, with some feature engineering, holds some valuable information that can be used to help a prediction model determine whether passengers holding certain types of tickets would be more or less likely to survive. This type of data should be taken into account when modeling a predictor for this dataset.
### Cabin

```
# Count the number of unique cabins
train_df['Cabin'].value_counts()

# Too many cabins to visualize, let's count the total unique cabins that exist
len(train_df['Cabin'].unique())

# See how many entries have missing values for the Cabin column
total_missing_values = train_df['Cabin'].isnull().sum()
total_data_rows = len(train_df['PassengerId'])
print('Number of non-empty cabin rows: {}'.format(total_data_rows - total_missing_values))
print('Total number of data rows in the train set: {}'.format(total_data_rows))
```

There are a lot of missing values for the **Cabin** field, and 3/4 of the remaining values are unique. However, it seems that the cabins are grouped by a letter prefix that we can take advantage of to see how many kinds of cabins there were. Let's retrieve the first letter of each cabin string and see how many groups we can get.

```
# Group each existing cabin by its first letter. Otherwise, set missing values as 'X'
cabin_letter = train_df['Cabin'].fillna('X').map(lambda cabin: cabin[0])
cabin_letter.value_counts()
```

Now we have 9 unique types of cabin groups. Let's see what information we can gather using them.
```
train_df['cabin_groups'] = cabin_letter

# Plot the number of passengers by cabin group
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(14.5,5))
g = sns.countplot(data=train_df, x='cabin_groups', order=['A','B','C','D','E','F','G','T','X'], ax=ax)
ax.set_ylabel('Number of passengers')
ax.set_title('Number of passengers by cabin group')
ax.set_xlabel('Cabin groups')

# Plot the survival rate of passengers by cabin group
g = sns.factorplot(data=train_df, x='cabin_groups', y='Survived', kind='bar', aspect=3, order=['A','B','C','D','E','F','G','T','X'])
g.set_ylabels('Survival rate')
g.set_xlabels('Cabin groups')
plt.title('Survival rate by cabin groups')
```

Passengers holding cabins in the **B**, **D** and **E** groups had a higher survival rate compared with the others, but with so few data points it is difficult to give much importance to this information. However, some of these cabin groups may provide good indicators for a prediction model to effectively use to help separate some subgroups of passengers.

## 3. Fill missing values and feature engineering

This section deals with cleaning the data by filling missing values in the data set and engineering new features out of a combination of existing fields, or by manipulating existing ones into more useful data (e.g., new categories).

### 3.1 Age

```
# Check how many missing values exist for the Age column
df['Age'].isnull().sum()
```

The **Age** column contains 256 missing values in the dataset. To fill these missing values, we'll use the fields that have the most correlation with **Age**: **SibSp**, **Parch** and **Pclass**. We'll use the average ages of the passengers that have the same class, number of siblings/spouses and family size.

```
# Set a function to fill missing values for the Age column based on the passenger class,
# number of siblings/spouses and the number of parents/children
def fill_missing_ages(df):
    """
    Fills the missing ages with the median age of passengers with the same Pclass, SibSp and Parch.
    Otherwise, fills the missing age with the median of all ages.
    """
    ages_missing_index = df[df['Age'].isnull()].index
    for idx in ages_missing_index:
        row = df.loc[idx, :]
        median_age = df['Age'].median()
        median_age_pred = df[(df['Pclass'] == row['Pclass']) & (df['SibSp'] == row['SibSp']) & (df['Parch'] == row['Parch'])]['Age'].median()
        if np.isnan(median_age_pred):
            df.loc[idx, 'Age'] = median_age
        else:
            df.loc[idx, 'Age'] = median_age_pred

# Show the median ages before and after filling the missing values for Age
print('Median age before filling missing values: ', df['Age'].median())
fill_missing_ages(df)
print('Median age after filling missing values: ', df['Age'].median())

# Plot the distributions of passengers by age vs sex and age vs survived
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5))
g = sns.boxplot(data=df, y='Age', x='Sex', palette='coolwarm', ax=ax1)
ax1.set_title("Age vs Sex")
g = sns.boxplot(data=df, y='Age', x='Survived', palette='viridis', ax=ax2)
ax2.set_title("Age vs Survived")
```

No big difference between the median values of **Age** w.r.t. gender or survival rate.

### 3.2 Cabin

```
# Total number of missing values in Cabin
df['Cabin'].isnull().sum()
```

**Cabin** is the field with the most missing values by far. Here we'll fill the missing values with a common value **X** and create categories with the initial letter of the remaining cabins (for more information, see the data analysis and visualizations for the **Cabin** field in the previous section).
```
# Fill missing values of Cabin with 'X' and get the first letter of the remaining values
df['Cabin'] = df['Cabin'].fillna('X').map(lambda cabin: cabin[0])

# Total number of missing values after filling
df['Cabin'].isnull().sum()

# Show a count plot of all values in Cabin
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(14.5,5))
g = sns.countplot(data=df, x='Cabin', order=['A','B','C','D','E','F','G','T','X'], ax=ax)
ax.set_ylabel('Number of passengers')
ax.set_title('Number of passengers by cabin group')
ax.set_xlabel('Cabin groups')
```

### 3.3 Embarked

```
# Total number of missing values in Embarked
df['Embarked'].isnull().sum()
```

Very few values are missing for the **Embarked** field. Let's fill these missing values with the port with the most people.

```
# Count how many passengers embarked per port
df['Embarked'].value_counts()

# Fill missing values with the most common port (S)
df['Embarked'] = df['Embarked'].fillna('S')

# Total number of missing values after filling
df['Embarked'].isnull().sum()
```

### 3.4 Fare

```
# Total number of missing values in Fare
df['Fare'].isnull().sum()
```

Only one missing value for **Fare**. We'll fill it with the median value.

```
# Fill the only missing value with the median value
avg_fare = df['Fare'].median()
df['Fare'] = df['Fare'].fillna(avg_fare)

# Total number of missing values after filling
df['Fare'].isnull().sum()
```

If you recall, the **Fare** column had a right-skewed distribution with a long tail. We need to compact these values if we intend to feed them to a learning algorithm and expect them to be useful for prediction. Next, we'll use a logarithmic transformation (**log**) to compress the distribution into a smaller range of values.
```
# Plot the Fare distribution
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,5))
g = sns.distplot(df['Fare'], ax=ax)
g.set_title('Fare distribution')

# Compress all values with a log function
df['Fare'] = df['Fare'].map(lambda fare: np.log(fare) if fare > 0 else 0)

# Plot the new Fare distribution
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,5))
g = sns.distplot(df['Fare'], ax=ax)
g.set_title('New Fare distribution')
```

We see that the **Fare** distribution has now shrunk significantly. This will help learning models that use this feature to predict the survival of passengers in a better, more meaningful way.

### (optional): show status of missing values in the dataset

```
# Count how many missing values persist in the dataset
df.isnull().sum()
```

At this point, all fields are free of missing values (with the exception of **Survived**, which will be used as a label). Now it is time to proceed with engineering some features before creating a learning model using this data.

### 3.5 Ticket

This field contains several features that we've seen previously in Section 2.6, where using the prefix of some tickets and grouping the remaining ones under a common class **digits** created some very interesting categories that might come in useful for a learning algorithm.
``` # Summary of the number of unique groups in Ticket len(df['Ticket'].unique()) # Convert ticket strings to a category df['Ticket'] = df['Ticket'].map(lambda ticket: 'digits' if ticket.isdigit() else ticket.replace('.', '').replace('/', '').strip().split(' ')[0]) # Summary of the number of unique groups in Ticket len(df['Ticket'].unique()) # Plot the survival rate for the new ticket groups g = sns.factorplot(data=df, x='Ticket', y='Survived', kind='bar', aspect=3.2) g.set_xticklabels(rotation=30) g.ax.set_xlabel('New Ticket groups') g.set_ylabels('Survival rate') ``` We've grouped the 929 different tickets into 37 distinct categories that we'll feed into a learning algorithm in another notebook, where we'll train several models to predict the survival rate of the Titanic passengers. ### New field: Name/Title A passenger's **Title** helps identify classes of passengers with better chances of survival than others (for more information see section 2.6 - Name). Here, we'll fetch the titles from the **Name** column and group the most similar titles with each other to form 5 groups in total.
``` # Split the title from the name titles = df['Name'].map(lambda name: name.split(',')[1].split('.')[0].strip()) titles.value_counts() ``` We'll be creating the following groups: - Mr - Miss (Miss/Mrs/Ms/Mlle/Mme) - Master - Captain (there's only one captain on the ship) - Other (remaining rare titles) ``` # Convert titles into a smaller group df['Title'] = titles df['Title'] = df['Title'].replace(['Lady','the Countess','Col','Don','Dr','Major','Rev','Sir','Jonkheer','Dona'], 'Rare') df["Title"] = df["Title"].map({"Master": 'Master', "Miss": 'Miss', "Ms" : 'Miss', "Mme": 'Miss', "Mlle": 'Miss', "Mrs": 'Miss', "Mr": 'Mr', "Capt": 'Captain', "Rare": 'Other'}) # Plot the number of passengers by title fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,5)) g = sns.countplot(data=df, x='Title', ax=ax) g.set_ylabel('Number of passengers') g.set_title('Number of passengers by Title') ``` ### New field: Family size The family size is given by the number of spouses/siblings and parents/children. As previously seen in Section 2.6, the size of a person's family is a good indicator of his/her survival rate. Therefore, let's create a new field for the size of a passenger's family. ``` # Add the SibSp and Parch fields to get the size of the family df['Family_size'] = df['SibSp'] + df['Parch'] + 1 # plot the amount of passengers that have a certain family size fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12,5)) g = sns.countplot(data=df, x='Family_size', ax=ax) g.set_title('Family size') g.set_ylabel('Number of passengers') g.set_xlabel('Family size') # plot the survival rate by family size g = sns.factorplot(data=df, x='Family_size', kind='bar', y='Survived', aspect=2.5) g.ax.set_title('Survival rate by family size') g.set_ylabels('Survival rate') g.ax.set_xlabel('Family size') ``` There seems to be quite a number of different family sizes that went on this trip.
Single people (family size of 1) have a lower survival rate compared to other passengers with bigger family sizes. Here we can note there are 4 different trends in the data: - **Single passengers** (size 1) - **Small families** (size 2-3) - **Medium families** (size 4) - **Large families** (size 5-11) Let's group passengers by family size with these splits. ``` # Set the function to categorize a family by its size def family_group(family_size): """ Returns a family group for a given family size. """ if family_size == 1: return 'Single' elif family_size == 2 or family_size == 3: return 'Small' elif family_size == 4: return 'Medium' else: return 'Large' family_group = df['Family_size'].map(family_group) df['Family_group'] = family_group # Plot the number of passengers by family group fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,5)) g = sns.countplot(data=df, x='Family_group', order=['Single', 'Small', 'Medium', 'Large'], ax=ax) g.set_title('Number of passengers by family group') g.set_ylabel('Number of passengers') g.set_xlabel('Family group') # Plot the survival rate by family group g = sns.factorplot(data=df, x='Family_group', y='Survived', kind='bar', order=['Single', 'Small', 'Medium', 'Large'], aspect=2) g.ax.set_title('Survival rate by Family group') g.set_ylabels('Survival rate') g.ax.set_xlabel('Family group') ``` **Small** and **Medium** families seem to have the highest survival rates. **Large** families have a lower chance of survival than **Single** passengers, which is to be expected, since the chance of all members of a big family surviving is lower than for smaller families with fewer members. ## 4. Export data This section deals with preparing the processed data for consumption by a statistical learning algorithm. Here, categorical values will be converted to numerical ones, and unnecessary fields will be discarded. ### 4.1.
Encode categorical fields as dummy variables It is important to convert categorical fields (i.e., string values) into numerical ones in order for statistical learning algorithms to be able to use them for prediction. This can be done using **pd.get_dummies()**. ``` # Encode Ticket as dummy variables df = pd.get_dummies(df, columns = ["Ticket"], prefix="Ticket") # Encode Family Group as dummy variables df = pd.get_dummies(df, columns = ["Family_group"], prefix="Fsize") # Encode Title as dummy variables df = pd.get_dummies(df, columns = ["Title"], prefix="Title") # Encode Cabin as dummy variables df = pd.get_dummies(df, columns = ["Cabin"], prefix="Cabin") # Encode Embarked as dummy variables df = pd.get_dummies(df, columns = ["Embarked"], prefix="E") # Encode Sex as dummy variables df = pd.get_dummies(df, columns = ["Sex"], prefix="Sex") # Encode Pclass as dummy variables df = pd.get_dummies(df, columns = ["Pclass"], prefix="Pclass") # Check all the new columns created df.columns ``` ### 4.2. Split data into train and test sets ``` # Retrieve the train data group train_data = df.loc['train'] # Retrieve the test data group test_data = df.loc['test'] ``` ### 4.3. Save transformed data to .csv files ``` train_data.to_csv("data/train_processed.csv") test_data.to_csv("data/test_processed.csv") ```
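As a minimal, self-contained illustration of what `pd.get_dummies` produces (a made-up toy frame, not the Titanic data), each distinct value becomes its own indicator column:

```python
import pandas as pd

# Toy categorical column mimicking the Embarked encoding above
toy = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})
encoded = pd.get_dummies(toy, columns=['Embarked'], prefix='E')

# One indicator column per port; each row has exactly one 1 among them
print(sorted(encoded.columns))  # ['E_C', 'E_Q', 'E_S']
```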
# Text classification with Transformer **Author:** [Apoorv Nandan](https://twitter.com/NandanApoorv)<br> **Date created:** 2020/05/10<br> **Last modified:** 2020/05/10<br> **Description:** Implement a Transformer block as a Keras layer and use it for text classification. ## Setup ``` import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers ``` ## Implement multi head self attention as a Keras layer ``` class MultiHeadSelfAttention(layers.Layer): def __init__(self, embed_dim, num_heads=8): super(MultiHeadSelfAttention, self).__init__() self.embed_dim = embed_dim self.num_heads = num_heads if embed_dim % num_heads != 0: raise ValueError( f"embedding dimension = {embed_dim} should be divisible by number of heads = {num_heads}" ) self.projection_dim = embed_dim // num_heads self.query_dense = layers.Dense(embed_dim) self.key_dense = layers.Dense(embed_dim) self.value_dense = layers.Dense(embed_dim) self.combine_heads = layers.Dense(embed_dim) def attention(self, query, key, value): score = tf.matmul(query, key, transpose_b=True) dim_key = tf.cast(tf.shape(key)[-1], tf.float32) scaled_score = score / tf.math.sqrt(dim_key) weights = tf.nn.softmax(scaled_score, axis=-1) output = tf.matmul(weights, value) return output, weights def separate_heads(self, x, batch_size): x = tf.reshape(x, (batch_size, -1, self.num_heads, self.projection_dim)) return tf.transpose(x, perm=[0, 2, 1, 3]) def call(self, inputs): # x.shape = [batch_size, seq_len, embedding_dim] batch_size = tf.shape(inputs)[0] query = self.query_dense(inputs) # (batch_size, seq_len, embed_dim) key = self.key_dense(inputs) # (batch_size, seq_len, embed_dim) value = self.value_dense(inputs) # (batch_size, seq_len, embed_dim) query = self.separate_heads( query, batch_size ) # (batch_size, num_heads, seq_len, projection_dim) key = self.separate_heads( key, batch_size ) # (batch_size, num_heads, seq_len, projection_dim) value = self.separate_heads( value, batch_size ) # (batch_size, 
num_heads, seq_len, projection_dim) attention, weights = self.attention(query, key, value) attention = tf.transpose( attention, perm=[0, 2, 1, 3] ) # (batch_size, seq_len, num_heads, projection_dim) concat_attention = tf.reshape( attention, (batch_size, -1, self.embed_dim) ) # (batch_size, seq_len, embed_dim) output = self.combine_heads( concat_attention ) # (batch_size, seq_len, embed_dim) return output ``` ## Implement a Transformer block as a layer ``` class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super(TransformerBlock, self).__init__() self.att = MultiHeadSelfAttention(embed_dim, num_heads) self.ffn = keras.Sequential( [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) ``` ## Implement embedding layer Two separate embedding layers, one for tokens and one for token index (positions).
``` class TokenAndPositionEmbedding(layers.Layer): def __init__(self, maxlen, vocab_size, embed_dim): super(TokenAndPositionEmbedding, self).__init__() self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, x): maxlen = tf.shape(x)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) x = self.token_emb(x) return x + positions ``` ## Download and prepare dataset ``` vocab_size = 20000 # Only consider the top 20k words maxlen = 200 # Only consider the first 200 words of each movie review (x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size) print(len(x_train), "Training sequences") print(len(x_val), "Validation sequences") x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen) x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen) ``` ## Create classifier model using transformer layer Transformer layer outputs one vector for each time step of our input sequence. Here, we take the mean across all time steps and use a feed forward network on top of it to classify text. 
``` embed_dim = 32 # Embedding size for each token num_heads = 2 # Number of attention heads ff_dim = 32 # Hidden layer size in feed forward network inside transformer inputs = layers.Input(shape=(maxlen,)) embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim) x = embedding_layer(inputs) transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim) x = transformer_block(x) x = layers.GlobalAveragePooling1D()(x) x = layers.Dropout(0.1)(x) x = layers.Dense(20, activation="relu")(x) x = layers.Dropout(0.1)(x) outputs = layers.Dense(2, activation="softmax")(x) model = keras.Model(inputs=inputs, outputs=outputs) ``` ## Train and Evaluate ``` model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"]) history = model.fit( x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val) ) ```
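The scaled dot-product attention inside `MultiHeadSelfAttention.attention` can be mirrored in plain NumPy. This is only a single-head sketch on random toy tensors for intuition, not part of the Keras model above:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Same math as MultiHeadSelfAttention.attention: softmax(QK^T / sqrt(d)) V
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))  # 4 "tokens", 8-dim projections
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8) — each token's output is a weighted mix of values
```

Each row of `w` sums to 1, i.e. every output token is a convex combination of the value vectors.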
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('star_dataset.csv') df ``` ## 1. Consider three nominal features One of them, not more, may be taken from nominal features in your data ``` # Numerical features num_features = [ 'Temperature (K)', 'Luminosity(L/Lo)', 'Radius(R/Ro)', 'Absolute magnitude(Mv)' ] fig, axs = plt.subplots(len(num_features), figsize=(8, 24)) for i, feature in enumerate(num_features): axs[i].hist(df[feature], int(np.round(np.sqrt(df.shape[0])))) axs[i].set_title('Histogram of %s feature' % feature) temperature = df['Temperature (K)'] temperature_limit = temperature[temperature < 5000] plt.hist(temperature_limit, int(np.round(np.sqrt(temperature_limit.shape[0])))) plt.title("Histogram of temperature feature") plt.show() def get_block(value, blocks_limit): for i, x in enumerate(blocks_limit): if value < x: return i return 0 temperature_blocks_limit = [3000, 3500, 4500, 10000, 15000, 40000] df['Temperature_block'] = df['Temperature (K)'].apply(lambda x: get_block(x, temperature_blocks_limit)) absolute_magnitude_blocks_limit = [-10, -5, 5, 12, 15, 20] df['Absolute_magnitude_blocks'] = df['Absolute magnitude(Mv)'].apply(lambda x: get_block(x, absolute_magnitude_blocks_limit)) ``` We choose three features: `Star type`, `Temperature` and `Absolute magnitude`. `Temperature` and `Absolute magnitude` were split into 6 blocks, and we will use them as nominal features. ``` df features = ['Star type', 'Temperature_block', 'Absolute_magnitude_blocks'] for feature in features: counts = df[feature].value_counts() print(counts) print() ``` ## 2. Build two contingency tables over them Present a conditional frequency table and Quetelet relative index tables. Make comments on relations between categories of the common (to both tables) feature and two others.
``` _TOTAL = 'total' def q_index(cond_row): _total = cond_row[_TOTAL] return cond_row.apply( lambda x: (x-_total)/_total if x != _total else _total ) ``` ## Contingency tables Table for feature `Star type` and `Temperature` ``` type_temperature = pd.crosstab( df['Temperature_block'], df['Star type'], margins = True, margins_name=_TOTAL, rownames=['Temperature'], colnames=['Star type'] ) type_temperature type_temperature_prob = pd.crosstab( df['Temperature_block'], df['Star type'], margins = True, margins_name=_TOTAL, rownames=['Temperature Prob'], colnames=['Star type'], normalize='all' ) type_temperature_prob ``` Table for feature `Star type` and `Absolute magnitude` ``` type_magnitude = pd.crosstab( df['Absolute_magnitude_blocks'], df['Star type'], margins = True, margins_name=_TOTAL, rownames=['Absolute magnitude'], colnames=['Star type'] ) type_magnitude type_magnitude_prob = pd.crosstab( df['Absolute_magnitude_blocks'], df['Star type'], margins = True, margins_name=_TOTAL, rownames=['Absolute magnitude'], colnames=['Star type'], normalize='all' ) type_magnitude_prob ``` ### Conditional Probabilities Table for feature `Star type` and `Temperature` ``` type_temperature_cond = pd.crosstab( df['Temperature_block'], df['Star type'], margins = True, margins_name=_TOTAL, rownames=['Temperature_cond'], colnames=['Star type'], normalize='columns' ) type_temperature_cond ``` Table for feature `Star type` and `Absolute magnitude` ``` type_magnitude_cond = pd.crosstab( df['Absolute_magnitude_blocks'], df['Star type'], margins = True, margins_name=_TOTAL, rownames=['Absolute magnitude'], colnames=['Star type'], normalize='columns' ) type_magnitude_cond type_temperature_quetelet = type_temperature_cond.apply(q_index, axis=1) type_temperature_quetelet type_magnitude_quetelet = type_magnitude_cond.apply(q_index, axis=1) type_magnitude_quetelet ``` ## 3. Compute and visualize the chi-square average-Quetelet-index over both tables. 
Comment on the meaning of the values in the data analysis context. ``` def chi2(freq_crosstab, prob_crosstab): return freq_crosstab.combine( prob_crosstab, lambda a, b: (b - a) ** 2 / a ).drop([_TOTAL], axis=0) \ .drop([_TOTAL], axis=1) \ .values.sum() def quetelet_summary(prob_crosstab, q_crosstab): return prob_crosstab \ .combine(q_crosstab, np.multiply) \ .drop([_TOTAL], axis=0) \ .drop([_TOTAL], axis=1) \ .values.sum() def freq_crosstab(prob_crosstab, n_obs): # Recover absolute frequencies from the fully-normalized table return (prob_crosstab * n_obs) \ .drop([_TOTAL], axis=0) \ .drop([_TOTAL], axis=1) type_temperature_freq = freq_crosstab(type_temperature_prob, len(df)) type_temperature_freq type_magnitude_freq = freq_crosstab(type_magnitude_prob, len(df)) type_magnitude_freq q = quetelet_summary(type_temperature_prob, type_temperature_quetelet) print('Average Quetelet index:', q) print('Chi2:', chi2(type_temperature_freq, type_temperature_prob)) q = quetelet_summary(type_magnitude_prob, type_magnitude_quetelet) print('Average Quetelet index:', q) print('Chi2:', chi2(type_magnitude_freq, type_magnitude_prob)) ``` ## 4. What numbers of observations would suffice to see the features as associated at 95% confidence level; at 99% confidence level. Degrees of freedom: $(5 - 1) \cdot (5 - 1) = 16$ According to the table reported: http://uregina.ca/~gingrich/appchi.pdf - under the hypothesis of independence, the $95\%$ confidence that $N * \chi^2$ is less than $t = 23.685$ For star type and temperature $\chi^2 = 1.1702$. We have $N > 23.685/1.1702 = 20.2$, that is, at any $N>21$ the hypothesis of statistical independence should be rejected at $95\%$ confidence level. For star type and Absolute magnitude $\chi^2 = 3.1613$. $N > 23.685/3.1613 = 7.4921$, $N > 8$. Similarly, - at the $99\%$ probability that chi-squared is less than $t = 32.000$ For star type and temperature we have $N > 32.000/1.1702 = 27.3$, that is, at any $N>28$ the hypothesis of statistical independence should be rejected at $99\%$ confidence level.
For star type and Absolute magnitude $N > 32.000/3.1613 = 10.122$, $N > 11$. In our dataset we have $N = 240$ ``` def plot_heat_map(prob, quetelet): heat_map_quet_T_G = (prob * quetelet) \ .drop([_TOTAL], axis=0) \ .drop([_TOTAL], axis=1) \ .to_numpy() plt.imshow( heat_map_quet_T_G, cmap='hot', interpolation='nearest' ) plt.colorbar() plt.show() plot_heat_map(type_temperature_prob, type_temperature_quetelet) plot_heat_map(type_magnitude_prob, type_magnitude_quetelet) ```
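The quantity computed by `quetelet_summary` — the probability-weighted average of the Quetelet indexes — equals Pearson's $X^2/N$ (the phi-squared coefficient). A toy check of this identity, on a made-up contingency table rather than the star data:

```python
import numpy as np

# Hypothetical 3x3 table of counts (not the star data)
counts = np.array([[30., 5., 2.], [4., 25., 6.], [1., 3., 24.]])
p = counts / counts.sum()                 # joint probabilities
pr = p.sum(axis=1, keepdims=True)         # row marginals
pc = p.sum(axis=0, keepdims=True)         # column marginals

quetelet = p / (pr * pc) - 1              # q(l/k) = p_kl / (p_k p_l) - 1
avg_q = (p * quetelet).sum()              # average Quetelet index
phi2 = ((p - pr * pc) ** 2 / (pr * pc)).sum()   # Pearson X^2 divided by N

print(np.isclose(avg_q, phi2))  # True: the two summaries coincide
```

This is why a large average Quetelet index and a large chi-squared tell the same story about association strength.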
Note: this code is illustrative; it won't run without the raw gaze data, which is too big to upload. The code and figures show how Figure 4A in the paper was generated. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sys, os, glob, json, re import pygaze # import GazeParser from helpers import * from tqdm import tqdm import helpers sys.path.append('pygazeanalyser') import gazeplotter, opengazereader, idfreader, eyetribereader, edfreader, detectors, traces from gazeplotter import parse_fixations, gaussian # Read gaze data from server mounted to Finder proj_dir = '/Volumes/jvanbaar/projects/SOC_STRUCT_LEARN' data_dir = '/Volumes/jvanbaar/data/jvanbaar/SOC_STRUCT_LEARN' proj_dir ``` ## Load fixations ``` ROI_radius = 100 all_fix = pd.read_csv(proj_dir + '/Data/Cleaned/all_fixations_tagged_ROI_radius-%i.csv'%ROI_radius, index_col = 0) all_fix.head() ``` ##### Limit to payoff matrix ``` all_fix = all_fix.loc[(all_fix['x'] >= 200) & (all_fix['x'] <= 1120) & (all_fix['y'] >= 200) & (all_fix['y'] <= 950),:].reset_index(drop=True) all_fix.head() all_fix.shape ``` ## Create heatmaps Ran this part on Oscar using run_sub_pt_heatmap.py. Stored 1 mean heatmap per subject and player type. 
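Since the per-subject heatmaps were computed offline (in `run_sub_pt_heatmap.py`), only the general idea can be sketched here: accumulate a 2D Gaussian "blob" at each fixation location, weighted by fixation duration. The display size and fixation points below are made up, and this is a simplified stand-in for the actual pygaze/gazeplotter implementation:

```python
import numpy as np

def gaussian_2d(size, sigma):
    # Square Gaussian kernel centred in the middle of the patch
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

disp = np.zeros((105, 168))          # hypothetical scaled-down display (rows, cols)
blob = gaussian_2d(21, sigma=4)

# Made-up fixations: (x, y, duration in ms); real data would come from the detector
for x, y, dur in [(50, 40, 200), (120, 60, 350)]:
    disp[y - 10:y + 11, x - 10:x + 11] += dur * blob

print(disp.max())  # 350.0 — the peak sits at the longest fixation's centre
```

Averaging such maps per subject and condition, then subtracting them, yields the difference heatmaps loaded below.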
## Load & average heatmaps ``` mean_heatmaps = dict() dispsize = [1680, 1050] subs = np.arange(5,55) pts = ['opt_nat','pess_nat','opt_inv','pess_inv'] for pt in pts: mean_heatmaps[pt] = np.zeros([dispsize[1],dispsize[0]]) for sub in tqdm(subs): mean_heatmaps[pt] += pd.read_csv(data_dir + '/Gaze/Results/sub_pt_heatmaps/sub-%03d_pt-%s_mean_heatmap.csv'%(sub,pt), index_col = 0).values mean_heatmaps[pt] = mean_heatmaps[pt] / len(subs) fig,ax = plt.subplots(len(pts),len(pts),figsize = [16,10]) # number_locations = get_number_locations() for pt1i, pt1 in enumerate(pts): print(pt1) for pt2i, pt2 in enumerate(pts): diff = mean_heatmaps[pt1] - mean_heatmaps[pt2] plot_heatmap(diff, dispsize, draw_numbers=True, alpha = .5, ax = ax[pt1i,pt2i]) # ax[pt1i,pt2i].imshow(np.flipud(diffs[pt1i,pt2i])*1e6, cmap = 'jet',zorder = 1) # for ri, row in number_locations.iterrows(): # ax[pt1i,pt2i].text(row['X'],dispsize[1] - row['Y'],row['num'],zorder = 2, # fontdict = {'fontsize':15, 'verticalalignment':'center','horizontalalignment':'center', 'color':'r'}) ax[pt1i,pt2i].set(xlim = [0,1680], ylim = [0,1050], title = 'diff %s > %s'%(pt1,pt2), xticks = [], yticks = []) plt.tight_layout() fig, ax = plt.subplots(1,2,figsize=[16,5], sharey=True) game_screen = image.imread('Game_screen.jpg') for ai,up_low in enumerate([['opt_nat','pess_nat'],['pess_nat','opt_nat']]): up,low = up_low diff = mean_heatmaps[up] - mean_heatmaps[low] ax[ai].imshow(np.flipud(game_screen)) plot_heatmap(diff, dispsize, draw_numbers=False, alpha = .5, ax = ax[ai]) ax[ai].set(title = 'Gaze difference when predicting\n%s > %s'%(up,low)) ax[0].invert_yaxis() plt.tight_layout() # plt.savefig(base_dir + '/gaze_analysis/Results/heatmap_diffs/Gaze_diff_opt_pess.pdf', # bbox_inches='tight', transparent = True, dpi = 400) ``` ## Try to plot bidirectional heatmap (hot-cold) ``` def plot_heatmap_bidir(heatmap, dispsize, ax = None, alpha = .5, remove_zeros = True, draw_numbers = False, num_fontsize = 30, S = None, T = None, vmax = 
None, vmin = None, cmap = 'coolwarm', zero_threshold = 1): if ax is None: fig, ax = plt.subplots(1,1,figsize=[8,5]) ax.set(xlim = [0,1680], ylim = [0,1050], aspect = 1) # Plot heatmap hmdat = heatmap.copy() if remove_zeros: posmean = np.mean(hmdat[hmdat>0]) # Remove points with below-mean 'heat' negmean = np.mean(hmdat[hmdat<0]) # Remove points with below-mean 'heat' hmdat[(hmdat<(posmean*zero_threshold)) & (hmdat>(negmean*zero_threshold))] = np.NaN hmap = ax.imshow(np.flipud(hmdat), cmap = cmap, alpha=alpha, zorder = 2, vmin = vmin, vmax = vmax) ax.set(xticks = [], yticks = []) if draw_numbers: number_locations = get_number_locations() # print(number_locations) if S is not None: number_locations.loc[number_locations['num']=='S','num'] = S if T is not None: number_locations.loc[number_locations['num']=='T','num'] = T for ri, row in number_locations.iterrows(): ax.text(row['X'],dispsize[1] - row['Y'],row['num'], fontdict = {'fontsize':num_fontsize, 'verticalalignment':'center','horizontalalignment':'center', 'color':'k'}, zorder = 2) return ax,hmdat,hmap from matplotlib.colors import LinearSegmentedColormap colors = [sns.color_palette('tab10',2)[0], [1,1,1], sns.color_palette('tab10',2)[1]] cm = LinearSegmentedColormap.from_list( 'BuOr', colors, N=1024) colors_alpha = [] for ci,colortuple in enumerate(colors): ap = list(colortuple).copy() ap.append(0) if ci == 1 else ap.append(1) colors_alpha.append(ap) cm_a = LinearSegmentedColormap.from_list( 'BuOr', colors_alpha, N=1024) sns.set_context('poster') fig, ax = plt.subplots(1,2,figsize=[12.5,7.5], gridspec_kw={'width_ratios':[12,.5]}) game_screen = image.imread('Game_screen.png') ax[0].imshow(np.flipud(game_screen)) diff = mean_heatmaps['pess_nat'] - mean_heatmaps['opt_nat'] lim = 1e-8 axout,hmdat,hmap = plot_heatmap_bidir(diff, dispsize, draw_numbers=False, alpha = 1, ax = ax[0], cmap = cm_a, vmin = -lim, vmax = lim, remove_zeros = True, zero_threshold = 1, ) ax[0].invert_yaxis() ax[0].set(title = 'Gaze difference 
between Optimist and Pessimist block') cbar = plt.colorbar(mappable=hmap, cax=ax[1]) cbar.set_ticks([]) plt.tight_layout() fig.savefig(data_dir + '/Gaze/Results/heatmap_diffs/Gaze_diff_pess-opt_bidirect_BuOr_cbar.png', bbox_inches='tight', transparent = True, dpi = 500) sns.set_context('poster') fig, ax = plt.subplots(1,1,figsize=[14,8.75]) game_screen = image.imread('Game_screen.png') ax.imshow(np.flipud(game_screen)) diff = mean_heatmaps['pess_nat'] - mean_heatmaps['opt_nat'] lim = 1e-8 axout,hmdat,hmap = plot_heatmap_bidir(diff, dispsize, draw_numbers=False, alpha = 1, ax = ax, cmap = cm_a, vmin = -lim, vmax = lim, remove_zeros = True, zero_threshold = 1, ) ax.invert_yaxis() ax.set(title = 'Gaze difference between Optimist and Pessimist block') # cbar = plt.colorbar(mappable=hmap, cax=ax[1]) # cbar.set_ticks([]) # plt.tight_layout() fig.savefig(data_dir + '/Gaze/Results/heatmap_diffs/Gaze_diff_pess-opt_bidirect_BuOr.png', bbox_inches='tight', transparent = True, dpi = 500) sns.set_context('poster') fig, ax = plt.subplots(1,2,figsize=[12.5,7.5], gridspec_kw={'width_ratios':[12,.5]}) diff = mean_heatmaps['pess_nat'] - mean_heatmaps['opt_nat'] lim = 1e-8 axout,hmdat,hmap = plot_heatmap_bidir(diff, dispsize, draw_numbers = True, alpha = 1, ax = ax[0], num_fontsize = 50, cmap = cm_a, vmin = -lim, vmax = lim, remove_zeros = True, zero_threshold = 1, ) ax[0].invert_yaxis() ax[0].set(title = 'Gaze difference Optimist vs Pessimist') cbar = plt.colorbar(mappable=hmap, cax=ax[1]) cbar.set_ticks([]) plt.tight_layout() ``` ## Contrast subjects by motives in model ``` base_dir = '/Volumes/jvanbaar/projects/SOC_STRUCT_LEARN' bestPerSubject_features = pd.read_csv(base_dir + '/Data/Cleaned/ModelFeaturesPerSubject.csv',index_col = 0) bestPerSubject_features['sub'] = bestPerSubject_features['subID'] - 5000 bestPerSubject_features.head() # sub_fix = sub_fix.merge(bestPerSubject_features, on = 'sub') # sub_fix.head() 
bestPerSubject_features.loc[bestPerSubject_features['Risk'],'sub'].unique() mean_heatmaps_motive = dict() dispsize = [1680, 1050] subs = np.arange(5,55) for risk_included in [False,True]: group_label = 'risk_%s'%risk_included mean_heatmaps_motive[group_label] = dict() subs = bestPerSubject_features.loc[bestPerSubject_features['Risk'] == risk_included,'sub'].unique() for pt in pts: print(pt) mean_heatmaps_motive[group_label][pt] = np.zeros([dispsize[1],dispsize[0]]) for sub in tqdm(subs): mean_heatmaps_motive[group_label][pt] += pd.read_csv(base_dir + '/gaze_analysis/Results/sub_pt_heatmaps/sub-%03d_pt-%s_mean_heatmap.csv'%(sub,pt), index_col = 0).values mean_heatmaps_motive[group_label][pt] = mean_heatmaps_motive[group_label][pt] / len(subs) fig,ax = plt.subplots(2,len(pts),figsize = [16,6]) # number_locations = get_number_locations() for ri, risk_included in enumerate([False,True]): print(risk_included) group_label = 'risk_%s'%risk_included op = not risk_included op_label = 'risk_%s'%op for pti, pt in enumerate(pts): # diff = mean_heatmaps[pt1] - mean_heatmaps[pt2] diff = mean_heatmaps_motive[group_label][pt] - mean_heatmaps_motive[op_label][pt] plot_heatmap(diff, dispsize, draw_numbers=True, alpha = .5, ax = ax[ri, pti]) # ax[pt1i,pt2i].imshow(np.flipud(diffs[pt1i,pt2i])*1e6, cmap = 'jet',zorder = 1) # for ri, row in number_locations.iterrows(): # ax[pt1i,pt2i].text(row['X'],dispsize[1] - row['Y'],row['num'],zorder = 2, # fontdict = {'fontsize':15, 'verticalalignment':'center','horizontalalignment':'center', 'color':'r'}) ax[ri, pti].set(xlim = [0,1680], ylim = [0,1050], title = '%s, %s > %s'%(pt, group_label,op_label), xticks = [], yticks = []) plt.tight_layout() ``` ##### For predicting Optimist only: ``` fig, ax = plt.subplots(1,2,figsize=[16,5], sharey=True) game_screen = image.imread('Game_screen.jpg') pt = 'opt_nat' labels_full = {False:'Does not consider Risk', True:'Considers Risk'} diffs = np.empty([2,dispsize[1],dispsize[0]]) for ai,up_low in 
enumerate([[False, True],[True, False]]): up,low = up_low up_label = 'risk_%s'%up low_label = 'risk_%s'%low diffs[ai] = mean_heatmaps_motive[up_label][pt] - mean_heatmaps_motive[low_label][pt] vmax = np.max(diffs) for ai,up_low in enumerate([[False, True],[True, False]]): up,low = up_low up_label = 'risk_%s'%up low_label = 'risk_%s'%low ax[ai].imshow(np.flipud(game_screen)) helpers.plot_heatmap(diffs[ai], dispsize, draw_numbers=False, alpha = .5, ax = ax[ai], vmax = vmax) ax[ai].set(title = 'Gaze difference when predicting Optimist\n%s > %s'%(labels_full[up],labels_full[low])) ax[0].invert_yaxis() plt.tight_layout() plt.savefig(base_dir + '/gaze_analysis/Results/heatmap_diffs/Gaze_diff_opt_risk-in-model.pdf', bbox_inches='tight', transparent = True, dpi = 400) ``` ##### For predicting Pessimist only: ``` fig, ax = plt.subplots(1,2,figsize=[16,5], sharey=True) game_screen = image.imread('Game_screen.jpg') pt = 'pess_nat' labels_full = {False:'Does not consider Risk', True:'Considers Risk'} for ai,up_low in enumerate([[False, True],[True, False]]): up,low = up_low up_label = 'risk_%s'%up low_label = 'risk_%s'%low diff = mean_heatmaps_motive[up_label][pt] - mean_heatmaps_motive[low_label][pt] ax[ai].imshow(np.flipud(game_screen)) helpers.plot_heatmap(diff, dispsize, draw_numbers=False, alpha = .5, ax = ax[ai], vmax = vmax) ax[ai].set(title = 'Gaze difference when predicting Pessimist\n%s > %s'%(labels_full[up],labels_full[low])) ax[0].invert_yaxis() plt.tight_layout() plt.savefig(base_dir + '/gaze_analysis/Results/heatmap_diffs/Gaze_diff_pess_risk-in-model.pdf', bbox_inches='tight', transparent = True, dpi = 400) ``` ##### For each group, contrast to Optimist ``` fig, ax = plt.subplots(1,2,figsize=[16,5], sharey=True) game_screen = image.imread('Game_screen.jpg') pt = 'pess_nat' labels_full = {False:'Does not consider Risk', True:'Considers Risk'} diffs = np.empty([2,dispsize[1],dispsize[0]]) up_label = 'pess_nat' low_label = 'opt_nat' for ai,risk_included in 
enumerate([False, True]): group_label = 'risk_%s'%risk_included diffs[ai] = mean_heatmaps_motive[group_label][up_label] - mean_heatmaps_motive[group_label][low_label] vmax = np.max(np.max(diffs)) for ai,risk_included in enumerate([False, True]): ax[ai].imshow(np.flipud(game_screen)) helpers.plot_heatmap(diffs[ai], dispsize, draw_numbers=False, alpha = .5, ax = ax[ai], vmax = vmax) ax[ai].set(title = 'Gaze difference Pessimist > Optimist\n%s'%(labels_full[risk_included])) ax[0].invert_yaxis() plt.tight_layout() plt.savefig(base_dir + '/gaze_analysis/Results/heatmap_diffs/Gaze_diff_pess-opt_by-risk-in-model.pdf', bbox_inches='tight', transparent = True, dpi = 800) ``` ##### Mean across all player types ``` pts = ['opt_nat','pess_nat','opt_inv','pess_inv'] for risk_included in [False,True]: group_label = 'risk_%s'%risk_included mean_heatmaps_motive[group_label] = np.zeros([dispsize[1],dispsize[0]]) subs = bestPerSubject_features.loc[bestPerSubject_features['Risk'] == risk_included,'sub'].unique() for pt in pts: # print(pt) # mean_heatmaps_motive[group_label][pt] = np.zeros([dispsize[1],dispsize[0]]) for sub in tqdm(subs): mean_heatmaps_motive[group_label] += pd.read_csv(base_dir + '/gaze_analysis/Results/sub_pt_heatmaps/sub-%03d_pt-%s_mean_heatmap.csv'%(sub,pt), index_col = 0).values mean_heatmaps_motive[group_label] = mean_heatmaps_motive[group_label] / (len(subs)*len(pts)) import pickle with open('/Users/jvanbaar/Desktop/mean_heatmaps_motive.p', "wb" ) as file: pickle.dump(mean_heatmaps_motive, file) mean_heatmaps_motive fig, ax = plt.subplots(1,1,figsize=[16,10], sharey=True) game_screen = image.imread('/Users/jvanbaar/Dropbox (Brown)/Postdoc FHL/JEROEN/SOC_STRUCT_LEARN/Study2_EyeTracking/Analysis_scripts/EyeTrackingAnalysis/Game_screen.jpg') labels_full = {False:'Does not consider Risk', True:'Considers Risk'} diffs = np.empty([2,dispsize[1],dispsize[0]]) up,low = [True,False] up_label = 'risk_%s'%up low_label = 'risk_%s'%low diffs = mean_heatmaps_motive[up_label] 
- mean_heatmaps_motive[low_label] vmax = np.max(diffs) ax.imshow(np.flipud(game_screen)) helpers.plot_heatmap(diffs, dispsize, draw_numbers=False, alpha = .5, ax = ax, vmax = vmax) ax.set(title = '%s > %s'%(labels_full[up],labels_full[low])) ax.invert_yaxis() # plt.tight_layout() plt.savefig(base_dir + '/gaze_analysis/Results/heatmap_diffs/Gaze_diff_risk-in-model.pdf', bbox_inches='tight', transparent = True, dpi = 400) ```
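The transparent diverging colormap (`cm_a`) built earlier is the key trick for the bidirectional plots: the two endpoint colors stay opaque while the midpoint's alpha is zero, so near-zero gaze differences vanish over the background image. A stripped-down sketch with placeholder RGB endpoints (not the exact tab10 colors used above):

```python
from matplotlib.colors import LinearSegmentedColormap

# (r, g, b, alpha): opaque blue -> transparent white -> opaque orange
colors_alpha = [(0.0, 0.0, 1.0, 1.0),
                (1.0, 1.0, 1.0, 0.0),
                (1.0, 0.5, 0.0, 1.0)]
cm_a = LinearSegmentedColormap.from_list('BuOr_alpha', colors_alpha, N=1024)

# Endpoints are opaque, the midpoint is (near-)fully transparent
print(cm_a(0.0)[-1], cm_a(0.5)[-1])
```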
# Triple Barrier Method This notebook will cover partial exercise answers: * Exercise 3.5 As we go along, there will be some explanations. More importantly, this method can be applied not just within mean-reversion strategies but within other strategies as well. Most of the functions below can be found under research/Labels. Contact: boyboi86@gmail.com ``` import numpy as np import pandas as pd import research as rs import matplotlib.pyplot as plt %matplotlib inline p = print #pls take note of version #numpy 1.17.3 #pandas 1.0.3 #sklearn 0.21.3 dollar = pd.read_csv('./research/Sample_data/dollar_bars.txt', sep=',', header=0, parse_dates = True, index_col=['date_time']) def bband(data: pd.DataFrame, window: int = 21, width: float = 0.001): avg = data['close'].ewm(span = window).mean() std = avg * width upper = avg + std lower = avg - std return avg, upper, lower, std dollar['ewm'], dollar['upper'], dollar['lower'], dollar['std'] = bband(dollar) # Check for normality, serial correlation, overall statistical properties, frequency count stability dollar['side'] = np.nan def side_pick(data: pd.DataFrame): for i in np.arange(data.index.shape[0]): if (data['close'].iloc[i] >= data['upper'].iloc[i]): data['side'].iat[i] = -1 elif (data['close'].iloc[i] <= data['lower'].iloc[i]): data['side'].iat[i] = 1 return data upper = dollar[dollar['upper'] < dollar['close']] # short signal lower = dollar[dollar['lower'] > dollar['close']] # long signal p("Num of times upper limit touched: {0}\nNum of times lower limit touched: {1}" .format(upper.count()[0], lower.count()[0])) # Recall the White test as a benchmark; until this stage we filtered all those which did not meet min return dollar = side_pick(dollar) dollar.dropna(inplace= True) dollar['side'].value_counts() copy_dollar = dollar.copy() # make a backup copy to be used in a later exercise copy_dollar # up till this point the dataframe should look like this, before the tri_bar func. This is our primary model.
d_vol = rs.vol(dollar['close'], span0 = 50)
events = rs.cs_filter(dollar['close'], limit = d_vol.mean())
events

vb = rs.vert_barrier(data = dollar['close'], events = events, period = 'days', freq = 1)
vb # Show some example output

tb = rs.tri_barrier(data = dollar['close'],
                    events = events,
                    trgt = d_vol,
                    min_req = 0.002,
                    num_threads = 3,
                    ptSl = [0,2], # change ptSl into [0,2]
                    t1 = vb,
                    side = dollar['side'])
tb # Show some example

m_label = rs.meta_label(data = dollar['close'], events = tb, drop = False)
m_label # Show some example

m_label['bin'].value_counts(normalize=True)
# Here is a quick look at our 'bin' values.
# Slightly imbalanced sample, but not much harm
# 51.95% of the sample touched the vertical barrier first
```

* Exercise 3.5b

From here onwards we will be using sklearn modules to perform the ML-related tasks.

```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A quick look at what we have so far using both the primary and secondary model:
# as seen in the previous example, only 48.04% was labeled 1.
# Hence precision 1.0 = 0.48 (48% of the sample is relevant), while recall = 1 means fully correct (based on the 48% sample)
```

The function report_matrix below summarizes what we have so far using both the primary (bband func) and secondary model (tri_barrier func).

#### Classification Report

As seen in the previous example, only 48.0455% was labeled 1. Hence a precision of 1.0 corresponds to 0.48 (48.0455% of the sample is relevant); it is essentially the ML model's way of saying how relevant these "features" are when tested. A recall of 1 means fully correct (based on that 48% sample). Once an ML model is fitted, this result gives the percentage of "correct" labels chosen. In short, it measures the model's reliability in true-positive identification on the given sample.
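As a quick standalone refresher on the numbers discussed above, precision, recall, and accuracy can be computed by hand from a toy confusion matrix (hypothetical labels, not the notebook's data):

```python
# Toy example of the metrics in the classification report:
# precision = TP / (TP + FP), recall = TP / (TP + FN), accuracy = (TP + TN) / total
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]

tp = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 1)  # true positives
fp = sum(1 for t, q in zip(y_true, y_pred) if t == 0 and q == 1)  # false positives
fn = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 0)  # false negatives
tn = sum(1 for t, q in zip(y_true, y_pred) if t == 0 and q == 0)  # true negatives

precision = tp / (tp + fp)                           # 0.75
recall = tp / (tp + fn)                              # 0.75
accuracy = (tp + tn) / len(y_true)                   # 0.75
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean, used later on
```

The same numbers come out of sklearn's classification_report; computing them once by hand makes the later comparison of the two models easier to follow.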
#### Confusion Matrix

8001 = False Positive (51.95%)
7399 = True Positive (48.0455%)

#### Accuracy Score

A mere reflection of True Positives, which again is 48.0455%.

```
# this func can be found under Tools/stats_rpt
forecast = rs.report_matrix(actual_data = m_label,
                            prediction_data = None,
                            ROC = None)
```

Build a list of features:

1. Volatility
2. Autocorrelation
3. Moving average
4. Log-price return (optional)
5. Stationary series based on cumulative sum of log-price returns (optional)

The last 2 items will be explained in AFML chapter 5, fractionally differentiated features.

```
# Data that was copied earlier before the tri_barrier func; this is our primary model only
copy_dollar # Show example

# drop redundant columns and keep crossing moving averages
pri_dollar = copy_dollar.drop(['open', 'high', 'low', 'cum_vol', 'cum_dollar', 'cum_ticks'], axis = 1)
# include volatility, autocorrelation
pri_dollar

# include original volatility
pri_dollar['volatility'] = rs.vol(pri_dollar.close, span0 = 50)

# Optional: getting stationarity feature
pri_dollar['log_price'] = pri_dollar.close.apply(np.log)
pri_dollar['log_return'] = pri_dollar.log_price.diff()
cs_log = pri_dollar.log_price.diff().dropna().to_frame()
pri_dollar['stationary'] = rs.fracDiff_FFD(data = cs_log, d = 1.99999889, thres = 1e-5)

rs.unit_root(pri_dollar['stationary'].dropna()) # check for stationarity
pri_dollar.dropna(inplace = True)

# autocorrelation residual feature, we will add AR features up to 2 lags
from statsmodels.tsa.arima_model import ARMA

pri_dollar['ar_0'] = ARMA(pri_dollar['stationary'], order=(0,0)).fit().resid
pri_dollar['ar_1'] = ARMA(pri_dollar['stationary'], order=(1,0)).fit().resid
pri_dollar['ar_2'] = ARMA(pri_dollar['stationary'], order=(2,0)).fit().resid

# final dataset
secondary_dollar = pri_dollar.copy()
```

**Note**

You may try to add other types of trend-related features as part of experimental mathematics (aka trial and error).

* Good to include a volume-based or volume-weighted indicator as a predictive feature, i.e. OBV, VWAP
* If not, try to add a price-based predictive feature, i.e. MOM, RSI

```
# Now we run all the steps to complete the labels, to train the random forest.
# We will use both the primary & secondary model
events0 = rs.cs_filter(secondary_dollar['close'], limit = secondary_dollar['volatility'].mean())
vb0 = rs.vert_barrier(data = secondary_dollar['close'], events = events0, period = 'days', freq = 1)
tb0 = rs.tri_barrier(data = secondary_dollar['close'],
                     events = events0,
                     trgt = secondary_dollar['volatility'],
                     min_req = 0.002,
                     num_threads = 3,
                     ptSl = [0,2], # change ptSl into [0,2]
                     t1 = vb0,
                     side = secondary_dollar['side'])
m_label0 = rs.meta_label(data = secondary_dollar['close'], events = tb0, drop = 0.05)
m_label0

m_label0['bin'].value_counts()
# we still get back the same count. This is correct.
# tri_barrier calculates whether the vertical barrier was triggered and consolidates the target,
# while meta_label checks which events hit the vertical barrier or were non-profitable; those are labeled 0
# At this stage you may wish to run grid search CV, but I'm skipping that.
n_estimators, max_depth, c_random_state = 500, 7, 42

# Random Forest Model
rf = RandomForestClassifier(max_depth=max_depth,
                            n_estimators=n_estimators,
                            criterion='entropy',
                            class_weight = None, # This will be covered in the next few chapters
                            random_state=c_random_state)

X = secondary_dollar.reindex(m_label0.index) # this dataframe only contains all our features
y = m_label0['bin']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)
rf.fit(X_train, y_train.values.ravel())

# Performance Metrics
y_prob = rf.predict_proba(X_train)[:, 1] # here we are only interested in True positives
y_pred = rf.predict(X_train)

p('Matrix training report for primary model & secondary model\n')
rs.report_matrix(actual_data = y_train, # we need to use our train data from train_test_split
                 prediction_data = y_pred,
                 ROC = y_prob)

# Meta-label
# Performance Metrics
y_prob = rf.predict_proba(X_test)[:, 1] # here we are only interested in True positives
y_pred = rf.predict(X_test)

p('Matrix test report for primary model & secondary model\n')
rs.report_matrix(actual_data = y_test,
                 prediction_data = y_pred,
                 ROC = y_prob)

rs.feat_imp(rf, X)
```

**Now we start to create only the primary model**

```
events1 = rs.cs_filter(pri_dollar['close'], limit = pri_dollar['volatility'].mean())
vb1 = rs.vert_barrier(data = pri_dollar['close'], events = events1, period = 'days', freq = 1)
tb1 = rs.tri_barrier(data = pri_dollar['close'],
                     events = events1,
                     trgt = pri_dollar['volatility'],
                     min_req = 0.002,
                     num_threads = 3,
                     ptSl = [0,2], # change ptSl into [0,2]
                     t1 = vb1,
                     side = None)
m_label1 = rs.meta_label(data = pri_dollar['close'], events = tb1, drop = 0.05)
# take note we do not have a side, hence we need to drop something

# Random Forest Model
rf = RandomForestClassifier(max_depth=max_depth,
                            n_estimators=n_estimators,
                            criterion='entropy',
                            class_weight = None, # This will be covered in the next few chapters
                            random_state=c_random_state)

X = pri_dollar.reindex(m_label1.index) # this dataframe only contains all our features
y = m_label1['bin']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)
rf.fit(X_train, y_train.values.ravel())

# Performance Metrics
y_prob = rf.predict_proba(X_train)[:, 1] # here we are only interested in True positives
y_pred = rf.predict(X_train)

p('Matrix training report for primary model only\n')
rs.report_matrix(actual_data = y_train, # we need to use our train data from train_test_split
                 prediction_data = y_pred,
                 ROC = y_prob)

# Meta-label
# Performance Metrics
y_prob = rf.predict_proba(X_test)[:, 1] # here we are only interested in True positives
y_pred = rf.predict(X_test)

p('Matrix test report for primary model only\n')
rs.report_matrix(actual_data = y_test,
                 prediction_data = y_pred,
                 ROC = y_prob)

rs.feat_imp(rf, X)
```

## Based on our matrix report

The comparisons were made under ceteris paribus conditions as much as possible. All comparisons are based on test data only; train data is excluded.

### Accuracy comparison

**Accuracy is the sum of True Positives and True Negatives divided by the overall set of items.**

There is an improvement in the accuracy rate with meta-labeling (the primary & secondary model score yields 0.53), with the original yielding only 0.48.

Accuracy improved by +0.05 (a 10% increase) over the original, and by +0.0342 (a 6.8% increase) over the primary model only.

However, in order to use this method correctly, note what Dr. Marcos Lopez de Prado says:

>"First, we build a model that achieves high recall, even if the precision is not particularly high.
>
> Second, we correct for the low precision by applying meta-labeling to the positives predicted by the primary model."
>
> Advances in Financial Machine Learning, page 52

However, for the primary-model-only case, the (-1, 1) labels are price-action labels, which in my opinion do not seem to work well with ML. But they do improve the accuracy score by a small margin against the original data with no labels.
In our case, we filtered out labels that touched the vertical barrier first from the primary model only.

### F1 scores comparison

**Measures the efficiency of the classifier (harmonic mean of precision and recall).**

Using both primary and secondary models to identify True Positives yields a score of 0.6 (for both long and short), while using the primary model only gives 0.56 (for short) and 0.4 (for long).

The F1 score improved by +0.04 (a 7% increase, short) and +0.2 (a 50% increase, long) when compared against the primary model only.

>"Meta-labeling is particularly helpful when you want to achieve higher F1-scores."
>
> Advances in Financial Machine Learning, page 52

### Other observations

The <ins>stationary absolute return series</ins>, as an optional key feature, does not seem relevant at all, since the ML model does not recognize it after the log-price absolute change. Hence it is ranked low in the feature importance graph.

**For the secondary model**

Crossing averages and volatility appear at the top of the feature importances. This suggests the ML model recognizes them as key features for prediction, while autocorrelation does not seem that important. The ML model realised we were using those indicators as our primary model.

**For the primary model**

The ML model only recognized that we were using volatility as our benchmark (tri_barrier func with cs_filter as trgt), since we did not use any (0, 1) meta-labels.

### Conclusion

To get a higher F1 score and better accuracy, quants should use a primary model (to decide the bet direction) together with a secondary model to decide the bet size (to bet or not).

Stationarity is an important concept, especially for mean-reversion strategies, as the stationary series acts as an anchor to which the strategy will eventually revert. We will cover more examples regarding stationarity in the next chapter.
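For readers who want the barrier logic itself in one place: below is a minimal, self-contained sketch of triple-barrier labeling for a single event, assuming fractional profit-taking/stop-loss barriers. This is an illustration of the idea only, not the implementation behind rs.tri_barrier:

```python
def triple_barrier_label(prices, start, horizon, upper_pct, lower_pct):
    """Label one event: +1 if the upper (profit-taking) barrier is hit first,
    -1 if the lower (stop-loss) barrier is hit first, 0 if the vertical
    (time) barrier is reached without touching either. Minimal sketch,
    not the notebook's rs.tri_barrier implementation."""
    p0 = prices[start]
    upper = p0 * (1 + upper_pct)
    lower = p0 * (1 - lower_pct)
    end = min(start + horizon, len(prices) - 1)  # vertical barrier
    for t in range(start + 1, end + 1):
        if prices[t] >= upper:
            return 1   # profit-taking barrier touched first
        if prices[t] <= lower:
            return -1  # stop-loss barrier touched first
    return 0           # vertical barrier reached first

# toy path: rises 3% by t=2, so a 2% upper barrier is hit first
path = [100.0, 101.0, 103.0, 99.0, 95.0]
print(triple_barrier_label(path, 0, 4, 0.02, 0.02))  # -> 1
```

In the notebook the horizontal barriers are scaled by the volatility estimate (trgt) and the ptSl multipliers, and events whose target return falls below min_req are dropped before labeling.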
---
``` %matplotlib inline %reload_ext autoreload %autoreload 2 ``` To get everything in a notebook. ``` from nb_007 import * PATH = Path('../data/cifar10/') torch.backends.cudnn.benchmark = True ``` Model definition ``` class Lambda(nn.Module): def __init__(self, func): super().__init__() self.func=func def forward(self, x): return self.func(x) def ResizeBatch(*size): return Lambda(lambda x: x.view((-1,)+size)) def Flatten(): return Lambda(lambda x: x.view((x.size(0), -1))) def PoolFlatten(): return nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()) def conv_2d(ni, nf, ks, stride): return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2, bias=False) def bn(ni, init_zero=False): m = nn.BatchNorm2d(ni) m.weight.data.fill_(0 if init_zero else 1) m.bias.data.zero_() return m def bn_relu_conv(ni, nf, ks, stride, init_zero=False): bn_initzero = bn(ni, init_zero=init_zero) return nn.Sequential(bn_initzero, nn.ReLU(inplace=True), conv_2d(ni, nf, ks, stride)) def noop(x): return x class BasicBlock(nn.Module): def __init__(self, ni, nf, stride, drop_p=0.0): super().__init__() self.bn = nn.BatchNorm2d(ni) self.conv1 = conv_2d(ni, nf, 3, stride) self.conv2 = bn_relu_conv(nf, nf, 3, 1) self.drop = nn.Dropout(drop_p, inplace=True) if drop_p else None self.shortcut = conv_2d(ni, nf, 1, stride) if ni != nf else noop def forward(self, x): x2 = F.relu(self.bn(x), inplace=True) r = self.shortcut(x2) x = self.conv1(x2) if self.drop: x = self.drop(x) x = self.conv2(x) * 0.2 return x.add_(r) def _make_group(N, ni, nf, block, stride, drop_p): return [block(ni if i == 0 else nf, nf, stride if i == 0 else 1, drop_p) for i in range(N)] class WideResNet(nn.Module): def __init__(self, num_groups, N, num_classes, k=1, drop_p=0.0, start_nf=16): super().__init__() n_channels = [start_nf] for i in range(num_groups): n_channels.append(start_nf*(2**i)*k) layers = [conv_2d(3, n_channels[0], 3, 1)] # conv1 for i in range(num_groups): layers += _make_group(N, n_channels[i], n_channels[i+1], 
BasicBlock, (1 if i==0 else 2), drop_p) layers += [nn.BatchNorm2d(n_channels[3]), nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(n_channels[3], num_classes)] self.features = nn.Sequential(*layers) def forward(self, x): return self.features(x) def wrn_22(): return WideResNet(num_groups=3, N=3, num_classes=10, k=6, drop_p=0.) model = wrn_22() ``` This is the way to create datasets in fastai_pytorch ``` train_ds,valid_ds = ImageDataset.from_folder(PATH/'train'), ImageDataset.from_folder(PATH/'test') cifar_mean,cifar_std = map(tensor, ([0.4914, 0.48216, 0.44653], [0.24703, 0.24349, 0.26159])) cifar_norm,cifar_denorm = normalize_funcs(cifar_mean,cifar_std) train_tfms = [pad(padding=4), crop(size=32, row_pct=(0,1), col_pct=(0,1)), flip_lr(p=0.5)] data = DataBunch.create(train_ds, valid_ds, bs=512, train_tfm=train_tfms, tfms=cifar_norm, num_workers=8) ``` A learner wraps together the data and the model, like in fastai. Here we test the usual training to 94% accuracy with AdamW. ``` model = wrn_22() learn = Learner(data, model) learn.metrics = [accuracy] ``` warm up ``` learn.fit_one_cycle(1, 3e-3, wd=0.4, div_factor=10) learn.recorder.plot_lr() ``` ## FP16 The same but in mixed-precision. ``` model = wrn_22() model = model2half(model) learn = Learner(data, model) learn.metrics = [accuracy] learn.callbacks.append(MixedPrecision(learn)) %time learn.fit_one_cycle(25, 3e-3, wd=0.4, div_factor=10) %time learn.fit_one_cycle(30, 3e-3, wd=0.4, div_factor=10) ```
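A note on the `wrn_22` name: with `num_groups=3` groups of `N=3` BasicBlocks (two convs each), plus the stem conv and the final linear layer, the layer count comes to 2·3·N + 4 = 22, and the widening factor `k=6` scales the group widths. A small sketch of those two computations, mirroring the channel bookkeeping in `WideResNet.__init__` above:

```python
def wrn_channels(num_groups=3, k=6, start_nf=16):
    # Same loop as WideResNet.__init__: stem width, then each group's
    # width doubles and is multiplied by the widening factor k.
    n_channels = [start_nf]
    for i in range(num_groups):
        n_channels.append(start_nf * (2 ** i) * k)
    return n_channels

def wrn_depth(num_groups=3, N=3):
    # 2 convs per BasicBlock, plus the stem conv and the final linear
    # layer: the 6N+4 naming convention behind wrn_22 when num_groups == 3.
    return 2 * N * num_groups + 4

print(wrn_channels())  # -> [16, 96, 192, 384]
print(wrn_depth())     # -> 22
```

So `wrn_22` widens a 22-layer network: the three groups run at 96, 192, and 384 channels instead of the unwidened 16, 32, and 64.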
---
```
import pickle
import pandas as pd
import pre_processing_tr as pr
import matplotlib.pyplot as plt
from matplotlib import dates
from sklearn.feature_extraction.text import CountVectorizer

tweets = pd.read_excel('derlemler/nedenttoldu_user_tweets.xlsx')
tweets.tail()

print(f"Total number of samples: {len(tweets)}")

# Clean up the tweet timestamps and texts
tweets.Text = tweets.Text.apply(pr.pre_process)
tweets.UTC = tweets.UTC.apply(lambda x: x[:10])

# Load the pre-trained model
filename = 'modeller/5-kategori_vocab.sav'
loaded_vocab = pickle.load(open(filename, 'rb'))
cv = CountVectorizer(vocabulary=loaded_vocab)
text_vector = cv.fit_transform(tweets.Text)

filename = 'modeller/5-kategori.sav'
loaded_model = pickle.load(open(filename, 'rb'))

class_mapping = {0: 'Diğer', 1: 'Ekonomi', 2: 'Siyaset', 3: 'Spor', 4: 'Teknoloji_Bilim'}

# Prediction step
preds = loaded_model.predict(text_vector)
preds_mapped = []
for pred in preds:
    preds_mapped.append(class_mapping[pred])

print("Total number of news items per predicted category after classifying all tweets.")
for cat in set(preds_mapped):
    print(f"{cat}: {preds_mapped.count(cat)}")

# helper method for the figure
def window_average(x, N):
    low_index = 0
    high_index = low_index + N
    w_avg = []
    while(high_index < len(x)):
        temp = sum(x[low_index:high_index]) / N
        w_avg.append(temp)
        low_index = low_index + N
        high_index = high_index + N
    return w_avg

# news counts per date
data = pd.concat([tweets.UTC, pd.DataFrame(preds_mapped)], axis=1)
data['day'] = data['UTC'].apply(lambda x: x.split()[0])
counts = data.groupby(['day', 0]).agg(len)
counts = counts.unstack().fillna(0)
date = pd.to_datetime(tweets.UTC).unique()

# total news count over a one-week rolling window
dgr = counts.UTC.iloc[:, 0].rolling(7).sum().values
eko = counts.UTC.iloc[:, 1].rolling(7).sum().values
spr = counts.UTC.iloc[:, 2].rolling(7).sum().values
sys = counts.UTC.iloc[:, 3].rolling(7).sum().values
tec = counts.UTC.iloc[:, 4].rolling(7).sum().values

# Build the figure
f = plt.figure()
f.set_size_inches(12, 6)
plt.plot(date, dgr, label = f"Diğer {preds_mapped.count('Diğer')}")
plt.plot(date, eko, label = f"Ekonomi {preds_mapped.count('Ekonomi')}")
plt.plot(date, spr, label = f"Spor {preds_mapped.count('Spor')}")
plt.plot(date, sys, label = f"Siyaset {preds_mapped.count('Siyaset')}")
plt.plot(date, tec, label = f"Teknoloji Bilim {preds_mapped.count('Teknoloji_Bilim')}")
plt.xlabel('Date', fontsize=14)
plt.ylabel('News Count', fontsize=14)
plt.legend(loc='upper left')
```
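One subtlety worth noting: the `window_average` helper averages non-overlapping blocks of size N (and its `<` bound drops a final block that ends exactly at `len(x)`), whereas the plot uses pandas' overlapping `rolling(7).sum()`. A toy comparison in plain Python (illustrative numbers, not the tweet data):

```python
def window_average(x, N):
    # Same logic as the notebook's helper: mean over consecutive,
    # non-overlapping blocks of size N. Note the `<` bound drops a
    # final block ending exactly at len(x).
    low, high, out = 0, N, []
    while high < len(x):
        out.append(sum(x[low:high]) / N)
        low += N
        high += N
    return out

def rolling_sum(x, N):
    # Overlapping window, as pandas .rolling(N).sum() computes
    # (minus the leading NaNs): one value per position from N-1 onward.
    return [sum(x[i - N + 1:i + 1]) for i in range(N - 1, len(x))]

print(window_average([1, 2, 3, 4, 5], 2))  # -> [1.5, 3.5]
print(rolling_sum([1, 2, 3, 4, 5], 2))     # -> [3, 5, 7, 9]
```

The overlapping rolling window gives a smoother weekly trend line, which is why the figure uses it rather than the block-average helper.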
---
# Can you make the objective function better via gm, compared to the purported 1-1?

> Running GM on time-stamped data with a known 1-1 correspondence

- toc: false
- badges: true
- comments: true
- categories: [graph-matching, ali-s-e]
- hide: false
- search_exclude: false

```
# collapse
import sys
sys.path
sys.path.insert(0, '/Users/asaadeldin/Downloads/GitHub/scipy')
from scipy.optimize import quadratic_assignment

# collapse
%pylab inline
import pandas as pd
from graspy.utils import pass_to_ranks
import seaborn as sns
```

# Experiment Summary

If $A_i$ is the adjacency matrix at time index $i$, then with $n$ time indices, for $i = [1, n-1]$ do $GM(A_i, A_{i+1})$, where $A_i$ and $A_{i+1}$ are pre-matched based on the known 1-1 correspondence. For each graph pair, run $GM$ $t = 20$ times, with each $t$ corresponding to a different random permutation of $A_{i+1}$. Internally in GM, $A_{i+1}$ is shuffled, that is $A_{i+1}' = Q A_{i+1} Q^T$, where $Q$ is sampled uniformly from the set of $m \times m$ permutation matrices, where $m$ is the size of the vertex set. $GM$ is run from the barycenter ($\gamma = 0$).
Compare the objective function values of the matrices with the known matching ($trace (A_i A_{i+1}^T)$) and the average objective function resulting from $t$ runs of $GM(A_i, A_{i+1})$ ``` # collapse def load_adj(file): df = pd.read_csv(f'org_sig1_max/{file}.csv', names = ['from', 'to', 'weight']) return df # collapse times = [1,4,11,17,25,34,45,48,52,55,63,69,70,76,80,83,90,97,103,111,117,129,130,132,139,140,146,153,160,167, 174,181,188,192,195,202,209,216,223,229] # collapse from scipy.stats import sem t = 20 ofvs = np.zeros((len(times)-1,3)) # [opt_ofv, gm_ofv] for i in range(len(times)-1): # constructing the adjacency matrices Ael = load_adj(times[i]) Bel = load_adj(times[i+1]) nodes = np.concatenate((Ael['from'],Ael['to'],Bel['from'],Bel['to']), axis=0) nodes = list(set(nodes)) n = len(nodes) A = np.zeros((n,n)) B = np.zeros((n,n)) row_list_A = [nodes.index(x) for x in Ael['from']] col_list_A = [nodes.index(x) for x in Ael['to']] A[row_list_A, col_list_A] = Ael['weight'] row_list_B = [nodes.index(x) for x in Bel['from']] col_list_B = [nodes.index(x) for x in Bel['to']] B[row_list_B, col_list_B] = Bel['weight'] A = pass_to_ranks(A) B = pass_to_ranks(B) gm_ofvs = np.zeros(t) for j in range(t): gmp = {'maximize':True} res = quadratic_assignment(A,B, options=gmp) gm_ofvs[j] = res.fun gm_ofv = np.mean(gm_ofvs) gm_error = sem(gm_ofvs) opt_ofv = (A*B).sum() ofvs[i,:] = [opt_ofv, gm_ofv, 2*gm_error] # collapse sns.set_context('paper') sns.set(rc={'figure.figsize':(12,8)}) plt.scatter(np.arange(len(times)-1), ofvs[:,0], label = 'opt ofv') #plt.scatter(np.arange(len(times)), ofvs[:,1], label = 'gm ofv') plt.errorbar(np.arange(len(times)-1),ofvs[:,1], ofvs[:,2],label = 'average gm ofv +/- 2 s.e.',marker='o', fmt = ' ' ,capsize=3, elinewidth=1, markeredgewidth=1,color='orange') plt.legend() plt.ylabel('objective function value') plt.xlabel('time stamp (A_x & A_{x+1})') ``` ### Extremely low variance above (error bars not visible) ``` # collapse 
plt.scatter(np.arange(len(times)-1), ofvs[:,1]/ofvs[:,0])
plt.hlines(1, 0, 40, linestyles='dashed', color='red', label='y=1 (above means gm maximizes ofv more)')
plt.legend()
plt.xlabel('Time Stamp')
plt.ylabel('gm_ofv / pre-matched_ofv')

# collapse
df = pd.DataFrame(ofvs, columns=["Pre Matched OFV", "Average GM OFV", "2*s.e. GM OFV"])
print(df)
```
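The inner loop above spends most of its lines turning weighted edge lists into dense adjacency matrices before calling `quadratic_assignment`. That step, and the pre-matched objective value $trace(A B^T) = \sum_{ij} A_{ij} B_{ij}$ used as the baseline, can be sketched in isolation (toy nodes and edges, hypothetical names):

```python
import numpy as np

def edgelist_to_adjacency(edges, nodes):
    """Dense weighted adjacency matrix from (from, to, weight) triples,
    mirroring the row/column index-list construction in the loop above."""
    index = {v: i for i, v in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for u, v, w in edges:
        A[index[u], index[v]] = w
    return A

nodes = ['a', 'b', 'c']
edges = [('a', 'b', 2.0), ('b', 'c', 1.5), ('c', 'a', 0.5)]
A = edgelist_to_adjacency(edges, nodes)
print(A[0, 1], A[1, 2], A[2, 0])  # -> 2.0 1.5 0.5

# the pre-matched objective function value used as the baseline:
# (A B^T)_ii sums A_ij * B_ij over j, so trace(A B^T) = (A * B).sum()
B = A.copy()
opt_ofv = (A * B).sum()  # -> 6.5 for this toy graph
```

Because both graphs here share the same node ordering, `(A * B).sum()` is exactly the objective the pre-matched correspondence achieves; GM's average over the 20 shuffled restarts is then compared against it.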
---
# Spatial Declustering in Python for Engineers and Geoscientists

## with GSLIB's DECLUS Program Converted to Python

### Michael Pyrcz, Associate Professor, University of Texas at Austin

#### Contacts: [Twitter/@GeostatsGuy](https://twitter.com/geostatsguy) | [GitHub/GeostatsGuy](https://github.com/GeostatsGuy) | [www.michaelpyrcz.com](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446)

This is a tutorial for / demonstration of **spatial declustering in Python with GSLIB's DECLUS program translated to Python, along with wrappers and reimplementations of other GSLIB: Geostatistical Library methods** (Deutsch and Journel, 1997).

Almost every spatial dataset is based on biased sampling. This includes clustering (increased density of samples) over specific ranges of values, for example, more samples in an area of high feature values. Spatial declustering is the process of assigning data weights based on local data density.

The cell-based declustering approach (Deutsch and Journel, 1997; Pyrcz and Deutsch, 2014; Pyrcz and Deutsch, 2003, paper available here: http://gaa.org.au/pdf/DeclusterDebias-CCG.pdf) is based on the use of a mesh over the area of interest. Each datum's weight is inversely proportional to the number of data in its cell. Cell offsets are applied to smooth out the influence of the mesh origin. Multiple cell sizes are applied, and typically the cell size that minimizes the declustered distribution mean is chosen when sampling is preferential in the high-valued locations (the maximizing cell size is chosen if the data are preferentially sampled in the low-valued locations). If there is a nominal data spacing with local clusters, then this spacing is the best cell size.

This exercise demonstrates the cell-based declustering approach in Python with wrappers and reimplementations of GSLIB methods. The steps include:

1. generate a 2D sequential Gaussian simulation using a wrapper of GSLIB's sgsim method
2. apply regular sampling to the 2D realization
3. preferentially remove samples in the low-valued locations
4. calculate cell-based declustering weights with the **declus function**
5. visualize the location map of the declustering weights and the original exhaustive, sample, and new declustered distributions, along with the scatter plot of declustering weight vs. cell size.

To accomplish this I have provided wrappers or reimplementations in Python for the following GSLIB methods:

1. sgsim - sequential Gaussian simulation, limited to 2D and unconditional
2. hist - histogram plots reimplemented with GSLIB parameters using Python methods
3. locmap - location maps reimplemented with GSLIB parameters using Python methods
4. pixelplt - pixel plots reimplemented with GSLIB parameters using Python methods
5. locpix - my modification of GSLIB to superimpose a location map on a pixel plot, reimplemented with GSLIB parameters using Python methods
6. affine - affine correction to adjust the mean and standard deviation of a feature, reimplemented with GSLIB parameters using Python methods

These methods are all in the functions declared upfront. To run this demo all one has to do is download and place in your working directory the following executables from the GSLIB/bin directory:

1. sgsim.exe
2. nscore.exe (not currently used in the demo, but the wrapper is included)

The GSLIB source and executables are available at http://www.statios.com/Quick/gslib.html. For the reference on using GSLIB check out the User Guide, GSLIB: Geostatistical Software Library and User's Guide by Clayton V. Deutsch and Andre G. Journel.

I did this to allow people to use these extremely robust GSLIB functions from Python. This should also be a bridge to allow the many people familiar with GSLIB to work in Python, as I kept the parameterization and displays consistent with GSLIB.
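The cell-based weighting described above can be sketched in a few lines, assuming a single fixed cell size and no origin offsets (DECLUS additionally averages over offset mesh origins and scans a range of cell sizes to pick the minimizing/maximizing one):

```python
def cell_declustering_weights(x, y, cell_size):
    """Weights inversely proportional to the number of data in each grid
    cell, normalized to sum to the number of data. A sketch of the idea
    only -- single cell size, no origin offsets -- not DECLUS itself."""
    # assign each datum to a cell
    cells = [(int(xi // cell_size), int(yi // cell_size)) for xi, yi in zip(x, y)]
    counts = {}
    for c in cells:
        counts[c] = counts.get(c, 0) + 1
    # raw weight = 1 / (number of data sharing the datum's cell)
    raw = [1.0 / counts[c] for c in cells]
    # rescale so the weights sum to the number of data
    scale = len(x) / sum(raw)
    return [w * scale for w in raw]

# two clustered points share a cell, one point is alone:
# the lone point gets double the weight of each clustered point
w = cell_declustering_weights([0.1, 0.2, 5.0], [0.1, 0.2, 5.0], 1.0)
print(w)  # -> [0.75, 0.75, 1.5]
```

The declustered mean is then simply the weighted average of the feature values, which is what the cell-size scan below minimizes (or maximizes) to correct the sampling bias.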
The wrappers are simple functions declared below that write the parameter files, run the GSLIB executable in the working directory, and load and visualize the output in Python. This will be included on GitHub for anyone to try out: https://github.com/GeostatsGuy/. This was my first effort to translate the GSLIB Fortran to Python. It was pretty easy, so I'll start translating other critical GSLIB functions.

#### Load the required libraries

The following code loads the required libraries.

```
import os                        # to set current working directory
import numpy as np               # arrays and matrix math
import pandas as pd              # DataFrames
import matplotlib.pyplot as plt  # plotting
```

If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available in the respective package docs.

#### Declare functions

Here are the wrappers and reimplementations of GSLIB methods, along with two utilities to load GSLIB's Geo-EAS data files into DataFrames and 2D NumPy arrays.

```
# Some GeostatsPy Functions - by Michael Pyrcz, maintained at https://git.io/fNgR7.
# A set of functions to provide access to GSLIB in Python.
# GSLIB executables: nscore.exe, declus.exe, gam.exe, gamv.exe, vmodel.exe, kb2d.exe & sgsim.exe must be in the working directory import pandas as pd import os import numpy as np import matplotlib.pyplot as plt import random as rand image_type = 'tif'; dpi = 600 # utility to convert GSLIB Geo-EAS files to a 1D or 2D numpy ndarray for use with Python methods def GSLIB2ndarray(data_file,kcol,nx,ny): colArray = [] if ny > 1: array = np.ndarray(shape=(ny,nx),dtype=float,order='F') else: array = np.zeros(nx) with open(data_file) as myfile: # read first two lines head = [next(myfile) for x in range(2)] line2 = head[1].split() ncol = int(line2[0]) # get the number of columns for icol in range(0, ncol): # read over the column names head = [next(myfile) for x in range(1)] if icol == kcol: col_name = head[0].split()[0] if ny > 1: for iy in range(0,ny): for ix in range(0,nx): head = [next(myfile) for x in range(1)] array[ny-1-iy][ix] = head[0].split()[kcol] else: for ix in range(0,nx): head = [next(myfile) for x in range(1)] array[ix] = head[0].split()[kcol] return array,col_name # utility to convert GSLIB Geo-EAS files to a pandas DataFrame for use with Python methods def GSLIB2Dataframe(data_file): colArray = [] with open(data_file) as myfile: # read first two lines head = [next(myfile) for x in range(2)] line2 = head[1].split() ncol = int(line2[0]) for icol in range(0, ncol): head = [next(myfile) for x in range(1)] colArray.append(head[0].split()[0]) data = np.loadtxt(myfile, skiprows = 0) df = pd.DataFrame(data) df.columns = colArray return df # histogram, reimplemented in Python of GSLIB hist with MatPlotLib methods, displayed and as image file def hist(array,xmin,xmax,log,cumul,bins,weights,xlabel,title,fig_name): plt.figure(figsize=(8,6)) cs = plt.hist(array, alpha = 0.2, color = 'red', edgecolor = 'black', bins=bins, range = [xmin,xmax], weights = weights, log = log, cumulative = cumul) plt.title(title) plt.xlabel(xlabel); plt.ylabel('Frequency') plt.savefig(fig_name 
+ '.' + image_type,dpi=dpi) plt.show() return # histogram, reimplemented in Python of GSLIB hist with MatPlotLib methods (version for subplots) def hist_st(array,xmin,xmax,log,cumul,bins,weights,xlabel,title): cs = plt.hist(array, alpha = 0.2, color = 'red', edgecolor = 'black', bins=bins, range = [xmin,xmax], weights = weights, log = log, cumulative = cumul) plt.title(title) plt.xlabel(xlabel); plt.ylabel('Frequency') return # location map, reimplemention in Python of GSLIB locmap with MatPlotLib methods def locmap(df,xcol,ycol,vcol,xmin,xmax,ymin,ymax,vmin,vmax,title,xlabel,ylabel,vlabel,cmap,fig_name): ixy = 0 plt.figure(figsize=(8,6)) im = plt.scatter(df[xcol],df[ycol],s=None, c=df[vcol], marker=None, cmap=cmap, norm=None, vmin=vmin, vmax=vmax, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black") plt.title(title) plt.xlim(xmin,xmax) plt.ylim(ymin,ymax) plt.xlabel(xlabel) plt.ylabel(ylabel) cbar = plt.colorbar(im, orientation = 'vertical',ticks=np.linspace(vmin,vmax,10)) cbar.set_label(vlabel, rotation=270, labelpad=20) plt.savefig(fig_name + '.' 
+ image_type,dpi=dpi) plt.show() return im # location map, reimplemention in Python of GSLIB locmap with MatPlotLib methods (version for subplots) def locmap_st(df,xcol,ycol,vcol,xmin,xmax,ymin,ymax,vmin,vmax,title,xlabel,ylabel,vlabel,cmap): ixy = 0 im = plt.scatter(df[xcol],df[ycol],s=None, c=df[vcol], marker=None, cmap=cmap, norm=None, vmin=vmin, vmax=vmax, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black") plt.title(title) plt.xlim(xmin,xmax) plt.ylim(ymin,ymax) plt.xlabel(xlabel) plt.ylabel(ylabel) cbar = plt.colorbar(im, orientation = 'vertical',ticks=np.linspace(vmin,vmax,10)) cbar.set_label(vlabel, rotation=270, labelpad=20) return im # pixel plot, reimplemention in Python of GSLIB pixelplt with MatPlotLib methods def pixelplt(array,xmin,xmax,ymin,ymax,step,vmin,vmax,title,xlabel,ylabel,vlabel,cmap,fig_name): print(str(step)) xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step)) plt.figure(figsize=(8,6)) im = plt.contourf(xx,yy,array,cmap=cmap,vmin=vmin,vmax=vmax,levels=np.linspace(vmin,vmax,100)) plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) cbar = plt.colorbar(im,orientation = 'vertical',ticks=np.linspace(vmin,vmax,10)) cbar.set_label(vlabel, rotation=270, labelpad=20) plt.savefig(fig_name + '.' + image_type,dpi=dpi) plt.show() return im # pixel plot, reimplemention in Python of GSLIB pixelplt with MatPlotLib methods(version for subplots) def pixelplt_st(array,xmin,xmax,ymin,ymax,step,vmin,vmax,title,xlabel,ylabel,vlabel,cmap): xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step)) ixy = 0 x = [];y = []; v = [] # use dummy since scatter plot controls legend min and max appropriately and contour does not! 
cs = plt.contourf(xx,yy,array,cmap=cmap,vmin=vmin,vmax=vmax,levels = np.linspace(vmin,vmax,100)) im = plt.scatter(x,y,s=None, c=v, marker=None,cmap=cmap, vmin=vmin, vmax=vmax, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black") plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.clim(vmin,vmax) cbar = plt.colorbar(im, orientation = 'vertical') cbar.set_label(vlabel, rotation=270, labelpad=20) return cs # pixel plot and location map, reimplementation in Python of a GSLIB MOD with MatPlotLib methods def locpix(array,xmin,xmax,ymin,ymax,step,vmin,vmax,df,xcol,ycol,vcol,title,xlabel,ylabel,vlabel,cmap,fig_name): xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step)) ixy = 0 plt.figure(figsize=(8,6)) cs = plt.contourf(xx, yy, array, cmap=cmap,vmin=vmin, vmax=vmax,levels = np.linspace(vmin,vmax,100)) im = plt.scatter(df[xcol],df[ycol],s=None, c=df[vcol], marker=None, cmap=cmap, vmin=vmin, vmax=vmax, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black") plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.xlim(xmin,xmax) plt.ylim(ymin,ymax) cbar = plt.colorbar(orientation = 'vertical') cbar.set_label(vlabel, rotation=270, labelpad=20) plt.savefig(fig_name + '.' 
+ image_type, dpi=dpi)
    plt.show()
    return cs

# pixel plot and location map, reimplementation in Python of a GSLIB MOD with MatPlotLib methods (version for subplots)
def locpix_st(array,xmin,xmax,ymin,ymax,step,vmin,vmax,df,xcol,ycol,vcol,title,xlabel,ylabel,vlabel,cmap):
    xx, yy = np.meshgrid(np.arange(xmin, xmax, step), np.arange(ymax, ymin, -1*step))
    ixy = 0
    cs = plt.contourf(xx, yy, array, cmap=cmap, vmin=vmin, vmax=vmax, levels = np.linspace(vmin,vmax,100))
    im = plt.scatter(df[xcol], df[ycol], s=None, c=df[vcol], marker=None, cmap=cmap, vmin=vmin, vmax=vmax,
                     alpha=0.8, linewidths=0.8, verts=None, edgecolors="black")
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.xlim(xmin,xmax)
    plt.ylim(ymin,ymax)
    cbar = plt.colorbar(orientation = 'vertical')
    cbar.set_label(vlabel, rotation=270, labelpad=20)

# affine distribution correction reimplemented in Python with numpy methods
def affine(array, tmean, tstdev):
    if array.ndim != 2:
        print("Error: must use a 2D array")  # fixed: was 'Print', a NameError
        return
    nx = array.shape[0]
    ny = array.shape[1]
    mean = np.average(array)
    stdev = np.std(array)
    for iy in range(0,ny):
        for ix in range(0,nx):
            array[ix,iy] = (tstdev/stdev)*(array[ix,iy] - mean) + tmean
    return(array)

def make_variogram(nug,nst,it1,cc1,azi1,hmaj1,hmin1,it2=1,cc2=0,azi2=0,hmaj2=0,hmin2=0):
    if cc2 == 0:
        nst = 1
    var = dict([('nug', nug), ('nst', nst), ('it1', it1), ('cc1', cc1), ('azi1', azi1), ('hmaj1', hmaj1),
                ('hmin1', hmin1), ('it2', it2), ('cc2', cc2), ('azi2', azi2), ('hmaj2', hmaj2), ('hmin2', hmin2)])
    if nug + cc1 + cc2 != 1:
        print('\x1b[0;30;41m make_variogram Warning: sill does not sum to 1.0, do not use in simulation \x1b[0m')
    if cc1 < 0 or cc2 < 0 or nug < 0 or hmaj1 < 0 or hmaj2 < 0 or hmin1 < 0 or hmin2 < 0:
        print('\x1b[0;30;41m make_variogram Warning: contributions and ranges must be all positive \x1b[0m')
    if hmaj1 < hmin1 or hmaj2 < hmin2:
        print('\x1b[0;30;41m make_variogram Warning: major range should be greater than minor range \x1b[0m')
    return var

# sequential Gaussian simulation, 2D
unconditional wrapper for sgsim from GSLIB (.exe must be in working directory) def GSLIB_sgsim_2d_uncond(nreal,nx,ny,hsiz,seed,var,output_file): import os import numpy as np nug = var['nug'] nst = var['nst']; it1 = var['it1']; cc1 = var['cc1']; azi1 = var['azi1']; hmaj1 = var['hmaj1']; hmin1 = var['hmin1'] it2 = var['it2']; cc2 = var['cc2']; azi2 = var['azi2']; hmaj2 = var['hmaj2']; hmin2 = var['hmin2'] max_range = max(hmaj1,hmaj2) hmn = hsiz * 0.5 hctab = int(max_range/hsiz)*2 + 1 sim_array = np.random.rand(nx,ny) file = open("sgsim.par", "w") file.write(" Parameters for SGSIM \n") file.write(" ******************** \n") file.write(" \n") file.write("START OF PARAMETER: \n") file.write("none -file with data \n") file.write("1 2 0 3 5 0 - columns for X,Y,Z,vr,wt,sec.var. \n") file.write("-1.0e21 1.0e21 - trimming limits \n") file.write("0 -transform the data (0=no, 1=yes) \n") file.write("none.trn - file for output trans table \n") file.write("1 - consider ref. dist (0=no, 1=yes) \n") file.write("none.dat - file with ref. 
dist distribution \n") file.write("1 0 - columns for vr and wt \n") file.write("-4.0 4.0 - zmin,zmax(tail extrapolation) \n") file.write("1 -4.0 - lower tail option, parameter \n") file.write("1 4.0 - upper tail option, parameter \n") file.write("0 -debugging level: 0,1,2,3 \n") file.write("nonw.dbg -file for debugging output \n") file.write(str(output_file) + " -file for simulation output \n") file.write(str(nreal) + " -number of realizations to generate \n") file.write(str(nx) + " " + str(hmn) + " " + str(hsiz) + " \n") file.write(str(ny) + " " + str(hmn) + " " + str(hsiz) + " \n") file.write("1 0.0 1.0 - nz zmn zsiz \n") file.write(str(seed) + " -random number seed \n") file.write("0 8 -min and max original data for sim \n") file.write("12 -number of simulated nodes to use \n") file.write("0 -assign data to nodes (0=no, 1=yes) \n") file.write("1 3 -multiple grid search (0=no, 1=yes),num \n") file.write("0 -maximum data per octant (0=not used) \n") file.write(str(max_range) + " " + str(max_range) + " 1.0 -maximum search (hmax,hmin,vert) \n") file.write(str(azi1) + " 0.0 0.0 -angles for search ellipsoid \n") file.write(str(hctab) + " " + str(hctab) + " 1 -size of covariance lookup table \n") file.write("0 0.60 1.0 -ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC \n") file.write("none.dat - file with LVM, EXDR, or COLC variable \n") file.write("4 - column for secondary variable \n") file.write(str(nst) + " " + str(nug) + " -nst, nugget effect \n") file.write(str(it1) + " " + str(cc1) + " " +str(azi1) + " 0.0 0.0 -it,cc,ang1,ang2,ang3\n") file.write(" " + str(hmaj1) + " " + str(hmin1) + " 1.0 - a_hmax, a_hmin, a_vert \n") file.write(str(it2) + " " + str(cc2) + " " +str(azi2) + " 0.0 0.0 -it,cc,ang1,ang2,ang3\n") file.write(" " + str(hmaj2) + " " + str(hmin2) + " 1.0 - a_hmax, a_hmin, a_vert \n") file.close() os.system('"sgsim.exe sgsim.par"') sim_array = GSLIB2ndarray(output_file,0,nx,ny) return(sim_array[0]) # extract regular spaced samples from a model def 
regular_sample(array,xmin,xmax,ymin,ymax,step,mx,my,name): x = []; y = []; v = []; iix = 0; iiy = 0; xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step)) nx = array.shape[0]; ny = array.shape[1] # grid dimensions from the array itself (do not rely on globals) iiy = 0 for iy in range(0,ny): if iiy >= my: iix = 0 for ix in range(0,nx): if iix >= mx: x.append(xx[ix,iy]);y.append(yy[ix,iy]); v.append(array[ix,iy]) iix = 0; iiy = 0 iix = iix + 1 iiy = iiy + 1 df = pd.DataFrame(np.c_[x,y,v],columns=['X', 'Y', name]) return(df) ``` Here's the translation of declus to Python (Michael Pyrcz, Jan. 2019 - let me know if you find any issues). ``` import numpy as np import pandas as pd # GSLIB's DECLUS program (Deutsch and Journel, 1998) converted from the original Fortran to Python # by Michael Pyrcz, the University of Texas at Austin (Jan, 2019) # note this was simplified to 2D only def declus(df,xcol,ycol,vcol,iminmax,noff,ncell,cmin,cmax): # Parameters - consistent with original GSLIB # df - Pandas DataFrame with the spatial data # xcol, ycol - name of the x and y coordinate columns # vcol - name of the property column # iminmax - 1 / True to select the cell size that minimizes the declustered mean, 0 / False to select the cell size that maximizes it # noff - number of offsets # ncell - number of cell sizes # cmin, cmax - min and max cell size # # Load Data and Set Up Arrays nd = len(df) x = df[xcol].values y = df[ycol].values v = df[vcol].values wt = np.zeros(nd) wtopt = np.ones(nd) index = np.zeros(nd, np.int32) xcs_mat = np.zeros(ncell+2) # we use 1,...,n for this array vrcr_mat = np.zeros(ncell+2) # we use 1,...,n for this array anisy = 1.0 # hard code the cells to 2D isotropic roff = float(noff) # Calculate extents xmin = np.min(x); xmax = np.max(x) ymin = np.min(y); ymax = np.max(y) # Calculate summary statistics vmean = np.mean(v) vstdev = np.std(v) vmin = np.min(v) vmax = np.max(v) xcs_mat[0] = 0.0; vrcr_mat[0] = vmean; vrop = vmean # include the naive case print('There are ' + str(nd) + ' data with:') print(' mean of ' + str(vmean) + ' ') print(' min and max
' + str(vmin) + ' and ' + str(vmax)) print(' standard dev ' + str(vstdev) + ' ') # define a "lower" origin to use for the cell sizes: xo1 = xmin - 0.01 yo1 = ymin - 0.01 # define the increment for the cell size: xinc = (cmax-cmin) / ncell yinc = xinc # loop over "ncell+1" cell sizes in the grid network: ncellx = int((xmax-(xo1-cmin))/cmin)+1 ncelly = int((ymax-(yo1-cmin*anisy))/(cmin))+1 ncellt = ncellx*ncelly cellwt = np.zeros(ncellt) xcs = cmin - xinc ycs = (cmin*anisy) - yinc # MAIN LOOP over cell sizes: for lp in range(1,ncell+2): # 0 index is the 0.0 cell, note n + 1 in Fortran xcs = xcs + xinc ycs = ycs + yinc # initialize the weights to zero: wt.fill(0.0) # determine the maximum number of grid cells in the network: ncellx = int((xmax-(xo1-xcs))/xcs)+1 ncelly = int((ymax-(yo1-ycs))/ycs)+1 ncellt = float(ncellx*ncelly) # loop over all the origin offsets selected: xfac = min((xcs/roff),(0.5*(xmax-xmin))) yfac = min((ycs/roff),(0.5*(ymax-ymin))) for kp in range(1,noff+1): xo = xo1 - (float(kp)-1.0)*xfac yo = yo1 - (float(kp)-1.0)*yfac # initialize the cumulative weight indicators: cellwt.fill(0.0) # determine which cell each datum is in: for i in range(0,nd): icellx = int((x[i] - xo)/xcs) + 1 icelly = int((y[i] - yo)/ycs) + 1 icell = icellx + (icelly-1)*ncellx index[i] = icell cellwt[icell] = cellwt[icell] + 1.0 # The weight assigned to each datum is inversely proportional to the # number of data in the cell. 
We first need to get the sum of weights # so that we can normalize the weights to sum to one: sumw = 0.0 for i in range(0,nd): ipoint = index[i] sumw = sumw + (1.0 / cellwt[ipoint]) sumw = 1.0 / sumw # Accumulate the array of weights (that now sum to one): for i in range(0,nd): ipoint = index[i] wt[i] = wt[i] + (1.0/cellwt[ipoint])*sumw # End loop over all offsets: # compute the weighted average for this cell size: sumw = 0.0 sumwg = 0.0 for i in range(0,nd): sumw = sumw + wt[i] sumwg = sumwg + wt[i]*v[i] vrcr = sumwg / sumw vrcr_mat[lp] = vrcr xcs_mat[lp] = xcs # see if this weighting is optimal: if iminmax and vrcr < vrop or not iminmax and vrcr > vrop or ncell == 1: best = xcs vrop = vrcr wtopt = wt.copy() # deep copy # END MAIN LOOP over all cell sizes: # Get the optimal weights: sumw = 0.0 for i in range(0,nd): sumw = sumw + wtopt[i] wtmin = np.min(wtopt) wtmax = np.max(wtopt) facto = float(nd) / sumw wtopt = wtopt * facto return wtopt,xcs_mat,vrcr_mat ``` #### Set the working directory I always like to do this so I don't lose files and to simplify subsequent reads and writes (avoid including the full address each time). Also, in this case make sure to place the required (see above) GSLIB executables in this directory or a location identified in the environment variable *Path*. ``` os.chdir("c:/PGE337/DataAnalysis") # set the working directory ``` You will have to update the part in quotes with your own working directory and the format is different on a Mac (e.g. "~/PGE"). ##### Make a 2D spatial model The following are the basic parameters for the demonstration. This includes the number of cells in the 2D regular grid, the cell size (step) and the x and y min and max along with the color scheme. Then we make a single realization of a Gaussian distributed feature over the specified 2D grid and then apply affine correction to ensure we have a reasonable mean and spread for our feature's distribution, assumed to be Porosity (e.g.
no negative values) while retaining the Gaussian distribution. Any transform could be applied at this point. We are keeping this workflow simple. *This is our truth model that we will sample*. The parameters of *GSLIB_sgsim_2d_uncond* are (nreal,nx,ny,hsiz,seed,var,output_file). nreal is the number of realizations, nx and ny are the number of cells in x and y, hsiz is the cell size, seed is the random number seed, var is the variogram dictionary built with make_variogram (with the major and minor ranges and the azimuth of the primary direction of continuity, where 0 is aligned with the Y axis) and output_file is a Geo-EAS file with the simulated realization. The output is the 2D numpy array of the simulation. ``` nx = 100; ny = 100; cell_size = 10 # grid number of cells and cell size xmin = 0.0; ymin = 0.0; # grid origin xmax = xmin + nx * cell_size; ymax = ymin + ny * cell_size # calculate the extent of model seed = 74073 # random number seed for stochastic simulation range_max = 1800; range_min = 500; azimuth = 65 # Porosity variogram ranges and azimuth vario = make_variogram(0.0,nst=1,it1=1,cc1=1.0,azi1=65,hmaj1=1800,hmin1=500) mean = 10.0; stdev = 2.0 # Porosity mean and standard deviation #cmap = plt.cm.RdYlBu vmin = 4; vmax = 16; cmap = plt.cm.plasma # color min and max and using the plasma color map # calculate a stochastic realization with standard normal distribution sim = GSLIB_sgsim_2d_uncond(1,nx,ny,cell_size,seed,vario,"simulation") sim = affine(sim,mean,stdev) # correct the distribution to a target mean and standard deviation.
sampling_ncell = 10 # sample every 10th node from the model samples = regular_sample(sim,xmin,xmax,ymin,ymax,sampling_ncell,10,10,'Realization') samples_cluster = samples.drop([80,79,78,73,72,71,70,65,64,63,61,57,56,54,53,47,45,42]) # this removes specific rows (samples) samples_cluster = samples_cluster.reset_index(drop=True) # we reset and remove the index (it is not sequential anymore) locpix(sim,xmin,xmax,ymin,ymax,cell_size,vmin,vmax,samples_cluster,'X','Y','Realization','Porosity Realization and Regular Samples','X(m)','Y(m)','Porosity (%)',cmap,"Por_Samples") ``` Let's compare the distribution and means of the truth model and the spatially clustered samples. We do this with the hist_st function that is reimplemented from GSLIB's hist method for histogram plotting. The parameters are (array,xmin,xmax,log,cumul,bins,weights,xlabel,title): array is the data array, xmin and xmax are the minimum and maximum of the feature, log and cumul are true for a log axis and a cumulative distribution function respectively, bins is the number of bins, weights is an array of the same size as the data array with weights, and the remainder are labels. ``` plt.subplot(121) hist_st(sim.flatten(),vmin,vmax,log=False,cumul=False,bins=20,weights=None,xlabel="Porosity (%)",title="Porosity Realization") plt.subplot(122) hist_st(samples_cluster["Realization"],vmin,vmax,log=False,cumul=False,bins=20,weights=None,xlabel="Porosity (%)",title="Porosity Samples") plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.5, wspace=0.2, hspace=0.2) plt.show() sm_mean = np.average(samples_cluster['Realization']) ex_mean = np.average(sim) print('Truth Mean = ',round(ex_mean,2),', Clustered Sample Mean = ',round(sm_mean,2),', Error = ',round((sm_mean-ex_mean)/ex_mean,3)*100,'%') ``` Note the shift in mean from the truth model to the clustered sample. There is a 4.8% inflation in the clustered sample mean! This will be a good demonstration clustered data set for the value of cell-based declustering.
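The mechanism behind that inflation is easy to reproduce on a toy example: if we sample the high-valued part of a field more densely than the rest, the naive sample mean is pulled upward. A minimal sketch with a synthetic linear trend (illustrative data, not the realization above):

```python
import numpy as np

# truth: a linear porosity trend from 8% to 12% across the field (true mean = 10.0)
x = np.linspace(0, 1000, 101)
poro = 8.0 + 4.0 * x / 1000.0

# preferential sampling: every node in the high-porosity half,
# but only every 5th node in the low-porosity half
sample = np.concatenate([poro[x <= 500][::5], poro[x > 500]])

print(round(poro.mean(), 2), round(sample.mean(), 2))  # 10.0 10.66
```

The naive mean of the clustered sample overstates the true mean, exactly the kind of bias declustering weights are designed to correct.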
We have created a biased sample set with spatial clustering. Now we can try some declustering. Let's apply the Python translation of **declus**, the GSLIB cell-based declustering program, to this sample set. The declus method has the following parameters (df,xcol,ycol,vcol,iminmax,noff,ncell,cmin,cmax) where df, xcol, ycol, vcol are the DataFrame with the data and the columns with x, y and the feature, iminmax is true to select the cell size that minimizes the declustered mean (set to false to select the cell size that maximizes it), noff is the number of origin offsets, ncell is the number of cell sizes (the discretization of this range) and cmin and cmax are the minimum and maximum cell sizes. The output from the declus function is a 1D numpy array of weights with the same size and order as the input DataFrame for the optimum cell size and also the cell sizes and declustered average for each cell size (that's 3 1D ndarrays). After we calculate the weights numpy array we convert it to a DataFrame and append (concat) it to our sample DataFrame. Then we visualize the histogram and location map of the weights. We will take a wide range of cell sizes from 1m to 2,000m going from much smaller than the minimum data spacing to twice the model extent. ``` wts,cell_sizes,averages = declus(samples_cluster,'X','Y','Realization',iminmax=1,noff=5,ncell=100,cmin=1,cmax=2000) ``` Let's visualize the declustered output. We should check out the porosity distribution naive and declustered, the distribution and location map of the declustered weights and the plot of cell size vs. declustered mean.
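Once the weights are in hand, the declustered mean is just a weighted average, which `np.average` computes directly. A small sketch with made-up samples and weights (the values below are illustrative, not declus output):

```python
import numpy as np

v = np.array([12.0, 12.4, 11.6, 9.0, 9.0])    # porosity samples; first three are clustered highs
wts = np.array([0.5, 0.5, 0.5, 1.75, 1.75])   # declustering weights (sum to the number of data)

naive_mean = v.mean()
declustered_mean = np.average(v, weights=wts)
print(round(naive_mean, 2), round(declustered_mean, 2))  # 10.8 9.9
```

The clustered high values are downweighted, so the declustered mean drops below the naive mean, mirroring what declus does at its optimum cell size.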
``` import scipy.stats samples_cluster['wts'] = wts # add the weights to the sample data samples_cluster.head() plt.subplot(321) locmap_st(samples_cluster,'X','Y','wts',xmin,xmax,ymin,ymax,0.0,2.0,'Declustering Weights','X (m)','Y (m)','Weights',cmap) plt.subplot(322) hist_st(samples_cluster['wts'],0.0,2.0,log=False,cumul=False,bins=20,weights=None,xlabel="Weights",title="Declustering Weights") plt.ylim(0.0,20) plt.subplot(323) hist_st(samples_cluster['Realization'],0.0,20.0,log=False,cumul=False,bins=20,weights=None,xlabel="Porosity",title="Naive Porosity") plt.ylim(0.0,20) plt.subplot(324) hist_st(samples_cluster['Realization'],0.0,20.0,log=False,cumul=False,bins=20,weights=samples_cluster['wts'],xlabel="Porosity",title="Declustered Porosity") plt.ylim(0.0,20) plt.subplot(325) plt.scatter(cell_sizes,averages, c = "black", marker='o', alpha = 0.2, edgecolors = "none") plt.xlabel('Cell Size (m)') plt.ylabel('Porosity Average (%)') plt.title('Porosity Average vs. Cell Size') plt.ylim(8,12) plt.xlim(0,2000) print(scipy.stats.describe(wts)) plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=3.5, wspace=0.2, hspace=0.2) plt.show() ``` There are so many more exercises and tests that one could attempt to gain experience with declustering. I'll end here for brevity, but I invite you to continue. Consider, on your own, applying these methods to other data sets or experimenting with the cell size range and the number of origin offsets. I hope you found this tutorial useful. I'm always happy to discuss geostatistics, statistical modeling, uncertainty modeling and machine learning, *Michael* **Michael Pyrcz**, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin On Twitter I'm the **GeostatsGuy** and on YouTube my lectures are on the channel, **GeostatsGuy Lectures**.
``` import json aug_data_path = "/Users/minjoons/data/squad/dev-v1.0-aug.json" aug_data = json.load(open(aug_data_path, 'r')) def compare_answers(): for article in aug_data['data']: for para in article['paragraphs']: deps = para['deps'] nodess = [] for dep in deps: nodes, edges = dep if dep is not None: nodess.append(nodes) else: nodess.append([]) wordss = [[node[0] for node in nodes] for nodes in nodess] for qa in para['qas']: for answer in qa['answers']: text = answer['text'] word_start = answer['answer_word_start'] word_stop = answer['answer_word_stop'] answer_words = wordss[word_start[0]][word_start[1]:word_stop[1]] yield answer_words, text ca = compare_answers() print(next(ca)) print(next(ca)) print(next(ca)) print(next(ca)) def counter(): count = 0 for article in aug_data['data']: for para in article['paragraphs']: deps = para['deps'] nodess = [] for dep in deps: if dep is None: count += 1 print(count) counter() def bad_node_counter(): count = 0 for article in aug_data['data']: for para in article['paragraphs']: sents = para['sents'] deps = para['deps'] nodess = [] for dep in deps: if dep is not None: nodes, edges = dep for node in nodes: if len(node) != 5: count += 1 print(count) bad_node_counter() def noanswer_counter(): count = 0 for article in aug_data['data']: for para in article['paragraphs']: deps = para['deps'] nodess = [] for dep in deps: if dep is not None: nodes, edges = dep nodess.append(nodes) else: nodess.append([]) wordss = [[node[0] for node in nodes] for nodes in nodess] for qa in para['qas']: for answer in qa['answers']: text = answer['text'] word_start = answer['answer_word_start'] word_stop = answer['answer_word_stop'] if word_start is None: count += 1 print(count) noanswer_counter() print(sum(len(para['qas']) for a in aug_data['data'] for para in a['paragraphs'])) import nltk def _set_span(t, i): if isinstance(t[0], str): t.span = (i, i+len(t)) else: first = True for c in t: cur_span = _set_span(c, i) i = cur_span[1] if first: min_ = 
cur_span[0] first = False max_ = cur_span[1] t.span = (min_, max_) return t.span def set_span(t): assert isinstance(t, nltk.tree.Tree) try: return _set_span(t, 0) except: print(t) exit() def same_span_counter(): count = 0 for article in aug_data['data']: for para in article['paragraphs']: consts = para['consts'] for const in consts: tree = nltk.tree.Tree.fromstring(const) set_span(tree) if len(list(tree.subtrees())) > len(set(t.span for t in tree.subtrees())): count += 1 print(count) same_span_counter() ```
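The span bookkeeping in `_set_span` can be illustrated on plain nested lists (a simplified stand-in for `nltk.tree.Tree`, with `[label, child, ...]` nodes and string leaves): each subtree's span is the half-open interval of leaf indices it covers, and `same_span_counter` flags trees where two distinct subtrees end up with identical spans.

```python
def leaf_spans(node, i=0, out=None, pos=()):
    # node is a token (str) or [label, child, ...]; returns the index just
    # past this node's last leaf and fills `out` with (start, stop) spans
    # keyed by tree position
    if out is None:
        out = {}
    if isinstance(node, str):
        return i + 1, out
    begin = i
    for k, child in enumerate(node[1:]):
        i, _ = leaf_spans(child, i, out, pos + (k,))
    out[pos] = (begin, i)
    return i, out

tree = ["S", ["NP", ["DT", "the"], ["NN", "dog"]], ["VP", ["VBZ", "barks"]]]
_, spans = leaf_spans(tree)
print(spans[()], spans[(0,)], spans[(1,)])  # (0, 3) (0, 2) (2, 3)
# unary chains like VP -> VBZ share a span -- exactly what same_span_counter counts
```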
We first get necessary external data and code ``` !git clone https://github.com/AllenInstitute/deepinterpolation.git !mkdir -p ephys ``` Install deepinterpolation package ``` !pip install git+https://github.com/AllenInstitute/deepinterpolation.git import deepinterpolation as de import sys from shutil import copyfile import os from deepinterpolation.generic import JsonSaver, ClassLoader import datetime from typing import Any, Dict import pathlib import sys ``` This is used for record-keeping ``` now = datetime.datetime.now() run_uid = now.strftime("%Y_%m_%d_%H_%M") ``` Initialize meta-parameters objects ``` training_param = {} generator_param = {} network_param = {} generator_test_param = {} ``` An epoch is defined as the number of batches pulled from the dataset. Because our datasets are VERY large, we often cannot go through the entirety of the data, so we define an epoch slightly differently than usual. ``` steps_per_epoch = 10 ``` Those are parameters used for the Validation test generator. Here the test is done on the beginning of the data but this can be a separate file ``` generator_test_param["type"] = "generator" # type of collection generator_test_param["name"] = "EphysGenerator" # Name of object in the collection generator_test_param[ "pre_post_frame" ] = 30 # Number of frames provided before and after the predicted frame generator_test_param["train_path"] = os.path.join( "deepinterpolation", "sample_data", "ephys_tiny_continuous.dat2", ) generator_test_param["batch_size"] = 100 generator_test_param["start_frame"] = 0 generator_test_param["end_frame"] = 1999 generator_test_param[ "pre_post_omission" ] = 1 # Number of frames omitted before and after the predicted frame generator_test_param["steps_per_epoch"] = -1 # No step necessary for testing as epochs are not relevant. -1 deactivates it.
``` Those are parameters used for the main data generator ``` generator_param["type"] = "generator" generator_param["steps_per_epoch"] = steps_per_epoch generator_param["name"] = "EphysGenerator" generator_param["pre_post_frame"] = 30 generator_param["train_path"] = os.path.join( "deepinterpolation", "sample_data", "ephys_tiny_continuous.dat2", ) generator_param["batch_size"] = 100 generator_param["start_frame"] = 2000 generator_param["end_frame"] = 7099 generator_param["pre_post_omission"] = 1 ``` Those are parameters used for the training process ``` training_param["type"] = "trainer" training_param["name"] = "transfer_trainer" training_param["run_uid"] = run_uid # Path to model to transfer and fine-tune training_param["model_path"] = "ephys/unet_single_ephys_1024_mean_absolute_error_2020_11_12_18_34_2020_11_12_18_34/2020_11_12_18_34_unet_single_ephys_1024_mean_absolute_error_2020_11_12_18_34_model.h5" training_param["batch_size"] = generator_test_param["batch_size"] training_param["steps_per_epoch"] = steps_per_epoch training_param[ "period_save" ] = 25 # the network model is saved at this regular number of epochs during training training_param["nb_gpus"] = 0 training_param["apply_learning_decay"] = 0 training_param[ "nb_times_through_data" ] = 1 # if you want to cycle through the entire data. Too many iterations will cause overfitting to noise training_param["learning_rate"] = 0.0001 training_param["pre_post_frame"] = generator_test_param["pre_post_frame"] training_param["loss"] = "mean_absolute_error" training_param[ "nb_workers" ] = 1 # this is to enable multiple threads for data generator loading.
Useful when this is slower than training training_param["model_string"] = ( "transfer" + "_" + training_param["loss"] + "_" + training_param["run_uid"] ) ``` Where to store ongoing training progress ``` jobdir = os.path.join( "ephys", training_param["model_string"] + "_" + run_uid, ) training_param["output_dir"] = jobdir try: os.mkdir(jobdir) except: print("folder already exists") ``` Here we create all json files that are fed to the training. This is used for recording purposes as well as input to the training process ``` path_training = os.path.join(jobdir, "training.json") json_obj = JsonSaver(training_param) json_obj.save_json(path_training) path_generator = os.path.join(jobdir, "generator.json") json_obj = JsonSaver(generator_param) json_obj.save_json(path_generator) path_test_generator = os.path.join(jobdir, "test_generator.json") json_obj = JsonSaver(generator_test_param) json_obj.save_json(path_test_generator) path_network = os.path.join(jobdir, "network.json") json_obj = JsonSaver(network_param) json_obj.save_json(path_network) ``` Here we create all objects for training. ``` # We find the generator obj in the collection using the json file generator_obj = ClassLoader(path_generator) generator_test_obj = ClassLoader(path_test_generator) # We find the training obj in the collection using the json file trainer_obj = ClassLoader(path_training) # We build the generators object. This will, among other things, calculate normalizing parameters. train_generator = generator_obj.find_and_build()(path_generator) test_generator = generator_test_obj.find_and_build()(path_test_generator) # We build the training object. training_class = trainer_obj.find_and_build()(train_generator, test_generator, path_training) ``` Start training. This can take a very long time. ``` training_class.run() ``` Finalize and save output of the training. ``` training_class.finalize() ```
``` #hide from fastai2_audio import * ``` # Fastai2 Audio > An audio module for v2 of fastai. We want to help you build audio machine learning applications while minimizing the need for audio domain expertise. Currently under development. # IMPORTANT This version of the library is no longer being supported. All of the future development can be found here: https://github.com/fastaudio/fastaudio # Quick Start [Google Colab Notebook](https://colab.research.google.com/gist/PranY/ba0245752fff8ec2eb645afcc13f74f6/music.ipynb) [Zachary Mueller's class](https://youtu.be/0IQYJNkAI3k?t=1665) ## Install In the future we will offer conda and pip installs, but as the code is rapidly changing, we recommend that only those interested in contributing and experimenting install for now. Everyone else should use [Fastai audio v1](https://github.com/mogwai/fastai_audio) To install: ``` pip install packaging pip install git+https://github.com/rbracco/fastai2_audio.git ``` If you plan on contributing to the library instead, you will need to do an editable install: ``` pip install packaging nbdev --upgrade git clone https://github.com/rbracco/fastai2_audio cd fastai2_audio nbdev_install_git_hooks pip install -e . ``` # Contributing to the library We are looking for contributors of all skill levels. If you don't have time to contribute, please at least reach out and give us some feedback on the library by posting in the [v2 audio thread](https://forums.fast.ai/t/fastai-v2-audio/53535) or contact us via PM [@baz](https://forums.fast.ai/u/baz/) or [@madeupmasters](https://forums.fast.ai/u/MadeUpMasters/) ### How to contribute Create issues, write documentation, suggest/add features, submit PRs. We are open to anything. A good first step would be posting in the [v2 audio thread](https://forums.fast.ai/t/fastai-v2-audio/53535) introducing yourself. ### How to submit a PR The first step to create a new PR is to install `jupyter notebook` or `jupyterlab`.
This library is built using [nbdev](https://nbdev.fast.ai/), meaning that all of the code, documentation and tests are written in notebooks, and the `.py` python files, online docs and CI tests are created by nbdev based on those notebooks. All of the files that you will change are in the `nbs/` folder. Your general workflow while developing will be: * Open the notebooks using jupyter * Change what is present * Save the file * Run `make` and nbdev will update the library files, documentation and run the tests all at once * Check if any errors occurred while running the last step, then fix them and run `make` again * Commit the changes ### Advanced PR tips * The command `nbdev_diff_nbs` can let you know if there is a difference between the local library and the notebooks. * If you made a change to the notebooks in one of the exported cells, you can update the library with `nbdev_build_lib` or `make fastai2`. Note that this command will only update the library code, so before any commit, you'll need to run `make` as usual to update the docs and run the tests. * If you made a change to the library, you can export it back to the notebooks with `nbdev_update_lib`. # Active Contributors - [kevinbird15](https://github.com/kevinbird15) - [mogwai](https://github.com/mogwai) - [rbracco](https://github.com/rbracco) - [Hiromis](https://github.com/hiromis) - [scart97](https://github.com/scart97)
# Unsupervised vs. Supervised Machine Translation In this tutorial, we'll use Adaptor to look at the **difference in accuracy** when training a domain-specific translator using standard **Supervised** vs. **Unsupervised objectives**. We'll also use one extra domain to estimate the distributional **robustness** of our translator throughout the adaptation. * For the supervised adaptation and evaluations, we'll use the standard *Sequence2Sequence* (i.e. MLE) objective. * For the unsupervised adaptation, we'll use Adaptor's unsupervised *BackTranslation* objective. #### Requirements ``` # %%capture !git clone https://github.com/gaussalgo/adaptor.git !pip install -e adaptor # add the utils directory to the working directory, so we can easily import it !mv adaptor/examples . ``` ### Dataset resolution We will use Adaptor's `OPUSDataset` wrapper available in [/examples](https://github.com/gaussalgo/adaptor/blob/master/examples/data_utils_opus.py) of github repo. For a supported set of domains, this wrapper will download or reload the cached dataset and parse it into `source` and `target` lists of strings. New domains can be added by [adding their urls](https://github.com/gaussalgo/adaptor/blob/db33e6e439babc68fe801a8946d87116ff44f170/examples/data_utils_opus.py#L10) from [https://opus.nlpl.eu](https://opus.nlpl.eu/). Note that `data_utils_opus.py` also takes care of deduplicating the datasets - if a sample with a given source text was already loaded, it will be skipped in the next-loaded `OPUSDataset`s. This is to make sure that no data leakage between train and validation splits exists.
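The deduplication guard can be sketched as follows (a simplified illustration of the idea, not the actual `data_utils_opus.py` code):

```python
def drop_seen_sources(pairs, seen):
    # keep only pairs whose source text has not been loaded before,
    # updating `seen` so later-loaded splits skip them too
    kept = []
    for src, tgt in pairs:
        if src not in seen:
            seen.add(src)
            kept.append((src, tgt))
    return kept

seen = set()
train = drop_seen_sources([("Ahoj", "Hello"), ("Ahoj", "Hello")], seen)
val = drop_seen_sources([("Ahoj", "Hello"), ("Dobrý den", "Good day")], seen)
print(train, val)  # [('Ahoj', 'Hello')] [('Dobrý den', 'Good day')]
```

Because the train split registers its sources first, a duplicated sentence can never appear again in the validation or test splits.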
``` from examples.data_utils_opus import OPUSDataset src_lang = "cs" tgt_lang = "en" data_dir = "examples" val_size = 100 test_size = 1000 wiki_pairs = OPUSDataset("wikimedia", "train", src_lang, tgt_lang, data_dir=data_dir) wiki_val_pairs = OPUSDataset("wikimedia", "val", src_lang, tgt_lang, data_dir=data_dir, firstn=val_size) wiki_test_pairs = OPUSDataset("wikimedia", "test", src_lang, tgt_lang, data_dir=data_dir, firstn=test_size) opensub_pairs = OPUSDataset("OpenSubtitles", "train", src_lang, tgt_lang, data_dir=data_dir, firstn=val_size) opensub_val_pairs = OPUSDataset("OpenSubtitles", "val", src_lang, tgt_lang, data_dir=data_dir, firstn=val_size) opensub_test_pairs = OPUSDataset("OpenSubtitles", "test", src_lang, tgt_lang, data_dir=data_dir, firstn=test_size) wiki_pairs.source[:10] ``` ^^ this is a default format of the **samples** of Sequence2Sequence and inherited objectives ``` wiki_pairs.target[:10] ``` ^^ this is a default format of the **labels** of Sequence2Sequence and inherited objectives ## Running adaptation As our base translator for adaptation, we pick a general Helsinki-NLP model. This model has an architecture of the Transformer-base and has been pre-trained on a bulk dump of a subset of OPUS domains. Likely, it has already been exposed to our domains of adaptation and evaluation (Wiki & OpenSubtitles). ``` from adaptor.lang_module import LangModule lang_module = LangModule("Helsinki-NLP/opus-mt-%s-%s" % (src_lang, tgt_lang)) ``` Throughout the training, we evaluate model's BLEU. Using the identical interface, Adaptor allows you to evaluate any of other generative measures, that can better fit your task. Alternatively, you can relatively easily also implement your own generative evaluator. Take a look [here](https://github.com/gaussalgo/adaptor/blob/db33e6e439babc68fe801a8946d87116ff44f170/adaptor/evaluators/generative.py). 
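For intuition about the metric we track, BLEU combines modified n-gram precisions (n = 1..4) with a brevity penalty. A minimal single-reference sketch (for real scoring, use the library's BLEU evaluator or a package like sacrebleu):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_p = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(cand, n) & ngrams(ref, n)).values())  # clipped n-gram matches
        total = max(sum(ngrams(cand, n).values()), 1)
        log_p.append(math.log(max(overlap, 1e-9) / total))  # smooth zero counts
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))  # brevity penalty
    return bp * math.exp(sum(log_p) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # 1.0
```

A perfect match scores 1.0; shorter or partially-overlapping hypotheses are penalized by both the precisions and the brevity penalty.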
``` from adaptor.evaluators.generative import BLEU evaluators = [BLEU(additional_sep_char="▁", decides_convergence=True)] # "▁" is a specific separation token that sometimes remains after output decoding. ``` ### Supervised adaptation In the first experiment, we use the standard Sequence2Sequence objective (also called the MLE, Maximum Likelihood Estimation, objective). This objective maximises the probability of every subsequent token, conditioned on the given input and the correctly-generated preceding output. Adaptor Objectives provide a high-level interface, expecting both the input texts and output texts (=labels) in a form of: * either a `List[str]`, with the texts and labels of matching length * or paths to `.txt` files with one sample / label per line. ``` from adaptor.objectives.seq2seq import Sequence2Sequence seq_wiki = Sequence2Sequence(lang_module, texts_or_path=wiki_pairs.source, labels_or_path=wiki_pairs.target, val_texts_or_path=wiki_val_pairs.source, val_labels_or_path=wiki_val_pairs.target, source_lang_id=src_lang, target_lang_id=tgt_lang, batch_size=8, val_evaluators=evaluators, objective_id="Wiki") ``` Using the same interface, we also instantiate objectives used only for evaluation. Note that in order to avoid initialising a separate head of the shared model, you need to pass the `share_other_objective_head` argument with a reference to another objective that will fully share its model with the new objective. The head of the evaluation objective would otherwise never be tuned. ``` eval_opensub = Sequence2Sequence(lang_module, texts_or_path=opensub_pairs.source, labels_or_path=opensub_pairs.target, val_texts_or_path=opensub_val_pairs.source, val_labels_or_path=opensub_val_pairs.target, source_lang_id=src_lang, target_lang_id=tgt_lang, batch_size=8, val_evaluators=evaluators, share_other_objective_head=seq_wiki, objective_id="Opensub") ``` Once we are done with the datasets, objectives and evaluators, we set up the `AdaptationArguments`.
These are a small extension of 🤗's [TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer?highlight=launch#transformers.TrainingArguments); the extra parameters are documented in the [AdaptationArguments definition](https://github.com/gaussalgo/adaptor/blob/db33e6e439babc68fe801a8946d87116ff44f170/adaptor/utils.py#L77).

```
from adaptor.utils import AdaptationArguments, StoppingStrategy

training_arguments = AdaptationArguments(output_dir="experiments",
                                         learning_rate=2e-5,
                                         stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED,
                                         stopping_patience=5,
                                         do_train=True,
                                         do_eval=True,
                                         warmup_steps=5000,
                                         gradient_accumulation_steps=4,
                                         logging_steps=100,
                                         eval_steps=100,
                                         save_steps=1000,
                                         num_train_epochs=10,
                                         evaluation_strategy="steps")
```

Then, we define a `Schedule`, which determines the order in which the selected `Objective`s are applied. If our training uses a single objective, we can pick any available Schedule - it makes no difference.

Finally, we define the main object, the `Adapter`: this is again merely a small adjustment of the 🤗 [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer?highlight=launch#transformers.Trainer) that takes care of iterating over the data according to the selected `Schedule`, collecting the `Objective`s' logs, and applying the selected multi-objective early-stopping strategy.

```
from adaptor.schedules import SequentialSchedule
from adaptor.adapter import Adapter

schedule = SequentialSchedule(objectives=[seq_wiki],
                              extra_eval_objectives=[eval_opensub],
                              args=training_arguments)

adapter = Adapter(lang_module, schedule, args=training_arguments)
adapter.train()
```

The training terminates when the selected `StoppingStrategy` is satisfied. There is a slightly larger list of options to pick from, covering a wider variety of multi-objective situations. See the [StoppingStrategy options](https://github.com/gaussalgo/adaptor/blob/db33e6e439babc68fe801a8946d87116ff44f170/adaptor/utils.py#L19).
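As a toy illustration (not Adaptor's actual implementation, and the behaviour of Adaptor's schedules is an assumption here) of why the choice of `Schedule` matters with multiple objectives, compare a sequential order, which exhausts one objective's batches before moving on, with a striding (round-robin) order that alternates between objectives batch by batch:

```python
from itertools import chain

def sequential(objective_batches):
    # One objective's batches are fully consumed before the next objective starts.
    return list(chain.from_iterable(objective_batches))

def striding(objective_batches):
    # Round-robin over the objectives' batch iterators until all are exhausted.
    iterators = [iter(b) for b in objective_batches]
    out = []
    while iterators:
        remaining = []
        for it in iterators:
            try:
                out.append(next(it))
                remaining.append(it)
            except StopIteration:
                pass
        iterators = remaining
    return out

obj_a = ["A1", "A2", "A3"]
obj_b = ["B1", "B2"]
print(sequential([obj_a, obj_b]))  # ['A1', 'A2', 'A3', 'B1', 'B2']
print(striding([obj_a, obj_b]))    # ['A1', 'B1', 'A2', 'B2', 'A3']
```

With a single training objective, as above, both orders degenerate to the same batch stream, which is why the choice makes no difference here.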
Let's quickly check whether the training was terminated by reaching the number of epochs (as the log says -- note the `Scheduler reached a termination condition` message in the log), or by our `stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED` (according to BLEU, since we've initialised it with `decides_convergence=True`).

```
import pandas as pd

# eval BLEU
pd.Series(seq_wiki.evaluations_history["eval"][evaluators[0]]).plot(figsize=(15, 7), grid=True)

# eval loss
pd.Series(seq_wiki.evaluations_history["eval"]["loss"]).plot(figsize=(15, 7), grid=True)
```

### Unsupervised adaptation

In this experiment, we'll see how far we can get without a large set of aligned supervised texts.

```
from adaptor.lang_module import LangModule

lang_module = LangModule("Helsinki-NLP/opus-mt-%s-%s" % (src_lang, tgt_lang))
```

We'll use the `BackTranslation` objective, which first translates the target texts into the source language using the given `BackTranslator` instance. This way, we do not need to provide the objective with an aligned set of samples, but we still need unsupervised data in the target language. Of course, the eventual quality of a model trained this way also heavily depends on the quality of the BackTranslator.

```
from adaptor.objectives.backtranslation import BackTranslation, BackTranslator

backtrans_wiki = BackTranslation(lang_module,
                                 back_translator=BackTranslator("Helsinki-NLP/opus-mt-%s-%s" % (tgt_lang, src_lang)),
                                 texts_or_path=wiki_pairs.target,
                                 val_texts_or_path=wiki_val_pairs.target,
                                 batch_size=8,
                                 share_other_objective_head=seq_wiki,
                                 objective_id="Wiki-Back")
```

All the other pieces of the puzzle remain the same. We'll initialise an extra evaluation objective, though, so that we can compare the logs of the two evaluation objectives based on their history.
```
eval_opensub_unsup = Sequence2Sequence(lang_module,
                                       texts_or_path=opensub_pairs.source,
                                       labels_or_path=opensub_pairs.target,
                                       val_texts_or_path=opensub_val_pairs.source,
                                       val_labels_or_path=opensub_val_pairs.target,
                                       source_lang_id=src_lang,
                                       target_lang_id=tgt_lang,
                                       batch_size=8,
                                       val_evaluators=evaluators,
                                       share_other_objective_head=seq_wiki,
                                       objective_id="Opensub")

from adaptor.schedules import SequentialSchedule
from adaptor.adapter import Adapter

schedule = SequentialSchedule(objectives=[backtrans_wiki],
                              extra_eval_objectives=[eval_opensub_unsup],
                              args=training_arguments)

adapter = Adapter(lang_module, schedule, args=training_arguments)
```

This training will take somewhat longer, though, since the back-translation of the samples is performed on the fly during the first epoch.

```
adapter.train()
```

## Analysis

Let's see how the supervised adaptation is doing in comparison with the unsupervised BackTranslation objective. Thanks to Adaptor's separation into objectives, we can conveniently take a look at the validation accuracies of the two experiments, separately for in-distribution and out-of-distribution BLEU.
```
import pandas as pd
import matplotlib.pyplot as plt  # for the shared legend

wiki_bleus_sup = pd.Series(seq_wiki.evaluations_history['eval'][evaluators[0]])
wiki_bleus_unsup = pd.Series(backtrans_wiki.evaluations_history['eval'][evaluators[0]])

index = range(0, len(wiki_bleus_sup)*training_arguments.eval_steps, training_arguments.eval_steps)
wiki_bleus_sup.index = index
wiki_bleus_unsup.index = index

wiki_bleus_sup.plot(figsize=(14, 5), grid=True, ylim=(0.75, 0.9), color="blue",
                    label="In-domain validation BLEU of supervised adaptation")
wiki_bleus_unsup.plot(figsize=(14, 5), grid=True, ylim=(0.75, 0.9), color="orange",
                      label="In-domain validation BLEU of unsupervised adaptation")

opensub_bleus_sup = pd.Series(eval_opensub.evaluations_history['eval'][evaluators[0]])
opensub_bleus_unsup = pd.Series(eval_opensub_unsup.evaluations_history['eval'][evaluators[0]])

index = range(0, len(opensub_bleus_sup)*training_arguments.eval_steps, training_arguments.eval_steps)
opensub_bleus_sup.index = index
opensub_bleus_unsup.index = index

opensub_bleus_sup.plot(figsize=(14, 5), grid=True, ylim=(0.75, 0.9), color="green",
                       label="Out-of-domain validation BLEU of supervised adaptation")
opensub_bleus_unsup.plot(figsize=(14, 5), grid=True, ylim=(0.75, 0.9), color="red",
                         label="Out-of-domain validation BLEU of unsupervised adaptation")
plt.legend()
```

## Evaluation

Finally, we evaluate the resulting models of both trainings, to see which performs better on a held-out set. Note that in both cases we've used `stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED`, so the evaluated model is the one obtained after meeting this stopping criterion.
```
supervised_model = seq_wiki.compatible_head_model
unsupervised_model = backtrans_wiki.compatible_head_model

evaluator = BLEU(additional_sep_char="▁")

def evaluate_bleu(dataset, model) -> float:
    references = []
    hypotheses = []
    device = model.device
    for src_text, ref_text in zip(dataset.source, dataset.target):
        references.append(ref_text)
        inputs = lang_module.tokenizer(src_text, truncation=True, return_tensors="pt").to(device)
        outputs = model.generate(**inputs)
        translations = lang_module.tokenizer.batch_decode(outputs, skip_special_tokens=True)
        hypotheses.append(translations[0])
    bleu = evaluator.evaluate_str(references, hypotheses)
    return bleu
```

### In-distribution BLEUs: supervised & unsupervised

```
bleu_id_sup = evaluate_bleu(wiki_test_pairs, supervised_model)
print("Test BLEU of supervised model on Wiki: %s" % bleu_id_sup)

bleu_id_unsup = evaluate_bleu(wiki_test_pairs, unsupervised_model)
print("Test BLEU of unsupervised model on Wiki: %s" % bleu_id_unsup)
```

### Out-of-distribution BLEUs: supervised & unsupervised

```
bleu_ood_sup = evaluate_bleu(opensub_test_pairs, supervised_model)
print("Test BLEU of supervised model on OpenSubtitles: %s" % bleu_ood_sup)

bleu_ood_unsup = evaluate_bleu(opensub_test_pairs, unsupervised_model)
print("Test BLEU of unsupervised model on OpenSubtitles: %s" % bleu_ood_unsup)
```
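For intuition about what the BLEU evaluator above measures, here is a toy single-reference BLEU-1: unigram precision multiplied by a brevity penalty that punishes hypotheses shorter than the reference. Real BLEU additionally averages modified n-gram precisions up to n = 4 and is computed at corpus level, so this sketch is only an illustration:

```python
import math
from collections import Counter

def bleu1(reference: str, hypothesis: str) -> float:
    """Toy single-reference BLEU-1: clipped unigram precision x brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    # Clipped overlap: each hypothesis token counts at most as often as in the reference.
    overlap = sum((Counter(ref) & Counter(hyp)).values())
    precision = overlap / len(hyp)
    brevity_penalty = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity_penalty * precision

print(round(bleu1("the cat sat on the mat", "the cat sat on the mat"), 3))  # → 1.0
print(round(bleu1("the cat sat on the mat", "a cat sat here"), 3))          # → 0.303
```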
```
import numpy as np
import pandas as pd
import keras
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, Lambda, MaxPool2D, BatchNormalization, Input, concatenate, K, Reshape, LSTM
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import RMSprop
from keras.callbacks import Callback, EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, TensorBoard
from keras.utils.np_utils import to_categorical
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import xml.etree.ElementTree as ET
import sklearn
import itertools
import cv2
import scipy
import os
import csv
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm import tqdm

class1 = {1:'NEUTROPHIL',2:'EOSINOPHIL',3:'MONOCYTE',4:'LYMPHOCYTE'}
class2 = {0:'Mononuclear',1:'Polynuclear'}

tree_path = '../input/dataset-master/dataset-master/Annotations'
image_path = '../input/dataset-master/dataset-master/JPEGImages'

# Sample image generation
image = cv2.imread(image_path+'/BloodImage_00002.jpg')
tree = ET.parse(tree_path+'/BloodImage_00002.xml')
try:
    image.shape
    print("Checked for shape. Shape is {}".format(image.shape))
except AttributeError:
    print("Error: Invalid shape.")

# Draw the annotated bounding boxes (R = RBC, W = WBC, P = platelet)
for elem in tree.iter():
    if 'object' in elem.tag or 'part' in elem.tag:
        for attr in list(elem):
            if 'name' in attr.tag:
                name = attr.text
            if 'bndbox' in attr.tag:
                for dim in list(attr):
                    if 'xmin' in dim.tag:
                        xmin = int(round(float(dim.text)))
                    if 'ymin' in dim.tag:
                        ymin = int(round(float(dim.text)))
                    if 'xmax' in dim.tag:
                        xmax = int(round(float(dim.text)))
                    if 'ymax' in dim.tag:
                        ymax = int(round(float(dim.text)))
                if name[0] == "R":
                    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 1)
                    cv2.putText(image, name, (xmin + 10, ymin + 15), cv2.FONT_HERSHEY_SIMPLEX, 1e-3 * image.shape[0], (0, 255, 0), 1)
                if name[0] == "W":
                    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 0, 255), 1)
                    cv2.putText(image, name, (xmin + 10, ymin + 15), cv2.FONT_HERSHEY_SIMPLEX, 1e-3 * image.shape[0], (0, 0, 255), 1)
                if name[0] == "P":
                    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (255, 0, 0), 1)
                    cv2.putText(image, name, (xmin + 10, ymin + 15), cv2.FONT_HERSHEY_SIMPLEX, 1e-3 * image.shape[0], (255, 0, 0), 1)

plt.figure(figsize=(16,16))
plt.imshow(image)
plt.show()

df1 = pd.read_csv('../input/dataset-master/dataset-master/labels.csv')
df1 = df1.drop(columns=['Unnamed: 0']).dropna()
df1
#reader = csv.reader(open('/dataset-master/labels.csv'))
# skip the header
y3 = df1[~df1["Category"].str.contains(",", na=False)]['Category']
y3
encoder = LabelEncoder()
encoder.fit(y3)
encoded_y = encoder.transform(y3)
counts = np.bincount(encoded_y)
print(counts)
fig, ax = plt.subplots()
plt.bar(list(range(5)), counts)
ax.set_xticklabels(('', 'Basophil', 'Eosinophil', 'Lymphocyte', 'Monocyte', 'Neutrophil'))
ax.set_ylabel('Number of Cells')

def get_data(folder):
    # Load the data and labels from the given folder.
    X = []
    y = []
    z = []
    for wbc_type in os.listdir(folder):
        print(wbc_type)
        if not wbc_type.startswith('.'):
            if wbc_type in ['NEUTROPHIL']:
                label = 1
                label2 = 1
            elif wbc_type in ['EOSINOPHIL']:
                label = 2
                label2 = 1
            elif wbc_type in ['MONOCYTE']:
                label = 3
                label2 = 0
            elif wbc_type in ['LYMPHOCYTE']:
                label = 4
                label2 = 0
            else:
                label = 5
                label2 = 0
            for image_filename in tqdm(os.listdir(folder + wbc_type)):
                img_file = cv2.imread(folder + wbc_type + '/' + image_filename)
                if img_file is not None:
                    img_file = cv2.resize(img_file, dsize=(80,60), interpolation=cv2.INTER_CUBIC)
                    img_arr = np.asarray(img_file)
                    X.append(img_arr)
                    y.append(label)
                    z.append(label2)
    X = np.asarray(X)
    y = np.asarray(y)
    z = np.asarray(z)
    return X,y,z

X_train, y_train, z_train = get_data('../input/dataset2-master/dataset2-master/images/TRAIN/')
X_test, y_test, z_test = get_data('../input/dataset2-master/dataset2-master/images/TEST/')

# Encode labels to hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
from keras.utils.np_utils import to_categorical
y_trainHot = to_categorical(y_train, num_classes = 5)
y_testHot = to_categorical(y_test, num_classes = 5)
z_trainHot = to_categorical(z_train, num_classes = 2)
z_testHot = to_categorical(z_test, num_classes = 2)
print(class1)
print(class2)

def plotHistogram(a):
    # Plot histogram of RGB pixel intensities
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    plt.imshow(a)
    plt.axis('off')
    histo = plt.subplot(1,2,2)
    histo.set_ylabel('Count')
    histo.set_xlabel('Pixel Intensity')
    n_bins = 30
    plt.hist(a[:,:,0].flatten(), bins= n_bins, lw = 0, color='r', alpha=0.5);
    plt.hist(a[:,:,1].flatten(), bins= n_bins, lw = 0, color='g', alpha=0.5);
    plt.hist(a[:,:,2].flatten(), bins= n_bins, lw = 0, color='b', alpha=0.5);

plotHistogram(X_train[1])
X_train=np.array(X_train)
X_train=X_train/255.0
X_test=np.array(X_test)
X_test=X_test/255.0
plotHistogram(X_train[1])

def rgb_to_grayscale(input):
    """Average out each pixel across its 3 RGB layers resulting in a grayscale image"""
    return K.mean(input, axis=3)

def rgb_to_grayscale_output_shape(input_shape):
    return input_shape[:-1]

# Helper functions: learning curves and confusion matrix
class MetricsCheckpoint(Callback):
    """Callback that saves metrics after each epoch"""
    def __init__(self, savepath):
        super(MetricsCheckpoint, self).__init__()
        self.savepath = savepath
        self.history = {}
    def on_epoch_end(self, epoch, logs=None):
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)
        np.save(self.savepath, self.history)

def plotKerasLearningCurve():
    plt.figure(figsize=(10,5))
    metrics = np.load('logs.npy', allow_pickle=True)[()]  # allow_pickle is needed for dict histories on newer NumPy
    filt = ['acc'] # try to add 'loss' to see the loss learning curve
    for k in filter(lambda x : np.any([kk in x for kk in filt]), metrics.keys()):
        l = np.array(metrics[k])
        plt.plot(l, c= 'r' if 'val' not in k else 'b', label='val' if 'val' in k else 'train')
        x = np.argmin(l) if 'loss' in k else np.argmax(l)
        y = l[x]
        plt.scatter(x,y, lw=0, alpha=0.25, s=100, c='r' if 'val' not in k else 'b')
        plt.text(x, y, '{} = {:.4f}'.format(x,y), size='15', color= 'r' if 'val' not in k else 'b')
    plt.legend(loc=4)
    plt.axis([0, None, None, None]);
    plt.grid()
    plt.xlabel('Number of epochs')
    plt.ylabel('Accuracy')

def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
    # This function prints and plots the confusion matrix.
    # Normalization can be applied by setting `normalize=True`.
    plt.figure(figsize = (5,5))
    if normalize:  # normalize before plotting, so the image and cell texts agree
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=90)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

def plot_learning_curve(history):
    plt.figure(figsize=(8,8))
    plt.subplot(1,2,1)
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.savefig('./accuracy_curve.png')
    #plt.clf()
    # summarize history for loss
    plt.subplot(1,2,2)
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.savefig('./loss_curve.png')

def runKerasCNNAugment(a,b,c,d,e, epochs, classes):
    # a,b: train images/labels; c,d: test images/labels; e: convolution stride
    batch_size = 128
    num_classes = len(b[0])
    # img_rows, img_cols = a.shape[1],a.shape[2]
    img_rows,img_cols=60,80
    input_shape = (img_rows, img_cols, 3)
    input_tensor = Input(shape=input_shape)

    # Creating CNN
    modelcnn = Sequential()
    modelcnn.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape,strides=e))
    modelcnn.add(Conv2D(64, (3, 3), activation='relu'))
    modelcnn.add(MaxPooling2D(pool_size=(2, 2)))
    modelcnn.add(Dropout(0.25))
    modelcnn.add(Flatten())
    cnn_output = modelcnn(input_tensor)

    # Creating RNN
    modelrnn = Lambda(rgb_to_grayscale, rgb_to_grayscale_output_shape)(input_tensor)
    modelrnn = LSTM(64, return_sequences=True, dropout=0.25, recurrent_dropout=0.25)(modelrnn)
    rnn_output = LSTM(64, dropout=0.25, recurrent_dropout=0.25)(modelrnn)

    # Merge the CNN bottleneck features and the RNN output by concatenation
    merge_layer = concatenate([cnn_output, rnn_output])
    last_process = Dense(128, activation='relu')(merge_layer)
    last_process = Dropout(0.5)(last_process)
    output_tensor = Dense(num_classes, activation='softmax')(last_process)

    model = Model(inputs=input_tensor, outputs=output_tensor)
    model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        zoom_range=0.1,
        rotation_range=10,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False)  # randomly flip images

    history = model.fit_generator(datagen.flow(a,b, batch_size=32), steps_per_epoch=len(a) / 32, epochs=epochs, validation_data = (c, d), callbacks = [MetricsCheckpoint('logs')])
    score = model.evaluate(c,d, verbose=0)
    print('\nKeras CNN #1C - accuracy:', score[1],'\n')
    y_pred = model.predict(c)
    map_characters = classes
    print('\n', sklearn.metrics.classification_report(np.where(d > 0)[1], np.argmax(y_pred, axis=1), target_names=list(map_characters.values())), sep='')
    Y_pred_classes = np.argmax(y_pred,axis=1)
    Y_true = np.argmax(d,axis=1)
    plotKerasLearningCurve()
    plt.show()
    plot_learning_curve(history)
    plt.show()
    confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
    plot_confusion_matrix(confusion_mtx, classes = list(classes.values()))
    plt.show()

runKerasCNNAugment(X_train,y_trainHot,X_test,y_testHot,1, 30, class1)
runKerasCNNAugment(X_train,z_trainHot,X_test,z_testHot,2, 30, class2)
```
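As a side note, the row-normalisation that `plot_confusion_matrix(normalize=True)` applies above can be shown on a toy 2x2 matrix: each row is divided by its true-class total, turning the cells into per-class recall fractions (the numbers here are made up for illustration):

```python
import numpy as np

# Toy confusion matrix: rows = true classes, columns = predicted classes
cm = np.array([[40, 10],
               [ 5, 45]])

# Divide each row by its total, as plot_confusion_matrix does with normalize=True
cm_norm = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
print(cm_norm)  # rows now sum to 1; the diagonal holds per-class recall
```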
## Lending Club Data
***

```
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn.svm import LinearSVC
from sklearn.svm import LinearSVR
import seaborn as sns
import matplotlib.pylab as pl
%matplotlib inline
#import matplotlib.pyplot as plt
```

### Columns of Interest

**loan_status** -- Current status of the loan<br/>
loan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time the credit department reduces the loan amount, then it will be reflected in this value.<br/>
int_rate -- Interest rate of the loan<br/>
grade -- LC assigned loan grade<br/>
sub_grade -- LC assigned loan sub-grade<br/>
purpose -- A category provided by the borrower for the loan request. -- **dummy**<br/>
annual_inc -- The self-reported annual income provided by the borrower during registration.<br/>
emp_length -- Employment length in years. Possible values are between 0 and 10, where 0 means less than one year and 10 means ten or more years. -- **dummy**<br/>
fico_range_low<br/>
fico_range_high<br/>
home_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER<br/>
tot_cur_bal -- Total current balance of all accounts<br/>
num_actv_bc_tl -- Number of active bank accounts<br/>
(*avg_cur_bal -- average current balance of all accounts*)<br/>
mort_acc -- Number of mortgage accounts<br/>
num_actv_rev_tl -- Number of currently active revolving trades<br/>
dti -- A ratio calculated using the borrower's total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower's self-reported monthly income.
pub_rec_bankruptcies -- Number of public record bankruptcies<br/>
delinq_amnt -- -----------<br/>
title --<br/>
mths_since_last_delinq -- The number of months since the borrower's last delinquency.<br/>
mths_since_recent_revol_delinq -- Months since most recent revolving delinquency.<br/>
total_cu_tl -- Number of finance trades<br/>
last_credit_pull_d -- The most recent month LC pulled credit for this loan<br/>

```
## 2015
df_app_2015 = pd.read_csv('data/LoanStats3d_securev1.csv.zip', compression='zip', low_memory=False,\
                          header=1)
df_app_2015.loan_status.unique()
df_app_2015.head(5)
df_app_2015['delinq_amnt'].unique()
df_app_2015.info(max_cols=111)
df_app_2015.groupby('title').loan_amnt.mean()
df_app_2015.groupby('purpose').loan_amnt.mean()
df_app_2015['emp_length'].unique()
```

## Descriptive Analysis

1. Annual income distribution
2. Total loan amount grouped by interest-rate chunks
3. Average loan amount grouped by grade
4. Average loan amount grouped by

```
## selected columns
df = df_app_2015.ix[:, ['loan_status','loan_amnt', 'int_rate', 'grade', 'sub_grade',\
                        'purpose',\
                        'annual_inc', 'emp_length', 'home_ownership',\
                        'fico_range_low','fico_range_high',\
                        'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc','num_actv_rev_tl',\
                        'pub_rec_bankruptcies','dti' ]]
df.head(3)
len(df.dropna())
df.shape
df.loan_status.unique()
len(df[df['loan_status']=='Fully Paid'])
len(df[df['loan_status']=='Default'])
len(df[df['loan_status']=='Charged Off'])
len(df[df['loan_status']=='Late (31-120 days)'])
df.info()
df.loan_status.unique()

## Convert applicable fields to numeric (I only select "Interest Rate" to use for this analysis)
df.ix[:,'int_rate'] = df.ix[:,['int_rate']]\
    .applymap(lambda e: pd.to_numeric(str(e).rstrip()[:-1], errors='coerce'))
df.info()
df = df.rename(columns={"int_rate": "int_rate(%)"})
df.head(3)
#len(df.dropna(thresh= , axis=1).columns)
df.describe()

# 1. Loan Amount distribution
#
# create plots and histogram to visualize total loan amounts
fig = pl.figure(figsize=(8,10))
ax1 = fig.add_subplot(211)
ax1.plot(range(len(df)), sorted(df.loan_amnt), '.', color='purple')
ax1.set_xlabel('Loan Applicant Count')
ax1.set_ylabel('Loan Amount ($)')
ax1.set_title('Fig 1a - Sorted Issued Loan Amount (2015)', size=15)

# histogram with upper bound 36000 to exclude too-large values
ax2 = fig.add_subplot(212)
ax2.hist(df.loan_amnt, range=(df.loan_amnt.min(), 36000), color='purple')
ax2.set_xlabel('Loan Amount -$', size=12)
ax2.set_ylabel('Counts',size=12)
ax2.set_title('Fig 1b - Sorted Issued Loan Amount (2015)', size=15)
```

**Fig 1a** shows the sorted issued loan amounts from low to high.<br/>
**Fig 1b** is a histogram showing the distribution of the issued loan amounts.

**Observation**<br/>
The loan amounts vary from $1,000 to $35,000, and the most frequent loan amounts issued are around $10,000.

```
inc_75 = df.describe().loc['75%', 'annual_inc']
count_75 = int(len(df)*0.75)

# 2. Applicant Annual Income Distribution
fig = pl.figure(figsize=(8,16))
ax0 = fig.add_subplot(311)
ax0.plot(range(len(df.annual_inc)), sorted(df.annual_inc), '.', color='blue')
ax0.set_xlabel('Loan Applicant Count')
ax0.set_ylabel('Applicant Annual Income ($)')
ax0.set_title('Fig 2a - Sorted Applicant Annual Income-all ($) (2015)', size=15)

# use 75% quantile to plot the graph and histograms -- excluding extreme values
inc_75 = df.describe().loc['75%', 'annual_inc']
inc_below75 = df.annual_inc[df.annual_inc <= inc_75]
count_75 = int(len(df)*0.75)

ax1 = fig.add_subplot(312)
ax1.plot(range(count_75), sorted(df.annual_inc)[:count_75], '.', color='blue')
ax1.set_xlabel('Loan Applicant Count')
ax1.set_ylabel('Applicant Annual Income ($)')
ax1.set_title('Fig 2b - Sorted Applicant Annual Income-75% ($) (2015)',size=15)

# histogram of incomes below the 75% quantile
ax2 = fig.add_subplot(313)
ax2.hist(df.annual_inc, range=(df.annual_inc.min(), inc_75), color='blue')
ax2.set_xlabel('Applicant Annual Income -$', size=12)
ax2.set_ylabel('Counts',size=12)
ax2.set_title('Fig 2c - Sorted Applicant Income-75% ($) (2015)',size=15)
```

**Fig 2a** and **Fig 2b** both show the sorted applicant annual income from low to high. The former includes extreme values, while the latter plots only the values below the 75% quantile, which looks more sensible.<br/>
**Fig 2c** is a histogram showing the distribution of the applicants' income (below the 75% quantile).

**Observation**<br/>
The most frequent annual incomes of the applicants are between $40,000 and $60,000.

```
# 3. Loan amount and Applicant Annual Income
# View all
pl.figure(figsize=(6,4))
pl.plot(df.annual_inc, df.loan_amnt, '.')
pl.ylim(0, 40000)
pl.xlim(0, 0.2e7) # df.annual_inc.max()
pl.title('Fig 3a - Loan Amount VS Applicant Annual Income_all', size=15)
pl.ylabel('Loan Amount ($)', size=15)
pl.xlabel('Applicant Annual Income ($)', size=15)
```

**Fig 3a** shows the approved loan amount against the applicants' annual income.<br/>

**Observation:**<br/>
We can see that there are a few people with very high self-reported income, while the majority of applicants have incomes below $100,000. These extreme values indicate a possibility of outliers.

**Method to deal with outliers**<br/>
Locate outliers using the Median-Absolute-Deviation (MAD) test and remove them for further analysis.
Pick samples to set the outlier range using the mean of the outlier boundaries -- the method could be improved by using random sampling.

```
# 3b
pl.figure(figsize=(6,4))
pl.plot(df.annual_inc, df.loan_amnt, '.')
pl.ylim(0, 40000)
pl.xlim(0, inc_75)
pl.title('Fig 3b - Loan Amount VS Applicant Annual Income_75%', size=15)
pl.ylabel('Loan Amount ($)', size=15)
pl.xlabel('Applicant Annual Income ($)', size=15)
```

**Fig 3b** is a plot of the loan amount VS applicant annual income with all extreme income amounts excluded.

**Observation:**<br/>
Now it is clearer to see that there is a rather rigid standard for determining loan amounts based on income; however, there are still exceptions (sparse points above the "division line").

```
pl.plot(np.log(df.annual_inc), np.log(df.loan_amnt), '.')

# 4. Average loan amount grouped by grade
mean_loan_grade = df.groupby('grade')['loan_amnt'].mean()
mean_loan_grade
sum_loan_grade = df.groupby('grade')['loan_amnt'].sum()
sum_loan_grade

fig = pl.figure(figsize=(8,12)) #16,5
ax0 = fig.add_subplot(211)
ax0.plot(range(len(mean_loan_grade)), mean_loan_grade, 'o', color='blue')
ax0.set_ylim(0, 23000)
ax0.set_xlim(-0.5, len(mean_loan_grade))
ax0.set_xticks(range(len(mean_loan_grade)))
ax0.set_xticklabels(('A','B','C','D','E','F','G'))
ax0.set_xlabel('Grade')
ax0.set_ylabel('Average Loan Amount ($)')
ax0.set_title('Fig 4a - Average Loan Amount by Grade ($) (2015)', size=15)

ax1 = fig.add_subplot(212)
ax1.plot(range(len(sum_loan_grade)), sum_loan_grade, 'o', color='brown')
ax1.set_ylim(0, 2.3e9)
ax1.set_xlim(-0.5, len(sum_loan_grade))
ax1.set_xticks(range(len(sum_loan_grade)))
ax1.set_xticklabels(('A','B','C','D','E','F','G'))
ax1.set_xlabel('Grade')
ax1.set_ylabel('Total Loan Amount ($)')
ax1.set_title('Fig 4b - Total Loan Amount by Grade ($) (2015)', size=15)
```

**Fig 4a** shows the average approved loan amounts corresponding to the grades determined by the Lending Club.<br/>
**Fig 4b** shows the total approved loan amounts corresponding to the grades determined by the Lending Club.<br/>

**Observation:**<br/>
It is interesting to see that the points in these two charts follow different trends -- the total loan amount gets higher from grade A to C and then falls to a very low level, while the average loan amount falls a little from grade A to grade B and then gradually increases as the grade goes from B to G (increasing by more than $5,000 from B to G).
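The Median-Absolute-Deviation test mentioned in the outlier discussion above can be sketched as follows. This is a common formulation using the 0.6745 modified z-score constant and a 3.5 threshold, shown as an illustration rather than this notebook's own implementation:

```python
import numpy as np

def mad_outliers(values, threshold=3.5):
    """Flag values whose modified z-score |0.6745 * (x - median) / MAD| exceeds threshold."""
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))  # median absolute deviation
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold

incomes = [40000, 52000, 61000, 48000, 55000, 2000000]  # one extreme self-reported income
print(mad_outliers(incomes))  # flags only the 2,000,000 income
```

Because the median and MAD are insensitive to a handful of extreme values, this test copes better with the heavy right tail of self-reported incomes than a mean/standard-deviation rule would.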
# Step 5 ## Predict behavioral performance from the match of the gradients We combine steps 2 and 4 to investigate a potential behavioral relevance of the match of time series to gradients ``` %matplotlib inline import numpy as np import h5py as h5 import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from scipy.stats import spearmanr from fgrad.predict import features_targets, predict_performance ``` ## Prepare the features Same as in Step 3, we simply calculate the average values for each 100-subject group ``` f = h5.File('/Users/marcel/projects/HCP/volumes_embedded_full.hdf5') d_LR = f['Working_memory/Run1'] d_RL = f['Working_memory/Run2'] labels = dict() labels['WM_fix'] = 0 labels['WM_0back'] = 1 labels['WM_2back'] = 2 # Block onsets expressed as TRs # We add 6 volumes (4.32 s) to each onset to take into account hemodynamic lag # and additional 4 volumes (2.88 s) to account for instruction nback_LR_2b = np.round(np.array([7.977, 79.369, 150.553, 178.689])/0.72).astype(int)+10 nback_LR_0b = np.round(np.array([36.159, 107.464, 221.965, 250.18])/0.72).astype(int)+10 nback_RL_2b = np.round(np.array([7.977, 79.369, 178.769, 250.22])/0.72).astype(int)+10 nback_RL_0b = np.round(np.array([36.159, 107.464, 150.567, 222.031])/0.72).astype(int)+10 nback_fix = np.array([88, 187, 286])+6 # Each block lasts for 27.5 seconds vols_2b_LR = np.concatenate([range(x,x+38) for x in nback_LR_2b]) vols_0b_LR = np.concatenate([range(x,x+38) for x in nback_LR_0b]) vols_2b_RL = np.concatenate([range(x,x+38) for x in nback_RL_2b]) vols_0b_RL = np.concatenate([range(x,x+38) for x in nback_RL_0b]) vols_fix = np.concatenate([range(x,x+22) for x in nback_fix]) vols_fix = np.concatenate([vols_fix, range(395, 405)]) # Targets nback_targets_LR = np.zeros(405) nback_targets_LR[vols_2b_LR] = 1 nback_targets_LR[vols_fix] = -1 nback_targets_RL = np.zeros(405) nback_targets_RL[vols_2b_RL] = 1 nback_targets_RL[vols_fix] = -1 # Get random group assignments subjects = 
f['Working_memory/Subjects'][...]

np.random.seed(123)
sind = np.arange(len(subjects))
G1 = sorted(np.random.choice(sind, 100, replace=False))
sind = np.delete(sind, G1)
G2 = sorted(np.random.choice(sind, 100, replace=False))
sind = np.delete(sind, G2)
G3 = sorted(np.random.choice(sind, 100, replace=False))

conds = ['WM_fix', 'WM_0back', 'WM_2back']
grads = [0, 1, 2]

# Group 1
f_WM_train1, t_WM_train1 = features_targets(data=d_LR, subjects=G1, inds=nback_targets_LR,
                                            condnames=conds, gradients=grads, labels=labels)
f_WM_train2, t_WM_train2 = features_targets(data=d_RL, subjects=G1, inds=nback_targets_RL,
                                            condnames=conds, gradients=grads, labels=labels)

# Group 2
f_WM_train3, t_WM_train3 = features_targets(data=d_LR, subjects=G2, inds=nback_targets_LR,
                                            condnames=conds, gradients=grads, labels=labels)
f_WM_train4, t_WM_train4 = features_targets(data=d_RL, subjects=G2, inds=nback_targets_RL,
                                            condnames=conds, gradients=grads, labels=labels)

# Group 3
f_WM_test1, t_WM_test1 = features_targets(data=d_LR, subjects=G3, inds=nback_targets_LR,
                                          condnames=conds, gradients=grads, labels=labels)
f_WM_test2, t_WM_test2 = features_targets(data=d_RL, subjects=G3, inds=nback_targets_RL,
                                          condnames=conds, gradients=grads, labels=labels)
```

## Prepare targets

Get the estimated d-prime and bias and try to predict them from the gradient match.

```
data_performance = pd.read_csv('../data/WM_SDT.csv', index_col=0)
data_performance.subject_id = data_performance.subject_id.astype('str')
data_performance = data_performance.reset_index(drop=True)

G1_ID = subjects[G1]
G2_ID = subjects[G2]
G3_ID = subjects[G3]

# Indices to include from the behavioral data
WM_ind_G1 = []
WM_ind_G2 = []
WM_ind_G3 = []

# These are the indices of features to remove because of missing data
remove_G1 = []
remove_G2 = []
remove_G3 = []

for i, n in enumerate(G1_ID):
    try:
        WM_ind_G1.append(np.where(data_performance.iloc[:, 0] == n)[0][0])
    except IndexError:
        remove_G1.append(i)

for i, n in enumerate(G2_ID):
    try:
        WM_ind_G2.append(np.where(data_performance.iloc[:, 0] == n)[0][0])
    except IndexError:
        remove_G2.append(i)

for i, n in enumerate(G3_ID):
    try:
        WM_ind_G3.append(np.where(data_performance.iloc[:, 0] == n)[0][0])
    except IndexError:
        remove_G3.append(i)
```

#### Prepare the vectors

```
t_WM_2b_train1 = data_performance['dprime_2b_LR'][WM_ind_G1]
t_WM_2b_train2 = data_performance['dprime_2b_RL'][WM_ind_G1]
t_WM_2b_train3 = data_performance['dprime_2b_LR'][WM_ind_G2]
t_WM_2b_train4 = data_performance['dprime_2b_RL'][WM_ind_G2]
t_WM_2b_test1 = data_performance['dprime_2b_LR'][WM_ind_G3]
t_WM_2b_test2 = data_performance['dprime_2b_RL'][WM_ind_G3]

t_WM_0b_train1 = data_performance['dprime_0b_LR'][WM_ind_G1]
t_WM_0b_train2 = data_performance['dprime_0b_RL'][WM_ind_G1]
t_WM_0b_train3 = data_performance['dprime_0b_LR'][WM_ind_G2]
t_WM_0b_train4 = data_performance['dprime_0b_RL'][WM_ind_G2]
t_WM_0b_test1 = data_performance['dprime_0b_LR'][WM_ind_G3]
t_WM_0b_test2 = data_performance['dprime_0b_RL'][WM_ind_G3]

f_WM_2b_t1 = np.delete(f_WM_train1[2::3], remove_G1, axis=0)
f_WM_2b_t2 = np.delete(f_WM_train2[2::3], remove_G1, axis=0)
f_WM_2b_t3 = np.delete(f_WM_train3[2::3], remove_G2, axis=0)
f_WM_2b_t4 = np.delete(f_WM_train4[2::3], remove_G2, axis=0)
f_WM_2b_test1 = np.delete(f_WM_test1[2::3], remove_G3, axis=0)
f_WM_2b_test2 = np.delete(f_WM_test2[2::3], remove_G3, axis=0)

f_WM_0b_t1 = np.delete(f_WM_train1[1::3], remove_G1, axis=0)
f_WM_0b_t2 = np.delete(f_WM_train2[1::3], remove_G1, axis=0)
f_WM_0b_t3 = np.delete(f_WM_train3[1::3], remove_G2, axis=0)
f_WM_0b_t4 = np.delete(f_WM_train4[1::3], remove_G2, axis=0)
f_WM_0b_test1 = np.delete(f_WM_test1[1::3], remove_G3, axis=0)
f_WM_0b_test2 = np.delete(f_WM_test2[1::3], remove_G3, axis=0)
```

## Predict performance

```
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import explained_variance_score
import statsmodels.api as sm
```

### 2-back

```
features_A_2b = np.vstack([f_WM_2b_t1, f_WM_2b_t2])
features_B_2b = np.vstack([f_WM_2b_t3, f_WM_2b_t4])
features_C_2b = np.vstack([f_WM_2b_test1, f_WM_2b_test2])

targets_A_2b = np.concatenate([t_WM_2b_train1, t_WM_2b_train2])
targets_B_2b = np.concatenate([t_WM_2b_train3, t_WM_2b_train4])
targets_C_2b = np.concatenate([t_WM_2b_test1, t_WM_2b_test2])

predict_performance(features_A_2b, targets_A_2b,
                    features_B_2b, targets_B_2b,
                    features_C_2b, targets_C_2b)

sns.set_style('whitegrid')
size = 6

# Scatter each gradient (columns) against the 2-back d-prime for each data split (rows)
fig = plt.figure(figsize=(15, 12))
splits = [(features_A_2b, targets_A_2b, 'Data split 1'),
          (features_B_2b, targets_B_2b, 'Data split 2'),
          (features_C_2b, targets_C_2b, 'Data split 3')]
for row, (feats, targs, split_label) in enumerate(splits):
    for col in range(3):
        ax = fig.add_subplot(3, 3, row * 3 + col + 1)
        ax.scatter(feats[:, col], targs, s=size)
        if col == 0:
            ax.set_ylabel(split_label, fontsize=18)
        if row == 0:
            ax.set_title('Gradient %d' % (col + 1), fontsize=18)
plt.tight_layout()

scaler = StandardScaler()
scaler.fit(features_A_2b[:, [1, 2]])

ols_2b = sm.OLS(targets_A_2b, sm.add_constant(scaler.transform(features_A_2b[:, [1, 2]])))
ols_2b = ols_2b.fit(cov_type="HC1")
print(ols_2b.summary())

pred_2b_B_A = ols_2b.predict(sm.add_constant(scaler.transform(features_A_2b[:, [1, 2]])))
pred_2b_B_C = ols_2b.predict(sm.add_constant(scaler.transform(features_C_2b[:, [1, 2]])))
print("Variance explained (A): %.2f" % explained_variance_score(targets_A_2b, pred_2b_B_A))
print("Variance explained (C): %.2f" % explained_variance_score(targets_C_2b, pred_2b_B_C))
```

### 0-back

```
scaler = StandardScaler()
scaler.fit(f_WM_0b_t1)

features_A_0b = np.vstack([f_WM_0b_t1, f_WM_0b_t2])
features_B_0b = np.vstack([f_WM_0b_t3, f_WM_0b_t4])
features_C_0b = np.vstack([f_WM_0b_test1, f_WM_0b_test2])

targets_A_0b = np.concatenate([t_WM_0b_train1, t_WM_0b_train2])
targets_B_0b = np.concatenate([t_WM_0b_train3, t_WM_0b_train4])
targets_C_0b = np.concatenate([t_WM_0b_test1, t_WM_0b_test2])

predict_performance(features_A_0b, targets_A_0b,
                    features_B_0b, targets_B_0b,
                    features_C_0b, targets_C_0b)

scaler = StandardScaler()
# keep the column selection 2-D: StandardScaler expects a (n_samples, n_features) array
scaler.fit(features_A_0b[:, [2]])

ols_0b = sm.OLS(targets_A_0b, sm.add_constant(scaler.transform(features_A_0b[:, [2]])))
ols_0b = ols_0b.fit(cov_type="HC1")
print(ols_0b.summary())

pred_0b_B_A = ols_0b.predict(sm.add_constant(scaler.transform(features_A_0b[:, [2]])))
pred_0b_B_C = ols_0b.predict(sm.add_constant(scaler.transform(features_C_0b[:, [2]])))
print("Variance explained (A): %.2f" % explained_variance_score(targets_A_0b, pred_0b_B_A))
print("Variance explained (C): %.2f" % explained_variance_score(targets_C_0b, pred_0b_B_C))

sns.set_style('whitegrid')
size = 6

# Same scatter grid as above, now for the 0-back condition
fig = plt.figure(figsize=(15, 12))
splits = [(features_A_0b, targets_A_0b, 'Data split 1'),
          (features_B_0b, targets_B_0b, 'Data split 2'),
          (features_C_0b, targets_C_0b, 'Data split 3')]
for row, (feats, targs, split_label) in enumerate(splits):
    for col in range(3):
        ax = fig.add_subplot(3, 3, row * 3 + col + 1)
        ax.scatter(feats[:, col], targs, s=size)
        if col == 0:
            ax.set_ylabel(split_label, fontsize=18)
        if row == 0:
            ax.set_title('Gradient %d' % (col + 1), fontsize=18)
plt.tight_layout()
```

## Plot relationships for winning models

```
sns.set_style('white')
dotsize = 10

fig = plt.figure(figsize=(10, 10))

ax = plt.subplot2grid((2, 2), (1, 0))
ax.scatter(features_B_2b[:, 1], targets_B_2b, s=dotsize)
ax.set_xlabel('Gradient 2', fontsize=16)
ax.set_ylabel('2-back D-Prime', fontsize=16)

ax = plt.subplot2grid((2, 2), (1, 1))
ax.scatter(features_B_2b[:, 2], targets_B_2b, s=dotsize)
ax.set_xlabel('Gradient 3', fontsize=16)
ax.set_ylabel('2-back D-Prime', fontsize=16)

ax = plt.subplot2grid((2, 2), (0, 0), colspan=2)
ax.scatter(features_B_0b[:, 2], targets_B_0b, s=dotsize)
ax.set_xlabel('Gradient 3', fontsize=16)
ax.set_ylabel('0-back D-Prime', fontsize=16)

fig.savefig('../figures/grad-performance_B.pdf')
```
<p><font size="6"><b>Scientific Python essentials</b></font></p>

> *Introduction to GIS scripting*
> *May, 2017*

> *© 2017, Stijn Van Hoey (<mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*

## Introduction

There is a large variety of packages available in Python to support research. Importing a package is like getting a piece of lab equipment out of a storage locker and setting it up on the bench for use in a project. Once a library is set up (imported), it can be used or called to perform many tasks.

In this notebook, we will focus on two fundamental packages within most scientific applications:

1. Numpy
2. Pandas

Furthermore, if plotting is required, this will be done with the matplotlib package (we only use `plot` and `imshow` in this tutorial):

```
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
```

## Numpy

### Introduction

NumPy is the fundamental package for **scientific computing** with Python. Information for the *freaks*:

* a powerful N-dimensional array/vector/matrix object
* sophisticated (broadcasting) functions
* function implementation in C/Fortran assuring good performance if vectorized
* tools for integrating C/C++ and Fortran code
* useful linear algebra, Fourier transform, and random number capabilities

*In short*: Numpy is the Python package to do **fast** calculations!

It is a community agreement to import the numpy package with the prefix `np` to identify the usage of numpy functions. Use `TAB` completion (type `np.` and press `TAB`) to check the available functions of numpy:

```
import numpy as np
# np.  # explore the namespace with TAB completion
```

Numpy provides many mathematical functions, which operate element-wise on a so-called **`numpy.ndarray`** data type (in short: `array`).

<div class="alert alert-info">
 <b>REMEMBER</b>:
 <ul>
  <li> There is a lot of functionality in Numpy. Knowing **how to find a specific function** is more important than knowing all functions...
</ul>
</div>

You were looking for some function to derive quantiles of an array...

```
np.lookfor("quantile")
```

Different ways to read the manual:

```
#?np.percentile
# help(np.percentile)
# use SHIFT + TAB
```

### Showcases

* You like to play board games, but you want to better know your chances of rolling a certain combination (sum) with two dice:

```
throws = 1000                             # number of rolls with the dice
stone1 = np.random.randint(1, 7, throws)  # outcome of each roll of dice 1 (integers 1-6)
stone2 = np.random.randint(1, 7, throws)  # outcome of each roll of dice 2 (integers 1-6)
total = stone1 + stone2                   # sum of each outcome
histogram = plt.hist(total, bins=11)      # plot as histogram (one bin per possible sum: 2..12)
```

* Consider a random 10x2 matrix representing cartesian coordinates (between 0 and 1), how to convert them to polar coordinates?

```
# random numbers (X, Y in 2 columns)
Z = np.random.random((10, 2))
X, Y = Z[:, 0], Z[:, 1]

# Distance
R = np.sqrt(X**2 + Y**2)

# Angle
T = np.arctan2(Y, X)         # array of angles in radians
Tdegree = T * 180 / np.pi    # if you like degrees more

# NEXT PART (purely for illustration)
# plot the cartesian coordinates
plt.figure(figsize=(14, 6))
ax1 = plt.subplot(121)
ax1.plot(Z[:, 0], Z[:, 1], 'o')
ax1.set_title("Cartesian")

# plot the polar coordinates
ax2 = plt.subplot(122, polar=True)
ax2.plot(T, R, 'o')
ax2.set_title("Polar")
```

* Rescale the values of a given array to the range [0-1] and mark zero values as NaN:

```
nete_bodem = np.load("../data/nete_bodem.npy")
plt.imshow(nete_bodem)
plt.colorbar(shrink=0.6)

nete_bodem_rescaled = (nete_bodem - nete_bodem.min()) / (nete_bodem.max() - nete_bodem.min())  # rescale
nete_bodem_rescaled[nete_bodem_rescaled == 0.0] = np.nan  # assign NaN to zero values

plt.imshow(nete_bodem_rescaled)
plt.colorbar(shrink=0.6)
```

(**Remark:** There is no GIS-component in the previous manipulation, these are pure element-wise operations on an array!)
### Creating numpy arrays

```
np.array([1, 1.5, 2, 2.5])
#np.array(anylist)
```

<div class="alert alert-warning">
 <b>R comparison:</b><br>
 <p>One could compare the numpy array to the R vector. It contains a single data type (character, float, integer) and operations are element-wise.</p>
</div>

Provide a range of values, with a begin, end and stepsize:

```
np.arange(5, 12, 2)
```

Provide a range of values, with a begin, end and number of values in between:

```
np.linspace(2, 13, 3)
```

Create arrays filled with zeros or ones:

```
np.zeros((5, 2)), np.ones(5)
```

Request the `shape` or the `size` of the arrays:

```
np.zeros((5, 2)).shape, np.zeros((5, 2)).size
```

And creating random numbers:

```
np.random.rand(5, 5)
# check with np.random. + TAB for sampling from other distributions!
```

Reading in from a binary file:

```
nete_bodem = np.load("../data/nete_bodem.npy")
plt.imshow(nete_bodem)
```

Reading in from a **text**-file:

```
nete_bodem_subset = np.loadtxt("../data/nete_bodem_subset.out")
plt.imshow(nete_bodem_subset)
```

### Slicing (accessing values in arrays)

This is equivalent to the slicing of a `list`:

```
my_array = np.random.randint(2, 10, 10)
my_array

my_array[:5], my_array[4:], my_array[-2:]

my_array[0:7:2]

sequence = np.arange(0, 11, 1)
sequence, sequence[::2], sequence[1::3]
```

Assign new values to items:

```
my_array[:2] = 10
my_array

my_array = my_array.reshape(5, 2)
my_array
```

With multiple dimensions, we get the option to slice along these dimensions:

```
my_array[0, :]
```

### Aggregation calculations

```
my_array = np.random.randint(2, 10, 10)
my_array

print('Mean value is', np.mean(my_array))
print('Median value is', np.median(my_array))
print('Std is', np.std(my_array))
print('Variance is', np.var(my_array))
print('Min is', my_array.min())
print('Element of minimum value is', my_array.argmin())
print('Max is', my_array.max())
print('Sum is', np.sum(my_array))
print('Prod', np.prod(my_array))
print('Unique values in this array are:', np.unique(my_array))
print('85% Percentile value is: ', np.percentile(my_array, 85))

my_other_array = np.random.randint(2, 10, 10).reshape(2, 5)
my_other_array
```

Use the argument `axis` to define the axis along which to calculate a specific statistic:

```
my_other_array.max(), my_other_array.max(axis=1), my_other_array.max(axis=0)
```

### Element-wise operations

```
my_array = np.random.randint(2, 10, 10)
my_array

print('Cumsum is', np.cumsum(my_array))
print('CumProd is', np.cumprod(my_array))
print('CumProd of 5 first elements is', np.cumprod(my_array)[4])

np.exp(my_array), np.sin(my_array)

my_array % 3  # == 0
```

Using the available function from the numpy library or using the object method?

```
np.cumsum(my_array) == my_array.cumsum()

my_array.dtype
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Check the documentation of both `np.cumsum()` and `my_array.cumsum()`. What is the difference?</li>
  <li>Why do we use brackets () to run `cumsum` and we do not use brackets when asking for the `dtype`?</li>
 </ul>
</div>

<div class="alert alert-info">
 <b>REMEMBER</b>:
 <ul>
  <li> `np.cumsum` calls a <b>method/function</b> from the numpy library with an array, e.g. `my_array`, as input
  <li> `my_array.cumsum()` is a <b>method/function</b> available to the object `my_array`
  <li> `dtype` is an attribute/characteristic of the object `my_array`
 </ul>
</div>

<div class="alert alert-danger">
 <ul>
  <li>It is all about calling a **method/function()** on an **object** to perform an action. The available methods are provided by the packages (or any function you write and import).
  <li>Objects also have **attributes**, defining the characteristics of the object (these are not actions)
 </ul>
</div>

```
my_array.cumsum()

my_array.max(axis=0)

my_array * my_array  # element-wise
```

<div class="alert alert-info">
 <b>REMEMBER</b>:
 <ul>
  <li> The operations work on all elements of the array at the same time, you don't need a <strike>`for` loop</strike>
 </ul>
</div>

What is the added value of the numpy implementation compared to 'basic' python?

```
a_list = range(1000)
%timeit [i**2 for i in a_list]

an_array = np.arange(1000)
%timeit an_array**2
```

### Boolean indexing and filtering (!)

This is a fancy term for making selections based on a **condition**!

Let's start with an array that contains random values:

```
row_array = np.random.randint(1, 20, 10)
row_array
```

Conditions can be checked (*element-wise*):

```
row_array > 5

boolean_mask = row_array > 5
boolean_mask
```

You can use this as a filter to select elements of an array:

```
row_array[boolean_mask]
```

or, also to change the values in the array corresponding to these conditions:

```
row_array[boolean_mask] = 20
row_array
```

or in short, turning the values that are now equal to 20 into -20:

```
row_array[row_array == 20] = -20
row_array
```

<div class="alert alert-warning">
 <b>R comparison:</b><br>
 <p>This is similar to conditional filtering in R on vectors...</p>
</div>

<div class="alert alert-danger">
 Understanding conditional selections and assignments is CRUCIAL!
</div>

This requires some practice...
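As a warm-up for the exercises, here is a small worked example (the `temperatures` array is made up for illustration) that combines the three moves practiced below: build a mask, count with it, and assign through it:

```python
import numpy as np

temperatures = np.array([18, 25, 31, 12, 28, 35, 9])

hot = temperatures > 24   # boolean mask from an element-wise comparison
n_hot = hot.sum()         # True counts as 1, False as 0 -> number of "hot" values
capped = temperatures.copy()
capped[hot] = 24          # conditional assignment: cap all hot values at 24

print(n_hot)              # 4
print(capped)             # [18 24 24 12 24 24  9]
```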
```
AR = np.random.randint(0, 20, 15)
AR
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Count the number of values in AR that are larger than 10 (note: you can count with True = 1 and False = 0)</li>
 </ul>
</div>

```
# %load ../notebooks/_solutions/02-scientific-python-introduction52.py
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Change all even numbers of `AR` into zero-values.</li>
 </ul>
</div>

```
# %load ../notebooks/_solutions/02-scientific-python-introduction53.py
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Change all even positions of the array AR into the value 30</li>
 </ul>
</div>

```
# %load ../notebooks/_solutions/02-scientific-python-introduction54.py
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Select all values above the 75th `percentile` of the following array AR2 and take the square root of these values</li>
 </ul>
</div>

```
AR2 = np.random.random(10)
AR2

# %load ../notebooks/_solutions/02-scientific-python-introduction56.py
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Convert all values -99.
of the array AR3 into NaN-values (Note that NaN values can be provided in float arrays as `np.nan`)</li>
 </ul>
</div>

```
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])

# %load ../notebooks/_solutions/02-scientific-python-introduction58.py
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Get an overview of the unique values present in the array `nete_bodem_subset`</li>
 </ul>
</div>

```
nete_bodem_subset = np.loadtxt("../data/nete_bodem_subset.out")

# %load ../notebooks/_solutions/02-scientific-python-introduction60.py
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Reclassify the values of the array `nete_bodem_subset` (binary filter):</li>
  <ul>
   <li>values lower than or equal to 100000 should be 0</li>
   <li>values higher than 100000 should be 1</li>
  </ul>
 </ul>
</div>

```
nete_bodem_subset = np.loadtxt("../data/nete_bodem_subset.out")

# %load ../notebooks/_solutions/02-scientific-python-introduction62.py
```

<div class="alert alert-info">
 <b>REMEMBER</b>:
 <ul>
  <li> No need to retain everything, but have the reflex to search in the documentation (online docs, SHIFT-TAB, help(), lookfor())!!
  <li> Conditional selections (boolean indexing) are crucial!
 </ul>
</div>

This is just touching the surface of Numpy in order to proceed to the next phase (Pandas and GeoPandas)... More extended material on Numpy is available online:

* http://www.scipy-lectures.org/intro/numpy/index.html (great resource to start with scientific Python!)
* https://github.com/stijnvanhoey/course_python_introduction/blob/master/scientific/numpy.ipynb (more extended version of the material covered in this tutorial)

## Pandas: data analysis in Python

### Introduction

For data-intensive work in Python, the Pandas library has become essential. Pandas originally meant **Pan**el **Da**ta, though many users probably don't know that.

What is pandas?
* Pandas can be thought of as **NumPy arrays with labels for rows and columns**, and better support for heterogeneous data types, but it's also much, much more than that. * Pandas can also be thought of as **R's data.frame** in Python. * Powerful for working with missing data, working with **time series** data, for reading and writing your data, for reshaping, grouping, merging your data,... Pandas documentation is available on: http://pandas.pydata.org/pandas-docs/stable/ ``` # community agreement: import as pd import pandas as pd ``` ### Data exploration Reading in data to DataFrame ``` surveys_df = pd.read_csv("../data/surveys.csv") surveys_df.head() # Try also tail() surveys_df.shape surveys_df.columns surveys_df.info() surveys_df.dtypes surveys_df.describe() ``` <div class="alert alert-warning"> <b>R comparison:</b><br> <p>See the similarities and differences with the R `data.frame` - e.g. you would use `summary(df)` instead of `df.describe()` :-)</p> </div> ``` surveys_df["weight"].hist(bins=20) ``` ### Series and DataFrames A Pandas **Series** is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created: ``` a_series = pd.Series([0.1, 0.2, 0.3, 0.4]) a_series a_series.index, a_series.values ``` Series do have an index and values (*a numpy array*!) and you can give the series a name (amongst other things) ``` a_series.name = "example_series" a_series a_series[2] ``` Unlike the NumPy array, though, this index can be something other than integers: ``` a_series2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd']) a_series2['c'] ``` A DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series object which share the same index. 
<img src="../img/schema-dataframe.svg" width=50%><br>

Note that in the IPython notebook, the dataframe will display in a rich HTML view:

```
surveys_df.head()

surveys_df["species_id"].head()
```

If you select a single column of a DataFrame, you end up with... a Series:

```
type(surveys_df), type(surveys_df["species_id"])
```

### Aggregation and element-wise calculations

Completely similar to Numpy, aggregation statistics are available:

```
print('Mean weight is', surveys_df["weight"].mean())
print('Median weight is', surveys_df["weight"].median())
print('Std of weight is', surveys_df["weight"].std())
print('Variance of weight is', surveys_df["weight"].var())
print('Min is', surveys_df["weight"].min())
print('Element of minimum value is', surveys_df["weight"].argmin())
print('Max is', surveys_df["weight"].max())
print('Sum is', surveys_df["weight"].sum())
print('85% Percentile value is: ', surveys_df["weight"].quantile(0.85))
```

Calculations are **element-wise**, e.g. adding the normalized weight (relative to its mean) as an additional column:

```
surveys_df['weight_normalised'] = surveys_df["weight"] / surveys_df["weight"].mean()
```

Pandas and Numpy collaborate well (Numpy methods can be applied on the DataFrame values, as these are actually numpy arrays):

```
np.sqrt(surveys_df["hindfoot_length"]).head()
```

**Groupby** provides the functionality to do an aggregation or calculation for each group:

```
surveys_df.groupby('sex')[['hindfoot_length', 'weight']].mean()
# Try yourself with min, max,...
```

<div class="alert alert-warning">
 <b>R comparison:</b><br>
 <p>Similar to groupby in R, i.e. working with factors</p>
</div>

### Slicing

<div class="alert alert-info">
 <b>ATTENTION!:</b><br><br>
 One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy.
<br><br>We now have to distinguish between:
 <ul>
  <li> selection by **label**: loc
  <li> selection by **position**: iloc
 </ul>
</div>

```
# example dataframe from scratch
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
        'population': [11.3, 64.3, 81.3, 16.9, 64.9],
        'area': [30510, 671308, 357050, 41526, 244820],
        'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries = countries.set_index('country')
countries
```

#### The shortcut []

```
countries['area']  # single []

countries[['area', 'population']]  # double [[]]

countries['France':'Netherlands']
```

#### Systematic indexing with loc and iloc

When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:

* `loc`: selection by label
* `iloc`: selection by position

```
countries.loc['Germany', 'area']

countries.loc['France':'Germany', ['area', 'population']]
```

Selecting by position with iloc works **similar to indexing numpy arrays**:

```
countries.iloc[0:2, 1:3]
```

### Boolean indexing

In short, similar to Numpy:

```
countries['area'] > 100000
```

Selecting by conditions:

```
countries[countries['area'] > 100000]

countries['size'] = np.nan  # create an empty new column
countries

countries.loc[countries['area'] > 100000, "size"] = 'LARGE'
countries.loc[countries['area'] <= 100000, "size"] = 'SMALL'
countries
```

### Combining DataFrames (!)

An important way to combine `DataFrames` is to use columns in each dataset that contain common values (a common unique id), as is done in databases. Combining `DataFrames` using a common field is called *joining*. Joining DataFrames in this way is often useful when one `DataFrame` is a “lookup table” containing additional data that we want to include in the other.
As an example, consider the availability of the species information in a separate lookup-table:

```
species_df = pd.read_csv("../data/species.csv", delimiter=";")
```

<div class="alert alert-success">
 <b>EXERCISE</b>:
 <ul>
  <li>Check the other `read_` functions that are available in the Pandas package yourself.
 </ul>
</div>

```
species_df.head()

surveys_df.head()
```

We see that both tables have a common identifier column (`species_id`), which we can use to join the two tables together with the command `merge`:

```
merged_left = pd.merge(surveys_df, species_df, how="left", on="species_id")
merged_left.head()
```

### Optional section: Pandas is great with time series

```
flowdata = pd.read_csv("../data/vmm_flowdata.csv", index_col=0, parse_dates=True)
flowdata.head()
```

<div class="alert alert-info">
 <b>REMEMBER</b>:
 <ul>
  <li> `pd.read_csv` provides a lot of built-in functionality to support this kind of operation when reading in a file! Check the **help** of the read_csv function...
 </ul>
</div>

The index provides many attributes to work with:

```
flowdata.index.year, flowdata.index.dayofweek, flowdata.index.dayofyear  #,...
```

Subselecting periods can be done by the string representation of dates:

```
flowdata["2012-01-01 09:00":"2012-01-04 19:00"].plot()
```

or shorter when possible:

```
flowdata["2009"].plot()
```

Combinations with other selection criteria are possible, e.g. to get all months with 30 days in the year 2009:

```
flowdata.loc[(flowdata.index.days_in_month == 30) & (flowdata.index.year == 2009), "L06_347"].plot()
```

Select all 'daytime' data (between 8h and 20h) for all days, station "L06_347":

```
flowdata[(flowdata.index.hour > 8) & (flowdata.index.hour < 20)].head()
# OR USE flowdata.between_time('08:00', '20:00')
```

A **very powerful method** is `resample`: converting the frequency of the time series (e.g. from hourly to daily data).
``` flowdata.resample('A').mean().plot() ``` A practical example is: Plot the monthly minimum and maximum of the daily average values of the `LS06_348` column ``` daily = flowdata['LS06_348'].resample('D').mean() # calculate the daily average value daily.resample('M').agg(['min', 'max']).plot() # calculate the monthly minimum and maximum values ``` Other plots are supported as well, e.g. a bar plot of the mean of the stations in year 2013 ``` flowdata['2013'].mean().plot(kind='barh') ``` Acknowledgments and Material * J.R. Johansson (robert@riken.jp) http://dml.riken.jp/~rob/ * http://scipy-lectures.github.io/intro/numpy/index.html * http://www.labri.fr/perso/nrougier/teaching/numpy.100/index.html
```
!pip install --upgrade git+https://github.com/EmGarr/kerod.git

# Useful for tensorboard
!pip install --upgrade grpcio

#%tensorflow_version 2.x
import tensorflow as tf

device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
```

# Download Pascal VOC

Download and preprocess Pascal VOC to the following format (required by od networks):

```python
dataset = {
    'images': A tensor of float32 and shape [1, height, width, 3],
    'images_info': A tensor of float32 and shape [1, 2],
    'bbox': A tensor of float32 and shape [1, num_boxes, 4],
    'labels': A tensor of int32 and shape [1, num_boxes],
    'num_boxes': A tensor of int32 and shape [1, 1],
    'weights': A tensor of float32 and shape [1, num_boxes]
}
```

```
import tensorflow as tf
import tensorflow_datasets as tfds

from kerod.dataset.preprocessing import preprocess, expand_dims_for_single_batch

ds_train, ds_info = tfds.load(name="voc", split="train", shuffle_files=True, with_info=True)
ds_train = ds_train.map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.map(expand_dims_for_single_batch, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)

ds_test = tfds.load(name="voc", split="test", shuffle_files=False)
ds_test = ds_test.map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(expand_dims_for_single_batch, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)

ds_info
```

# Load and train the network

```
from kerod.core.standard_fields import BoxField
from kerod.core.learning_rate_schedule import LearningRateScheduler
from kerod.model import factory
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint

# Number of classes of Pascal Voc
classes = ds_info.features['labels'].names
num_classes = len(classes)

model_faster_rcnn =
factory.build_model(num_classes) base_lr = 0.02 optimizer = tf.keras.optimizers.SGD(learning_rate=base_lr, momentum=0.9) model_faster_rcnn.compile(optimizer=optimizer, loss=None) callbacks = [LearningRateScheduler(base_lr, 1, epochs=[8, 10], init_lr=0.0001), TensorBoard(), ModelCheckpoint('.checkpoints/')] model_faster_rcnn.fit(ds_train, validation_data=ds_test, epochs=11, callbacks=callbacks) # Export a saved model for serving purposes model_faster_rcnn.export_for_serving('serving') ``` # Tensorboard ``` # Load TENSORBOARD %load_ext tensorboard # Start TENSORBOARD %tensorboard --logdir logs ```
<a href="https://colab.research.google.com/github/maithoi/CodeSignal/blob/arcade/Arcade.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# [$Markdown$](https://colab.research.google.com/github/gitedio/examples/blob/master/Working%20With%20Markdown%20Cells.ipynb#scrollTo=BMwHlyDAUm9v)

## getPoints

```
def getPoints(answers, p):
    # return an integer value: i + 1 for a correct answer at index i, otherwise the penalty -p
    questionPoints = lambda i, ans: i + 1 if ans else -p

    res = 0
    for i, ans in enumerate(answers):
        res += questionPoints(i, ans)
    return res

lst = [True, True, False, True]
getPoints(lst, -2)
```

## Is Test Solvable

```
def isTestSolvable(ids, k):
    digitSum = lambda number: 0 if number == 0 else (number % 10) + digitSum(number // 10)

    sm = 0
    for questionId in ids:
        sm += digitSum(questionId)
    return sm % k == 0

101 / 10
101 // 10
101 % 10
```

## Word Power

```
def wordPower(word):
    # ord(ch) & 31 maps 'a'/'A'..'z'/'Z' to the alphabet position 1..26
    num = {ch: ord(word[idx]) & 31 for idx, ch in enumerate(word)}
    print(num)
    return sum([num[ch] for ch in word])

wordPower('hello')
```

# Day4 (30/6)

## Cool Pairs

```
def coolPairs(a, b):
    uniqueSums = {(i + j) for i in a for j in b if (i * j) % (i + j) == 0}
    return len(uniqueSums)

a = [4, 5, 6, 7, 8]
b = [8, 9, 10, 11, 12]
coolPairs(a, b)
```

## Multiplication Table

```
multiplicationTable(n) = [[1, 2, 3, 4, 5],
                          [2, 4, 6, 8, 10],
                          [3, 6, 9, 12, 15],
                          [4, 8, 12, 16, 20],
                          [5, 10, 15, 20, 25]]
```

* Multiplication table of size `n × n`, i.e. a square matrix that has value `i * j` at the intersection of the ith row and the jth column (both 1-based).
``` def multiplicationTable(n): return [[(i+1)*(j+1) for j in range(n)] for i in range(n)] multiplicationTable(5) ``` # Dictionaries ## Unique Characters * Sort input string in ascending order by their ASCII codes - To get the ASCII code of character in python3 we used `ord()` function - [Get sorted using Lambda function](https://dev.to/zmbailey/getting-sorted-part-3-python-and-lambda-functions-28ij) ### Examples For `document = "Todd told Tom to trot to the timber"`, the output should be `uniqueCharacters(document) = [' ', 'T', 'b', 'd', 'e', 'h', 'i', 'l', 'm', 'o', 'r', 't']`. ``` def uniqueCharacters(document): return [char for char in sorted(list(set(document)), key=lambda x: ord(x))] document = "Todd told Tom to trot to the timber" uniqueCharacters(document) ``` # Day5 (1/7) ## Fix Result * Using map() function to apply another calculation to a given list ``` def fixResult(result): def fix(x): return x // 10 return list(map(fix, result)) result = [1, 10, 21, 22, 30] fixResult(result) ``` ## College Courses Given a list of `courses`, remove the courses with titles consisting of `x` letters and return the result. For `x = 7` and `courses = ["Art", "Finance", "Business", "Speech", "History", "Writing", "Statistics"]`, the output should be `collegeCourses(x, courses) = ["Art", "Business", "Speech", "Statistics"]`. 
``` def collegeCourses(x, courses): def shouldConsider(course): return len(course) != x return list(filter(shouldConsider, courses)) courses = ["Art", "Finance", "Business", "Speech", "History", "Writing", "Statistics"] collegeCourses(7, courses) ``` ## Create Histogram For `ch = '*'` and `assignments = [12, 12, 14, 3, 12, 15, 14]`, the output should be ``` createHistogram(ch, assignments) = ["************", "************", "**************", "***", "************", "***************", "**************"] ``` ``` def createHistogram(ch, assignments): return list(map(lambda i: ch*i, assignments)) ch = '$' assignments = [1, 5, 10, 15, 20] createHistogram(ch, assignments) ``` ## Least Common Denominator For `denominators = [2, 3, 4, 5, 6]`, the output should be `leastCommonDenominator(denominators) = 60`. ``` # Function to calculate LCM # The least common multiple (LCM) of numbers a and b is defined as the smallest number that is divisible by both a and b. def lcm(x, y): from math import gcd # fractions.gcd was removed in Python 3.9; use math.gcd instead return x * y // gcd(x, y) ``` For the given list of `denominators`, find the least common denominator by finding their `LCM`.
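Since Python 3.9 the standard library also exposes `math.lcm` directly, so the pairwise GCD helper above can be replaced entirely — a sketch, assuming Python ≥ 3.9:

```python
import math

def leastCommonDenominator(denominators):
    # math.lcm (Python >= 3.9) folds the LCM over all its arguments at once
    return math.lcm(*denominators)

print(leastCommonDenominator([2, 3, 4, 5, 6]))  # 60
```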
``` # Find LCM using GCD (Greatest Common Divisor) from the math library import functools import math def lcm(a, b): return abs(a*b) // math.gcd(a, b) def leastCommonDenominator(denominators): return functools.reduce(lambda x, y: abs(x*y) // math.gcd(x, y), denominators) denominators = [2, 3, 4, 5, 6] leastCommonDenominator(denominators) ``` # Day6 (2/7) ## Correct Scholarships `bestStudents` --> representing ids of the best students `scholarships` --> students that will get a scholarship `allStudents` --> all the students in the university `correctly distributed` ==> if all best students have it, but not all students in the university do --- For `bestStudents = [3, 5]`, `scholarships = [3, 5, 7]`, and `allStudents = [1, 2, 3, 4, 5, 6, 7]`, the output should be `correctScholarships(bestStudents, scholarships, allStudents) = true`; --- For `bestStudents = [3, 5]`, `scholarships = [3, 5]`, and `allStudents = [3, 5]`, the output should be `correctScholarships(bestStudents, scholarships, allStudents) = false`. --- For `bestStudents = [3]`, `scholarships = [1, 3, 5]`, and `allStudents = [1, 2, 3]`, the output should be `correctScholarships(bestStudents, scholarships, allStudents) = false` ``` a = [3, 5] b = [3, 5] set_a = set(a) set_b = set(b) set_a.issubset(set_b) print(a==b) ``` * flow of thoughts 1. check that all best students have scholarships 2. check that everyone with a scholarship is a student of the university 3.
check that not all students have scholarships * technical expression: [Check if a Python list contains all elements of another list](https://www.techbeamers.com/program-python-list-contains-elements/#custom-search) ``` def correctScholarships(bestStudents, scholarships, allStudents): return all(best in scholarships for best in bestStudents) and all(scholar in allStudents for scholar in scholarships) and set(scholarships) != set(allStudents) bestStudents = [3] scholarships = [1, 3, 5] allStudents = [1, 2, 3] correctScholarships(bestStudents, scholarships, allStudents) ``` # Day7 (3/7) ## Startup Name --- [Set Mathematics Notation](https://www.rapidtables.com/math/symbols/Set_Symbols.html) ``` companies = ["coolcompany", "nicecompany", "legendarycompany"] cmp1 = set(companies[0]) cmp2 = set(companies[1]) cmp3 = set(companies[2]) cmp1 cmp2 cmp3 # union of all characters (renamed to avoid shadowing the built-in all()) all_chars = cmp1.union(cmp2, cmp3) # intersection of all three sets common = cmp1.intersection(cmp2, cmp3) # objects that belong to cmp1 only only_cmp1 = cmp1.union(cmp2, cmp3) - cmp2.union(cmp3) # objects that belong to cmp3 only only_cmp3 = cmp1.union(cmp2, cmp3) - cmp1.union(cmp2) # objects that belong to cmp2 only only_cmp2 = cmp1.union(cmp2, cmp3) - cmp1.union(cmp3) # popular names are the objects that remain after subtracting, from the union of all three subsets, the objects unique to each subset and the objects common to all three res = all_chars - only_cmp1 - only_cmp2 - only_cmp3 - common list(sorted(list(res))) def startupName(companies): cmp1 = set(companies[0]) cmp2 = set(companies[1]) cmp3 = set(companies[2]) res = cmp1.union(cmp2,cmp3)-(cmp1.union(cmp2,cmp3)-cmp1.union(cmp2))-(cmp1.union(cmp2,cmp3)-cmp1.union(cmp3))-(cmp1.union(cmp2,cmp3)-cmp2.union(cmp3))-cmp1.intersection(cmp2,cmp3) return list(sorted(list(res))) companies = ["coolcompany", "nicecompany", "legendarycompany"] startupName(companies) ``` # Day8 (5/7) ## Words Recognition ``` # Use a lambda function word1 = 'program' word2 = 'develop' unq1 =
set(word1).difference(set(word2)) unq1 ''.join(sorted(unq1)) unq2 = set(word2).difference(set(word1)) unq2 ''.join(sorted(unq2)) ''.join(sorted(i for i in unq2)) def wordsRecognition(word1, word2): def getIdentifier(w1, w2): return ''.join(sorted(set(w1).difference(set(w2)))) return [getIdentifier(word1, word2), getIdentifier(word2, word1)] word1 = 'program' word2 = 'develop' wordsRecognition(word1, word2) ``` # Day9 (7/7/2020) ## Transpose Dictionary --- 1. Flow of thought for solving this problem * Sort the dictionary by value * Iterate through the dictionary --- 2. * Input --> initial dictionary. Both keys and values of the dictionary are guaranteed to be strings that contain only English letters. It is also guaranteed that all dictionary values are unique. `scriptByExtension = { "validate": "py", "getLimits": "md", "generateOutputs": "json" }` * Output --> Array of pairs `[extension, script]`, sorted `lexicographically` by the `extension`. `transposeDictionary(scriptByExtension) = [["json", "generateOutputs"], ["md", "getLimits"], ["py", "validate"]]` ``` def transposeDictionary(scriptByExtension): return [[value, key] for key, value in sorted(scriptByExtension.items(), key=lambda item: item[1])] scriptByExtension = { "validate": "py", "getLimits": "md", "generateOutputs": "json" } transposeDictionary(scriptByExtension) ``` # Collections — Container datatypes https://docs.python.org/3/library/collections.html # Day10 (6/10/7) ## Doodled Password ![Password Generation](https://drive.google.com/uc?export=view&id=1M5Vij47wbzjpU-YUMvz6UrU_peJDIc2H) --- Given a list of `digits` as they are written in the clockwise order, find all other combinations the password could possibly have.
Example For `digits = [1, 2, 3, 4, 5]`, the output should be `doodledPassword(digits) = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 1], [3, 4, 5, 1, 2], [4, 5, 1, 2, 3], [5, 1, 2, 3, 4]]` ``` digits = [1, 2, 3, 4, 5] from collections import deque n = len(digits) res = [deque(digits) for _ in range(n)] res # initial a a = deque([1,2,3,4,5]) a0 = a a0.rotate(0) a0 a a1 = a.copy() a1.rotate(-1) a1 a a2 = a.copy() a2.rotate(-2) a2 a a3 = a.copy() a3.rotate(-3) a3 a4 = a.copy() a4.rotate(-4) a4 # original a a res lst = [1,2,3,4,5] lst[-5] == lst[0] lst[4] == lst[-1] # use map to iterate a list # at each queue rotate it with # idx --> 0, 1, 2, 3, 4 # rotate --> 0, -1, -2, -3, -4 for idx, que in enumerate(res): print(idx, que) ``` # Flow of thoughts + [Getting index of item while processing a list using map](https://stackoverflow.com/questions/5432762/getting-index-of-item-while-processing-a-list-using-map-in-python) 1. use enumerate() to get the index and value of each element in the list + [Using deque from collections](https://docs.python.org/3/library/collections.html#collections.deque) 1.
use rotate() on each queue with the following rule: * the queue at index i (i = 0, 1, 2, ..., n-1) is rotated by the corresponding negative index ==> rotate(-i) ``` from collections import deque def doodledPassword(digits): n = len(digits) res = [deque(digits) for _ in range(n)] # consume the map purely for its rotate() side effects deque(map(lambda x: x[1].rotate(-x[0]), enumerate(res)), 0) return [list(d) for d in res] digits = [1, 2, 3, 4, 5] doodledPassword(digits) ``` # Day11 (11/7) ## Frequency Analysis ``` from collections import Counter def frequencyAnalysis(encryptedText): return Counter(encryptedText).most_common(1)[0][0] encryptedText = "$~NmiNmim$/NVeirp@dlzrCCCCfFfQQQ" frequencyAnalysis(encryptedText) encryptedText = "Agoodglassinthebishop'shostelinthedevil'sseattwenty-onedegreesandthirteenminutesnortheastandbynorthmainbranchseventhlimbeastsideshootfromthelefteyeofthedeath's-headabeelinefromthetreethroughtheshotfiftyfeetout." frequencyAnalysis(encryptedText) ``` # Itertools — Functions creating iterators for efficient looping https://docs.python.org/3/library/itertools.html For `pills = ["Notforgetan", "Antimoron", "Rememberin", "Bestmedicen", "Superpillsus"]`, the output should be `memoryPills(pills) = ["Bestmedicen", "Superpillsus", ""]`. ``` from itertools import ... def memoryPills(pills): gen = ... next(gen) return [next(gen) for _ in range(3)] ``` ``` pills = ["Notforgetan", "Antimoron", "Rememberin", "Bestmedicen", "Superpillsus"] from itertools import dropwhile filter_pill = iter(list(dropwhile(lambda x: len(x) % 2 != 0, pills)) + ['', '', '']) next(filter_pill) [next(filter_pill) for _ in range(3)] ``` # Day12 (17/7) # Efficient Iterating # Flow of Thought ## Varied types of iterating 1. Drop all elements until the first element with an even length 2. Keep all elements from that element onwards 3. Append three empty strings in case the remaining list has fewer than three elements 4.
Iterate effortlessly with next() to fetch the most effective medicines as needed ``` from itertools import dropwhile import time def memoryPills(pills): gen = iter(list(dropwhile(lambda x: len(x) % 2 != 0, pills)) + ['', '', '']) next(gen) return [next(gen) for _ in range(3)] pills = ["Notforgetan", "Antimoron", "Rememberin", "Bestmedicen", "Superpillsus"] # estimate the running time start_time = time.time() memoryPills(pills) print("--- %s seconds ---" % (time.time() - start_time)) ``` ## How range() works even with floats ## Iterators ``` from itertools import ... def floatRange(start, stop, step): gen = ... return list(gen) ``` ``` from itertools import takewhile, count def floatRange(start, stop, step): start = type(start + step)(start) gen = takewhile(lambda x: x < stop, count(start, step)) return list(gen) floatRange(start=-0.9, stop=0.45, step=0.2) ``` # Day13 (July 29) # Crazy Ball ## Sort a nested list, with each inner list also sorted ## Expected output ``` crazyball(players, k) = [["Newbie", "Ninja", "Trainee"], ["Newbie", "Ninja", "Warrior"], ["Newbie", "Trainee", "Warrior"], ["Ninja", "Trainee", "Warrior"]] ``` ``` from itertools import combinations def crazyball(players, k): # convert each tuple from combinations() into a list to match the expected output return [list(c) for c in combinations(sorted(players), k)] players = ["Ninja", "Warrior", "Trainee", "Newbie"] k = 3 crazyball(players, k) ``` ## Kth Permutations of numbers ``` ``` # Day14 (July 30) # Cracking Password For `digits = [1, 5, 2]`, `k = 2`, and `d = 3`, the output should be `crackingPassword(digits, k, d) = ["12", "15", "21", "51"]` ``` from itertools import ...
def crackingPassword(digits, k, d): def createNumber(digs): return "".join(map(str, digs)) return ``` ``` def createNumber(digs): return "".join(map(str, digs)) # CASE1 # digits = [1, 5, 2] # k = 2 # d = 3 # CASE2 digits = [4, 6, 0, 3] k = 4 d = 13 cn = createNumber(i for i in digits) cn from itertools import product lst_tup = list(product(cn, repeat=k)) lst_tup lst_str = [''.join(i) for i in lst_tup] lst_str lst_int = list(map(int, [i for i in lst_str])) lst_int lst_div = [i for i in lst_int if i % d == 0] lst_div # [i for i in list(map(int, [i for i in [''.join(i) for i in list(product(createNumber(i for i in digits), repeat=k))]])) if i % d == 0] sorted(list(map(str, [i for i in lst_div]))) from itertools import product def crackingPassword(digits, k, d): def createNumber(digs): return "".join(map(str, digs)) return sorted( # final steps --> sort the result lexicographically after zero-padding to a fixed length of k [i.zfill(k) for i in # deal with the fixed length of the string sorted( list(map(str, [i for i in # format with string type & sort lexicographically [i for i in list(map(int, [i for i in # combinations with replacement of the numbers generated from the set of digits [''.join(i) for i in list( product(createNumber(i for i in digits), repeat=k))]])) if i % d == 0 # keep only numbers divisible by d ]] )) ) ] ) digits = [4, 6, 0, 3] k = 4 d = 13 crackingPassword(digits, k, d) ``` ## How do I format a number with a variable number of digits in Python? ``` k = 4 '{num:{fill}{width}}'.format(num=123, fill='0', width=k) '01'.zfill(4) k = 3 [i.zfill(k) for i in ['12', '15', '21', '51']] ``` # Day15 (July 31) ## Kth Permutation # Pressure Gauges For `morning = [3, 5, 2, 6]` and `evening = [1, 6, 6, 6]`, the output should be `pressureGauges(morning, evening) = [[1, 5, 2, 6], [3, 6, 6, 6]]`.
``` morning = [3, 5, 2, 6] evening = [1, 6, 6, 6] concat = [morning, evening] concat for i in range(len(concat[0])): if concat[0][i] > concat[1][i]: concat[0][i], concat[1][i] = concat[1][i], concat[0][i] concat res_zip = list(zip(morning, evening)) res_zip ``` ## Sort each tuple inside a list ``` sort_lst = [sorted(i) for i in res_zip] sort_lst ``` ## unzip to get swapped elements at same index of two lists ``` list(zip(*sort_lst)) ``` ## Map tuple to list to output a list of lists ``` list(map(list, list(zip(*sort_lst)))) ``` ## Flow of integrating small function ``` # output a list of list [list(t) for t in list(zip(*[sorted(i) for i in list(zip(morning, evening))]))] ```
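The zip/sort/unzip steps explored above can be folded into a single `pressureGauges` function — a sketch:

```python
def pressureGauges(morning, evening):
    # pair the readings index-by-index, sort each pair, then unzip back into two lists
    pairs = [sorted(p) for p in zip(morning, evening)]
    return [list(t) for t in zip(*pairs)]

morning = [3, 5, 2, 6]
evening = [1, 6, 6, 6]
print(pressureGauges(morning, evening))  # [[1, 5, 2, 6], [3, 6, 6, 6]]
```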
# Building a Neural Movie Recommender System Despite its name (and the original purpose), `timeserio` is a general-purpose tool for rapid model development. In this example, we use it to train a state-of-the-art movie recommender system in a few lines of code. ## Dataset The MovieLens dataset is commonly used to benchmark recommender systems - see https://grouplens.org/datasets/movielens/100k/. Our task is to learn to predict how user $u$ would rate a movie $m$ ($r_{um}$) based on an available dataset of ratings $r_{ij}$. Importantly, each user has only given ratings to some of the movies, and each movie has only been rated by some of the users. This is a classic example of transfer learning, commonly known as collaborative filtering in the context of recommender systems. ## Introduction We make use of `keras` to define three models: - a *user embedder* that learns to represent each user's preference as a vector - a *movie embedder* that learns to represent each movie as a vector - a *rating model* that concatenates user and movie embedding networks, and applies a dense neural network to predict a (non-negative) rating By wrapping the three models in a `multinetwork` (of the `MultiNetworkBase` class), we can for example - train the rating model end-to-end, then use one of the embedding models - freeze one or both of the embedding models and re-train the dense layers, or - freeze the dense layers, and re-train embeddings for new users only To make our job even simpler, we further wrap our `multinetwork` in a `MultiModel` class, which allows us to take data directly from `pandas` DataFrames, and apply pre-processing pipelines if needed. ``` import os import numpy as np import pandas as pd from tqdm import tqdm import matplotlib.pyplot as plt import seaborn as sns plt.xkcd(); ``` ## Download data First, we download the freely available dataset and define a few helper functions for importing data. 
``` !mkdir -p datasets; cd datasets; wget http://files.grouplens.org/datasets/movielens/ml-100k.zip; unzip -o ml-100k.zip; rm ml-100k.zip def get_ratings(part='u.data'): """Return a DataFrame of user-movie ratings.""" return pd.read_csv( os.path.join('datasets/ml-100k', part), header=None, sep='\t', names=['user_id', 'item_id', 'rating', 'timestamp'], ).rename(columns={'item_id': 'movie_id'}) def get_users(): """Return a DataFrame of all users.""" return pd.read_csv( os.path.join('datasets/ml-100k', 'u.user'), header=None, sep='|', names=['user_id', 'age', 'gender', 'occupation', 'zip_code'], ).rename(columns={'item_id': 'movie_id'}) ITEM_PROPS = ['movie_id', 'movie_title', 'video_release_date', 'unknown', 'IMDb_URL'] GENRES = ['Action', 'Adventure', 'Animation', 'Childrens', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western'] def get_movies(): """Return a DataFrame of all movies.""" return pd.read_csv( os.path.join('datasets/ml-100k', 'u.item'), header=None, index_col=False, sep='|', encoding="iso-8859-1", names=ITEM_PROPS + GENRES, ) get_ratings().head(3) get_users().head(3) get_movies().head(3) ``` ## Define the model architecture We start by defining the network architecture. All we need to do is sub-class `MultiNetworkBase`, and define the `_model` method. - keyword arguments to `_model` are used to parametrise our network architecture, e.g. by specifying a settable number of neurons or layers - the `_model` method is expected to return a dictionary of `keras.models.Model` objects. 
``` from keras.layers import Input, Embedding, Dense, Concatenate, Flatten from keras.models import Model from timeserio.keras.multinetwork import MultiNetworkBase class MovieLensNetwork(MultiNetworkBase): def _model(self, user_dim=2, item_dim=2, max_user=10000, max_item=10000, hidden=8): user_input = Input(shape=(1,), name='user') item_input = Input(shape=(1,), name='movie') user_emb = Flatten(name='flatten_user')(Embedding(max_user, user_dim, name='embed_user')(user_input)) item_emb = Flatten(name='flatten_movie')(Embedding(max_item, item_dim, name='embed_movie')(item_input)) output = Concatenate(name='concatenate')([user_emb, item_emb]) output = Dense(hidden, activation='relu', name='dense')(output) output = Dense(1, name='rating')(output) user_model = Model(user_input, user_emb) item_model = Model(item_input, item_emb) rating_model = Model([user_input, item_input], output) rating_model.compile(optimizer='Adam', loss='mse', metrics=['mae']) return {'user': user_model, 'movie': item_model, 'rating': rating_model} ``` The three models are initialized on-demand, e.g. when we access `multinetwork.model`. Note that the inputs and embedding layers are shared, and therefore changes made to e.g. the `user` model are instantly available in the `rating` model. ``` multinetwork = MovieLensNetwork() multinetwork.model from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from keras.utils.layer_utils import print_summary SVG(model_to_dot(multinetwork.model['user'], rankdir='LR').create(prog='dot', format='svg')) SVG(model_to_dot(multinetwork.model['movie'], rankdir='LR').create(prog='dot', format='svg')) SVG(model_to_dot(multinetwork.model['rating']).create(prog='dot', format='svg')) print_summary(multinetwork.model['rating']) ``` ## From Multinetwork to Multimodel We can train a specific model by using its name. 
Note that we must provide `numpy` feature arrays to each input, and also an array of training labels: ```python multinetwork.fit([X_user, X_movie], y_rating, model='rating') ``` In our case, we could simply write `X_user = df["user_id"].values` etc. However, we prefer different models to be fed from one data source, typically a `pandas.DataFrame`, with any details of feature pre-processing, or input ordering, taken care of by encapsulated pipelines, providing an interface of the form ```python multimodel.fit(df, model='rating') ``` Let's work through the necessary steps - these may seem trivial for a simple problem, but save a lot of headaches when developing and deploying complex models. ### Define individual pipelines We start by defining a pipeline (a `scikit-learn` transformer) for each of the model inputs and labels: ``` from timeserio.preprocessing import PandasValueSelector user_pipe = PandasValueSelector('user_id') item_pipe = PandasValueSelector('movie_id') rating_pipe = PandasValueSelector('rating') ``` ### Group the pipelines in a `MultiPipeline` The `MultiPipeline` object provides a container for all the pipelines, with convenience features such as easy parameter access. All we need to do is provide a name for each pipeline: ``` from timeserio.pipeline import MultiPipeline multipipeline = MultiPipeline({ 'user_pipe': user_pipe, 'movie_pipe': item_pipe, 'rating_pipe': rating_pipe, }) ``` ### Connect pipelines to models To finish the plumbing exercise, we specify which pipeline connects to each input or output of each model using a *manifold*. Each key-value in the manifold has the form `model_name: (input_pipes, output_pipes)`, where `input_pipes` is either a single pipe name, or a list of pipe names (one per input). Similarly, the output pipes will have one or more pipe names, one per output of the model - we use `None` for models that we do not intend to train using supervised labels.
``` manifold = { 'user': ('user_pipe', None), 'movie': ('movie_pipe', None), 'rating': (['user_pipe', 'movie_pipe'], 'rating_pipe') } ``` ### Put it all together The `MultiModel` holds all three parts: - the `multinetwork` specifies the model architectures, and also training parameters and callbacks - the `multipipeline` specifies the feature processing pipelines - the `manifold` specifies which pipeline is plumbed to which input (or output) of which neural network model ``` from timeserio.multimodel import MultiModel multimodel = MultiModel( multinetwork=multinetwork, multipipeline=multipipeline, manifold=manifold ) ``` ## Fit the `MultiModel` We load one train-test split and fit our neural recommender system: ``` df_train = get_ratings('u1.base') df_val = get_ratings('u1.test') len(df_train), len(df_val) from kerashistoryplot.callbacks import PlotHistory # Note: `PlotHistory` callback is rather slow multimodel.fit( reset_weights=True, df=df_train, model='rating', validation_data=df_val, batch_size=4096, epochs=50, callbacks=[PlotHistory(batches=True, n_cols=2, figsize=(15, 8))] ) ``` The `multimodel` provides all the familiar methods such as `fit`, `predict`, or `evaluate`: ``` mse, mae = multimodel.evaluate(df_val, model="rating") print(f"MSE: {mse}, RMSE: {np.sqrt(mse)}, MAE: {mae}") ``` ## Cross-Validate our approach To evaluate how well the recommender system performs, we perform 5-fold cross-validation and compare scores against established benchmarks.
``` from sklearn.metrics import mean_absolute_error, mean_squared_error %%time folds = [1, 2, 3, 4, 5] folds_mse = [] folds_rmse = [] folds_mae = [] for fold in tqdm(folds, total=len(folds)): multimodel.multinetwork._init_model() df_train = get_ratings(f'u{fold}.base') df_val = get_ratings(f'u{fold}.test') multimodel.fit( df=df_train, model='rating', validation_data=df_val, batch_size=4096, epochs=50, verbose=0, reset_weights=True ) y_pred = multimodel.predict(df=df_val, model='rating') mse = mean_squared_error(df_val['rating'], y_pred) mae = mean_absolute_error(df_val['rating'], y_pred) folds_mse.append(mse) folds_rmse.append(np.sqrt(mse)) folds_mae.append(mae) print( f"5-fold Cross-Validation results: \n" f"RMSE: {np.mean(folds_rmse):.2f} ± {np.std(folds_rmse):.2f} \n" f"MAE: {np.mean(folds_mae):.2f} ± {np.std(folds_mae):.2f} \n" ) ``` Benchmarks for some modern algorithms can be seen e.g. at http://surpriselib.com/ or https://www.librec.net/release/v1.3/example.html - our approach is in fact competitive with the state of the art before any tuning! By using dense embeddings, we did not need any user features such as gender or age - all we need is to learn each user's preference embedding as part of our end-to-end model. We are now free to experiment with embedding dimensions for users and movies, or tweak the dense layers. ## Using multiple models We now use a trained `MultiModel` to inspect the embeddings. Because we defined user and movie embedders as independent models, we can simply call `.predict(..., model=...)` with different model names. ### User embeddings ``` user_df = get_users() embeddings = multimodel.predict(user_df, model='user') user_df['emb_0'] = embeddings[:, 0] user_df['emb_1'] = embeddings[:, 1] sns.scatterplot(x='emb_0', y='emb_1', hue='gender', size='age', data=user_df) ``` ### And the movie embeddings...
``` movie_df = get_movies() embeddings = multimodel.predict(movie_df, model='movie') movie_df['emb_0'] = embeddings[:, 0] movie_df['emb_1'] = embeddings[:, 1] ``` Out of curiosity, we can compute mean embeddings for movies tagged with each genre. ``` genre_df = pd.DataFrame() for genre in GENRES: mean = movie_df[movie_df[genre] == 1][['emb_0', 'emb_1']].mean() mean['genre'] = genre genre_df = genre_df.append(mean, ignore_index=True) ``` ### Movie and Genre embeddings ``` fig, axes = plt.subplots(ncols=2, figsize=(20, 8)) sns.scatterplot(x='emb_0', y='emb_1', data=movie_df, ax=axes[0], palette='bright') sns.scatterplot(x='emb_0', y='emb_1', hue='genre', data=genre_df, ax=axes[1], palette='bright') ``` We can even consider similarity between genres by: - computing centroid for each genre - performing hierarchical clustering on genre centroids - plotting the distance matrix with a fancy colour scheme ``` from sklearn.metrics import pairwise_distances from scipy.cluster import hierarchy X = genre_df[['emb_0', 'emb_1']].values Z = hierarchy.linkage(X) order = hierarchy.leaves_list(hierarchy.optimal_leaf_ordering(Z, X)) genre_df_ordered = genre_df.iloc[order] embs_ord = genre_df_ordered[['emb_0', 'emb_1']].values dist_ord = pairwise_distances(embs_ord) genres_ord = genre_df_ordered['genre'].values sns.heatmap(dist_ord, xticklabels=genres_ord, yticklabels=genres_ord, cmap='plasma_r'); ``` We see that the two genres furthest apart are *Horror* and *Musical*, while *Romance* and *Mystery* or *Crime* and *Adventure* evoke similar rating patterns! ## Freezing and partial updating Finally, we mention another key advantage of the `MultiModel` approach: partial re-training. Imagine we have a powerful production system, but new users register with our service every day. We don't want to re-train the full model, only the embeddings for new users. 
This is trivial: ```python multimodel.fit( trainable_models=['user'], df=df_new, model='rating', **kwargs ) ``` This will ensure that only the user embeddings are updated (and only for users present in `df_new`), while dense layer weights and movie embeddings remain frozen.
# Raw data ## Overview In this project we use the following raw data: * *Corine Land Cover* (CLC) from 2018 and information about the class nomenclature. * Sentinel-2 grid * Harmonized Landsat Sentinel-2 (HLS) This notebook describes the raw data collection and creation process. Thus all the data described here can be found in the *data/raw* folder. ``` %load_ext autoreload %autoreload 2 %matplotlib inline import geopandas as gpd import os import pandas as pd from pathlib import Path from shapely import wkt from urllib.request import urlretrieve import nasa_hls from src import configs prjconf = configs.ProjectConfigParser() ``` We will prepare the data for the following tiles in this notebook:
``` overwrite = False footprints_exist = [prjconf.get_path("Raw", "tile_footprint", tile).exists() for tile in tilenames] if not all(footprints_exist): tile_grid = gpd.read_file(path__tile_grid) for tilename in tilenames: path__tile_footprint = prjconf.get_path("Raw", "tile_footprint", tilename) if not Path(path__tile_footprint).exists() or overwrite: tile = tile_grid[tile_grid["name"] == tilename] tile = tile.to_crs(epsg=tile["epsg"].values[0]) tile["geometry"] = tile["utmWkt"].apply(wkt.loads) Path(path__tile_footprint).parent.mkdir(parents=True, exist_ok=True) tile.to_file(path__tile_footprint, driver="GPKG") ``` Fast access to important parameters and file paths: ``` print(prjconf.get_path("Raw", "tile_grid")) for tile in tilenames: print(prjconf.get_path("Raw", "tile_footprint", tile)) ``` ## Create raw data ### CLC We downloaded *Corine Land Cover - GeoPackage* dataset manually after registration from [Copernicus Land Monitoring Service](https://land.copernicus.eu/pan-european/corine-land-cover/clc2018?tab=download) and extracted the file into *data/raw/clc/clc2018_clc2018_v2018_20_geoPackage*. ### CLC - raster - OUTDATED We downloaded *Corine Land Cover Raster - 100m* dataset manually after registration from [Copernicus Land Monitoring Service](https://land.copernicus.eu/pan-european/corine-land-cover/clc2018?tab=download) and extracted the file into *data/raw/clc/clc2018_clc2018_v2018_20b2_raster100m*. The most important file is *data/raw/clc/clc2018_clc2018_v2018_20b2_raster100m/clc2018_clc2018_V2018.20b2.tif* We copied the CLC legend from the *CORINE LAND COVER LEGEND* table found on the [nomenclature site of clc.gios.gov.pl](http://clc.gios.gov.pl/index.php/9-gorne-menu/clc-informacje-ogolne/58-klasyfikacja-clc-2), pasted it into LibreOffice Calc and saved it as ';'-separated csv file under *data/raw/clc/clc_legend_raw.csv*. 
An extended legend with the empty cells filled up and the level 2 and 3 class indices added is created here in the following cell and can be found under *data/raw/clc/clc_legend.csv* once the cell has been executed. #### Legend **TODO**: Recover all the columns as created previously: path__clc_legend_raw = prjconf.get_path("Raw", "rootdir") / "clc" / "clc_legend_raw.csv" path__clc_legend = prjconf.get_path("Raw", "rootdir") / "clc" / "clc_legend.csv" if not path__clc_legend.exists(): clc_legend = pd.read_csv(path__clc_legend_raw, delimiter=";").iloc[0:44, :] clc_legend.columns = ["l1_name", "l2_name", "l3_name", "grid_code", "rgb"] clc_legend_ids = clc_legend["l3_name"].str[:5].str.split(".", expand=True) clc_legend["l1_id"] = clc_legend_ids[0].astype("uint8") clc_legend["l2_id"] = (clc_legend_ids[0] + clc_legend_ids[1]).astype("uint8") clc_legend["l3_id"] = (clc_legend_ids[0] + clc_legend_ids[1] + clc_legend_ids[2]).astype("int") clc_legend["l1_name"] = clc_legend["l1_name"].str[3::] clc_legend["l2_name"] = clc_legend["l2_name"].str[4::] clc_legend["l3_name"] = clc_legend["l3_name"].str[6::] clc_legend = clc_legend.fillna(method="ffill") clc_legend.to_csv(path__clc_legend, index=False) ``` clc_legend = prjconf.get_clc_legend() clc_legend ``` #### Vector data The most important file is *data/raw/clc/clc2018_clc2018_v2018_20_geoPackage/CLC2018_CLC2018_V2018_20.gpkg*. Let's first load the whole data and convert the ID column to a integer ID column (from *EU-< ID >* to *< ID >*). Then, we loop through the previously created tiles and select the polygons which are within the respective tile. We save all polygons and the ones with a area less then or equal to 50 ha and 100 ha as separate files. 
```
overwrite = False
clc = None

for tilename in tilenames:
    print("*" * 80)
    print(tilename)

    path__clc = prjconf.get_path("Raw", "clc", tilename)
    path__clc_lte100ha = prjconf.get_path("Raw", "clc_lte100ha", tilename)
    path__clc_lte50ha = prjconf.get_path("Raw", "clc_lte50ha", tilename)
    # THAT would make sense in one UTM zone but not over UTM zones ;)
    # path__clc_inner = prjconf.get_path("Raw", "clc_inner", tilename)
    # path__clc_inner_lte100ha = prjconf.get_path("Raw", "clc_inner_lte100ha", tilename)
    # path__clc_inner_lte50ha = prjconf.get_path("Raw", "clc_inner_lte50ha", tilename)

    if path__clc_lte50ha.exists() and not overwrite:
        # this is the last file we write, so if it exists we assume all the rest exist as well
        print(f"Skipping since {path__clc_lte50ha.name} exists.")
        continue

    if clc is None:
        # we need to load this only once
        path__clc_complete = prjconf.get_path("Raw", "clc_complete")
        clc = gpd.read_file(path__clc_complete)
        assert clc.ID.is_unique
        clc["pid"] = clc.ID.str.split("-", expand=True)[1].astype(int)
        assert clc.pid.is_unique

    # MORE PERFORMANT SOLUTION
    # https://gis.stackexchange.com/questions/270043/select-by-location-using-ogr2ogr-sqlite
    # EASY SOLUTION - WHAT WE DO HERE
    # https://gis.stackexchange.com/questions/279670/geopandas-equivalent-to-select-by-location

    # get the tile as a geodataframe
    path__tile_footprint = prjconf.get_path("Raw", "tile_footprint", tilename)
    # path__tile_footprint_inner = prjconf.get_path("Raw", "tile_footprint_inner", tilename)
    tile = gpd.read_file(path__tile_footprint)

    # apply a negative buffer
    # goal: later we want to select polygons for a given tile that are not also selected in other tiles
    # due to the overlapping tile footprints this happens without such an approach
    # THAT would make sense in one UTM zone but not over UTM zones ;)
    tile_buffered = tile.copy()
    # tile_buffered.geometry = tile.buffer(-4925)
    # tile_buffered.to_file(path__tile_footprint_inner, driver="GPKG")

    # convert the tile to the CRS of the CLC data,
    # then we can select the polygons in the tile
    tile_original_crs = tile.crs.copy()
    tile = tile.to_crs(clc.crs)
    # tile_buffered = tile_buffered.to_crs(clc.crs)

    idx = clc.within(tile.geometry[0])
    clc_tile = clc[idx]
    clc_tile.loc[:, "Code_18"] = clc_tile.loc[:, "Code_18"].astype(int)
    clc_tile = pd.merge(clc_tile,
                        clc_legend[["grid_code", "cid_l3", "cid_l2", "cid_l1"]],
                        how="left", left_on="Code_18", right_on="cid_l3")
    clc_tile = clc_tile[["pid", "cid_l3", "cid_l2", "cid_l1", "Area_Ha", "geometry"]]

    # and now, based on that, the polygons in the buffered tile
    # idx_buffered = clc_tile.within(tile_buffered.geometry[0])
    # clc_tile_buffered = clc_tile[idx_buffered]

    clc_tile = clc_tile.to_crs(tile_original_crs)
    # clc_tile_buffered = clc_tile_buffered.to_crs(tile_original_crs)

    print(f"Number of polygons within the tile : {clc_tile.shape[0]}")
    # print(f"Number of polygons within the buffered tile : {clc_tile_buffered.shape[0]}")
    clc_tile.to_file(path__clc, "GPKG")
    # clc_tile_buffered.to_file(path__clc_inner, "GPKG")

    idx = clc_tile["Area_Ha"] <= 100
    clc_tile_lte100ha = clc_tile[idx]
    # idx = clc_tile_buffered["Area_Ha"] <= 100
    # clc_tile_buffered_lte100ha = clc_tile_buffered[idx]
    print(f"Number of polygons <= 100 ha within the tile : {clc_tile_lte100ha.shape[0]}")
    # print(f"Number of polygons <= 100 ha within the buffered tile: {clc_tile_buffered_lte100ha.shape[0]}")
    clc_tile_lte100ha.to_file(path__clc_lte100ha, "GPKG")
    # clc_tile_buffered_lte100ha.to_file(path__clc_inner_lte100ha, "GPKG")

    idx = clc_tile["Area_Ha"] <= 50
    clc_tile_lte50ha = clc_tile[idx]
    # idx = clc_tile_buffered["Area_Ha"] <= 50
    # clc_tile_buffered_lte50ha = clc_tile_buffered[idx]
    clc_tile_lte50ha.to_file(path__clc_lte50ha, "GPKG")
    # clc_tile_buffered_lte50ha.to_file(path__clc_inner_lte50ha, "GPKG")
    print(f"Number of polygons <= 50 ha within the tile : {clc_tile_lte50ha.shape[0]}")
    # print(f"Number of polygons <= 50 ha within the buffered tile : {clc_tile_buffered_lte50ha.shape[0]}")
```
********************************************************************************
32UNU
Number of polygons within the tile : 6963
Number of polygons within the buffered tile : 5724
Number of polygons <= 100 ha within the tile : 5036
Number of polygons <= 100 ha within the buffered tile: 4138
Number of polygons <= 50 ha within the tile : 3106
Number of polygons <= 50 ha within the buffered tile : 2551
********************************************************************************
32UPU
Number of polygons within the tile : 5896
Number of polygons within the buffered tile : 4918
Number of polygons <= 100 ha within the tile : 4370
Number of polygons <= 100 ha within the buffered tile: 3642
Number of polygons <= 50 ha within the tile : 2707
Number of polygons <= 50 ha within the buffered tile : 2265
********************************************************************************
32UQU
Number of polygons within the tile : 5955
Number of polygons within the buffered tile : 4968
Number of polygons <= 100 ha within the tile : 4412
Number of polygons <= 100 ha within the buffered tile: 3708
Number of polygons <= 50 ha within the tile : 2719
Number of polygons <= 50 ha within the buffered tile : 2280
********************************************************************************
33UUP
Number of polygons within the tile : 6345
Number of polygons within the buffered tile : 5356
Number of polygons <= 100 ha within the tile : 4512
Number of polygons <= 100 ha within the buffered tile: 3819
Number of polygons <= 50 ha within the tile : 2635
Number of polygons <= 50 ha within the buffered tile : 2239
********************************************************************************
32TPT
Number of polygons within the tile : 4914
Number of polygons within the buffered tile : 4100
Number of polygons <= 100 ha within the tile : 3305
Number of polygons <= 100 ha within the buffered tile: 2754
Number of polygons <= 50 ha within the tile : 1888
Number of polygons <= 50 ha within the buffered tile : 1559
********************************************************************************
32TQT
Number of polygons within the tile : 4529
Number of polygons within the buffered tile : 3691
Number of polygons <= 100 ha within the tile : 2795
Number of polygons <= 100 ha within the buffered tile: 2254
Number of polygons <= 50 ha within the tile : 1624
Number of polygons <= 50 ha within the buffered tile : 1303
********************************************************************************
33TUN
Number of polygons within the tile : 4613
Number of polygons within the buffered tile : 3811
Number of polygons <= 100 ha within the tile : 2849
Number of polygons <= 100 ha within the buffered tile: 2336
Number of polygons <= 50 ha within the tile : 1643
Number of polygons <= 50 ha within the buffered tile : 1365

Fast access to important file paths:

```
for tilename in tilenames:
    print("*" * 80)
    print(tilename)
    path__clc = prjconf.get_path("Raw", "clc", tilename)
    path__clc_lte100ha = prjconf.get_path("Raw", "clc_lte100ha", tilename)
    path__clc_lte50ha = prjconf.get_path("Raw", "clc_lte50ha", tilename)
    print(path__clc)
    print(path__clc_lte100ha)
    print(path__clc_lte50ha)
```

### HLS

We download data from the [Harmonized Landsat Sentinel-2 (HLS) Product](https://hls.gsfc.nasa.gov/) with the [nasa_hls Python package](https://benmack.github.io/nasa_hls/build/html/index.html) in the following cell.
```
tile = tilenames[4]
print(tile)

df_datasets = nasa_hls.get_available_datasets(products=["L30"],
                                              years=[2018],
                                              tiles=[tile],
                                              return_list=False)
print(f"Number of scenes queried for tile {tile}: {df_datasets.shape[0]}")

dir__hls_tile = prjconf.get_path("Raw", "hls_tile", tile=tile)
nasa_hls.download_batch(dir__hls_tile, df_datasets)
# nasa_hls.get_metadata_from_hdf(fpath, fields=['cloud_cover', 'spatial_coverage'])

for tile in tilenames:
    df_datasets = nasa_hls.get_available_datasets(products=["L30"],
                                                  years=[2018],
                                                  tiles=[tile],
                                                  return_list=False)
    print(f"Number of scenes queried for tile {tile}: {df_datasets.shape[0]}")

    dir__hls_tile = prjconf.get_path("Raw", "hls_tile", tile=tile)
    create_path__hls_tile_lut = True
    nasa_hls.download_batch(dir__hls_tile, df_datasets)

    fpaths_hdfs = list(dir__hls_tile.glob("*.hdf"))
    for fpath in fpaths_hdfs:
        meta = nasa_hls.get_metadata_from_hdf(fpath, fields=['cloud_cover', 'spatial_coverage'])
        if len(meta) == 0:
            print("WARNING - deleting file since metadata could not be derived - probably corrupt.")
            print("path__hls_tile_lut will NOT be created!")
            create_path__hls_tile_lut = False
            files_to_remove = list((Path(fpath).parent).glob((Path(fpath).stem + "*")))
            print("Deleting:")
            print(files_to_remove)
            [os.remove(fp) for fp in files_to_remove]

    if not create_path__hls_tile_lut:
        print("SKIPPING - RUN THE LOOP AGAIN; THE FILES WERE PROBABLY NOT DOWNLOADED CORRECTLY.")
        continue

    path__hls_tile_lut = prjconf.get_path("Raw", "hls_tile_lut", tile=tile)
    if not path__hls_tile_lut.exists():
        hdf_files = list(dir__hls_tile.rglob("*.hdf"))
        df = nasa_hls.dataframe_from_hdf_paths(hdf_files)
        df["tile"] = tile
        df.to_csv(path__hls_tile_lut, index=False)
    else:
        pass  # TODO: it would be good to check if there are new files and rewrite the csv ONLY in that case
```

Fast access to important directories:

```
for tile in tilenames:
    print(prjconf.get_path("Raw", "hls_tile", tile))
    print(prjconf.get_path("Raw", "hls_tile_lut", tile=tile))
```
<a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/large_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

**<h3>Predict the documentation for PHP code using the CodeTrans transfer learning fine-tuning model</h3>**

<h4>You can make free predictions online through this <a href="https://huggingface.co/SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune">link</a></h4>
(When using the online prediction, you need to parse and tokenize the code first.)

**1. Load the necessary libraries, including Hugging Face Transformers**

```
!pip install -q transformers sentencepiece

from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
```

**2. Load the summarization pipeline and load it onto the GPU if available**

```
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)
```

**3. Give the code for summarization, then parse and tokenize it**

```
code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"  #@param {type:"raw"}

!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-php

from tree_sitter import Language, Parser

Language.build_library(
    'build/my-languages.so',
    ['tree-sitter-php']
)

PHP_LANGUAGE = Language('build/my-languages.so', 'php')
parser = Parser()
parser.set_language(PHP_LANGUAGE)

def get_string_from_code(node, lines):
    # appends to the global ``code_list`` defined below, before parsing
    line_start = node.start_point[0]
    line_end = node.end_point[0]
    char_start = node.start_point[1]
    char_end = node.end_point[1]
    if line_start != line_end:
        code_list.append(' '.join([lines[line_start][char_start:]] +
                                  lines[line_start+1:line_end] +
                                  [lines[line_end][:char_end]]))
    else:
        code_list.append(lines[line_start][char_start:char_end])

def my_traverse(node, code_list):
    lines = code.split('\n')
    if node.child_count == 0:
        get_string_from_code(node, lines)
    elif node.type == 'string':
        get_string_from_code(node, lines)
    else:
        for n in node.children:
            my_traverse(n, code_list)
    return ' '.join(code_list)

tree = parser.parse(bytes(code, "utf8"))
code_list = []
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
```

**4. Make the prediction**

```
pipeline([tokenized_code])
```
# Support Vector Machine (SVM)

Use the SVM model for voice recognition (male or female) based on several voice properties. GridSearchCV will be applied to determine the best input parameters for the SVC model.

Dataset obtained from Kaggle (https://www.kaggle.com/primaryobjects/voicegender)

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

dados = pd.read_csv('voice.csv')
dados.head()
```

Column description:

- meanfreq -> mean frequency (kHz)
- sd -> standard deviation of the frequency (kHz)
- median -> median frequency (kHz)
- Q25 -> first quartile (kHz)
- Q75 -> third quartile (kHz)
- IQR -> interquartile range (kHz)
- skew -> skewness
- kurt -> kurtosis
- sp.ent -> spectral entropy
- sfm -> spectral flatness
- mode -> mode of the frequency
- centroid -> frequency centroid
- peakf -> peak frequency
- meanfun -> mean fundamental frequency
- minfun -> minimum fundamental frequency
- maxfun -> maximum fundamental frequency
- meandom -> mean dominant frequency
- mindom -> minimum dominant frequency
- maxdom -> maximum dominant frequency
- dfrange -> range of the dominant frequency
- modindx -> modulation index
- label -> male or female

Displaying sample information:

```
dados.info()
```

```
dados.isna().sum()
```

Checking for null values:

```
dados.isnull().sum()
```

Converting the categorical variable label to numeric:

```
from sklearn.preprocessing import LabelEncoder

enconder = LabelEncoder()
dados['label'] = enconder.fit_transform(dados['label'])
dados.head()
```

Defining the X and Y variables:

```
X = dados.drop('label', axis=1).values
Y = dados['label'].values
```

Splitting into training and test samples:

```
from sklearn.model_selection import train_test_split

X_treino, X_teste, Y_treino, Y_teste = train_test_split(X, Y, test_size=0.25, random_state=0)
```

Applying the SVM model:

```
from sklearn.svm import SVC

modelo = SVC()
```

Applying GridSearchCV to determine the best model parameters:

```
from sklearn.model_selection import GridSearchCV

parametros = {'C': [1, 10, 20, 30, 40, 50, 60, 100],
              'kernel': ['linear', 'rbf', 'sigmoid'],
              'gamma': ['scale', 'auto']}

melhor_modelo = GridSearchCV(modelo, parametros, n_jobs=-1, cv=5, refit=True)
melhor_modelo.fit(X_treino, Y_treino)

modelo_final = melhor_modelo.best_estimator_
modelo_final.fit(X_treino, Y_treino)
Y_previsto = modelo_final.predict(X_teste)

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y_teste, Y_previsto)
cm

modelo_final.score(X_teste, Y_teste)
```
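`GridSearchCV` also records which parameter combination won and its cross-validated score. A small self-contained sketch on synthetic data (the voice dataset itself is not loaded here; the grid is deliberately tiny):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# synthetic stand-in for the voice data, just to illustrate the API
X, Y = make_classification(n_samples=200, n_features=5, random_state=0)

parametros = {'C': [1, 10], 'kernel': ['linear', 'rbf']}
busca = GridSearchCV(SVC(), parametros, cv=3, refit=True)
busca.fit(X, Y)

# the winning combination and its mean cross-validation accuracy
print(busca.best_params_)
print(busca.best_score_)
```

Inspecting `best_params_` this way shows which `C` and `kernel` the refitted `best_estimator_` actually uses.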
# Part 2

1. Vectorize words using Word2Vec, a deep learning technique.
2. Visualize the vectorized data with t-SNE.
3. Use a hybrid approach that combines deep learning with supervised learning via random forests.

### Word2Vec (Word Embedding to Vector)

Computers can only process numbers; text and images are stored as binary code. In tutorial_part_1 we used the Bag of Words concept to vectorize text so that machine learning algorithms could work with it.

- With one-hot encoding (e.g. [0000001000]) or Bag of Words, the vectors are very large and sparse, so neural nets do not perform well.
- The core idea: `if the surrounding words are similar, the meaning of the word is similar`.
- When training a word, the surrounding words are matched as labels and optimized.
- Each word is mapped to a `dense vector that encodes its meaning`.
- Word2Vec uses distributed text representations to capture similarity between concepts. For example, it understands that Paris and France are related in the same way as Berlin and Germany (capital and country).
- Real-time visualization of the word embedding process: [word embedding visual inspector](https://ronxin.github.io/wevi/)
- There are two techniques: CBOW and Skip-Gram.
- CBOW (continuous bag-of-words) predicts a single word from the whole context, so it is better suited to small datasets.
  - In the examples below it predicts the word that fills the blank __:
    1) __ is delicious. 2) Riding a __ is fun. 3) I ate twice as much as usual, so my __ hurts.
- Skip-Gram predicts the surrounding words from the target word, the reverse of CBOW. It treats each context-target pair as a new observation and is better suited to large datasets.
  - It predicts the words that can appear around the word `배` (which in Korean can mean pear, boat, times, or belly):
    1) The *배* is delicious. 2) Riding a *배* is fun. 3) I ate two *배* (times) as much as usual, so my *배* hurts.

```
# To keep the output short, warnings are suppressed here,
# but for real training it is recommended to comment out these two lines.
import warnings
warnings.filterwarnings('ignore')

import pandas as pd

train = pd.read_csv('data/labeledTrainData.tsv',
                    header=0, delimiter='\t', quoting=3)
test = pd.read_csv('data/testData.tsv',
                   header=0, delimiter='\t', quoting=3)
unlabeled_train = pd.read_csv('data/unlabeledTrainData.tsv',
                              header=0, delimiter='\t', quoting=3)

print(train.shape)
print(test.shape)
print(unlabeled_train.shape)

print(train['review'].size)
print(test['review'].size)
print(unlabeled_train['review'].size)

train.head()

# test does not have the sentiment (rating) column that train has
test.head()

# the work done in tutorial_part_1 was turned into a Python module so it can easily be reused
from KaggleWord2VecUtility import KaggleWord2VecUtility as KWVU

KWVU.review_to_wordlist(train['review'][0])[:10]

# preprocess the train data
sentences = []
for review in train["review"]:
    sentences += KWVU.review_to_sentences(review, remove_stopwords=False)

# preprocess the unlabeled_train data
for review in unlabeled_train["review"]:
    sentences += KWVU.review_to_sentences(review, remove_stopwords=False)

len(sentences)

sentences[0][:10]

sentences[1][:10]
```

## 1. Training the Word2Vec model

With the parsed sentences from preprocessing collected into a list, we are ready to train the model. Stopwords are not removed, so that the contextual meaning between words is preserved.

### Gensim
- [gensim: models.word2vec – Deep learning with word2vec](https://radimrehurek.com/gensim/models/word2vec.html)

### Word2Vec model parameters
- Architecture: skip-gram (default) or CBOW. Skip-gram is slower but gives better results.
- Training algorithm: hierarchical softmax (default) or negative sampling. The default works well here.
- Downsampling of frequent words: the Google documentation recommends values between .00001 and .001. Here, values closer to 0.001 appear to improve the accuracy of the final model.
- Word vector dimensionality: more features are not always better, but usually give a somewhat better model. Reasonable values range from tens to hundreds; here we use 300.
- Context / window size: how many context words should the training algorithm consider? Larger is better for hierarchical softmax, and around 10 is reasonable.
- Worker threads: the number of parallel processes to run. This is machine-dependent, but 4 to 6 works on most systems.
- Minimum word count: helps limit the vocabulary to meaningful words. Words that do not occur at least this many times across all documents are ignored. Between 10 and 100 is reasonable. Since this competition's data has 30 reviews per movie, we set the minimum word count to 40 to avoid attaching too much importance to individual movie titles. This results in a total vocabulary of about 15,000 words. Higher values also help keep the run time bounded.
```
import logging
logging.basicConfig(
    format='%(asctime)s : %(levelname)s : %(message)s',
    level=logging.INFO)

# parameter values
num_features = 300    # word vector dimensionality
min_word_count = 40   # minimum word count
num_workers = 4       # number of parallel threads
context = 10          # context window size
downsampling = 1e-3   # downsampling of frequent words

# initialize and train the model
from gensim.models import word2vec

model = word2vec.Word2Vec(sentences,
                          workers=num_workers,
                          size=num_features,
                          min_count=min_word_count,
                          window=context,
                          sample=downsampling)
model

# once training is done, unload the memory that is no longer needed
model.init_sims(replace=True)

model_name = '300features_40minwords_10text'
# model_name = '300features_50minwords_20text'
model.save(model_name)
```

## Exploring the Model Results

```
# find the word that does not match the others
model.wv.doesnt_match('man woman child kitchen'.split())

model.wv.doesnt_match("france england germany berlin".split())

# find the most similar words
model.wv.most_similar("man")
model.wv.most_similar("queen")

# words not in the vocabulary cannot be found
model.wv.most_similar("awful")
model.wv.most_similar("film")

# model.wv.most_similar("happy")
model.wv.most_similar("happi")  # after stemming
```

## 2. Visualizing the Word2Vec vectors with t-SNE

```
# reference: https://stackoverflow.com/questions/43776572/visualise-word2vec-generated-from-gensim
from sklearn.manifold import TSNE
import matplotlib as mpl
import matplotlib.pyplot as plt
import gensim
import gensim.models as g

# avoid broken minus signs in the plots
mpl.rcParams['axes.unicode_minus'] = False

model_name = '300features_40minwords_10text'
model = g.Doc2Vec.load(model_name)

vocab = list(model.wv.vocab)
X = model[vocab]

print(len(X))
print(X[0][:10])

tsne = TSNE(n_components=2)

# visualize only the first 100 words
X_tsne = tsne.fit_transform(X[:100, :])
# X_tsne = tsne.fit_transform(X)

df = pd.DataFrame(X_tsne, index=vocab[:100], columns=['x', 'y'])
df.shape
df.head(10)

fig = plt.figure()
fig.set_size_inches(40, 20)
ax = fig.add_subplot(1, 1, 1)
ax.scatter(df['x'], df['y'])
for word, pos in df.iterrows():
    ax.annotate(word, pos, fontsize=30)
plt.show()
```

## 3.
Computing average feature vectors and training a random forest

```
import numpy as np

def makeFeatureVec(words, model, num_features):
    """Average the word vectors in a given sentence."""
    # pre-initialize a zero-filled array for speed
    featureVec = np.zeros((num_features,), dtype="float32")
    nwords = 0.
    # index2word is the list of word names in the model's vocabulary;
    # convert it to a set for speed
    index2word_set = set(model.wv.index2word)
    # loop over the words and add each one that is in the model's vocabulary
    for word in words:
        if word in index2word_set:
            nwords = nwords + 1.
            featureVec = np.add(featureVec, model[word])
    # divide the result by the number of words to get the average
    featureVec = np.divide(featureVec, nwords)
    return featureVec

def getAvgFeatureVecs(reviews, model, num_features):
    # compute the average feature vector for each review word list
    # and return a 2D numpy array
    # initialize a counter
    counter = 0.
    # pre-allocate a 2D numpy array for speed
    reviewFeatureVecs = np.zeros((len(reviews), num_features), dtype="float32")
    for review in reviews:
        # print the status every 1000 reviews
        if counter % 1000. == 0.:
            print("Review %d of %d" % (counter, len(reviews)))
        # call the function defined above to build the average feature vector
        reviewFeatureVecs[int(counter)] = makeFeatureVec(review, model,
                                                         num_features)
        # increment the counter
        counter = counter + 1.
    return reviewFeatureVecs

# process using four workers via multiprocessing
def getCleanReviews(reviews):
    clean_reviews = []
    clean_reviews = KWVU.apply_by_multiprocessing(
        reviews["review"], KWVU.review_to_wordlist, workers=4)
    return clean_reviews

%time trainDataVecs = getAvgFeatureVecs(getCleanReviews(train), model, num_features)
%time testDataVecs = getAvgFeatureVecs(getCleanReviews(test), model, num_features)

from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(
    n_estimators=100, n_jobs=-1, random_state=2018)

# train the random forest
%time forest = forest.fit(trainDataVecs, train["sentiment"])

from sklearn.model_selection import cross_val_score

%time score = np.mean(cross_val_score(forest, trainDataVecs, train['sentiment'], cv=10, scoring='roc_auc'))
score

result = forest.predict(testDataVecs)

output = pd.DataFrame(data={"id": test["id"], "sentiment": result})
output.to_csv('data/Word2Vec_AverageVectors_{0:.5f}.csv'.format(score),
              index=False, quoting=3)

output_sentiment = output['sentiment'].value_counts()
print(output_sentiment[0] - output_sentiment[1])
output_sentiment

import seaborn as sns
%matplotlib inline

fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(12, 5)
sns.countplot(train['sentiment'], ax=axes[0])
sns.countplot(output['sentiment'], ax=axes[1])
```

![image.png](attachment:image.png)

```
# kaggle score : 0.82272
print(round(540/578*100, 2), "%")
```
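Stripped of the gensim model lookup, `makeFeatureVec` above is simply an average over the vectors of the in-vocabulary words; a minimal NumPy sketch with a made-up 4-dimensional vocabulary:

```python
import numpy as np

# hypothetical 4-dimensional vectors for three in-vocabulary words
vocab = {
    "movie": np.array([1.0, 0.0, 0.0, 0.0], dtype="float32"),
    "great": np.array([0.0, 1.0, 0.0, 0.0], dtype="float32"),
    "actor": np.array([0.0, 0.0, 1.0, 1.0], dtype="float32"),
}

def make_feature_vec(words, vocab, num_features=4):
    """Average the vectors of the words present in the vocabulary."""
    vec = np.zeros(num_features, dtype="float32")
    n = 0
    for w in words:
        if w in vocab:
            n += 1
            vec += vocab[w]
    return vec / n if n else vec

# averages 'movie' and 'great'; 'unknownword' is skipped
print(make_feature_vec(["movie", "great", "unknownword"], vocab))
```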
<a href="https://colab.research.google.com/github/bereml/iap/blob/master/libretas/1g_pytorch_apis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# APIs in PyTorch

Course: [Introducción al Aprendizaje Profundo](http://turing.iimas.unam.mx/~ricardoml/course/iap/). Instructors: [Bere](https://turing.iimas.unam.mx/~bereml/) and [Ricardo](https://turing.iimas.unam.mx/~ricardoml/) Montalvo Lezama.

---
---

In this notebook we take a brief look at the application programming interfaces ([API](https://es.wikipedia.org/wiki/Interfaz_de_programaci%C3%B3n_de_aplicaciones)s) that PyTorch provides.

## 1 Setup

### 1.1 Libraries

```
import math
# file system
import os
# random numbers
import random
# plotting
import matplotlib.pyplot as plt
# multidimensional arrays
import numpy as np
# csv
import pandas as pd
# neural networks
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
```

### 1.2 Helpers

```
# data directory
URL = 'https://raw.githubusercontent.com/gibranfp/CursoAprendizajeProfundo/master/data/califs/califs.csv'
data_dir = '../datos'
filename = 'califs.csv'
filepath = os.path.join(data_dir, 'califs.csv')

def set_seed(seed=0):
    """Initializes pseudo-random number generators."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```

## 2 Data

We download the data.

```
! mkdir {data_dir}
! wget -nc {URL} -O {filepath}
```

We load the data.

```
df = pd.read_csv(filepath)
df.head(5)
```

We plot the data to get a clearer idea of how it is distributed.
```
# take the input attribute and add a dimension
x_trn = np.array(df.iloc[:, 1], dtype="float32")[..., np.newaxis]
# take the output
y_trn = np.array(df.iloc[:, -1], dtype="float32")[..., np.newaxis]

# plot
plt.plot(x_trn, y_trn, '.', color='m', markersize=8)
plt.xlabel('horas de estudio')
plt.ylabel('calificación')
plt.show()

x_trn = np.array(df.iloc[:, :2], dtype="float32")
y_trn = np.array(df.iloc[:, -1], dtype="float32")[..., np.newaxis]
x_trn = torch.tensor(x_trn)
y_trn = torch.tensor(y_trn)
print(x_trn.shape)
print(y_trn.shape)
```

### 2.1 Dataset

To build batches we can use the [`TensorDataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.TensorDataset) class.

<img src="https://raw.githubusercontent.com/bereml/iap/master/fig/mnist_pipeline.png"/>

```
ds = TensorDataset(x_trn, y_trn)
ds[0]
```

### 2.2 Data loader

To see the data pipeline in action, we print the shape of each batch and its first element.

```
# batch size
batch_size = 16

# create a DataLoader
dl = DataLoader(ds, batch_size=batch_size, shuffle=True)

x, y = next(iter(dl))
print(f'x shape={x.shape} dtype={x.dtype}')
print(f'y shape={y.shape} dtype={y.dtype}')

len(ds)
```

## 3 Training loop

<img src="https://raw.githubusercontent.com/bereml/iap/master/fig/supervisado.svg" width="700"/>

```
def train(model, dl, epochs=5):
    # optimizer
    opt = optim.SGD(model.parameters(), lr=1e-3)
    # loss history
    loss_hist = []
    # training loop
    for epoch in range(epochs):
        # history
        loss_hist = []
        # train one epoch
        for x, y_true in dl:
            # run inference to get the logits
            y_lgts = model(x)
            # compute the loss
            loss = F.mse_loss(y_lgts, y_true)
            # zero the gradients
            opt.zero_grad()
            # backpropagate
            loss.backward()
            # update the parameters
            opt.step()
            # store the loss history
            loss_hist.append(loss.item() * 100)
        # print the epoch's loss
        loss = np.mean(loss_hist)
        print(f'E{epoch:02} loss=[{loss:6.2f}]')
```

## 4 Defining the architecture

To implement architectures, PyTorch defines two fundamental classes.

* `nn.Module` defines a neural network that can internally contain other nested neural networks (or layers). Three important methods are:
  * `__init__(self, args)`, the initializer that defines the object,
  * `forward(x)`, which runs the prediction (forward pass),
  * `parameters()`, which returns a list of the parameters (`nn.Parameter`) of the network and its nested networks.
* `nn.Parameter` wraps a tensor just to mark it as a parameter so that it is returned by `nn.Module.parameters()`.

### 4.1 High level (similar to Keras)

In this API it is enough to stack the layers (from the [`torch.nn`](https://pytorch.org/docs/stable/nn.html) package) using the [`nn.Sequential`](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential) class.

```
model1 = nn.Sequential(
    nn.Linear(2, 2),
    nn.ReLU(),
    nn.Linear(2, 1),
)
print(model1)
```

### 4.2 Mid level (Chainer, tensorflow.keras.Model)

In this API we inherit from [`nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module), create the layers in the initializer, and implement inference in the `forward` method.

```
# define the RegLin class, which inherits from torch.nn.Module
class RegLin(nn.Module):

    # define the initializer
    def __init__(self):
        # call the parent class initializer
        super(RegLin, self).__init__()
        # important: define the layers as attributes of the class
        self.fc1 = nn.Linear(2, 2)
        self.fc2 = nn.Linear(2, 1)

    # inference method
    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        return x

model2 = RegLin()
print(model2)
```

### 4.3 Low level

In this API we must implement the layers starting from the parameters.
```
class Linear(nn.Module):

    def __init__(self, in_features, out_features, init):
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.init = init
        # wrap the tensors in parameters so that
        # model.parameters() returns them and they
        # are visible to the optimizer
        self.weight = nn.Parameter(torch.zeros(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        if init == 'he':
            self.reset_parameters()

    def reset_parameters(self):
        # Delving Deep into Rectifiers:
        # Surpassing Human-Level Performance on ImageNet Classification
        # https://arxiv.org/abs/1502.01852
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in)
            nn.init.uniform_(self.bias, -bound, bound)

    def forward(self, x):
        return F.linear(x, self.weight, self.bias)

    def extra_repr(self):
        return 'in_features={}, out_features={}, init={}, bias={}'.format(
            self.in_features, self.out_features, self.init, self.bias is not None
        )

class RegLinBajo(nn.Module):

    def __init__(self, init='zeros'):
        super(RegLinBajo, self).__init__()
        self.cls = nn.Sequential(
            Linear(2, 2, init),
            nn.ReLU(),
            Linear(2, 1, init),
        )

    def forward(self, x):
        return self.cls(x)

model3 = RegLinBajo(init='he')
model3

model4 = RegLinBajo(init='zeros')
model4
```

## 5 Training models

```
set_seed()
dl = DataLoader(ds, batch_size=batch_size, shuffle=True)
# high level
model1 = nn.Sequential(
    nn.Linear(2, 2),
    nn.ReLU(),
    nn.Linear(2, 1),
)
train(model1, dl, epochs=5)

set_seed()
dl = DataLoader(ds, batch_size=batch_size, shuffle=True)
# mid level
model2 = RegLin()
train(model2, dl, epochs=5)

set_seed()
dl = DataLoader(ds, batch_size=batch_size, shuffle=True)
# low level, He initializer
model3 = RegLinBajo(init='he')
train(model3, dl, epochs=5)

set_seed()
dl = DataLoader(ds, batch_size=batch_size, shuffle=True)
# low level, zeros initializer
model4 =
RegLinBajo(init='zeros')
train(model4, dl, epochs=50)
```

## 6 Getting the parameters

```
list(model1.parameters())

list(model2.parameters())

list(model1.named_parameters())

list(model2.named_parameters())
```
# Additional tips and tricks for designing networks

This tutorial assumes that you have read the `network_design` tutorial, and have designed a network or two. Here, we will give a few advanced tips and tricks for designing networks that can be reused flexibly. In particular, these tips will use the `config` system, so we will also assume that you have gone over the `config` tutorial.

Briefly, the general principles covered in this tutorial are

1. Accept `**kwargs` to pass through network arguments
2. Accept a config argument for groups of parameters

We will demonstrate these principles using the two examples from the `network_design` tutorial.

```
%matplotlib inline
import matplotlib.pyplot as plt

import nengo
from nengo.dists import Choice
from nengo.processes import Piecewise
from nengo.utils.ipython import hide_input

def test_integrators(net):
    with net:
        piecewise = Piecewise({0: 0, 0.2: 0.5, 1: 0, 2: -1, 3: 0, 4: 1, 5: 0})
        piecewise_inp = nengo.Node(piecewise)
        nengo.Connection(piecewise_inp, net.pre_integrator.input)
        input_probe = nengo.Probe(piecewise_inp)
        pre_probe = nengo.Probe(net.pre_integrator.ensemble, synapse=0.01)
        post_probe = nengo.Probe(net.post_integrator.ensemble, synapse=0.01)
    with nengo.Simulator(net) as sim:
        sim.run(6)
    plt.figure()
    plt.plot(sim.trange(), sim.data[input_probe], color="k")
    plt.plot(sim.trange(), sim.data[pre_probe], color="b")
    plt.plot(sim.trange(), sim.data[post_probe], color="g")

hide_input()
```

## 1. Accept a `**kwargs` argument

The standard `nengo.Network` accepts a number of arguments, including the widely used `seed` and `label` arguments. Sometimes it is helpful to be able to set these on your custom networks too. While there is nothing wrong with explicitly passing these arguments along, it is less typing to use the Python `**kwargs` construct. This special argument allows a function to accept any number of keyword arguments, which we can then pass into the `Network` constructor.
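The pass-through mechanism itself is plain Python and independent of Nengo; a minimal sketch (`factory` and `constructor` are made-up names for illustration):

```python
def constructor(label=None, seed=None):
    # stand-in for nengo.Network(**kwargs)
    return {"label": label, "seed": seed}

def factory(n_neurons, **kwargs):
    # n_neurons is consumed here; any other keyword
    # arguments are collected into kwargs and forwarded unchanged
    return constructor(**kwargs)

net = factory(50, label="pre", seed=1)
print(net)  # → {'label': 'pre', 'seed': 1}
```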
```
def Integrator(n_neurons, dimensions, tau=0.1, **kwargs):
    with nengo.Network(**kwargs) as net:
        net.input = nengo.Node(size_in=dimensions)
        net.ensemble = nengo.Ensemble(n_neurons, dimensions=dimensions)
        nengo.Connection(net.ensemble, net.ensemble, synapse=tau)
        nengo.Connection(net.input, net.ensemble, synapse=None, transform=tau)
    return net

net = nengo.Network(label="Two integrators")
with net:
    # Make both integrators use LIFRate neurons
    net.config[nengo.Ensemble].neuron_type = nengo.LIFRate()
    net.pre_integrator = Integrator(50, 1, label="pre")
    net.post_integrator = Integrator(50, 1, label="post")
    nengo.Connection(net.pre_integrator.ensemble, net.post_integrator.input)

test_integrators(net)
print("pre integrator label:", net.pre_integrator.label)
print("post integrator label:", net.post_integrator.label)
```

## 2. Accept a config argument for groups of parameters

Often, you will not want to use the network-level defaults for all of your objects. Some objects need certain things overwritten, while others need other values overwritten. Again, it is possible to deal with this issue by adding more and more parameters, but this quickly gets out of hand. Instead, add a small number of arguments that optionally accept a `config` object, which allows for setting multiple parameters at once.

In the coupled integrator network example, we make two connections. We have to be careful changing the defaults for those connections, as they are wildly different; one is a recurrent connection from an ensemble to itself, while the other is a connection from a node to an ensemble. We will accept a `config` object for the recurrent connection to make this easier.
``` def ConfigurableIntegrator(n_neurons, dimensions, recurrent_config=None, **kwargs): net = nengo.Network(**kwargs) if recurrent_config is None: recurrent_config = nengo.Config(nengo.Connection) recurrent_config[nengo.Connection].synapse = nengo.Lowpass(0.1) with net: net.input = nengo.Node(size_in=dimensions) net.ensemble = nengo.Ensemble(n_neurons, dimensions=dimensions) with recurrent_config: nengo.Connection(net.ensemble, net.ensemble) tau = nengo.Config.default(nengo.Connection, "synapse").tau nengo.Connection(net.input, net.ensemble, synapse=None, transform=tau) return net net = nengo.Network(label="Two integrators") with net: # Make both integrators use LIFRate neurons net.config[nengo.Ensemble].neuron_type = nengo.LIFRate() net.pre_integrator = ConfigurableIntegrator(50, 1) # Give the post_integrator a shorter tau (should make integration fail) recurrent_config = nengo.Config(nengo.Connection) recurrent_config[nengo.Connection].synapse = nengo.Lowpass(0.01) net.post_integrator = ConfigurableIntegrator( 50, 1, recurrent_config=recurrent_config ) nengo.Connection(net.pre_integrator.ensemble, net.post_integrator.input) test_integrators(net) ``` ## Longer example: double integrator network Recall in the previous tutorial that we created a model that released a lever 0.6 to 1.0 seconds after pressing a lever. Let's use the above principles, and the `config` system in general, to improve the code constructing this model. 
``` def controlled_integrator(n_neurons, dimensions, recurrent_config=None, **kwargs): net = nengo.Network(**kwargs) if recurrent_config is None: recurrent_config = nengo.Config(nengo.Connection) recurrent_config[nengo.Connection].synapse = nengo.Lowpass(0.1) with net: net.ensemble = nengo.Ensemble(n_neurons, dimensions=dimensions + 1) with recurrent_config: nengo.Connection( net.ensemble, net.ensemble[:dimensions], function=lambda x: x[:-1] * (1.0 - x[-1]), ) return net def medial_pfc( coupling_strength, n_neurons_per_integrator=200, recurrent_config=None, tau=0.1, **kwargs ): net = nengo.Network(**kwargs) with net: recurrent_config = nengo.Config(nengo.Connection) recurrent_config[nengo.Connection].synapse = nengo.Lowpass(tau) net.pre = controlled_integrator(n_neurons_per_integrator, 1, recurrent_config) net.post = controlled_integrator(n_neurons_per_integrator, 1, recurrent_config) nengo.Connection( net.pre.ensemble[0], net.post.ensemble[0], transform=coupling_strength ) return net def motor_cortex( command_threshold, n_neurons_per_command=30, ens_config=None, **kwargs ): net = nengo.Network(**kwargs) if ens_config is None: ens_config = nengo.Config(nengo.Ensemble) ens_config[nengo.Ensemble].encoders = Choice([[1]]) ens_config[nengo.Ensemble].intercepts = Choice([command_threshold]) with net: with ens_config: net.press = nengo.Ensemble(n_neurons_per_command, dimensions=1) net.release = nengo.Ensemble(n_neurons_per_command, dimensions=1) return net def double_integrator( mpfc_coupling_strength, command_threshold, press_to_pre_gain=3, press_to_post_control=-6, recurrent_tau=0.1, **kwargs ): net = nengo.Network(**kwargs) with net: net.mpfc = medial_pfc(mpfc_coupling_strength) net.motor = motor_cortex(command_threshold) nengo.Connection( net.motor.press, net.mpfc.pre.ensemble[0], transform=recurrent_tau * press_to_pre_gain, ) nengo.Connection( net.motor.press, net.mpfc.post.ensemble[1], transform=press_to_post_control ) nengo.Connection(net.mpfc.post.ensemble[0], 
net.motor.release) return net def test_doubleintegrator(net): # Provide input and probe outside of network construction, # for more flexibility with net: nengo.Connection(nengo.Node(lambda t: 1 if t < 0.2 else 0), net.motor.press) pr_press = nengo.Probe(net.motor.press, synapse=0.01) pr_release = nengo.Probe(net.motor.release, synapse=0.01) pr_pre_int = nengo.Probe(net.mpfc.pre.ensemble[0], synapse=0.01) pr_post_int = nengo.Probe(net.mpfc.post.ensemble[0], synapse=0.01) with nengo.Simulator(net) as sim: sim.run(1.4) t = sim.trange() plt.figure() plt.subplot(2, 1, 1) plt.plot(t, sim.data[pr_press], c="b", label="Press") plt.plot(t, sim.data[pr_release], c="g", label="Release") plt.axvspan(0, 0.2, color="b", alpha=0.3) plt.axvspan(0.8, 1.2, color="g", alpha=0.3) plt.xlim(right=1.4) plt.legend(loc="best") plt.subplot(2, 1, 2) plt.plot(t, sim.data[pr_pre_int], label="Pre Integrator") plt.plot(t, sim.data[pr_post_int], label="Post Integrator") plt.xlim(right=1.4) plt.legend(loc="best") for coupling_strength in (0.11, 0.16, 0.21): # Try the same network with LIFRate neurons with nengo.Config(nengo.Ensemble) as cfg: cfg[nengo.Ensemble].neuron_type = nengo.LIFRate() net = double_integrator( mpfc_coupling_strength=coupling_strength, command_threshold=0.85, seed=0 ) test_doubleintegrator(net) ```
# Variable pitch solenoid model

### A.M.C. Dawes - 2015

A model to design a variable pitch solenoid and calculate the associated on-axis B-field.

```
import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl
mpl.rcParams['legend.fontsize'] = 10
from mpl_toolkits.mplot3d import Axes3D
from scipy.optimize import leastsq
from math import acos, atan2, cos, sin
from numpy import array, float64, zeros
from numpy.linalg import norm
#%matplotlib inline
```

### Parameters:

```
I = 10  # amps - change this back to 1.5
mu = 4*np.pi*1e-7  # This gives B in units of Tesla
R = .026  # meters
length = 0.25  # meters

guess_c1 = -1.26841572
guess_c2 = -5.81781983
guess_c3 = 0.04515335

p = np.linspace(0, 2 * np.pi, 5000)
z = p*length/(2*np.pi)
dp = p[1] - p[0]

def get_theta(c_1, c_2, c_3):
    return c_1*p + c_2*p**2 + c_3*p**3

def get_x(c_1, c_2, c_3):
    return R * np.cos(get_theta(c_1, c_2, c_3))

def get_y(c_1, c_2, c_3):
    return R * np.sin(get_theta(c_1, c_2, c_3))

def get_z():
    return p*length/(2*np.pi)

def cart2pol(x, y):
    rho = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    return (rho, phi)

def j(g):
    """Returns numbers for list for line"""
    l = 20.0*g + .2  # change y-intercept back to .2
    return l

def B(zprime, c_1, c_2, c_3):
    """Returns B field in Tesla at point zprime on the z-axis"""
    ex = get_x(c_1, c_2, c_3)
    why = get_y(c_1, c_2, c_3)
    r = np.vstack((ex, why, z-zprime)).transpose()
    r_mag = np.sqrt(r[:,0]**2 + r[:,1]**2 + r[:,2]**2)
    r_mag = np.vstack((r_mag, r_mag, r_mag)).transpose()
    dr = r[1:,:] - r[:-1,:]
    drdp = dr/dp
    crossterm = np.cross(drdp, r[:-1,:])
    return abs(mu*I/(4.0*np.pi) * np.nansum(crossterm / r_mag[:-1,:]**3 * dp, axis=0))

# this function was added to condense the return of function B into just a magnitude from its components
def Bmag(zpoints1, c_1, c_2, c_3):
    Bdata = [1e4*B(zpoint, c_1, c_2, c_3) for zpoint in zpoints1]
    Bdata1 = np.asarray(Bdata)
    Bdata2 = []
    for i in range(0, 5000):
        Bdata2.append(np.sqrt(Bdata1[i,0]**2 + Bdata1[i,1]**2 + Bdata1[i,2]**2))
    Bdata3 = np.asarray(Bdata2)
    return Bdata3

zpoints = np.arange(0, 0.15, 0.00003)

k = []
for i in zpoints:
    k.append(j(i))
d = np.asarray(k)
plt.plot(zpoints, d)

optimize_func = lambda points, c: Bmag(points, c[0], c[1], c[2])
ErrorFunc = lambda c, points, dat: dat[1000:3600] - optimize_func(points, c)[1000:3600]

c_initial = (guess_c1, guess_c2, guess_c3)
est_c, success = leastsq(ErrorFunc, c_initial[:], args=(zpoints, d))
print(est_c)

c1 = est_c[0]
c2 = est_c[1]
c3 = est_c[2]

Bdata = Bmag(zpoints, c1, c2, c3)
plt.plot(zpoints, Bdata)
ax = plt.gca()
ax.axvspan(0.03, 0.12, alpha=0.2, color="green")
plt.ylabel("B-field (G)")
plt.xlabel("z (m)")
plt.show()

Bdata = Bmag(zpoints, c1, c2, c3)
plt.plot(zpoints, Bdata)
ax = plt.gca()
ax.axvspan(0.03, 0.12, alpha=0.2, color="green")
plt.ylabel("B-field (G)")
plt.xlabel("z (m)")
plt.plot(zpoints, d)
plt.show()

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(get_x(c1, c2, c3), get_y(c1, c2, c3), z, label='solenoid')
ax.legend()
ax.set_aspect('equal')
plt.show()

rho, phi = cart2pol(get_x(c1, c2, c3), get_y(c1, c2, c3))
plt.plot(-z, phi, '.')
plt.show()

# Why is this cell necessary?
plt.plot(get_theta(c1, c2, c3))
```

## Design discussion and comparison of two methods:

The following are remnants of the design of this notebook but may be useful for verification and testing of the method.

```
# Calculate r vector:
r = np.vstack((x, y, z)).transpose()
plt.plot(r)

# Calculate dr vector:
dr = r[1:,:] - r[:-1,:]
plt.plot(dr)

# Calculate dp vector:
dp = p[1:] - p[:-1]
plt.plot(dp)

# or the smart way since p is linear:
dp = p[1] - p[0]
dp

r_mag = np.sqrt(r[:,0]**2 + r[:,1]**2 + r[:,2]**2)
plt.plot(r_mag)
```

## The new way (as arrays):

Converted the for loops to numpy array-based operations. Usually this just means taking two shifted arrays and subtracting them (for the delta quantities). But we also do some stacking to make the arrays easier to handle.
For example, we stack x, y, and z into the r array. Note, this uses dp, and x, y, z as defined above; all other quantities are calculated in the loop because r is always relative to the point of interest.

```
def B2(zprime):
    r = np.vstack((x, y, z-zprime)).transpose()
    r_mag = np.sqrt(r[:,0]**2 + r[:,1]**2 + r[:,2]**2)
    r_mag = np.vstack((r_mag, r_mag, r_mag)).transpose()
    dr = r[1:,:] - r[:-1,:]
    drdp = dr/dp
    crossterm = np.cross(drdp, r[:-1,:])
    return mu*I/(4*np.pi) * np.nansum(crossterm / r_mag[:-1,:]**3 * dp, axis=0)

B2list = []
for i in np.arange(0, 0.15, 0.001):
    B2list.append(B2(i))

plt.plot(B2list)
```

## The original way:

Warning, this is slow!

```
def B(zprime):
    B = 0
    for i in range(len(x)-1):
        dx = x[i+1] - x[i]
        dy = y[i+1] - y[i]
        dz = z[i+1] - z[i]
        dp = p[i+1] - p[i]
        drdp = [dx/dp, dy/dp, dz/dp]
        r = [x[i], y[i], z[i]-zprime]
        r_mag = np.sqrt(x[i]**2 + y[i]**2 + (z[i]-zprime)**2)
        B += mu*I/(4*np.pi) * np.cross(drdp, r) / r_mag**3 * dp
    return B

Blist = []
for i in np.arange(0, 0.15, 0.001):
    Blist.append(B(i))

plt.plot(Blist)
```

## Comparison:

Convert lists to arrays, then plot the difference:

```
Blist_arr = np.asarray(Blist)
B2list_arr = np.asarray(B2list)
plt.plot(Blist_arr - B2list_arr)
```

## Conclusion:

The only difference is on the order of $10^{-17}$, so we can ignore it. Furthermore, the difference is primarily in $z$, which we expect as the other dimensions are effectively zero.
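As an extra sanity check on the discretized Biot-Savart sum (a sketch added here, not part of the original notebook), we can apply the same vectorized recipe to a single circular loop, where the field at the center is known analytically to be $\mu_0 I / (2R)$:

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def loop_b_center(I, R, n=20000):
    """B at the center of a single circular loop of radius R carrying
    current I, using the same discretized Biot-Savart sum as above.
    The analytic answer for the z-component is mu0 * I / (2 * R)."""
    p = np.linspace(0, 2 * np.pi, n)
    dp = p[1] - p[0]
    # source points along the loop; the field point is the origin
    s = np.vstack((R * np.cos(p), R * np.sin(p), np.zeros_like(p))).T
    drdp = (s[1:] - s[:-1]) / dp          # tangent vector dl/dp
    r = -s[:-1]                           # vector from each source point to the origin
    r_mag = np.linalg.norm(r, axis=1)[:, None]
    dB = np.cross(drdp, r) / r_mag**3
    return mu0 * I / (4 * np.pi) * np.sum(dB * dp, axis=0)
```

For I = 1.5 A and R = 0.026 m (the notebook's loop radius), the z-component matches the analytic value to within the discretization error and the transverse components cancel by symmetry.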
# Wine Demo with MLflow

## MLflow setup

By default, MLflow assumes the tracking repository is the local `mlruns` directory. To work collaboratively, point to a shared tracking server instead, either on the command line with

```
export MLFLOW_TRACKING_URI=http://localhost:5000
```

or programmatically

```
import mlflow

server_uri = "http://localhost:5000"
mlflow.set_tracking_uri(server_uri)
```

## Creating an experiment

The experiment can be created and then selected on the command line with

```
mlflow experiments create --experiment-name ac2 --artifact-location /opt/mlflow/mlruns/
export MLFLOW_EXPERIMENT_ID=1
```

or programmatically (see below)

```
mlflow.set_experiment("wine7")
```

## Training

```
import warnings
import logging
logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)

import numpy as np
warnings.filterwarnings("ignore")
np.random.seed(40)

import mlflow.sklearn
import pandas as pd

# read data from file
df = pd.read_csv("wine-quality.csv")

from sklearn.model_selection import train_test_split

# Split the data into training and test sets. (0.75, 0.25) split.
train, test = train_test_split(df)

# The predicted column is "quality" which is a scalar from [3, 9]
train_x = train.drop(["quality"], axis=1)
test_x = test.drop(["quality"], axis=1)
train_y = train[["quality"]]
test_y = test[["quality"]]

# make it easy to pass datasets
datasets = {
    'train_x': train_x,
    'train_y': train_y,
    'test_x': test_x,
    'test_y': test_y
}
shapes = [ "%s : %s" % (name, dataset.shape) for (name, dataset) in datasets.items() ]
print(shapes)

def eval_parameters(in_alpha, in_l1_ratio):
    # Set default values if no alpha is provided
    alpha = float(in_alpha) if in_alpha is not None else 0.5
    # Set default values if no l1_ratio is provided
    l1_ratio = float(in_l1_ratio) if in_l1_ratio is not None else 0.5
    return alpha, l1_ratio

def eval_metrics(actual, predicted):
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
    rmse = np.sqrt(mean_squared_error(actual, predicted))
    mae = mean_absolute_error(actual, predicted)
    r2 = r2_score(actual, predicted)
    return rmse, mae, r2

from sklearn.linear_model import lasso_path, enet_path
import matplotlib.pyplot as plt
from itertools import cycle

def plot_enet_descent_path(tempdir, X, y, l1_ratio):
    # Compute paths
    eps = 5e-3  # the smaller it is the longer is the path

    # Reference the global image variable
    global image

    print("Computing regularization path using ElasticNet.")
    alphas_enet, coefs_enet, _ = enet_path(X, y, eps=eps, l1_ratio=l1_ratio,
                                           fit_intercept=False)

    # Display results
    fig = plt.figure(1)
    ax = plt.gca()

    colors = cycle(['b', 'r', 'g', 'c', 'k'])
    neg_log_alphas_enet = -np.log10(alphas_enet)
    for coef_e, c in zip(coefs_enet, colors):
        l1 = plt.plot(neg_log_alphas_enet, coef_e, linestyle='--', c=c)

    plt.xlabel('-Log(alpha)')
    plt.ylabel('coefficients')
    title = 'ElasticNet Path by alpha for l1_ratio = ' + str(l1_ratio)
    plt.title(title)
    plt.axis('tight')

    # Display images
    image = fig

    # Save figure
    fig.savefig(os.path.join(tempdir, "ElasticNet-paths.png"))

    # Close plot
    plt.close(fig)

    # Return images
    return image

import os

def output_enet_coefs(tempdir, columns, lr):
    coef_file_name = os.path.join(tempdir, "coefs.txt")
    with open(coef_file_name, "w") as f:
        f.write("Coefs:\n")
        [ f.write("\t %s: %s\n" % (name, coef)) for (name, coef) in zip(columns, lr.coef_) ]
        f.write("\t intercept: %s\n" % lr.intercept_)

def plot_enet_feature_importance(tempdir, columns, coefs):
    # Reference the global image variable
    global image

    # Display results
    fig = plt.figure(1)
    ax = plt.gca()

    feature_importance = pd.Series(index=columns, data=np.abs(coefs))
    n_selected_features = (feature_importance > 0).sum()
    print('{0:d} features, reduction of {1:2.2f}%'.format(
        n_selected_features, (1 - n_selected_features/len(feature_importance))*100))
    feature_importance.sort_values().tail(30).plot(kind='bar', figsize=(20, 12));

    # Display images
    image = fig

    # Save figure
    fig.savefig(os.path.join(tempdir, "feature-importance.png"))

    # Close plot
    plt.close(fig)

    # Return images
    return image

import tempfile

def train_elasticnet(in_alpha, in_l1_ratio, trial=None):
    from sklearn.linear_model import ElasticNet

    alpha, l1_ratio = eval_parameters(in_alpha, in_l1_ratio)
    print("Parameters (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
    run_name = "en_%f_%f" % (alpha, l1_ratio)

    with mlflow.start_run() as run:
        # train with ElasticNet
        lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
        lr.fit(train_x, train_y)

        # Evaluate Metrics
        predicted_qualities = lr.predict(test_x)
        (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)

        # Print out metrics
        print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
        print("  RMSE: %s" % rmse)
        print("  MAE: %s" % mae)
        print("  R2: %s" % r2)

        # Log parameters, metrics, and model to MLflow
        mlflow.log_param("alpha", alpha)
        mlflow.log_param("l1_ratio", l1_ratio)
        mlflow.log_metric("rmse", rmse)
        mlflow.log_metric("mae", mae)
        mlflow.log_metric("r2", r2)
        mlflow.set_tag("algo", "ElasticNet")
        if trial is not None:
            mlflow.set_tag("trial", trial)
        #run_id = run.info.run_id

        # store info
        with tempfile.TemporaryDirectory() as tmpdirname:
            output_enet_coefs(tmpdirname, train_x.columns, lr)

            # plots
            plot_enet_feature_importance(tmpdirname, train_x.columns, lr.coef_)

            # Call plot_enet_descent_path
            #image = plot_enet_descent_path(tmpdirname, train_x, train_y, l1_ratio)

            # Log artifacts (output files)
            mlflow.log_artifacts(tmpdirname, artifact_path="artifacts")

        # store model
        mlflow.sklearn.log_model(lr, "model")

    return rmse

train_elasticnet(0.5, 0.5)  # alpha 0.5, L1 0.5
train_elasticnet(0.5, 0.4)
```

## Auto tuning

```
!pip install optuna

import optuna
from datetime import datetime

optuna.logging.set_verbosity(optuna.logging.INFO)

ts = datetime.now().strftime('%Y%m%dT%H%M%S')

# Define an objective function to be minimized.
def objective(trial):
    suggested_alpha = trial.suggest_uniform('alpha', 0.1, 0.8)
    suggested_l1_ratio = trial.suggest_uniform('l1_ratio', 0.1, 0.8)
    error = train_elasticnet(suggested_alpha, suggested_l1_ratio,
                             trial="%s_%s" % (ts, trial.number))
    return error  # An objective value linked with the Trial object.

study = optuna.create_study()  # Create a new study.
study.optimize(objective, n_trials=40)  #100  # Invoke optimization of the objective function.

study.best_params
study.best_trial
```

## Identifying the best model by tag

```
trial_id = study.best_trial.number
print("trial_id: %s" % trial_id)
tag = "%s_%s" % (ts, trial_id)
print("tag: %s" % tag)

from mlflow.entities import ViewType

query = "tags.trial = '%s'" % tag
runs = mlflow.search_runs(filter_string=query, run_view_type=ViewType.ACTIVE_ONLY)
runs.head(10)

best_model_id = runs['run_id'][0]
best_model_uri = runs['artifact_uri'][0]
print("best model - id: %s - uri: %s" % (best_model_id, best_model_uri))

from mlflow.tracking import MlflowClient

mlflow_client = MlflowClient()
mlflow_client.set_tag(best_model_id, "best_model", "true")

runs = mlflow.search_runs(filter_string=query, run_view_type=ViewType.ACTIVE_ONLY)
runs.head(10)
```
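Once the best run is tagged, the logged model can be reloaded elsewhere by URI rather than by file path. A minimal sketch (the `runs:/` URI scheme is standard MLflow; `best_model_id` stands for the run id found above):

```python
def model_uri_for_run(run_id, artifact_path="model"):
    """Build the "runs:/" URI that MLflow uses to locate a model
    logged under a given run, e.g. the run tagged best_model above."""
    return "runs:/%s/%s" % (run_id, artifact_path)

# With a live tracking server one could then reload and score the model:
#   best_model = mlflow.sklearn.load_model(model_uri_for_run(best_model_id))
#   best_model.predict(test_x)
```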
# Quick start for beginners

```
!pip install -q tensorflow-gpu==2.0.0-rc1

import tensorflow as tf
```

# Week 9: Designing the data training model

```
from google.colab import drive
drive.mount('/content/gdrive')

# import
import os
import pandas as pd
import glob

os.chdir('/content/gdrive/My Drive/Colab Notebooks/')

# Set the data path
current_path = os.getcwd()  # current folder location
train_path = current_path + '/capstone_data/train'  # data path
print(train_path)

df = pd.read_json('./capstone_data/data_version_2.json')
df
df.shape
df.info()

df.tags.map(lambda x: len(x)).value_counts().plot.bar()

df['songs2'].unique

# Count the tags
tag_cnt = set()
for i in df['tags']:
    for j in i:
        tag_cnt.add(j)

type(tag_cnt)
tag_cnt  # all tags
len(tag_cnt)  # total number of tags

# Count the songs
song_cnt = set()
for i in df['songs2']:
    for j in i:
        song_cnt.add(j)

song_cnt
len(song_cnt)
```

# Latent Factor CF

```
# Create the [Tag x Song] pivot table
unique_tags = list(set([tag for tags in df.tags for tag in tags]))
unique_songs = list(set([song for songs in df.songs2 for song in songs]))

df_pivot = pd.DataFrame(index=unique_tags, columns=unique_songs)
df_pivot = df_pivot.fillna(0)

for i, (tags, songs) in enumerate(zip(df.tags, df.songs2)):
    print(i) if i % 100 == 0 else ''
    df_pivot.loc[tags, songs] += 1

# Load the pivot table
df_pivot = pd.read_pickle('./capstone_data/pivot_songs_tags.pickle')
df_pivot

# Matrix Factorization
from sklearn.decomposition import TruncatedSVD

SVD = TruncatedSVD(n_components=12)
matrix = SVD.fit_transform(df_pivot)
matrix

# Compute Pearson correlation coefficients
import numpy as np

corr = np.corrcoef(matrix)
corr

# Visualize as a heatmap
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(100, 100))
sns.heatmap(corr, annot=True, fmt='.1g')

song_title = df_pivot.columns
song_title_list = list(song_title)
tag_title = df_pivot.index
tag_title_list = list(tag_title)

seed_tag = tag_title_list.index("기분")
corr_seed_tag = corr[seed_tag]
list(tag_title[corr_seed_tag >= 0.9])[:50]

# Recommend songs based on tag similarity
import math
from itertools import combinations
NUM_SIM_TAG_TOPK = 2
num_item_rec_topk = 2
num_users = 1382

# df_pivot.values
# matrix2 = df_pivot('rating')
# user_means = matrix.mean(axis=1)

df_pivot.stack().reset_index()

df_pivot.index.name = "tags"
df_pivot.columns.name = "songs"
matrix = df_pivot
matrix

# df_pivot2 = pd.pivot_table(df_pivot, index=["tags"], columns=["songs"], values=[i for i in df_pivot.values])

user_means = df_pivot.mean(axis=1)
user_means

def get_similarity(user_id, other_id, matrix=matrix, user_means=user_means):
    intersect_ids = np.intersect1d(matrix.loc[user_id].dropna().index,
                                   matrix.loc[other_id].dropna().index)
    user_diff2_sum, other_diff2_sum, user_other_diff_sum = 0, 0, 0
    for item_id in intersect_ids:
        user_diff = matrix.loc[user_id, item_id] - user_means[user_id]
        other_diff = matrix.loc[other_id, item_id] - user_means[other_id]
        user_diff2_sum += user_diff ** 2
        other_diff2_sum += other_diff ** 2
        user_other_diff_sum += user_diff * other_diff
    return user_other_diff_sum / math.sqrt(user_diff2_sum) / math.sqrt(other_diff2_sum)

user_corr_dict = {}
for x, y in combinations([*range(1, num_users+1)], 2):
    user_corr_dict[(x, y)] = get_similarity(x, y)

# Tag-based variant of the same Pearson similarity
tag_means = df_pivot.mean(axis=1)

def get_similarity(tag_id, other_id, matrix=df_pivot, tag_means=tag_means):
    intersect_ids = np.intersect1d(matrix.loc[tag_id].dropna().index,
                                   matrix.loc[other_id].dropna().index)
    tag_diff2_sum, other_diff2_sum, tag_other_diff_sum = 0, 0, 0
    for song_id in intersect_ids:
        tag_diff = matrix.loc[tag_id, song_id] - tag_means[tag_id]
        other_diff = matrix.loc[other_id, song_id] - tag_means[other_id]
        tag_diff2_sum += tag_diff ** 2
        other_diff2_sum += other_diff ** 2
        tag_other_diff_sum += tag_diff * other_diff
    return tag_other_diff_sum / math.sqrt(tag_diff2_sum) / math.sqrt(other_diff2_sum)

user_corr_dict = {}
for x, y in combinations([*range(1, num_users+1)], 2):
    user_corr_dict[(x, y)] = get_similarity(x, y)
```

# LightGBM

```
from sklearn.model_selection import train_test_split

X = df['tags']
y = df['songs2']

X_train, X_test, y_train, y_test = \
train_test_split(X.values, y.values, test_size=0.2, random_state=42)

X_train
X_train.shape

# Train the LightGBM model on the data
from lightgbm import LGBMRegressor
from sklearn.metrics import accuracy_score

model_tags = LGBMRegressor(n_estimators=500)
model_tags.fit(X_train, y_train)

songs_pred = model_tags.predict(X_test)

y_test = np.expm1(y_test)
songs_pred = np.expm1(songs_pred)
```

# SVD CF revisited

```
import pandas as pd
import numpy as np
import math

# Load the pivot table
df_pivot.index.name = "tags"
df_pivot.columns.name = "songs"
df_pivot

# Rebuild the pivot table
unique_tags = list(set([tag for tags in df.tags for tag in tags]))
unique_songs = list(set([song for songs in df.songs2 for song in songs]))

df_pivot2 = pd.DataFrame(index=unique_tags, columns=unique_songs)
df_pivot2.index.name = "tags"
df_pivot2.columns.name = "songs"
df_pivot2

def R_filled_in(df_pivot2):
    for col in range(len(df_pivot2.columns)):
        col_update = []
        # Compute the column mean.
        col_num = [i for i in df_pivot2.iloc[:, col] if math.isnan(i) == False]
        col_mean = sum(col_num) / len(col_num)
        # Fill rows that have NaN with the mean computed above.
        col_update = [i if math.isnan(i) == False else col_mean for i in df_pivot2.iloc[:, col]]
        # Write the updated column (built as a list) back onto the existing DataFrame column.
df_pivot2.iloc[:,col] = col_update return df_pivot2 rating_R_filled = R_filled_in(df_pivot2) rating_R_filled ``` # SVD CF 다시 2 ``` pip install sparsesvd import pandas as pd import numpy as np import matplotlib.pyplot as plt import math from sparsesvd import sparsesvd import scipy from scipy.sparse import csc_matrix from scipy.sparse.linalg import * from sklearn.model_selection import train_test_split %matplotlib inline # pivot table 불러오기 df_pivot = pd.read_pickle('./capstone_data/pivot_songs_tags.pickle') df_pivot.index.name='tags' df_pivot.columns.name='songs' df_pivot.head() #Dividing each rating a user gave by the mean of each user's rating tag_means = np.array(df_pivot.mean(axis = 1)).reshape(-1, 1) df_pivot = df_pivot.div(df_pivot.mean(axis = 1), axis = 0) df_pivot_matrix = df_pivot.to_numpy() tag_means # SVD #getting the U, S and Vt values U, sigma, Vt = svds(df_pivot_matrix, k = 10) #Sigma value above is outputed as an array, but we need it in the form of a diagonal matrix sigma = np.diag(sigma) #creating predictions predicted = np.dot(np.dot(U, sigma), Vt) predicted_ratings = np.dot(np.dot(U, sigma), Vt) * tag_means predicted_ratings predicted_ratings.info predicted_df = pd.DataFrame(predicted_ratings, columns= df_pivot.columns) #Data frame index starts with 0 but original dataset starts with 1, so adding 1 to index predicted_df.index = predicted_df.index + 1 predicted_df.head() #creating function to get recommendations, 코드 원상태 def svd_recommender(df_predict, user, umr, number_recomm): user_predicted_movies = df_predict.loc[user, :].sort_values(ascending = False) original_data = umr.loc[user, :].sort_values(ascending = False) already_rated = user_movies.loc[user, :].dropna() unrated = list(user_movies.loc[1, pd.isnull(user_movies.loc[user, :])].index) recommendations = df_predict.loc[user][unrated] recommendations = pd.DataFrame(recommendations.sort_values(ascending = False).index[:number_recomm]) return recommendations, already_rated #getting values for 
tag 카페 recommend_cafe, rated_cafe = svd_recommender(predicted_df, '카페' , df_pivot, 10) df_pivot.loc['카페'] ``` # SVD CF 다시 3 ``` import os import pandas as pd import glob import numpy as np import matplotlib.pyplot as plt import math import scipy from sklearn.decomposition import TruncatedSVD from scipy.sparse.linalg import svds from sklearn.model_selection import train_test_split %matplotlib inline # pivot table 불러오기 df_pivot = pd.read_pickle('./capstone_data/pivot_songs_tags.pickle') df_pivot.index.name = 'tags' df_pivot.columns.name = 'songs' df_pivot.head() df_song_meta = pd.read_json('./capstone_data/song_meta.json') df_song_meta.head() tag_name_erase = "로우파이" df_pivot.index.tolist().index(tag_name_erase) # matrix는 pivot_table 값을 numpy matrix로 만든 것 matrix = df_pivot.values # tag_ratings_mean은 tag의 평균 song 개수 tag_ratings_mean = np.mean(matrix, axis = 1) # R_user_mean : 사용자-영화에 대해 사용자 평균 평점을 뺀 것. matrix_tag_mean = matrix - tag_ratings_mean.reshape(-1, 1) matrix matrix.shape tag_ratings_mean.shape matrix_tag_mean.shape pd.DataFrame(matrix_tag_mean, columns=df_pivot.columns).head() # scipy에서 제공해주는 svd. # U 행렬, sigma 행렬, V 전치 행렬을 반환. U, sigma, Vt = svds(matrix_tag_mean, k = 12) print(U.shape) print(sigma.shape) print(Vt.shape) ``` 현재 이 Sigma 행렬은 0이 아닌 값만 1차원 행렬로 표현된 상태 즉, 0이 포함된 대칭행렬로 변환할 때는 numpy의 diag를 이용해야 함 ``` sigma = np.diag(sigma) sigma.shape sigma[2] # U, Sigma, Vt의 내적을 수행하면, 다시 원본 행렬로 복원이 된다. # 거기에 + 사용자 평균 rating을 적용한다. 
svd_tag_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + tag_ratings_mean.reshape(-1, 1)

df_svd_preds = pd.DataFrame(svd_tag_predicted_ratings, columns=df_pivot.columns)
df_svd_preds.head()
df_svd_preds.shape

# creating function to get recommendations (code in its original state)
def svd_recommender(df_svd_preds, tag, ori_pivot, number_recomm):
    tag_row_number = df_pivot.index.tolist().index(tag)
    tag_predicted_songs = df_svd_preds.loc[tag_row_number, :].sort_values(ascending=False)
    recommend_song = tag_predicted_songs.iloc[:number_recomm]
    top_song_names = df_song_meta[df_song_meta.id.isin(recommend_song.index)][['artist_name_basket', 'song_name']].values
    return recommend_song, top_song_names

%time tag_song_recommendation = svd_recommender(df_svd_preds, '이별', df_pivot, 10)
tag_song_recommendation
```
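The demean / truncated-SVD / reconstruct recipe used above can be condensed into a single helper. A sketch using plain NumPy (NumPy's dense SVD stands in for scipy's sparse `svds`, so small matrices reconstruct exactly at full rank):

```python
import numpy as np

def svd_predict(ratings, k):
    """Rank-k SVD reconstruction of a (tags x songs) matrix after removing
    each row's mean, mirroring the U * sigma * Vt + mean step above."""
    row_means = ratings.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(ratings - row_means, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :] + row_means
```

At full rank the reconstruction recovers the original matrix, while small `k` gives the smoothed predictions the recommender ranks.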
# Embeddings

So far, we've represented text in a bagged one-hot encoded form, which is an n-dimensional array where each index corresponds to a token. The value at that index corresponds to the number of times the word appears in the sentence. This method forces us to completely lose the structural information in our inputs.

```python
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
```

We've also represented our input in a one-hot encoded form where each token is represented by an n-dimensional array.

```python
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
```

This allows us to preserve the structural information, but there are two major disadvantages here. If we have a large vocabulary, the representation length for each token will be massive, leading to large computes. And though we preserve the structure within the text, the actual representation for each token does not preserve any relationship with respect to other tokens.

In this notebook, we're going to learn about embeddings and how they address all the shortcomings of the representation methods we've seen so far.

# Overview

* **Objective:** Represent tokens in text that capture the intrinsic semantic relationships.
* **Advantages:**
    * Low-dimensionality while capturing relationships.
    * Interpretable token representations
* **Disadvantages:** None
* **Miscellaneous:** There are lots of pretrained embeddings to choose from but you can also train your own from scratch.

# Learning embeddings

The main idea of embeddings is to have fixed length representations for the tokens in a text regardless of the number of tokens in the vocabulary.
So instead of each token representation having the shape [1 X V] where V is the vocab size, each token now has the shape [1 X D] where D is the embedding size (usually 50, 100, 200, 300). The numbers in the representation will no longer be 0s and 1s but rather floats that represent that token in a D-dimensional latent space. If the embeddings really did capture the relationship between tokens, then we should be able to inspect this latent space and confirm known relationships (we'll do this soon).

But how do we learn the embeddings in the first place? The intuition behind embeddings is that the definition of a token doesn't depend on the token itself but on its context. There are several different ways of doing this:

1. Given the word in the context, predict the target word (CBOW - continuous bag of words).
2. Given the target word, predict the context word (skip-gram).
3. Given a sequence of words, predict the next word (LM - language modeling).

All of these approaches involve creating data to train our model on. Every word in a sentence becomes the target word and the context words are determined by a window. In the image below (skip-gram), the window size is 2. We repeat this for every sentence in our corpus and this results in our training data for the unsupervised task. This is an unsupervised learning technique since we don't have official labels for contexts. The idea is that similar target words will appear with similar contexts and we can learn this relationship by repeatedly training our model with (context, target) pairs.

<img src="figures/skipgram.png" width=600>

We can learn embeddings using any of these approaches above and some work better than others. You can inspect the learned embeddings but the best way to choose an approach is to empirically validate the performance on a supervised task.
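To make the windowing concrete, here is a small sketch (not part of the original notebook) of how (target, context) training pairs are generated for skip-gram with a window of 2:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) pairs: for each position, every token
    within `window` positions on either side becomes a context word."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs
```

For `["the", "quick", "brown", "fox"]` this yields pairs like `("the", "quick")` and `("quick", "fox")`, but not `("the", "fox")`, since "fox" lies outside the window of "the".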
We can learn embeddings by creating our models in PyTorch, but instead we're going to use a library that specializes in embeddings and topic modeling called [Gensim](https://radimrehurek.com/gensim/).

```
# Let's make sure the libraries are installed
#!pip install numpy
#!pip install gensim
#!pip install matplotlib
#!pip install pandas
#!pip install nltk

# Now import the libraries
import os
from argparse import Namespace
import copy
import gensim
from gensim.models import Word2Vec
import json
import nltk#; nltk.download('punkt')
import numpy as np
import pandas as pd
import re
import urllib
import warnings
warnings.filterwarnings('ignore')

args = Namespace(
    seed=1234,
    data_file="data/harrypotter.txt",
    embedding_dim=100,
    window=5,
    min_count=3,
    skip_gram=1,  # 0 = CBOW
    negative_sampling=20,
)

# Split text into sentences
tokenizer = nltk.data.load('data/punkt/english.pickle')
with open(args.data_file, encoding='cp1252') as fp:
    book = fp.read()
sentences = tokenizer.tokenize(book)
print(len(sentences))
print(sentences[11])

# Preprocessing
def preprocess_text(text):
    text = ' '.join(word.lower() for word in text.split(" "))
    text = re.sub(r"([.,!?])", r" \1 ", text)
    text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)
    text = text.strip()
    return text

# Clean sentences
sentences = [preprocess_text(sentence) for sentence in sentences]
print(sentences[11])

# Process sentences for gensim
sentences = [sentence.split(" ") for sentence in sentences]
print(sentences[11])
```

When we have large vocabularies to learn embeddings for, things can get complex very quickly. Recall that backpropagation with softmax updates both the correct and incorrect class weights. This becomes a massive computation for every backward pass we do, so a workaround is to use [negative sampling](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), which only updates the correct class and a few arbitrary incorrect classes (negative_sampling=20).
We're able to do this because of the large amount of training data, since we'll see the same word as the target class multiple times. ``` # Super fast because of optimized C code under the hood model = Word2Vec(sentences=sentences, size=args.embedding_dim, window=args.window, min_count=args.min_count, sg=args.skip_gram, negative=args.negative_sampling) print (model) # Vector for each word model.wv.get_vector("potter") # Get nearest neighbors (excluding itself) model.wv.most_similar(positive="scar", topn=5) # Save the weights model.wv.save_word2vec_format('model.txt', binary=False) ``` # Pretrained embeddings We can learn embeddings from scratch using one of the approaches above but we can also leverage pretrained embeddings that have been trained on millions of documents. Popular ones include Word2Vec (skip-gram) or GloVe (global word-word co-occurrence). We can validate that these embeddings capture meaningful semantic relationships by confirming known ones (e.g., king - man + woman ≈ queen). ``` from gensim.scripts.glove2word2vec import glove2word2vec from gensim.models import KeyedVectors from io import BytesIO import matplotlib.pyplot as plt from sklearn.decomposition import PCA from zipfile import ZipFile from urllib.request import urlopen # Unzip the file (may take ~3 minutes) zipfile = ZipFile("data/glove.6B.zip","r") zipfile.namelist() # Extract the embeddings file embeddings_file = 'glove.6B.{0}d.txt'.format(args.embedding_dim) zipfile.extract(embeddings_file) # Save GloVe embeddings to local directory in word2vec format word2vec_output_file = '{0}.word2vec'.format(embeddings_file) glove2word2vec(embeddings_file, word2vec_output_file) # Load embeddings (may take a minute) glove = KeyedVectors.load_word2vec_format(word2vec_output_file, binary=False) # (king - man) + woman = ?
glove.most_similar(positive=['woman', 'king'], negative=['man'], topn=5) # Get nearest neighbors (excluding itself) glove.most_similar(positive="goku", topn=5) # Reduce dimensionality for plotting X = glove[glove.vocab] pca = PCA(n_components=2) pca_results = pca.fit_transform(X) def plot_embeddings(words, embeddings, pca_results): for word in words: index = embeddings.index2word.index(word) plt.scatter(pca_results[index, 0], pca_results[index, 1]) plt.annotate(word, xy=(pca_results[index, 0], pca_results[index, 1])) plt.show() plot_embeddings(words=["king", "queen", "man", "woman"], embeddings=glove, pca_results=pca_results) # Bias in embeddings glove.most_similar(positive=['woman', 'doctor'], negative=['man'], topn=5) ``` # Using Embeddings There are several different ways to use embeddings. 1. Use your own trained embeddings (trained on an unsupervised dataset). 2. Use pretrained embeddings (GloVe, word2vec, etc.). 3. Use randomly initialized embeddings. Once you have chosen embeddings, you can choose to freeze them or continue to train them using the supervised data (this could lead to overfitting). In this example, we're going to use GloVe embeddings and freeze them during training. Our task will be to predict an article's category given its title.
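The three options above map directly onto PyTorch's `nn.Embedding`: the plain constructor gives randomly initialized embeddings, while `nn.Embedding.from_pretrained` loads an existing weight matrix, and its `freeze` flag decides whether the weights keep training. A minimal sketch with made-up vocabulary and embedding sizes:

```python
import torch
import torch.nn as nn

# Toy pretrained matrix: 5 vocabulary entries, 4-dimensional vectors
# (random here; in practice these would come from GloVe/word2vec).
pretrained = torch.randn(5, 4)

frozen = nn.Embedding.from_pretrained(pretrained, freeze=True)
trainable = nn.Embedding.from_pretrained(pretrained, freeze=False)
random_init = nn.Embedding(num_embeddings=5, embedding_dim=4)  # option 3

print(frozen.weight.requires_grad)     # False: excluded from gradient updates
print(trainable.weight.requires_grad)  # True: fine-tuned on the supervised task
```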
## Set up ``` # Load PyTorch library #!pip3 install torch import os from argparse import Namespace import collections import json import matplotlib.pyplot as plt import numpy as np import pandas as pd import re import torch # Set Numpy and PyTorch seeds def set_seeds(seed, cuda): np.random.seed(seed) torch.manual_seed(seed) if cuda: torch.cuda.manual_seed_all(seed) # Creating directories def create_dirs(dirpath): if not os.path.exists(dirpath): os.makedirs(dirpath) # Arguments args = Namespace( seed=1234, cuda=True, shuffle=True, data_file="data/news.csv", vectorizer_file="vectorizer.json", model_state_file="model.pth", save_dir="news", train_size=0.7, val_size=0.15, test_size=0.15, cutoff=25, # token must appear at least <cutoff> times to be in SequenceVocabulary num_epochs=5, early_stopping_criteria=5, learning_rate=1e-3, batch_size=64, num_filters=100, embedding_dim=100, hidden_dim=100, dropout_p=0.1, ) # Set seeds set_seeds(seed=args.seed, cuda=args.cuda) # Create save dir create_dirs(args.save_dir) # Expand filepaths args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file) args.model_state_file = os.path.join(args.save_dir, args.model_state_file) # Check CUDA if not torch.cuda.is_available(): args.cuda = False args.device = torch.device("cuda" if args.cuda else "cpu") print("Using CUDA: {}".format(args.cuda)) ``` ## Data ``` import re import urllib # Raw data df = pd.read_csv(args.data_file, header=0) df.head() # Split by category by_category = collections.defaultdict(list) for _, row in df.iterrows(): by_category[row.category].append(row.to_dict()) for category in by_category: print ("{0}: {1}".format(category, len(by_category[category]))) # Create split data final_list = [] for _, item_list in sorted(by_category.items()): if args.shuffle: np.random.shuffle(item_list) n = len(item_list) n_train = int(args.train_size*n) n_val = int(args.val_size*n) n_test = int(args.test_size*n) # Give data point a split attribute for item in 
item_list[:n_train]: item['split'] = 'train' for item in item_list[n_train:n_train+n_val]: item['split'] = 'val' for item in item_list[n_train+n_val:]: item['split'] = 'test' # Add to final list final_list.extend(item_list) # df with split datasets split_df = pd.DataFrame(final_list) split_df["split"].value_counts() # Preprocessing def preprocess_text(text): text = ' '.join(word.lower() for word in text.split(" ")) text = re.sub(r"([.,!?])", r" \1 ", text) text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text) return text split_df.title = split_df.title.apply(preprocess_text) split_df.head() ``` ## Vocabulary ``` class Vocabulary(object): def __init__(self, token_to_idx=None): # Token to index if token_to_idx is None: token_to_idx = {} self.token_to_idx = token_to_idx # Index to token self.idx_to_token = {idx: token \ for token, idx in self.token_to_idx.items()} def to_serializable(self): return {'token_to_idx': self.token_to_idx} @classmethod def from_serializable(cls, contents): return cls(**contents) def add_token(self, token): if token in self.token_to_idx: index = self.token_to_idx[token] else: index = len(self.token_to_idx) self.token_to_idx[token] = index self.idx_to_token[index] = token return index def add_tokens(self, tokens): return [self.add_token(token) for token in tokens] def lookup_token(self, token): return self.token_to_idx[token] def lookup_index(self, index): if index not in self.idx_to_token: raise KeyError("the index (%d) is not in the Vocabulary" % index) return self.idx_to_token[index] def __str__(self): return "<Vocabulary(size=%d)>" % len(self) def __len__(self): return len(self.token_to_idx) # Vocabulary instance category_vocab = Vocabulary() for index, row in df.iterrows(): category_vocab.add_token(row.category) print (category_vocab) # __str__ print (len(category_vocab)) # __len__ index = category_vocab.lookup_token("Business") print (index) print (category_vocab.lookup_index(index)) ``` ## Sequence vocabulary Next, we're going to create our
Vocabulary classes for the article's title, which is a sequence of tokens. ``` from collections import Counter import string class SequenceVocabulary(Vocabulary): def __init__(self, token_to_idx=None, unk_token="<UNK>", mask_token="<MASK>", begin_seq_token="<BEGIN>", end_seq_token="<END>"): super(SequenceVocabulary, self).__init__(token_to_idx) self.mask_token = mask_token self.unk_token = unk_token self.begin_seq_token = begin_seq_token self.end_seq_token = end_seq_token self.mask_index = self.add_token(self.mask_token) self.unk_index = self.add_token(self.unk_token) self.begin_seq_index = self.add_token(self.begin_seq_token) self.end_seq_index = self.add_token(self.end_seq_token) # Index to token self.idx_to_token = {idx: token \ for token, idx in self.token_to_idx.items()} def to_serializable(self): contents = super(SequenceVocabulary, self).to_serializable() contents.update({'unk_token': self.unk_token, 'mask_token': self.mask_token, 'begin_seq_token': self.begin_seq_token, 'end_seq_token': self.end_seq_token}) return contents def lookup_token(self, token): return self.token_to_idx.get(token, self.unk_index) def lookup_index(self, index): if index not in self.idx_to_token: raise KeyError("the index (%d) is not in the SequenceVocabulary" % index) return self.idx_to_token[index] def __str__(self): return "<SequenceVocabulary(size=%d)>" % len(self.token_to_idx) def __len__(self): return len(self.token_to_idx) # Get word counts word_counts = Counter() for title in split_df.title: for token in title.split(" "): if token not in string.punctuation: word_counts[token] += 1 # Create SequenceVocabulary instance title_vocab = SequenceVocabulary() for word, word_count in word_counts.items(): if word_count >= args.cutoff: title_vocab.add_token(word) print (title_vocab) # __str__ print (len(title_vocab)) # __len__ index = title_vocab.lookup_token("general") print (index) print (title_vocab.lookup_index(index)) ``` ## Vectorizer ``` class NewsVectorizer(object): def 
__init__(self, title_vocab, category_vocab): self.title_vocab = title_vocab self.category_vocab = category_vocab def vectorize(self, title): indices = [self.title_vocab.lookup_token(token) for token in title.split(" ")] indices = [self.title_vocab.begin_seq_index] + indices + \ [self.title_vocab.end_seq_index] # Create vector title_length = len(indices) vector = np.zeros(title_length, dtype=np.int64) vector[:len(indices)] = indices return vector def unvectorize(self, vector): tokens = [self.title_vocab.lookup_index(index) for index in vector] title = " ".join(token for token in tokens) return title @classmethod def from_dataframe(cls, df, cutoff): # Create class vocab category_vocab = Vocabulary() for category in sorted(set(df.category)): category_vocab.add_token(category) # Get word counts word_counts = Counter() for title in df.title: for token in title.split(" "): word_counts[token] += 1 # Create title vocab title_vocab = SequenceVocabulary() for word, word_count in word_counts.items(): if word_count >= cutoff: title_vocab.add_token(word) return cls(title_vocab, category_vocab) @classmethod def from_serializable(cls, contents): title_vocab = SequenceVocabulary.from_serializable(contents['title_vocab']) category_vocab = Vocabulary.from_serializable(contents['category_vocab']) return cls(title_vocab=title_vocab, category_vocab=category_vocab) def to_serializable(self): return {'title_vocab': self.title_vocab.to_serializable(), 'category_vocab': self.category_vocab.to_serializable()} # Vectorizer instance vectorizer = NewsVectorizer.from_dataframe(split_df, cutoff=args.cutoff) print (vectorizer.title_vocab) print (vectorizer.category_vocab) vectorized_title = vectorizer.vectorize(preprocess_text( "Roger Federer wins the Wimbledon tennis tournament.")) print (np.shape(vectorized_title)) print (vectorized_title) print (vectorizer.unvectorize(vectorized_title)) ``` ## Dataset ``` from torch.utils.data import Dataset, DataLoader class NewsDataset(Dataset): def 
__init__(self, df, vectorizer): self.df = df self.vectorizer = vectorizer # Max title length get_length = lambda title: len(title.split(" ")) self.max_seq_length = max(map(get_length, df.title)) + 2 # (<BEGIN> + <END>) # Data splits self.train_df = self.df[self.df.split=='train'] self.train_size = len(self.train_df) self.val_df = self.df[self.df.split=='val'] self.val_size = len(self.val_df) self.test_df = self.df[self.df.split=='test'] self.test_size = len(self.test_df) self.lookup_dict = {'train': (self.train_df, self.train_size), 'val': (self.val_df, self.val_size), 'test': (self.test_df, self.test_size)} self.set_split('train') # Class weights (for imbalances) class_counts = df.category.value_counts().to_dict() def sort_key(item): return self.vectorizer.category_vocab.lookup_token(item[0]) sorted_counts = sorted(class_counts.items(), key=sort_key) frequencies = [count for _, count in sorted_counts] self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32) @classmethod def load_dataset_and_make_vectorizer(cls, df, cutoff): train_df = df[df.split=='train'] return cls(df, NewsVectorizer.from_dataframe(train_df, cutoff)) @classmethod def load_dataset_and_load_vectorizer(cls, df, vectorizer_filepath): vectorizer = cls.load_vectorizer_only(vectorizer_filepath) return cls(df, vectorizer) def load_vectorizer_only(vectorizer_filepath): with open(vectorizer_filepath) as fp: return NewsVectorizer.from_serializable(json.load(fp)) def save_vectorizer(self, vectorizer_filepath): with open(vectorizer_filepath, "w") as fp: json.dump(self.vectorizer.to_serializable(), fp) def set_split(self, split="train"): self.target_split = split self.target_df, self.target_size = self.lookup_dict[split] def __str__(self): return "<Dataset(split={0}, size={1})".format( self.target_split, self.target_size) def __len__(self): return self.target_size def __getitem__(self, index): row = self.target_df.iloc[index] title_vector = self.vectorizer.vectorize(row.title) category_index 
= self.vectorizer.category_vocab.lookup_token(row.category) return {'title': title_vector, 'category': category_index} def get_num_batches(self, batch_size): return len(self) // batch_size def generate_batches(self, batch_size, collate_fn, shuffle=True, drop_last=False, device="cpu"): dataloader = DataLoader(dataset=self, batch_size=batch_size, collate_fn=collate_fn, shuffle=shuffle, drop_last=drop_last) for data_dict in dataloader: out_data_dict = {} for name, tensor in data_dict.items(): out_data_dict[name] = data_dict[name].to(device) yield out_data_dict # Dataset instance dataset = NewsDataset.load_dataset_and_make_vectorizer(df=split_df, cutoff=args.cutoff) print (dataset) # __str__ title_vector = dataset[5]['title'] # __getitem__ print (title_vector) print (dataset.vectorizer.unvectorize(title_vector)) print (dataset.class_weights) ``` ## Model input → embedding → conv → FC We will be using 1D convolutions ([nn.Conv1d](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv1d)) over the sequence of word embeddings; the filters slide across token positions, since we represent the input at the word level rather than the character level.
The inputs are of shape $X \in \mathbb{R}^{N \times S \times E}$, where: * N = batch size * S = max sentence length * E = embedding dim (word level) ``` import torch.nn as nn import torch.nn.functional as F class NewsModel(nn.Module): def __init__(self, embedding_dim, num_embeddings, num_input_channels, num_channels, hidden_dim, num_classes, dropout_p, pretrained_embeddings=None, freeze_embeddings=False, padding_idx=0): super(NewsModel, self).__init__() if pretrained_embeddings is None: self.embeddings = nn.Embedding(embedding_dim=embedding_dim, num_embeddings=num_embeddings, padding_idx=padding_idx) else: pretrained_embeddings = torch.from_numpy(pretrained_embeddings).float() self.embeddings = nn.Embedding(embedding_dim=embedding_dim, num_embeddings=num_embeddings, padding_idx=padding_idx, _weight=pretrained_embeddings) # Conv weights self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_channels, kernel_size=f) for f in [2,3,4]]) # FC weights self.dropout = nn.Dropout(dropout_p) self.fc1 = nn.Linear(num_channels*3, hidden_dim) self.fc2 = nn.Linear(hidden_dim, num_classes) if freeze_embeddings: self.embeddings.weight.requires_grad = False def forward(self, x_in, channel_first=False, apply_softmax=False): # Embed x_in = self.embeddings(x_in) # Rearrange input so num_channels is in dim 1 (N, C, L) if not channel_first: x_in = x_in.transpose(1, 2) # Conv outputs z1 = self.conv[0](x_in) z1 = F.max_pool1d(z1, z1.size(2)).squeeze(2) z2 = self.conv[1](x_in) z2 = F.max_pool1d(z2, z2.size(2)).squeeze(2) z3 = self.conv[2](x_in) z3 = F.max_pool1d(z3, z3.size(2)).squeeze(2) # Concat conv outputs z = torch.cat([z1, z2, z3], 1) # FC layers z = self.dropout(z) z = self.fc1(z) y_pred = self.fc2(z) if apply_softmax: y_pred = F.softmax(y_pred, dim=1) return y_pred ``` ## Training ``` import torch.optim as optim class Trainer(object): def __init__(self, dataset, model, model_state_file, save_dir, device, shuffle, num_epochs, batch_size, learning_rate, early_stopping_criteria):
self.dataset = dataset self.class_weights = dataset.class_weights.to(device) self.model = model.to(device) self.save_dir = save_dir self.device = device self.shuffle = shuffle self.num_epochs = num_epochs self.batch_size = batch_size self.loss_func = nn.CrossEntropyLoss(self.class_weights) self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate) self.scheduler = optim.lr_scheduler.ReduceLROnPlateau( optimizer=self.optimizer, mode='min', factor=0.5, patience=1) self.train_state = { 'done_training': False, 'stop_early': False, 'early_stopping_step': 0, 'early_stopping_best_val': 1e8, 'early_stopping_criteria': early_stopping_criteria, 'learning_rate': learning_rate, 'epoch_index': 0, 'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': [], 'test_loss': -1, 'test_acc': -1, 'model_filename': model_state_file} def update_train_state(self): # Verbose print ("[EPOCH]: {0} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format( self.train_state['epoch_index'], self.train_state['learning_rate'], self.train_state['train_loss'][-1], self.train_state['train_acc'][-1], self.train_state['val_loss'][-1], self.train_state['val_acc'][-1])) # Save one model at least if self.train_state['epoch_index'] == 0: torch.save(self.model.state_dict(), self.train_state['model_filename']) self.train_state['stop_early'] = False # Save model if performance improved elif self.train_state['epoch_index'] >= 1: loss_tm1, loss_t = self.train_state['val_loss'][-2:] # If loss worsened if loss_t >= self.train_state['early_stopping_best_val']: # Update step self.train_state['early_stopping_step'] += 1 # Loss decreased else: # Save the best model if loss_t < self.train_state['early_stopping_best_val']: torch.save(self.model.state_dict(), self.train_state['model_filename']) # Reset early stopping step self.train_state['early_stopping_step'] = 0 # Stop early ? 
self.train_state['stop_early'] = self.train_state['early_stopping_step'] \ >= self.train_state['early_stopping_criteria'] return self.train_state def compute_accuracy(self, y_pred, y_target): _, y_pred_indices = y_pred.max(dim=1) n_correct = torch.eq(y_pred_indices, y_target).sum().item() return n_correct / len(y_pred_indices) * 100 def pad_seq(self, seq, length): vector = np.zeros(length, dtype=np.int64) vector[:len(seq)] = seq vector[len(seq):] = self.dataset.vectorizer.title_vocab.mask_index return vector def collate_fn(self, batch): # Make a deep copy batch_copy = copy.deepcopy(batch) processed_batch = {"title": [], "category": []} # Get max sequence length max_seq_len = max([len(sample["title"]) for sample in batch_copy]) # Pad for i, sample in enumerate(batch_copy): seq = sample["title"] category = sample["category"] padded_seq = self.pad_seq(seq, max_seq_len) processed_batch["title"].append(padded_seq) processed_batch["category"].append(category) # Convert to appropriate tensor types processed_batch["title"] = torch.LongTensor( processed_batch["title"]) processed_batch["category"] = torch.LongTensor( processed_batch["category"]) return processed_batch def run_train_loop(self): for epoch_index in range(self.num_epochs): self.train_state['epoch_index'] = epoch_index # Iterate over train dataset # initialize batch generator, set loss and acc to 0, set train mode on self.dataset.set_split('train') batch_generator = self.dataset.generate_batches( batch_size=self.batch_size, collate_fn=self.collate_fn, shuffle=self.shuffle, device=self.device) running_loss = 0.0 running_acc = 0.0 self.model.train() for batch_index, batch_dict in enumerate(batch_generator): # zero the gradients self.optimizer.zero_grad() # compute the output y_pred = self.model(batch_dict['title']) # compute the loss loss = self.loss_func(y_pred, batch_dict['category']) loss_t = loss.item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute gradients using loss loss.backward() # 
use optimizer to take a gradient step self.optimizer.step() # compute the accuracy acc_t = self.compute_accuracy(y_pred, batch_dict['category']) running_acc += (acc_t - running_acc) / (batch_index + 1) self.train_state['train_loss'].append(running_loss) self.train_state['train_acc'].append(running_acc) # Iterate over val dataset # initialize batch generator, set loss and acc to 0; set eval mode on self.dataset.set_split('val') batch_generator = self.dataset.generate_batches( batch_size=self.batch_size, collate_fn=self.collate_fn, shuffle=self.shuffle, device=self.device) running_loss = 0. running_acc = 0. self.model.eval() for batch_index, batch_dict in enumerate(batch_generator): # compute the output y_pred = self.model(batch_dict['title']) # compute the loss loss = self.loss_func(y_pred, batch_dict['category']) loss_t = loss.to("cpu").item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute the accuracy acc_t = self.compute_accuracy(y_pred, batch_dict['category']) running_acc += (acc_t - running_acc) / (batch_index + 1) self.train_state['val_loss'].append(running_loss) self.train_state['val_acc'].append(running_acc) self.train_state = self.update_train_state() self.scheduler.step(self.train_state['val_loss'][-1]) if self.train_state['stop_early']: break def run_test_loop(self): # initialize batch generator, set loss and acc to 0; set eval mode on self.dataset.set_split('test') batch_generator = self.dataset.generate_batches( batch_size=self.batch_size, collate_fn=self.collate_fn, shuffle=self.shuffle, device=self.device) running_loss = 0.0 running_acc = 0.0 self.model.eval() for batch_index, batch_dict in enumerate(batch_generator): # compute the output y_pred = self.model(batch_dict['title']) # compute the loss loss = self.loss_func(y_pred, batch_dict['category']) loss_t = loss.item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute the accuracy acc_t = self.compute_accuracy(y_pred, batch_dict['category']) running_acc += 
(acc_t - running_acc) / (batch_index + 1) self.train_state['test_loss'] = running_loss self.train_state['test_acc'] = running_acc def plot_performance(self): # Figure size plt.figure(figsize=(15,5)) # Plot Loss plt.subplot(1, 2, 1) plt.title("Loss") plt.plot(trainer.train_state["train_loss"], label="train") plt.plot(trainer.train_state["val_loss"], label="val") plt.legend(loc='upper right') # Plot Accuracy plt.subplot(1, 2, 2) plt.title("Accuracy") plt.plot(trainer.train_state["train_acc"], label="train") plt.plot(trainer.train_state["val_acc"], label="val") plt.legend(loc='lower right') # Save figure plt.savefig(os.path.join(self.save_dir, "performance.png")) # Show plots plt.show() def save_train_state(self): self.train_state["done_training"] = True with open(os.path.join(self.save_dir, "train_state.json"), "w") as fp: json.dump(self.train_state, fp) # Initialization dataset = NewsDataset.load_dataset_and_make_vectorizer(df=split_df, cutoff=args.cutoff) dataset.save_vectorizer(args.vectorizer_file) vectorizer = dataset.vectorizer model = NewsModel(embedding_dim=args.embedding_dim, num_embeddings=len(vectorizer.title_vocab), num_input_channels=args.embedding_dim, num_channels=args.num_filters, hidden_dim=args.hidden_dim, num_classes=len(vectorizer.category_vocab), dropout_p=args.dropout_p, pretrained_embeddings=None, padding_idx=vectorizer.title_vocab.mask_index) print (model.named_modules) # Train trainer = Trainer(dataset=dataset, model=model, model_state_file=args.model_state_file, save_dir=args.save_dir, device=args.device, shuffle=args.shuffle, num_epochs=args.num_epochs, batch_size=args.batch_size, learning_rate=args.learning_rate, early_stopping_criteria=args.early_stopping_criteria) trainer.run_train_loop() # Plot performance trainer.plot_performance() # Test performance trainer.run_test_loop() print("Test loss: {0:.2f}".format(trainer.train_state['test_loss'])) print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc'])) # Save all results 
trainer.save_train_state() ``` ## Using GloVe embeddings We just used some randomly initialized embeddings and we were able to achieve decent performance. Keep in mind that this may not always be the case and we may overfit on other datasets with this approach. We're now going to use pretrained GloVe embeddings to initialize our embeddings. We will train our model on the supervised task and assess the performance by first freezing these embeddings (so they don't change during training) and then unfreezing them and allowing them to be trained. ```python pretrained_embeddings = torch.from_numpy(pretrained_embeddings).float() self.embeddings = nn.Embedding(embedding_dim=embedding_dim, num_embeddings=num_embeddings, padding_idx=padding_idx, _weight=pretrained_embeddings) ``` ``` def load_glove_embeddings(embeddings_file): word_to_idx = {} embeddings = [] with open(embeddings_file, "r") as fp: for index, line in enumerate(fp): line = line.split(" ") word = line[0] word_to_idx[word] = index embedding_i = np.array([float(val) for val in line[1:]]) embeddings.append(embedding_i) return word_to_idx, np.stack(embeddings) def make_embeddings_matrix(words): word_to_idx, glove_embeddings = load_glove_embeddings(embeddings_file) embedding_dim = glove_embeddings.shape[1] embeddings = np.zeros((len(words), embedding_dim)) for i, word in enumerate(words): if word in word_to_idx: embeddings[i, :] = glove_embeddings[word_to_idx[word]] else: embedding_i = torch.zeros(1, embedding_dim) nn.init.xavier_uniform_(embedding_i) embeddings[i, :] = embedding_i return embeddings args.use_glove_embeddings = True # Initialization dataset = NewsDataset.load_dataset_and_make_vectorizer(df=split_df, cutoff=args.cutoff) dataset.save_vectorizer(args.vectorizer_file) vectorizer = dataset.vectorizer # Create embeddings embeddings = None if args.use_glove_embeddings: embeddings_file = 'glove.6B.{0}d.txt'.format(args.embedding_dim) words = vectorizer.title_vocab.token_to_idx.keys() embeddings =
make_embeddings_matrix(words=words) print ("<Embeddings(words={0}, dim={1})>".format( np.shape(embeddings)[0], np.shape(embeddings)[1])) # Initialize model model = NewsModel(embedding_dim=args.embedding_dim, num_embeddings=len(vectorizer.title_vocab), num_input_channels=args.embedding_dim, num_channels=args.num_filters, hidden_dim=args.hidden_dim, num_classes=len(vectorizer.category_vocab), dropout_p=args.dropout_p, pretrained_embeddings=embeddings, padding_idx=vectorizer.title_vocab.mask_index) print (model.named_modules) # Train trainer = Trainer(dataset=dataset, model=model, model_state_file=args.model_state_file, save_dir=args.save_dir, device=args.device, shuffle=args.shuffle, num_epochs=args.num_epochs, batch_size=args.batch_size, learning_rate=args.learning_rate, early_stopping_criteria=args.early_stopping_criteria) trainer.run_train_loop() # Plot performance trainer.plot_performance() # Test performance trainer.run_test_loop() print("Test loss: {0:.2f}".format(trainer.train_state['test_loss'])) print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc'])) # Save all results trainer.save_train_state() ``` ## Freeze embeddings Now we're going to freeze our GloVe embeddings and train on the supervised task. 
The only modification in the model is to turn on `freeze_embeddings`: ```python if freeze_embeddings: self.embeddings.weight.requires_grad = False ``` ``` args.freeze_embeddings = True # Initialize model model = NewsModel(embedding_dim=args.embedding_dim, num_embeddings=len(vectorizer.title_vocab), num_input_channels=args.embedding_dim, num_channels=args.num_filters, hidden_dim=args.hidden_dim, num_classes=len(vectorizer.category_vocab), dropout_p=args.dropout_p, pretrained_embeddings=embeddings, freeze_embeddings=args.freeze_embeddings, padding_idx=vectorizer.title_vocab.mask_index) print (model.named_modules) # Train trainer = Trainer(dataset=dataset, model=model, model_state_file=args.model_state_file, save_dir=args.save_dir, device=args.device, shuffle=args.shuffle, num_epochs=args.num_epochs, batch_size=args.batch_size, learning_rate=args.learning_rate, early_stopping_criteria=args.early_stopping_criteria) trainer.run_train_loop() # Plot performance trainer.plot_performance() # Test performance trainer.run_test_loop() print("Test loss: {0:.2f}".format(trainer.train_state['test_loss'])) print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc'])) # Save all results trainer.save_train_state() ``` So you can see that using GloVe embeddings without freezing them yielded the best performance on the test dataset. Different tasks will yield different results, so choose whether or not to freeze your embeddings based on empirical evidence.
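A quick sanity check that freezing actually took effect is to count trainable parameters: the embedding matrix should drop out of the set the optimizer updates. A sketch with toy sizes (not the model above):

```python
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)  # 40 weights
fc = nn.Linear(4, 2)                                    # 8 weights + 2 biases
model = nn.Sequential(emb, fc)

emb.weight.requires_grad = False  # freeze, as with freeze_embeddings=True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(total, trainable)  # 50 10: only the Linear layer remains trainable
```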
**Lucas-Kanade optical flow** ``` import cv2 as cv import numpy as np # Shi-Tomasi corner detection parameters feature_params = dict(maxCorners = 300, qualityLevel = 0.2, minDistance = 2, blockSize = 7) # Lucas-Kanade optical flow parameters lk_params = dict(winSize = (15,15), maxLevel = 2, criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03)) # Input video cap = cv.VideoCapture("video1.mp4") # color to draw flow tracks color = (0, 255, 0) # read the first frame of the video ret, first_frame = cap.read() # Converts frame to grayscale because we only need the luminance channel for detecting edges prev_gray = cv.cvtColor(first_frame, cv.COLOR_BGR2GRAY) # detect good feature points to track prev = cv.goodFeaturesToTrack(prev_gray, mask = None, **feature_params) mask = np.zeros_like(first_frame) while(cap.isOpened()): ret, frame = cap.read() gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY) # compute sparse optical flow next, status, error = cv.calcOpticalFlowPyrLK(prev_gray, gray, prev, None, **lk_params) good_old = prev[status == 1] # positions of previously tracked points good_new = next[status == 1] # positions of newly tracked points # draw flow tracks for i, (new, old) in enumerate(zip(good_new, good_old)): a, b = new.ravel() c, d = old.ravel() mask = cv.line(mask, (a, b), (c, d), color, 1) frame = cv.circle(frame, (a, b), 2, color, -1) output = cv.add(frame, mask) prev_gray = gray.copy() prev = good_new.reshape(-1, 1, 2) # show output cv.imshow("sparse optical flow", output) # exit when 'q' is pressed if cv.waitKey(10) & 0xFF == ord('q'): break # release resources cap.release() cv.destroyAllWindows() ``` **Dense Optical Flow** ``` import cv2 as cv import numpy as np # input video cap = cv.VideoCapture("video.mp4") # read the first frame of the video ret, first_frame = cap.read() prev_gray = cv.cvtColor(first_frame, cv.COLOR_BGR2GRAY) mask = np.zeros_like(first_frame) # Sets image saturation to maximum mask[..., 1] = 255 while(cap.isOpened()): ret, frame = cap.read() # window shows original video frames cv.imshow("input", frame)
# converts each frame to grayscale gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY) # dense optical flow flow = cv.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0) magnitude, angle = cv.cartToPolar(flow[..., 0], flow[..., 1]) # Sets image hue according to the optical flow direction mask[..., 0] = angle * 180 / np.pi / 2 # Sets image value according to the optical flow magnitude mask[..., 2] = cv.normalize(magnitude, None, 0, 255, cv.NORM_MINMAX) # Converts HSV to RGB (BGR) color representation rgb = cv.cvtColor(mask, cv.COLOR_HSV2BGR) # window shows dense optical flow output cv.imshow("dense optical flow", rgb) prev_gray = gray # exit when 'q' is pressed if cv.waitKey(1) & 0xFF == ord('q'): break # release resources cap.release() cv.destroyAllWindows() ```
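The HSV encoding used above maps flow angle to hue and flow magnitude to value. What `cv.cartToPolar` computes can be reproduced in plain NumPy; a single-pixel sketch (toy flow vector, not taken from a real video):

```python
import numpy as np

flow = np.array([[[3.0, 4.0]]])  # one pixel displaced by (dx=3, dy=4)
magnitude = np.sqrt(flow[..., 0]**2 + flow[..., 1]**2)
angle = np.arctan2(flow[..., 1], flow[..., 0])  # radians, like cartToPolar's default

hue = angle * 180 / np.pi / 2  # OpenCV hue lives in [0, 180)
print(float(magnitude[0, 0]), round(float(hue[0, 0]), 2))  # 5.0 26.57
```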
``` import scipy.stats as sstats import numpy as np import matplotlib.pyplot as plt import ipywidgets as wid import progressbar plt.rcParams['font.size'] = 18 ``` # Problem Setup - We have a one-dimensional input space $\Lambda \subset \mathcal{R}$ - we choose some nominal value $\lambda_0$ to represent a true parameter that we attempt to identify - we map this value to the output space and perturb it with noise from a mean-zero gaussian distribution with a fixed standard deviation - we use an initial distribution that is gaussian - we propagate $N$ samples from this distribution, so the output matrix has each sample in a column. - Let $M$ denote the number of observations made of an experiment (number of trials/repetitions) - these are rows in our output matrix. - We define our map $A: \mathcal{R} \to \mathcal{R}^M$ to repeat the values and represent multiple trials ## Exponential Decay ``` lam_0 = 0.25 def makemodel(t): def model(lam = np.array([[lam_0]]) ): QoI = lam[0,:].reshape(-1,1)*np.exp(-0.5*t) return QoI.T return model ``` ## Harmonic Oscillator ``` lam_0 = 0.25 def makemodel(t): def model(lam = np.array([[lam_0]]) ): QoI = (0.25 - lam[0,:].reshape(-1,1))*np.cos(np.pi*t) return QoI.T return model # def makemodel(t): # num_obs = len(t) # t = t.reshape(1,-1) # def model(lam = None): # if lam is None: # lam = np.zeros((1,3)) # lam[:,0] = initial_condition_0 # lam[:,1] = decay_rate_0 # lam[:,2] = frequency_0 # initial_condition = lam[:,0].reshape(-1,1) # decay_rate = lam[:,1].reshape(-1,1) # frequency = lam[:,2].reshape(-1,1) # QoI = initial_condition*np.exp(-decay_rate*t) # QoI *= np.cos(np.multiply(0.5*frequency*t, 2*np.pi)) # if QoI.shape[0] == 1: # return QoI.reshape(1,-1) # this allows support for simpler 1D plotting.
# else: # return QoI # return model ``` # All in One ``` def solve_problem(N = 1000, prior_mean = 0.0, prior_std = 0.25, M = 10, data_std = 0.0125, lam_true = lam_0, time = (1,5), seed=None, plot=True): start_time, end_time = time # unpack some parameters N, M = int(N), int(M) # enforce types if seed is not None: seed = int(seed) np.random.seed(seed) t = np.linspace(start_time, end_time, M) ed_model = makemodel(t) def model(input_samples): # output_samples = A@input_samples # matrix multiplication output_samples = ed_model(input_samples) return output_samples true_data = model(np.array([[lam_true]])) observed_data = true_data + data_std*np.random.randn(M).reshape(-1,1) obs_data_mean, obs_data_std = np.mean(observed_data), np.std(observed_data) # print('Stats on observed data:', 'mean:', obs_data_mean, 'sd:', obs_data_std) initial_dist = sstats.distributions.norm(loc=prior_mean, scale=prior_std) # initial_dist = sstats.distributions.uniform(loc=0, scale=0.5) # PLOTTING PARAMETERS mesh_sz = 2500 initial_eval_mesh = np.linspace(-1, 1, mesh_sz) if plot: # plt.figure(figsize=(10,5)) # plt.scatter(true_data[0],[1], c='r', s=100, label='truth') # plt.scatter(observed_data,np.ones(M), label='data') # plt.legend() # plt.xlim(0.15, 0.35) fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(20,15)) ### VISUALIZE INITIAL VS DATA OBSERVED axs[0,0].plot(initial_eval_mesh, initial_dist.pdf(initial_eval_mesh),c='k',lw=5) for i in range(M): sample = observed_data[i] temp_dist = sstats.distributions.norm(loc=sample, scale=data_std) axs[0,0].plot(initial_eval_mesh, temp_dist.pdf(initial_eval_mesh)) # plt.xlim(0.15, 0.3) axs[0,0].set_ylim(0,2) axs[0,0].set_title("Initial Density and observed data with uncertainty") # generate input samples and map them to data space input_samples = initial_dist.rvs(N).reshape(1,-1) if plot: print('input sample shape:', input_samples.shape) # define map and output space output_samples = model(input_samples) if plot: print('output sample shape:', 
output_samples.shape) def loss_fun(output_samples): # return (1./M)*np.sum( np.power(np.divide(output_samples - observed_data, data_std), 2), axis=0) # return (1./np.sqrt(2*M))*np.sum( np.power(np.divide(output_samples - observed_data, data_std) , 2) - 1.0, axis=0) return (1./data_std)*(1./np.sqrt(M))*np.sum( output_samples - observed_data, axis=0) qoi = loss_fun(output_samples) ### Define Pushforward of Initial - choose method # FIT PF # a, l, s = sstats.distributions.gamma.fit(qoi) # # print(a, l, s) # gamma_fit = sstats.distributions.gamma(a=a,loc=l,scale=s) # d, l, s = sstats.distributions.chi2.fit(qoi) # chi2_fit = sstats.distributions.chi2(df=d,loc=l,scale=s) # COMPUTE ESTIMATE OF PUSHFORWARD DISTRIBUTION gkde_fit = sstats.gaussian_kde(qoi) def pf_initial_dist_PDF(x): # return chi2_fit.pdf(x) # return gamma_fit.pdf(x) return gkde_fit.evaluate(x) eval_pf_initial = pf_initial_dist_PDF(qoi) # print('Pushforward of Initial Distribution computed. shape:', eval_pf_initial.shape) # Define Observed Distribution obs_dist = sstats.distributions.norm() # obs_dist = sstats.distributions.gamma(a=M/2.0, scale=2.0/M) # obs_dist = sstats.distributions.chi2(df=M) eval_obs = obs_dist.pdf(qoi) if plot: num_bins = 100 print('Loss fun min:', qoi.min(), 'Loss fun max:', qoi.max()) # x_eval = np.linspace(-1000,1000,mesh_sz*5) x_eval = np.linspace(qoi.min(), qoi.max(), mesh_sz*5) # gamma_eval = gamma_fit.pdf(x_eval) # chi2_eval = chi2_fit.pdf(x_eval) gkde_eval = gkde_fit.evaluate(x_eval) axs[0,1].hist(qoi, num_bins, density=True) # axs[0,1].plot(x_eval, gamma_eval, c='b', label='gamma fit') # axs[0,1].plot(x_eval, chi2_eval, c='r', label='chi2 fit') axs[0,1].plot(x_eval, gkde_eval, '--',c='r', label='gkde fit') axs[0,1].plot(x_eval, obs_dist.pdf(x_eval), c='k', label = 'observed') axs[0,1].set_xlim(-5,5) axs[0,1].set_ylim(0,np.max(gkde_eval)) axs[0,1].legend() ### SOLVE INVERSE PROBLEM eval_initial = initial_dist.pdf(input_samples) # print('MIN oF PF_INPUT_EVAL:.', 
np.min(eval_pf_initial)) ratio = np.divide(eval_obs, eval_pf_initial) # COMPUTE RATIO updated_dist = eval_initial*ratio # EVALUATE UPDATED DENSITY ON INPUT SAMPLES rn = np.random.rand(N) accepted_inds = [i for i in range(N) if ratio[i] > rn[i]] def eval_updated(x): # takes input sample and evaluates it through the updated density y = loss_fun(model(x.reshape(1,-1))) return initial_dist.pdf(x)*np.divide(obs_dist.pdf(y), pf_initial_dist_PDF(y)) input_samples = input_samples.ravel() # reshape 1D vectors for easier access updated_dist = updated_dist.ravel() updated_dist_eval_at_truth = eval_updated(np.array([lam_true])) updated_eval_mesh = np.linspace(0.2, 0.3, mesh_sz) updated_dist_evaluated_on_mesh = eval_updated(updated_eval_mesh) max_input_sample_index = np.argmax(updated_dist) if lam_true != 0: rel_error_mc = np.abs( (input_samples[max_input_sample_index] - lam_true )/lam_true ) else: rel_error_mc = np.abs( (input_samples[max_input_sample_index] - lam_true )/1.0 ) error_mc = np.mean(np.power(output_samples[:,max_input_sample_index] - observed_data,2)) # VISUALIZE UPDATED DENSITY max_eval_sample_index = np.argmax(updated_dist_evaluated_on_mesh) if lam_true != 0: rel_error_mesh = np.abs( (updated_eval_mesh[max_eval_sample_index] - lam_true )/lam_true ) else: rel_error_mesh = np.abs( (updated_eval_mesh[max_eval_sample_index] - lam_true )/1.0 ) print('warning: the relative error computed is actually absolute error b/c lam_true = 0.') # PLOT RESULTS AND OBSERVED DATA if plot: print('Ratio computed. 
Mean:', np.mean(ratio)) print('data space predictive error: ', error_mc) print('parameter space error: ', rel_error_mc) axs[1,1].vlines(lam_true, 0, updated_dist_eval_at_truth, color='b', label='true value') axs[1,1].plot(updated_eval_mesh, updated_dist_evaluated_on_mesh, c='k', label='updated eval, mesh: %d'%mesh_sz) axs[1,1].scatter(input_samples[max_input_sample_index], updated_dist[max_input_sample_index], c='b', s=250, label='Max (MC), RE:%1.2e'%rel_error_mc) # axs[1,1].scatter(0.25, updated_dist_eval_at_truth, s=25, label='density val @ truth: %2.4f'%updated_dist_eval_at_truth) axs[1,1].scatter(updated_eval_mesh[max_eval_sample_index], updated_dist_evaluated_on_mesh[max_eval_sample_index], c='orange', s=200, label='Max (mesh), RE:%1.2e'%rel_error_mesh) # axs[1,1].set_xlim(0.15, 0.35) axs[1,1].set_xlim(0.2,0.3) # axs[1,1].set_ylim(0, 200) axs[1,1].set_title('Updated Density evaluated') if plot: # axs[1,0].scatter(input_samples[0,accepted_inds], updated_dist[0,accepted_inds]) if len(accepted_inds) > 1: # KDE of accepted samples - an estimate of updated distribution g = sstats.gaussian_kde(input_samples[accepted_inds]) axs[1,0].plot(initial_eval_mesh, g.evaluate(initial_eval_mesh), label='gkde of %d accepted'%len(accepted_inds), c='b', lw=2) # for i in range(M-1): # plot the assumed noise distribution around each data point. 
# sample = observed_data[i] # temp_dist = sstats.distributions.norm(loc=sample, scale=data_std) # axs[1,0].plot(initial_eval_mesh, temp_dist.pdf(initial_eval_mesh), c='k', alpha=0.1) axs[1,0].plot(initial_eval_mesh, sstats.distributions.norm.pdf(loc=observed_data[M-1], scale=data_std, x=initial_eval_mesh), c='orange', alpha=0.75, label='observations') # OBSERVED DATA if len(accepted_inds) > 1: # SCATTER ACCEPTED SAMPLES sample = input_samples[accepted_inds[0]] axs[1,0].scatter(sample, g.evaluate(sample), c='red', alpha=1, label='accepted samples') for i in range(1,len(accepted_inds)): sample = input_samples[accepted_inds[i]] axs[1,0].scatter(sample, g.evaluate(sample), c='red', alpha=1) axs[1,1].scatter(input_samples[accepted_inds], updated_dist[accepted_inds], c='red', s=50, label='accepted') axs[1,1].scatter(input_samples, updated_dist, c='k', s=10, label='initial samples') reference_dist = sstats.distributions.norm(loc=lam_true, scale=data_std) ref_dist_eval = reference_dist.pdf(initial_eval_mesh) # axs[1,0].plot(initial_eval_mesh, ref_dist_eval, label='N(%2.4f, %2.5f$^2$)'%(lam_true, data_std), c='green', lw=3, alpha=1.0) axs[1,0].vlines(lam_true, 0, np.max(ref_dist_eval)) if M == -1: # can't compute std on sample size of 1, this makes no sense for timeseries approx_dist = sstats.distributions.norm(loc=obs_data_mean, scale=obs_data_std) axs[1,0].plot(initial_eval_mesh, approx_dist.pdf(initial_eval_mesh), label='N(%2.4f, %2.5f$^2$)'%(obs_data_mean, obs_data_std), c='purple', lw=3, alpha=1.0) # axs[1,0].plot(initial_eval_mesh, g.evaluate(initial_eval_mesh), # label='gkde of %d accepted'%len(accepted_inds), c='b') # axs[1,0].set_xlim(0.2,0.3) # axs[1,0].set_xlim(0.15, 0.35) axs[1,0].set_title("Updated Density, observed data with uncertainty") axs[1,0].legend(fontsize=12) axs[1,1].legend(fontsize=12) if plot: plt.show() SUMMARY = { 'N': N, 'M': M, 'seed': seed, 'prior_mean': prior_mean, 'prior_std': prior_std, 'data_std': data_std, 'lam_true': lam_true, 'mud_val': 
input_samples[max_input_sample_index], 'rel_error_mc': rel_error_mc, 'obs_data_mean': obs_data_mean, 'obs_data_std': obs_data_std, 'num_accepted': len(accepted_inds), 'time': [start_time, end_time], 'mean_r': ratio.mean(), 'qoi_min': qoi.min(), 'qoi_max': qoi.max() } if not plot: return SUMMARY, input_samples[accepted_inds], observed_data wid.interact_manual(solve_problem, prior_mean=wid.FloatSlider(min=-0.25, max=0.25, step=0.05, continuous_update=False), prior_std=wid.FloatSlider(value=0.25, min=0.125, max=0.5, step=0.125, readout_format='.2e', continuous_update=False), data_std=wid.FloatSlider(value=0.01, min=0.0025, max=0.0125, step=0.0025, readout_format='.2e', continuous_update=False), N = wid.IntSlider(value=500, min=1000, max=10000, step=1000, continuous_update=False), M = wid.IntSlider(value=1, min=1, max=250, continuous_update=False), time = wid.FloatRangeSlider(value=(1,5), min=1, max=5, step=0.05, continuous_update=False), lam_true = wid.fixed(value=lam_0, continuous_update=False), seed=wid.IntSlider(value=12, min=1, max=21, continuous_update=False), plot = wid.fixed(True)) ``` # Make Predictions ``` def make_predictions(N = 1000, prior_mean = 0.0, prior_std = 0.25, M = 10, data_std = 0.001, lam_true = 0.5, time = (1,5), seed=None, plot=False): S, I, O = solve_problem(N, prior_mean, prior_std, M, data_std, lam_true, time, seed, plot) start_time, end_time = S['time'] t = np.linspace(start_time, end_time, S['M']) tt = np.linspace(0, 10, 1000) model = makemodel(tt) u_acc = model(I.reshape(1,-1)) obs_data = O plt.figure(figsize=(20,10)) dd = np.mean(u_acc, axis=1) plt.plot(tt, dd, c='k', alpha=1, lw=5, label='Mean Predicted Signal') for i in range(len(I)): d = u_acc[:,i] if i==1: plt.plot(tt, d, c='b', alpha=0.25, lw=1, label='Accepted Samples') else: plt.plot(tt, d, c='b', alpha=0.05, lw=1) plt.scatter(t, obs_data, marker='o', c='r', s=50, alpha=1, label='Observed Data') plt.plot(tt, model(np.array(S['mud_val']).reshape(-1,1)), c='green', lw=3, label='MUD 
prediction') plt.plot(tt, model(), ls=':', c='k', lw=3, label='true signal') plt.ylabel('Height', fontsize=18) plt.xlabel('Time (s)', fontsize=18) plt.title('Recovered Signal based on Accepted Samples', fontsize=28) plt.legend(fontsize=18, loc='upper left') plt.xlim([0,5+.05]) plt.ylim([-0.05,0.05]) # plt.hlines(np.mean(I),0,5) # plt.savefig('recovered{}.png'.format(problem.upper())) plt.legend() plt.show() return None wid.interact_manual(make_predictions, prior_mean=wid.FloatSlider(min=-0.25, max=0.25, step=0.05, continuous_update=False), prior_std=wid.FloatSlider(value=0.25, min=0.125, max=0.5, step=0.125, readout_format='.2e', continuous_update=False), data_std=wid.FloatSlider(value=0.01, min=0.0025, max=0.05, step=0.0025, readout_format='.2e', continuous_update=False), N = wid.IntSlider(value=500, min=500, max=10000, step=1000, continuous_update=False), M = wid.IntSlider(value=1, min=1, max=50, continuous_update=False), time = wid.FloatRangeSlider(value=(1,5), min=1, max=5, step=0.05, continuous_update=False), lam_true = wid.fixed(value=lam_0, continuous_update=False), seed=wid.fixed(None), # seed=wid.IntSlider(value=12, min=1, max=21, continuous_update=False), plot = wid.fixed(False)) ``` ## Conduct Experiments ``` def init_data_vec(): DATA = {'M': [], 'N': [], 'data_std': [], 'lam_true': [], 'mean_r': [], 'qoi_max': [], 'qoi_min': [], 'num_accepted': [], 'obs_data_mean': [], 'obs_data_std': [], 'prior_mean': [], 'prior_std': [], 'rel_error_mc': [], 'mud_val': [], 'time': [], 'seed': []} return DATA def append_summary(SUMMARY, DATA): for k in SUMMARY.keys(): DATA[k].append(SUMMARY[k]) N_ = 1000 # N_list = [50*2**n for n in range(8)] data_std_ = 0.01 # M_list = [2,3,4,5,6,7,8,9, *np.arange(1,21)*5 ] M_list = [5, 10, 25, 50, 100, 200, 400] num_repeats = 10 # (number of random seeds tried) lam_true = 0.25 prior_mean = 0 prior_std = 0.5 # data_std = np.round([0.001*2**n for n in np.linspace(0,np.log2(100),5)],12).ravel() seed_list = 
np.random.randint(2,10000,num_repeats) assert len(np.unique(seed_list)) == len(seed_list) # ensure no repetitions # ax = plt.subplot(1,1,1) DDD = {} for seed_ in progressbar.progressbar(seed_list): DD = init_data_vec() for M_ in M_list: SMRY = solve_problem(time = (1.4,4.35), N=N_, M=M_, seed=seed_, data_std=data_std_, lam_true=lam_true, prior_mean=prior_mean, prior_std=prior_std, plot=False) append_summary(SMRY[0], DD) # print(, DD['rel_error_mc']) DDD[str(seed_)] = DD print('Done running.') plt.figure(figsize=(20,10)) temp_val = np.zeros(len(M_list)) for seed_ in seed_list: temp_val += DDD[str(seed_)]['rel_error_mc'] # plt.semilogy(DDD[str(seed_)]['M'], DDD[str(seed_)]['rel_error_mc'], color='k', alpha=0.5) plt.loglog(M_list,temp_val/len(seed_list), label='Mean of %d trials'%(len(seed_list)) ) plt.loglog(M_list, 0.1/np.sqrt(np.array(M_list)), label='$0.1/ \sqrt{M}$') # plt.ylim([1E-1, 2]) plt.ylabel('Relative Absolute Error (in $\Lambda$)', fontsize=24) plt.xlabel('Number of Observations', fontsize=24) plt.legend() plt.show() ```
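To make the map $A$ from the problem setup concrete outside the notebook's plumbing, here is a minimal plain-Python sketch (names like `make_decay_model` are ours, and `math` stands in for the NumPy version above): one parameter value is pushed through the decay model at $M$ observation times, producing one column of the output matrix.

```python
import math

def make_decay_model(times):
    """Return a model mapping a scalar parameter lam to M outputs,
    one per observation time: QoI_i = lam * exp(-0.5 * t_i)."""
    def model(lam):
        return [lam * math.exp(-0.5 * t) for t in times]
    return model

times = [1.0, 2.0, 3.0, 4.0]      # M = 4 observation times between t = 1 and t = 5
model = make_decay_model(times)
outputs = model(0.25)             # map the nominal value lam_0 = 0.25
print(outputs)                    # a decaying sequence of 4 outputs
```

Stacking such columns for $N$ sampled parameters gives exactly the $M \times N$ output matrix the bullets describe.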
github_jupyter
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below: ``` NAME = "" COLLABORATORS = "" ``` --- <!--NOTEBOOK_HEADER--> *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks); content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* <!--NAVIGATION--> < [Introduction to Folding](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.00-Introduction-to-Folding.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Low-Res Scoring and Fragments](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.02-Low-Res-Scoring-and-Fragments.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.01-Basic-Folding-Algorithm.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a> # Basic Folding Algorithm Keywords: pose_from_sequence(), random move, scoring move, Metropolis, assign(), Pose() ``` # Notebook setup import sys if 'google.colab' in sys.modules: !pip install pyrosettacolabsetup import pyrosettacolabsetup pyrosettacolabsetup.mount_pyrosetta_install() print ("Notebook is set for PyRosetta use in Colab. Have fun!") from pyrosetta import * from pyrosetta.teaching import * init() ``` ## Building the Pose In this workshop, you will be folding a 10 residue protein by building a simple de novo folding algorithm. Start by initializing PyRosetta as usual. Create a simple poly-alanine `pose` with 10 residues for testing your folding algorithm. 
Store the pose in a variable called "polyA." ``` # YOUR CODE HERE raise NotImplementedError() polyA.pdb_info().name("polyA") ``` __Question:__ Check the backbone dihedrals of a few residues (except the first and last) using the `.phi()` and `.psi()` methods in `Pose`. What are the values of $\phi$ and $\psi$ dihedrals? You should see ideal bond lengths and angles, but the dihedrals may not be as realistic. ``` # YOUR CODE HERE raise NotImplementedError() ``` OPTIONAL: We may want to visualize folding as it happens. Before starting with the folding protocol, instantiate a PyMOL mover and use a UNIQUE port number between 10,000 and 65,536. We will retain history in order to view the entire folding process by utilizing the `.keep_history()` method. Make sure it says `PyMOL <---> PyRosetta link started!` on its command line. ``` pmm = PyMOLMover() pmm.keep_history(True) ``` Use the PyMOL mover to view the `polyA` `Pose`. You should see a long thread-like structure in PyMOL. ``` pmm.apply(polyA) ``` ## Building A Basic *de Novo* Folding Algorithm Now, write a program that implements a Monte Carlo algorithm to optimize the protein conformation. You can do this here in the notebook, or you may use a code editor to write a `.py` file and execute in a Python or iPython shell. Our main program will include 100 iterations of making a random trial move, scoring the protein, and accepting/rejecting the move. Therefore, we can break this algorithm down into three smaller subroutines: **random, score, and decision.** ### Step 1: Random Move For the **random** trial move, write a subroutine to choose one residue at random using `random.randint()` and then randomly perturb either the φ or ψ angles by a random number chosen from a Gaussian distribution. Use the Python built-in function `random.gauss()` from the `random` library with a mean of the current angle and a standard deviation of 25°. After changing the torsion angle, use `pmm.apply(polyA)` to update the structure in PyMOL. 
``` import math import random def randTrial(your_pose): # YOUR CODE HERE raise NotImplementedError() return your_pose ``` ### Step 2: Scoring Move For the **scoring** step, we need to create a scoring function and make a subroutine that simply returns the numerical energy score of the pose. ``` sfxn = get_fa_scorefxn() def score(your_pose): # YOUR CODE HERE raise NotImplementedError() ``` ### Step 3: Accepting/Rejecting Move For the **decision** step, we need to make a subroutine that either accepts or rejects the new conformation based on the Metropolis criterion. When $\Delta E \geq 0$, the Metropolis criterion probability of accepting the move is $P = \exp( -\Delta E / kT )$. When $\Delta E < 0$, the Metropolis criterion probability of accepting the move is $P = 1$. Use $kT = 1$ Rosetta Energy Unit (REU). ``` def decision(before_pose, after_pose): # YOUR CODE HERE raise NotImplementedError() ``` ### Step 4: Execution Now we can put these three subroutines together in our main program! Write a loop in the main program so that it performs 100 iterations of: making a random trial move, scoring the protein, and accepting/rejecting the move. After each iteration of the search, output the current pose energy and the lowest energy ever observed. **The final output of this program should be the lowest energy conformation that is achieved at *any* point during the simulation.** Be sure to use `low_pose.assign(pose)` rather than `low_pose = pose`, since the latter will only copy a pointer to the original pose. ``` def basic_folding(your_pose): """Your basic folding algorithm that completes 100 Monte-Carlo iterations on a given pose""" lowest_pose = Pose() # Create an empty pose for tracking the lowest energy pose. # YOUR CODE HERE raise NotImplementedError() return lowest_pose ``` Finally, output the last pose and the lowest-scoring pose observed and view them in PyMOL. 
Plot the energy and lowest-energy observed vs. cycle number. What are the energies of the initial, last, and lowest-scoring pose? Is your program working? Has it converged to a good solution? ``` basic_folding(polyA) ``` Here's an example of the PyMOL view: ``` from IPython.display import Image Image('./Media/folding.gif',width='300') ``` ### Exercise 1: Comparing to Alpha Helices Using the program you wrote for Workshop #2, force the $A_{10}$ sequence into an ideal α-helix. **Questions:** Does this helical structure have a lower score than that produced by your folding algorithm above? What does this mean about your sampling or discrimination? ### Exercise 2: Optimizing Algorithm Since your program is a stochastic search algorithm, it may not produce an ideal structure consistently, so try running the simulation multiple times or with a different number of cycles (if necessary). Using a kT of 1, your program may need to make up to 500,000 iterations. <!--NAVIGATION--> < [Introduction to Folding](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.00-Introduction-to-Folding.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Low-Res Scoring and Fragments](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.02-Low-Res-Scoring-and-Fragments.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.01-Basic-Folding-Algorithm.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
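As a hedged illustration of the Metropolis acceptance rule used above — pure Python on a bare energy difference, not a PyRosetta `decision` solution — the rule with $kT = 1$ REU might be sketched as:

```python
import math
import random

def metropolis_accept(energy_before, energy_after, kT=1.0):
    """Accept a move that lowers the energy; otherwise accept
    with probability exp(-dE / kT) (Metropolis criterion)."""
    dE = energy_after - energy_before
    if dE < 0:
        return True
    return random.random() < math.exp(-dE / kT)

# A downhill move is always accepted...
assert metropolis_accept(10.0, 5.0)

# ...while an uphill move of dE = 2 is accepted roughly exp(-2) ~ 13.5% of the time
random.seed(0)
n_accepted = sum(metropolis_accept(0.0, 2.0) for _ in range(10_000))
print(n_accepted / 10_000)  # close to exp(-2)
```

In the actual workshop code, `energy_before` and `energy_after` would come from scoring the two poses with `sfxn`.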
github_jupyter
# Testing our first LSTM text generation model Here we'll recreate the same network that we trained on our entire YelpNYC dataset, load in the token dictionary that corresponds to the softmax layer on the output of the network, load in the best weights from our training session, and actually generate some reviews! ``` import keras from keras import layers import sys import numpy as np chars=['\n', ' ', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '=', '?', '@', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '[', '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '{', '|', '}', '~'] char_indices = dict((char, chars.index(char)) for char in chars) maxlen=60 step=1 model = keras.models.Sequential() model.add(layers.LSTM(1024, input_shape=(maxlen, len(chars)),return_sequences=True)) model.add(layers.LSTM(1024, input_shape=(maxlen, len(chars)))) model.add(layers.Dense(len(chars), activation='softmax')) model.load_weights("Mar-4-all-01-1.0843.hdf5") optimizer = keras.optimizers.Adam(lr=0.0002) model.compile(loss='categorical_crossentropy', optimizer=optimizer) ``` This utility, plot_model, is nice for visualizing our network and seeing the I/O sizes of the layers. As we can see, our output node shape is 94, corresponding to a probability value for each of the 94 tokens in the above dictionary. Depending on the temperature, we will choose one of these 94 tokens as our next token. ``` from keras.utils import plot_model plot_model(model, to_file='model.png', show_shapes=True) from IPython.display import Image Image(filename='model.png') ``` Here we define a sampling function. 
It takes in our output layer, preds, and based on the temperature will add some randomness into the mix, enabling it to be a bit crazier in its decisions and not always make the most obvious choice. At low temperature values the output tends to fall into repetitive loops; at high temperature values it produces nearly complete nonsense. ``` def sample(preds, temperature=1.0): ''' Generate some randomness with the given preds which is a list of numbers, if the temperature is very small, it will always pick the index with highest pred value ''' preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) ``` Now we define our review generation function. We define a seed sentence; the model picks a random spot in it to start from, takes a chunk of the seed text out, and then predicts one character per iteration by passing the current chunk back into the network. The sample function returns an index corresponding to a dictionary character, which we print, append to the chunk, and pass back in. ``` text = 'This is a starter seed. The model will pick a random place in this sentence to start from, and then begin predicting the next character.' LEN_SEQUENCES = 1000 def gen_reviews(): start_index = np.random.randint(0, len(text) - maxlen - 1) generated_text = text[start_index: start_index + maxlen] for temperature in [0.1, 0.3, 0.5, 0.8, 1.0]: print('Temperature: ', temperature) sys.stdout.write(generated_text) for i in range(LEN_SEQUENCES): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. 
preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] sys.stdout.write(next_char) sys.stdout.flush() print(generated_text) gen_reviews() ``` This is awesome to see. The very first line shows the 60-character chunk of the seed sentence, but from there the network goes off and does its own thing. There are of course some issues: - The data needs to be cleaned more (there are a lot of instances of \302\240 and \n) - The network needs to be trained over multiple epochs - The layers need to be implemented manually so we can access the logits for use in a GAN However, for a from-scratch generative model, I'm extremely impressed with how this performed.
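The effect of temperature in `sample` can be seen on a toy distribution without the trained network. This sketch applies the same log / divide-by-temperature / softmax reweighting in plain Python (the three-token `preds` below is made up for illustration):

```python
import math

def reweight(preds, temperature):
    """Apply the same transform as sample(): take logs, divide by
    the temperature, then renormalize with a softmax."""
    logs = [math.log(p) / temperature for p in preds]
    exps = [math.exp(x) for x in logs]
    total = sum(exps)
    return [x / total for x in exps]

preds = [0.5, 0.3, 0.2]       # toy "softmax output" over 3 tokens
print(reweight(preds, 0.1))   # low T: nearly all mass on the top token
print(reweight(preds, 1.0))   # T = 1: distribution is unchanged
print(reweight(preds, 10.0))  # high T: close to uniform
```

This is why low temperatures repeat the obvious choice (and loop), while high temperatures pick rare tokens often enough to produce nonsense.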
github_jupyter
``` import purly import webbrowser # if you're running on mybinder.org # localhost does not return 127.0.0.1 from example_utils import localhost ``` # A Source Of Truth The core problem that Purly solves is data synchronization between clients - in order to allow Python to control a webpage's Document Object Model (DOM), both Python and the webpage need to share their states with each other. Purly accomplishes this by running a server which acts as a "source of truth" for its clients. This server can be run in its own process via `Machine.run`, but to get started right away we'll use `Machine.daemon`, which runs a subprocess. ``` purly.state.Machine().daemon() ``` To start using the Purly model server we'll need to specify the name of the model (``model_name``) our client will view and update. We'll also need to get a URL that we can use to connect to the server. There are two relevant routes. ``` model_name = "/example-model" webpage_url = localhost('http', 8000) + '/model' + model_name + '/assets/index.html' websocket_url = localhost('ws', 8000) + '/model' + model_name + '/stream' print('Get a webpage that streams layout updates via a websocket:') print(webpage_url) print() print('Websockets use this route to stream layout updates:') print(websocket_url) ``` # Making Layouts ``` layout = purly.Layout(websocket_url) div = layout.html('div') layout.children.append(div) div.style.update(height='20px', width='20px', backgroundColor='red') layout.sync() ``` Now that you've made a layout, you need to sync it with the model server with `Layout.sync`. # Display Output Since we're trying to create visual results in the browser we need to show our work. There are a couple ways to do this: 1. **In the notebook**: 1. Display the `Layout` object in a cell. 2. Use `purly.display.output` with a websocket route. 2. **In your browser**: 1. Open a new window and go to `http://my-model-server/<model>`. 
``` layout if not webbrowser.open(webpage_url): print("Open up a new browser window at %s" % webpage_url) ``` # Realtime Updates Because all the displays above are connected to the same model on the same server, they can all be synced in realtime! ``` div.style['backgroundColor'] = 'blue' layout.sync() ``` Check out the webpage too. Even it got updated! ``` @div.on('Click') def toggle(): if div.style['backgroundColor'] == 'blue': div.style['backgroundColor'] = 'red' else: div.style['backgroundColor'] = 'blue' layout.serve() ```
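The `@div.on('Click')` pattern above — registering a callback for an event name via a decorator — can be sketched independently of Purly. The `EventTarget` class below is a hypothetical stand-in for illustration, not Purly's actual API:

```python
class EventTarget:
    """Minimal decorator-based event registry, in the spirit of
    element.on(...) above (illustrative only, not Purly's API)."""
    def __init__(self):
        self._handlers = {}

    def on(self, event):
        # Returns a decorator that registers the function under `event`
        def register(fn):
            self._handlers.setdefault(event, []).append(fn)
            return fn
        return register

    def dispatch(self, event):
        # Calls every handler registered for `event`
        for fn in self._handlers.get(event, []):
            fn()

div = EventTarget()
state = {'backgroundColor': 'blue'}

@div.on('Click')
def toggle():
    state['backgroundColor'] = 'red' if state['backgroundColor'] == 'blue' else 'blue'

div.dispatch('Click')            # simulate one click event
print(state['backgroundColor'])  # the color has toggled to 'red'
```

In Purly, the dispatch side is driven by real browser events arriving over the websocket rather than a manual `dispatch` call.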
github_jupyter
# Tutorial 3: Spark Programming Credits and References: - Learning Spark by Holden Karau, Andy Konwinski, Patrick Wendell, and Matei Zaharia (O’Reilly). Copyright 2015 Databricks, 978-1-449-35862-4. - Spark: The Definitive Guide by Bill Chambers and Matei Zaharia (O’Reilly). Copyright 2018 Databricks, Inc., 978-1-491-91221-8. ## 1.- Getting started: The SparkSession and the Spark UI ``` import findspark # Change the path to the Spark folder accordingly findspark.init(spark_home="/home/ubuntu/software/spark-2.2.1-bin-hadoop2.7/") data_folder = "/home/ubuntu/movielens_v2/movielens/" ``` #### Now we can import pyspark ``` import pyspark import numpy as np # We'll be using numpy for some numeric operations import os # pyspark provides a SparkContext sc = pyspark.SparkContext(master="local[*]", appName="tour") # Now you can go to http://localhost:4040/ and see the Spark UI! # Try re-running this line # To try the SparkContext with other masters first stop the one that is already running # sc.stop() ``` - **local**: Run Spark locally with one worker thread (i.e. no parallelism at all). - **local[K]**: Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine). - **local[*]**: Run Spark locally with as many worker threads as logical cores on your machine. - **spark://HOST:PORT**: Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default. - **mesos://HOST:PORT**: Connect to the given Mesos cluster. The port must be whichever one your Mesos master is configured to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://.... To submit with --deploy-mode cluster, the HOST:PORT should be configured to connect to the MesosClusterDispatcher. - **yarn**: Connect to a YARN cluster in client or cluster mode depending on the value of --deploy-mode. The cluster location will be found based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable. 
- **yarn-client**: Equivalent to yarn with --deploy-mode client; `yarn` with `--deploy-mode client` is preferred over `yarn-client` - **yarn-cluster**: Equivalent to yarn with --deploy-mode cluster; `yarn` with `--deploy-mode cluster` is preferred over `yarn-cluster` ## Creating RDDS We saw that we can create RDDs by loading files from disk. We can also create RDDs from Python collections or by transforming other RDDs. ``` help(sc.parallelize) # Creating an RDD from in-memory objects: l_numbers = np.arange(0,100000) numbers = sc.parallelize(l_numbers) # creation of RDD ratings = sc.textFile(os.path.join(data_folder, "ratings.csv")).filter(lambda x: "movie_id" not in x) # load data from a file ``` ## RDD Transformations and Actions There are two types of RDD operations in Spark: **transformations** and **actions**. - Transformations: Create new RDDs from other RDDs. - Actions: Extract information from RDDs and return it to the driver program. ### Transformations ``` help(ratings.map) ratings_splitted = ratings.map(lambda x: x.split(",")) ratings_splitted ``` ### Actions ``` help(ratings.take) help(ratings.collect) help(ratings.count) ratings_splitted_top = ratings_splitted.take(5) ratings_splitted_res = ratings_splitted.collect() ratings_count = ratings_splitted.count() ratings_count ``` ## Lambda expressions [Lambda expressions](https://docs.python.org/3.5/howto/functional.html#small-functions-and-the-lambda-expression) are an easy way to write short functions in Python. ``` f = lambda line: 'Spark' in line f("we are learning park") def f(line): return 'Spark' in line f("we are learning Spark") ``` #### Let's try to get the zombie movies ``` dbpedia_movies = sc.textFile(os.path.join(data_folder, "dbpedia.csv")) # load data # keep only lines that mention "zombie" and extract the movie title zombie_movies= dbpedia_movies.filter(lambda line: 'zombie' in line).map(lambda x: x.split(",")[1]) zombie_movies zombie_movies.collect() ``` ## Lazy evaluation RDDs are **lazy**. This means that Spark will not materialize an RDD until it has to perform an action. 
In the example below, `primesRDD` is not evaluated until the action `collect()` is performed on it.

```
def is_prime(num):
    """Return True if num is prime, False otherwise."""
    if num < 1 or num % 1 != 0:
        raise Exception("invalid argument")
    if num < 2:
        return False  # 1 is not prime
    for d in range(2, int(np.sqrt(num)) + 1):
        if num % d == 0:
            return False
    return True

numbersRDD = sc.parallelize(range(1, 1000000))  # creation of RDD
primesRDD = numbersRDD.filter(is_prime)         # transformation

# primesRDD has not been materialized until this point
primes = primesRDD.collect()                    # action
print(primes[0:15])
print(primesRDD.take(15))
```

## Persistence

RDDs are **ephemeral** by default, i.e. there is no guarantee they will remain in memory after they are materialized. If we want them to persist in memory, possibly to query them repeatedly or use them in multiple operations, we can ask Spark to do this by calling `persist()` on them.

```
# We're asking Spark to keep this RDD in memory.
# Note that cache() is equivalent to persist() with the default memory-only storage level;
# persist() additionally lets you choose other storage levels.
primesRDD_persisted = numbersRDD.filter(is_prime).persist()  # transformation

print("Found", primesRDD_persisted.count(), "prime numbers")  # first action -- causes primesRDD_persisted to be materialized
print("Here are some of them:")
print(primesRDD_persisted.collect()[0:20])  # second action -- RDD is already in memory
```

How long does it take to collect `primesRDD`? Let's time the operation.

```
%%timeit
primes = primesRDD.collect()
```

It took about 1.8s. That's because Spark had to evaluate `primesRDD` before performing `collect` on it. How long would it take if `primesRDD_persisted` was already in memory?

```
%%timeit
primes = primesRDD_persisted.collect()
```

It took about 20ms to collect `primesRDD_persisted`!
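Lazy evaluation can be mimicked in plain Python with generators, which may help build intuition for what Spark is doing (this is a sketch without Spark; `numbers`, `is_prime_py` and the `produced` log are hypothetical names introduced here, not part of the tutorial's code):

```python
# A plain-Python analogue of Spark's laziness, using a generator pipeline.
# `produced` records when elements are actually computed.

def numbers(limit, produced):
    for n in range(1, limit):
        produced.append(n)  # record that this element was materialized
        yield n

def is_prime_py(num):
    if num < 2:
        return False
    d = 2
    while d * d <= num:
        if num % d == 0:
            return False
        d += 1
    return True

produced = []
# "transformation": building the pipeline runs no work yet
primes_gen = (n for n in numbers(50, produced) if is_prime_py(n))
assert produced == []  # nothing has been computed so far

# "action": consuming the pipeline pulls elements on demand
first_five = [next(primes_gen) for _ in range(5)]
print(first_five)  # [2, 3, 5, 7, 11]
```

After the "action", only the elements 1 through 11 have been produced — the rest of the range was never touched, just as Spark only does the work an action demands.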
***

## map and flatMap

```
words = sc.textFile(os.path.join(data_folder, "dbpedia.csv")).filter(lambda x: "movie_id" not in x)

words_map = words.map(lambda phrase: phrase.split(" "))
l_words = words_map.collect()  # This returns a list of lists
l_words[1][0:10]

words_flatmap = words.flatMap(lambda phrase: phrase.split(" "))
words_flatmap.collect()[0:10]  # This returns a single list with the combined elements of the lists

# We can use flatMap to make a word count
words_flatmap.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x + y).collect()[0:10]
```

***

## Set operations

```
oneRDD = sc.parallelize([1, 1, 1, 2, 3, 3, 4, 4])
oneRDD.persist()
otherRDD = sc.parallelize([1, 4, 4, 7])
otherRDD.persist()

unionRDD = oneRDD.union(otherRDD)
unionRDD.persist()

oneRDD.subtract(otherRDD).collect()
oneRDD.distinct().collect()
oneRDD.intersection(otherRDD).collect()  # removes duplicates
oneRDD.cartesian(otherRDD).collect()[:5]
```

***

## reduce

```
np.sum([1, 43, 62, 23, 52])

data = sc.parallelize([1, 43, 62, 23, 52])
data.reduce(lambda x, y: x + y)
data.reduce(lambda x, y: x * y)
1 * 43 * 62 * 23 * 52

data.reduce(lambda x, y: x**2 + y**2)  # this does NOT compute the sum of squares of RDD elements
((((1 ** 2 + 43 ** 2) ** 2 + 62 ** 2) ** 2 + 23 ** 2) ** 2 + 52 ** 2)

data.reduce(lambda x, y: np.sqrt(x**2 + y**2)) ** 2
np.sum(np.array([1, 43, 62, 23, 52]) ** 2)
```

***

## aggregate

```
help(data.aggregate)

def seq(x, y):
    print(x, y, "seq")
    return x[0] + y, x[1] + 1

def comb(x, y):
    print(x, y, "comb")
    return x[0] + y[0], x[1] + y[1]

np.sum([1, 43, 62, 23, 52])

data = sc.parallelize([1, 43, 62, 23, 52], 1)  # Try different levels of parallelism. Where are the functions printing?
aggr = data.aggregate(zeroValue=(0, 0),
                      seqOp=seq,
                      combOp=comb)
aggr
aggr[0] / aggr[1]  # average value of RDD elements
```

***

## reduceByKey

```
pairRDD = sc.parallelize([('$APPL', 100.64),
                          ('$APPL', 100.52),
                          ('$GOOG', 706.2),
                          ('$AMZN', 552.32),
                          ('$AMZN', 552.32)])
pairRDD.reduceByKey(lambda x, y: x + y).collect()  # sum of values per key
help(pairRDD.reduceByKey)
```

From https://github.com/vaquarkhan/vk-wiki-notes/wiki/reduceByKey--vs-groupBykey-vs-aggregateByKey-vs-combineByKey

reduceByKey will aggregate by key before shuffling:

![alt text](https://camo.githubusercontent.com/516114b94193cddf7e59bdd5368d6756d30dc8b4/687474703a2f2f7777772e727578697a68616e672e636f6d2f75706c6f6164732f342f342f302f322f34343032333436352f313836363838325f6f7269672e706e67)

groupByKey will shuffle all the key-value pairs, as the diagrams show:

![alt text](https://camo.githubusercontent.com/ed75baabdaee2198d3fc1390e04a5d20bcd2e484/687474703a2f2f7777772e727578697a68616e672e636f6d2f75706c6f6164732f342f342f302f322f34343032333436352f333030393135315f6f7269672e706e67)

## (inner) join

```
movies = sc.textFile(os.path.join(data_folder, "movies.csv")).filter(lambda x: "movie_id" not in x).map(lambda x: x.split(","))
ratings = sc.textFile(os.path.join(data_folder, "ratings.csv")).filter(lambda x: "movie_id" not in x).map(lambda x: x.split(",")[1:3])
ratings.take(10)
movies.join(ratings).take(5)
```

***

## Accumulators

This example demonstrates how to use accumulators. The map operation creates an RDD that contains the length of each line in the text file - and while the RDD is materialized, an accumulator keeps track of how many lines are long (longer than $30$ characters).
```
text = sc.textFile(os.path.join(data_folder, "dbpedia.csv"))
long_lines = sc.accumulator(0)  # create accumulator

def line_data(line):
    global long_lines  # to reference an accumulator, declare it as a global variable
    length = len(line)
    if length > 30:
        long_lines += 1  # update the accumulator
    return length

llengthRDD = text.map(line_data)
llengthRDD.count()

long_lines.value  # this is how we obtain the value of the accumulator in the driver program
help(long_lines)
```

### Warning

In the example above, we update the value of an accumulator within a transformation (map). This is **not recommended**, except for debugging purposes! The reason is that, if there are failures during the materialization of `llengthRDD`, some of its partitions will be re-computed, possibly causing the accumulator to double-count some of the long lines. It is advisable to use accumulators within actions - and particularly with the `foreach` action, as demonstrated below.

```
text = sc.textFile(os.path.join(data_folder, "dbpedia.csv"))
long_lines_2 = sc.accumulator(0)

def line_len(line):
    global long_lines_2
    length = len(line)
    if length > 30:
        long_lines_2 += 1

text.foreach(line_len)
long_lines_2.value
```

## Broadcast variables

We use *broadcast variables* when many operations depend on the same large static object - e.g., a large lookup table that does not change but provides information for other operations. In such cases, we can make a broadcast variable out of the object and thus make sure that the object will be shipped to the cluster only once - and not once for each of the operations we'll be using it for.

The example below demonstrates the usage of broadcast variables. In this case, we make a broadcast variable out of a dictionary that represents an ages lookup table. The table is shipped to cluster nodes only once across multiple operations.
```
def load_ages_catalog():
    return {1: "Under 18", 18: "18-24", 25: "25-34", 35: "35-44",
            45: "45-49", 50: "50-55", 56: "56+"}

ages_catalog = sc.broadcast(load_ages_catalog())

def find_age(age_id):
    res = None
    if age_id in ages_catalog.value:
        res = ages_catalog.value[age_id]
    return res

ages = sc.parallelize([1, 18, 50])
pairRDD = ages.map(lambda age_id: (age_id, find_age(age_id)))
print(pairRDD.collect())

other_ages = sc.parallelize([35, 50, 1])
pairRDD = other_ages.map(lambda age_id: (age_id, find_age(age_id)))
print(pairRDD.collect())
```

## High-level structured APIs

#### There are many ways to solve an information need. For example, let's try two different queries for finding the number of ratings female users gave to each movie:

```
ratings_fr = sc.textFile(os.path.join(data_folder, "ratings.csv")).map(lambda x: (x.split(",")[0], x.split(",")[1], 1))
f_users = sc.textFile(os.path.join(data_folder, "users.csv")).map(lambda x: (x.split(",")[0], x.split(",")[1])).filter(lambda x: x[1] == "F")
ratings_fr_res = ratings_fr.join(f_users).map(lambda x: (x[1][0], 1)).reduceByKey(lambda x, y: x + y).collect()

ratings_rf = sc.textFile(os.path.join(data_folder, "ratings.csv")).map(lambda x: (x.split(",")[0], x.split(",")[1], 1))
users = sc.textFile(os.path.join(data_folder, "users.csv")).map(lambda x: (x.split(",")[0], x.split(",")[1]))
ratings_rf_res = ratings_rf.join(users).filter(lambda x: x[1][1] == "F").map(lambda x: (x[1][0], 1)).reduceByKey(lambda x, y: x + y).collect()
```

## What happened? Let's find out in the Spark UI

- Working with RDDs gives developers more freedom.
- However, this is not recommended. There are newer high-level structured APIs that optimize many steps of the data transformations.
- In general, it is pointless trying to beat a query optimizer.
```
from pyspark import sql

sql_sc = sql.SparkSession(sparkContext=sc)

ratings_file = os.path.join(data_folder, "ratings.csv")
ratings = sql_sc.read.option("inferSchema", "true").option("header", "true").csv(ratings_file)
ratings.createOrReplaceTempView("ratings")

users_file = os.path.join(data_folder, "users.csv")
users = sql_sc.read.option("inferSchema", "true").option("header", "true").csv(users_file)
users.createOrReplaceTempView("users")

sqlWay = sql_sc.sql("""
SELECT r.movie_id, count(r.rating)
FROM ratings r INNER JOIN users u ON r.user_id = u.user_id
WHERE u.gender = 'F'
GROUP BY r.movie_id
""")

dataFrameWay = ratings.join(users, ratings.user_id == users.user_id).filter(users.gender == 'F') \
    .groupBy(ratings.movie_id).agg({"rating": "count"})

%timeit sqlWay.collect()
%timeit dataFrameWay.collect()

sqlWay.explain()
dataFrameWay.explain()
```

### Streaming and Structured Streaming

#### Check the examples in:

- /home/ubuntu/software/spark/examples/src/main/python/streaming/network_wordcount.py
- /home/ubuntu/software/spark/examples/src/main/python/streaming/sql_network_wordcount.py

##### That's all
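As a closing recap, the word-count pattern used earlier (`flatMap` → `map` → `reduceByKey`) has a direct pure-Python analogue. This is a sketch for intuition only — it runs on the driver with no cluster, and the `lines` sample data is made up here:

```python
from functools import reduce
from itertools import groupby

lines = ["spark makes rdds", "rdds are lazy", "spark is fast"]

# flatMap: split every line and flatten into one stream of words
words = [w for line in lines for w in line.split(" ")]
# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]
# reduceByKey: group pairs by key, then reduce each group's values with +
counts = {
    key: reduce(lambda x, y: x + y, (v for _, v in group))
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0])
}
print(counts["spark"], counts["rdds"])  # 2 2
```

The `sorted` + `groupby` step plays the role of the shuffle: it brings all pairs with the same key together before the per-key reduce runs.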
# Results for random forest classifier

## Multilabel classification with imbalanced data

### Confusion matrices

```
"""Results for random forest classifier

Multilabel classification with imbalanced data

Confusion matrices
"""

# import libraries
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn import preprocessing
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import confusion_matrix

# import data
df = pd.read_csv('data/SCADA_downtime_merged.csv', skip_blank_lines=True)

list1 = list(df['turbine_id'].unique())  # list of turbines to plot
list1 = sorted(list1, key=int)  # sort turbines in ascending order
list2 = list(df['TurbineCategory_id'].unique())  # list of categories
list2 = [g for g in list2 if g >= 0]  # remove NaN from list
list2 = sorted(list2, key=int)  # sort categories in ascending order
# categories to remove
list2 = [m for m in list2 if m not in (1, 12, 13, 14, 15, 17, 21, 22)]
list4 = list(range(0, 14))

for x in list1:
    # filter only data for turbine x
    dfx = df[(df['turbine_id'] == x)].copy()
    for y in list2:
        # copying fault to new column (mins)
        # (fault when turbine category id is y)
        def ff(c):
            if c['TurbineCategory_id'] == y:
                return 0
            else:
                return 1
        dfx['mins'] = dfx.apply(ff, axis=1)

        # sort values by timestamp in descending order
        dfx = dfx.sort_values(by='timestamp', ascending=False)
        dfx.reset_index(drop=True, inplace=True)  # reset index

        # seed the first (most recent) row: 0 if in fault, otherwise a very
        # large value so the "minutes since fault" counter starts high
        # (set_value was removed from pandas; use .at instead)
        if dfx.loc[0, 'mins'] == 0:
            dfx.at[0, 'mins'] = 0
        else:
            dfx.at[0, 'mins'] = 999999999

        # use the previous row's value to accumulate time since fault
        for i, e in enumerate(dfx['mins']):
            if e == 1:
                dfx.at[i, 'mins'] = dfx.at[i - 1, 'mins'] + 10

        dfx = dfx.sort_values(by='timestamp')  # sort in ascending order
        dfx.reset_index(drop=True, inplace=True)  # reset index

        # convert to hours, then round to nearest hour
        dfx['hours'] = dfx['mins'].astype(np.int64)
        dfx['hours'] = dfx['hours'] / 60
        dfx['hours'] = round(dfx['hours']).astype(np.int64)

        def f11(c):
            # > 48 hours - label as normal (9999)
            if c['hours'] > 48:
                return 9999
            else:
                return c['hours']
        dfx['hours'] = dfx.apply(f11, axis=1)

        def f22(c):
            # filter out curtailment - curtailed when turbine is pitching
            # outside 0 deg <= normal <= 3.5 deg
            if (0 <= c['pitch'] <= 3.5 or c['hours'] != 9999 or
                    ((c['pitch'] > 3.5 or c['pitch'] < 0) and
                     (c['ap_av'] <= (0.1 * dfx['ap_av'].max()) or
                      c['ap_av'] >= (0.9 * dfx['ap_av'].max())))):
                return 'normal'
            else:
                return 'curtailed'
        dfx['curtailment'] = dfx.apply(f22, axis=1)

        def f3(c):
            # filter unusual readings, i.e. for normal operation,
            # power <= 0 in operating wind speeds, power > 100
            # before cut-in, runtime < 600 and other downtime categories
            if c['hours'] == 9999 and ((
                    3 < c['ws_av'] < 25 and (
                        c['ap_av'] <= 0 or c['runtime'] < 600 or
                        c['EnvironmentalCategory_id'] > 1 or
                        c['GridCategory_id'] > 1 or
                        c['InfrastructureCategory_id'] > 1 or
                        c['AvailabilityCategory_id'] == 2 or
                        12 <= c['TurbineCategory_id'] <= 15 or
                        21 <= c['TurbineCategory_id'] <= 22)) or
                    (c['ws_av'] < 3 and c['ap_av'] > 100)):
                return 'unusual'
            else:
                return 'normal'
        dfx['unusual'] = dfx.apply(f3, axis=1)

        def f4(c):
            # round to 6 hour intervals
            if c['hours'] == 0:
                return 10
            elif 1 <= c['hours'] <= 6:
                return 11
            elif 7 <= c['hours'] <= 12:
                return 12
            elif 13 <= c['hours'] <= 18:
                return 13
            elif 19 <= c['hours'] <= 24:
                return 14
            elif 25 <= c['hours'] <= 30:
                return 15
            elif 31 <= c['hours'] <= 36:
                return 16
            elif 37 <= c['hours'] <= 42:
                return 17
            elif 43 <= c['hours'] <= 48:
                return 18
            else:
                return 19
        dfx['hours6'] = dfx.apply(f4, axis=1)

        def f5(c):
            # change label for unusual and curtailed data (20)
            if c['unusual'] == 'unusual' or c['curtailment'] == 'curtailed':
                return 20
            else:
                return c['hours6']
        dfx['hours_%s' % y] = dfx.apply(f5, axis=1)

        # drop unnecessary columns
        dfx = dfx.drop('hours6', axis=1)
        dfx = dfx.drop('hours', axis=1)
        dfx = dfx.drop('mins', axis=1)
        dfx = dfx.drop('curtailment', axis=1)
        dfx = dfx.drop('unusual', axis=1)

    # separate features from classes for classification
    features = [
        'ap_av', 'ws_av', 'wd_av', 'pitch', 'ap_max', 'ap_dev',
        'reactive_power', 'rs_av', 'gen_sp', 'nac_pos']
    classes = [col for col in dfx.columns if 'hours' in col]
    list6 = features + classes  # list of columns to copy into new df
    df2 = dfx[list6].copy()
    df2 = df2.dropna()  # drop NaNs
    X = df2[features]
    X = preprocessing.normalize(X)  # note: normalize() scales each sample to unit norm
    Y = df2[classes]
    Y = Y.to_numpy()  # convert from pd dataframe to np array (as_matrix was removed from pandas)

    # cross validation using time series split
    tscv = TimeSeriesSplit(n_splits=5)
    rf = RandomForestClassifier(criterion='entropy', n_jobs=-1)

    # looping for each cross validation fold
    for train_index, test_index in tscv.split(X):
        # split train and test sets
        X_train, X_test = X[train_index], X[test_index]
        Y_train, Y_test = Y[train_index], Y[test_index]

        # fit the classifier and predict
        rf1 = rf.fit(X_train, Y_train)
        Yp = rf1.predict(X_test)

        for m in list4:
            Yt = Y_test[:, m]
            Ypr = Yp[:, m]
            print(
                'Confusion matrix for turbine %s, turbine category %s'
                % (x, m))
            print(confusion_matrix(Yt, Ypr))
            print('------------------------------------------------------------')
```
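`TimeSeriesSplit` differs from ordinary k-fold in that every test fold lies strictly after its training indices, so the model never trains on the future. A minimal pure-Python sketch of the index pattern (simplified: the hypothetical `time_series_split` below assumes the sample count divides evenly into `n_splits + 1` folds, unlike scikit-learn's general implementation):

```python
def time_series_split(n_samples, n_splits):
    """Yield (train, test) index lists with an expanding training window."""
    fold = n_samples // (n_splits + 1)  # equal-sized test folds
    for i in range(1, n_splits + 1):
        train = list(range(0, i * fold))              # everything before the fold
        test = list(range(i * fold, (i + 1) * fold))  # the next chronological block
        yield train, test

for train, test in time_series_split(12, 5):
    print(train, "->", test)
# the training window grows: [0,1]->[2,3], [0..3]->[4,5], ..., [0..9]->[10,11]
```

Because the training window expands with each split, later folds get more history — which matches how a deployed model would be retrained as new SCADA data arrives.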
# Test modules of recognition pipeline Written by Yujun Lin ## Prepare import libraries ``` import os os.environ['MANTLE_TARGET'] = 'ice40' from magma import * import mantle import math from mantle.lattice.ice40 import ROMB, SB_LUT4 from magma.simulator import PythonSimulator from magma.scope import Scope from magma.bit_vector import BitVector ``` ## Global Settings ``` num_cycles = 16 num_classes = 8 # operand width N = 16 # number of bits for num_cycles n = int(math.ceil(math.log2(num_cycles))) # number of bits for num_classes b = int(math.ceil(math.log2(num_classes))) # number of bits for bit counter output n_bc = int(math.floor(math.log2(N))) + 1 # number of bits for bit counter output accumulator n_bc_adder = int(math.floor(math.log2(N*num_cycles))) + 1 print('number of bits for num_cycles: %d' % n) print('number of bits for num_classes: %d' % b) print('number of bits for bit counter output: %d' % n_bc) print('number of bits for bit counter output accumulator: %d' % n_bc_adder) ``` ## Control module generate address for weight and image block `IDX` means the idx-th row of weight matrix `CYCLE` means the cycle-th block of idx-th row of weight matrix `CYCLE` also means the cycle-th block of image vector ``` class Controller(Circuit): IO = ['CLK', In(Clock), 'IDX', Out(Bits(b)), 'CYCLE', Out(Bits(n))] @classmethod def definition(io): adder_cycle = mantle.Add(n, cin=False, cout=False) reg_cycle = mantle.Register(n, has_reset=True) adder_idx = mantle.Add(b, cin=False, cout=False) reg_idx = mantle.Register(b, has_ce=True) wire(io.CLK, reg_cycle.CLK) wire(io.CLK, reg_idx.CLK) wire(reg_cycle.O, adder_cycle.I0) wire(bits(1, n), adder_cycle.I1) wire(adder_cycle.O, reg_cycle.I) comparison_cycle = mantle.EQ(n) wire(reg_cycle.O, comparison_cycle.I0) wire(bits(num_cycles-1, n), comparison_cycle.I1) # if cycle-th is the last, then switch to next idx (accumulate idx) and clear cycle wire(comparison_cycle.O, reg_cycle.RESET) wire(comparison_cycle.O, reg_idx.CE) comparison_idx 
= mantle.EQ(b) wire(reg_idx.O, comparison_idx.I0) wire(bits(num_classes-1, b), comparison_idx.I1) wire(reg_idx.O, adder_idx.I0) wire(bits(0, b-1), adder_idx.I1[1:]) nand_gate = mantle.NAnd() wire(comparison_cycle.O, nand_gate.I0) wire(comparison_idx.O, nand_gate.I1) # after all idx rows, we stop accumulating idx wire(nand_gate.O, adder_idx.I1[0]) wire(adder_idx.O, reg_idx.I) wire(reg_idx.O, io.IDX) wire(adder_cycle.O, io.CYCLE) class TestController(Circuit): IO = ['CLK', In(Clock), 'IDX', Out(Bits(b)), 'CYCLE', Out(Bits(n)), 'CONTROL', Out(Bit)] @classmethod def definition(io): # IF controller = Controller() reg_1_cycle = mantle.DefineRegister(n)() reg_1_control = mantle.DFF(init=1) wire(io.CLK, controller.CLK) wire(io.CLK, reg_1_cycle.CLK) wire(io.CLK, reg_1_control.CLK) reg_1_idx = controller.IDX wire(controller.CYCLE, reg_1_cycle.I) wire(1, reg_1_control.I) wire(reg_1_idx, io.IDX) wire(reg_1_cycle.O, io.CYCLE) wire(reg_1_control.O, io.CONTROL) simulator = PythonSimulator(TestController, clock=TestController.CLK) waveforms = [] for i in range(96): simulator.step() simulator.evaluate() clk = simulator.get_value(TestController.CLK) o = simulator.get_value(TestController.IDX) c = simulator.get_value(TestController.CYCLE) ctl = simulator.get_value(TestController.CONTROL) waveforms.append([clk, ctl] + o + c) names = ["CLK", "CTL"] for i in range(n): names.append("IDX[{}]".format(i)) for i in range(b): names.append("CYC[{}]".format(i)) from magma.waveform import waveform waveform(waveforms, names) ``` ## ROM module Test Unit for Rom Reading ``` class ReadROM(Circuit): IO = ['IDX', In(Bits(b)), 'CYCLE', In(Bits(n)), 'CLK', In(Clock), 'WEIGHT', Out(Bits(N)), 'IMAGE', Out(Bits(N))] @classmethod def definition(io): weights_list = [1] + [2**16-1]*15 + [3] + [2**16-1]*15 + ([0] + [2**16-1]*15)*((256-32)//16) print(len(weights_list)) weigths_rom = ROMB(256,16,weights_list) lut_list = [] for i in range(N): lut_list.append(SB_LUT4(LUT_INIT=1)) wire(io.CYCLE, 
weigths_rom.RADDR[:n]) wire(io.IDX, weigths_rom.RADDR[n:n+b]) if n + b < 8: wire(bits(0, 8-n-b), weigths_rom.RADDR[n+b:]) wire(1, weigths_rom.RE) wire(weigths_rom.RDATA, io.WEIGHT) wire(io.CLK, weigths_rom.RCLK) for i in range(N): wire(io.CYCLE, bits([lut_list[i].I0, lut_list[i].I1, lut_list[i].I2, lut_list[i].I3])) wire(lut_list[i].O, io.IMAGE[i]) class TestReadROM(Circuit): IO = ['CLK', In(Clock), 'WEIGHT', Out(Bits(N)), 'IMAGE', Out(Bits(N)), 'IDX', Out(Bits(b)), 'CYCLE', Out(Bits(n)), 'CONTROL', Out(Bit)] @classmethod def definition(io): # IF - get cycle_id, label_index_id controller = Controller() reg_1_cycle = mantle.DefineRegister(n)() reg_1_control = mantle.DFF(init=1) wire(io.CLK, controller.CLK) wire(io.CLK, reg_1_cycle.CLK) wire(io.CLK, reg_1_control.CLK) reg_1_idx = controller.IDX wire(controller.CYCLE, reg_1_cycle.I) wire(1, reg_1_control.I) # RR - get weight block, image block of N bits readROM = ReadROM() wire(reg_1_idx, readROM.IDX) wire(reg_1_cycle.O, readROM.CYCLE) reg_2 = mantle.DefineRegister(N + b + n)() reg_2_control = mantle.DFF() reg_2_weight = readROM.WEIGHT wire(io.CLK, reg_2.CLK) wire(io.CLK, readROM.CLK) wire(io.CLK, reg_2_control.CLK) wire(readROM.IMAGE, reg_2.I[:N]) wire(reg_1_idx, reg_2.I[N:N + b]) wire(reg_1_cycle.O, reg_2.I[N + b:]) wire(reg_1_control.O, reg_2_control.I) wire(reg_2_weight, io.WEIGHT) wire(reg_2.O[:N], io.IMAGE) wire(reg_2.O[N:N+b], io.IDX) wire(reg_2.O[N+b:], io.CYCLE) wire(reg_2_control.O, io.CONTROL) simulator = PythonSimulator(TestReadROM, clock=TestReadROM.CLK) waveforms = [] for i in range(96): simulator.step() simulator.evaluate() clk = simulator.get_value(TestReadROM.CLK) w = simulator.get_value(TestReadROM.WEIGHT) i = simulator.get_value(TestReadROM.IMAGE) d = simulator.get_value(TestReadROM.IDX) c = simulator.get_value(TestReadROM.CYCLE) ctl = simulator.get_value(TestReadROM.CONTROL) waveforms.append([clk, ctl] + w + i + d + c) names = ["CLK", "CTL"] for i in range(N): names.append("WGT[{}]".format(i)) for 
i in range(N): names.append("IMG[{}]".format(i)) for i in range(n): names.append("IDX[{}]".format(i)) for i in range(b): names.append("CYC[{}]".format(i)) from magma.waveform import waveform waveform(waveforms, names) ``` ## Pop Count Unit 4/8/16 bit pop count ``` # 4-bit pop count class BitCounter4(Circuit): IO = ['I', In(Bits(4)), 'O', Out(Bits(3))] @classmethod def definition(io): lut_list = [] lut_list.append(SB_LUT4(LUT_INIT=int('0110100110010110', 2))) lut_list.append(SB_LUT4(LUT_INIT=int('0111111011101000', 2))) lut_list.append(SB_LUT4(LUT_INIT=int('1000000000000000', 2))) for i in range(3): wire(io.I, bits([lut_list[i].I0, lut_list[i].I1, lut_list[i].I2, lut_list[i].I3])) wire(lut_list[i].O, io.O[i]) # 8-bit pop count class BitCounter8(Circuit): IO = ['I', In(Bits(8)), 'O', Out(Bits(4))] @classmethod def definition(io): counter_1 = BitCounter4() counter_2 = BitCounter4() wire(io.I[:4], counter_1.I) wire(io.I[4:], counter_2.I) adders = [mantle.HalfAdder()] + [mantle.FullAdder() for _ in range(2)] for i in range(3): wire(counter_1.O[i], adders[i].I0) wire(counter_2.O[i], adders[i].I1) if i > 0: wire(adders[i-1].COUT, adders[i].CIN) wire(adders[i].O, io.O[i]) wire(adders[-1].COUT, io.O[-1]) # 16-bit pop count class BitCounter16(Circuit): IO = ['I', In(Bits(16)), 'O', Out(Bits(5))] @classmethod def definition(io): counter_1 = BitCounter8() counter_2 = BitCounter8() wire(io.I[:8], counter_1.I) wire(io.I[8:], counter_2.I) adders = [mantle.HalfAdder()] + [mantle.FullAdder() for _ in range(3)] for i in range(4): wire(counter_1.O[i], adders[i].I0) wire(counter_2.O[i], adders[i].I1) if i > 0: wire(adders[i-1].COUT, adders[i].CIN) wire(adders[i].O, io.O[i]) wire(adders[-1].COUT, io.O[-1]) # pop count def DefineBitCounter(n): if n <= 4: return BitCounter4 elif n <= 8: return BitCounter8 elif n <= 16: return BitCounter16 else: return None class TestBitCounter(Circuit): IO = ['CLK', In(Clock), 'COUNT', Out(Bits(n_bc_adder)), 'CONTROL', Out(Bit)] @classmethod def 
definition(io): # IF - get cycle_id, label_index_id controller = Controller() reg_1_cycle = mantle.Register(n) reg_1_control = mantle.DFF(init=1) wire(io.CLK, controller.CLK) wire(io.CLK, reg_1_cycle.CLK) wire(io.CLK, reg_1_control.CLK) reg_1_idx = controller.IDX wire(controller.CYCLE, reg_1_cycle.I) wire(1, reg_1_control.I) # RR - get weight block, image block of N bits readROM = ReadROM() wire(reg_1_idx, readROM.IDX) wire(reg_1_cycle.O, readROM.CYCLE) reg_2 = mantle.Register(N + b + n) reg_2_control = mantle.DFF() reg_2_weight = readROM.WEIGHT wire(io.CLK, reg_2.CLK) wire(io.CLK, readROM.CLK) wire(io.CLK, reg_2_control.CLK) wire(readROM.IMAGE, reg_2.I[:N]) wire(reg_1_idx, reg_2.I[N:N + b]) wire(reg_1_cycle.O, reg_2.I[N + b:]) wire(reg_1_control.O, reg_2_control.I) # EX - NXOr for multiplication, pop count and accumulate the result for activation multiplier = mantle.NXOr(height=2, width=N) bit_counter = DefineBitCounter(N)() adder = mantle.Add(n_bc_adder, cin=False, cout=False) mux_for_adder_0 = mantle.Mux(height=2, width=n_bc_adder) mux_for_adder_1 = mantle.Mux(height=2, width=n_bc_adder) reg_3_1 = mantle.Register(n_bc_adder) reg_3_2 = mantle.Register(b + n) wire(io.CLK, reg_3_1.CLK) wire(io.CLK, reg_3_2.CLK) wire(reg_2_weight, multiplier.I0) wire(reg_2.O[:N], multiplier.I1) wire(multiplier.O, bit_counter.I) wire(bits(0, n_bc_adder), mux_for_adder_0.I0) wire(bit_counter.O, mux_for_adder_0.I1[:n_bc]) if n_bc_adder > n_bc: wire(bits(0, n_bc_adder - n_bc), mux_for_adder_0.I1[n_bc:]) # only when data read is ready (i.e. 
control signal is high), accumulate the pop count result wire(reg_2_control.O, mux_for_adder_0.S) wire(reg_3_1.O, mux_for_adder_1.I0) wire(bits(0, n_bc_adder), mux_for_adder_1.I1) if n == 4: comparison_3 = SB_LUT4(LUT_INIT=int('0'*15+'1', 2)) wire(reg_2.O[N+b:], bits([comparison_3.I0, comparison_3.I1, comparison_3.I2, comparison_3.I3])) else: comparison_3 = mantle.EQ(n) wire(reg_2.O[N+b:], comparison_3.I0) wire(bits(0, n), comparison_3.I1) wire(comparison_3.O, mux_for_adder_1.S) wire(mux_for_adder_0.O, adder.I0) wire(mux_for_adder_1.O, adder.I1) wire(adder.O, reg_3_1.I) wire(reg_2.O[N:], reg_3_2.I) wire(reg_3_1.O, io.COUNT) wire(reg_2_control.O, io.CONTROL) simulator = PythonSimulator(TestBitCounter, clock=TestBitCounter.CLK) waveforms = [] for i in range(128): simulator.step() simulator.evaluate() clk = simulator.get_value(TestBitCounter.CLK) o = simulator.get_value(TestBitCounter.COUNT) ctl = simulator.get_value(TestBitCounter.CONTROL) waveforms.append([clk, ctl] + o) names = ["CLK", "CTL"] for i in range(n_bc_adder): names.append("COUNT[{}]".format(i)) from magma.waveform import waveform waveform(waveforms, names) ``` ## Classifier Module using compare operation to decide the final prediction label of image ``` class Classifier(Circuit): IO = ['I', In(Bits(n_bc_adder)), 'IDX', In(Bits(b)), 'CLK', In(Clock), 'O', Out(Bits(b)), 'M', Out(Bits(n_bc_adder))] @classmethod def definition(io): comparison = mantle.UGT(n_bc_adder) reg_count = mantle.Register(n_bc_adder, has_ce=True) reg_idx = mantle.Register(b, has_ce=True) wire(io.I, comparison.I0) wire(reg_count.O, comparison.I1) wire(comparison.O, reg_count.CE) wire(comparison.O, reg_idx.CE) wire(io.CLK, reg_count.CLK) wire(io.CLK, reg_idx.CLK) wire(io.I, reg_count.I) wire(io.IDX, reg_idx.I) wire(reg_idx.O, io.O) wire(reg_count.O, io.M) class TestClassifier(Circuit): IO = ['CLK', In(Clock), 'MAX', Out(Bits(n_bc_adder)), 'IDX', Out(Bits(b)), 'COUNT', Out(Bits(n_bc_adder))] @classmethod def definition(io): # IF - get 
cycle_id, label_index_id controller = Controller() reg_1_cycle = mantle.Register(n) reg_1_control = mantle.DFF(init=1) wire(io.CLK, controller.CLK) wire(io.CLK, reg_1_cycle.CLK) wire(io.CLK, reg_1_control.CLK) reg_1_idx = controller.IDX wire(controller.CYCLE, reg_1_cycle.I) wire(1, reg_1_control.I) # RR - get weight block, image block of N bits readROM = ReadROM() wire(reg_1_idx, readROM.IDX) wire(reg_1_cycle.O, readROM.CYCLE) reg_2 = mantle.Register(N + b + n) reg_2_control = mantle.DFF() reg_2_weight = readROM.WEIGHT wire(io.CLK, reg_2.CLK) wire(io.CLK, readROM.CLK) wire(io.CLK, reg_2_control.CLK) wire(readROM.IMAGE, reg_2.I[:N]) wire(reg_1_idx, reg_2.I[N:N + b]) wire(reg_1_cycle.O, reg_2.I[N + b:]) wire(reg_1_control.O, reg_2_control.I) # EX - NXOr for multiplication, pop count and accumulate the result for activation multiplier = mantle.NXOr(height=2, width=N) bit_counter = DefineBitCounter(N)() adder = mantle.Add(n_bc_adder, cin=False, cout=False) mux_for_adder_0 = mantle.Mux(height=2, width=n_bc_adder) mux_for_adder_1 = mantle.Mux(height=2, width=n_bc_adder) reg_3_1 = mantle.Register(n_bc_adder) reg_3_2 = mantle.Register(b + n) wire(io.CLK, reg_3_1.CLK) wire(io.CLK, reg_3_2.CLK) wire(reg_2_weight, multiplier.I0) wire(reg_2.O[:N], multiplier.I1) wire(multiplier.O, bit_counter.I) wire(bits(0, n_bc_adder), mux_for_adder_0.I0) wire(bit_counter.O, mux_for_adder_0.I1[:n_bc]) if n_bc_adder > n_bc: wire(bits(0, n_bc_adder - n_bc), mux_for_adder_0.I1[n_bc:]) # only when data read is ready (i.e. 
control signal is high), accumulate the pop count result wire(reg_2_control.O, mux_for_adder_0.S) wire(reg_3_1.O, mux_for_adder_1.I0) wire(bits(0, n_bc_adder), mux_for_adder_1.I1) if n == 4: comparison_3 = SB_LUT4(LUT_INIT=int('0'*15+'1', 2)) wire(reg_2.O[N+b:], bits([comparison_3.I0, comparison_3.I1, comparison_3.I2, comparison_3.I3])) else: comparison_3 = mantle.EQ(n) wire(reg_2.O[N+b:], comparison_3.I0) wire(bits(0, n), comparison_3.I1) wire(comparison_3.O, mux_for_adder_1.S) wire(mux_for_adder_0.O, adder.I0) wire(mux_for_adder_1.O, adder.I1) wire(adder.O, reg_3_1.I) wire(reg_2.O[N:], reg_3_2.I) # CF - classify the image classifier = Classifier() reg_4 = mantle.Register(n + b) reg_4_idx = classifier.O wire(io.CLK, classifier.CLK) wire(io.CLK, reg_4.CLK) wire(reg_3_1.O, classifier.I) wire(reg_3_2.O[:b], classifier.IDX) wire(reg_3_2.O, reg_4.I) wire(reg_3_1.O, io.COUNT) wire(classifier.O, io.IDX) wire(classifier.M, io.MAX) simulator = PythonSimulator(TestClassifier, clock=TestClassifier.CLK) waveforms = [] for i in range(128): simulator.step() simulator.evaluate() clk = simulator.get_value(TestClassifier.CLK) o = simulator.get_value(TestClassifier.IDX) m = simulator.get_value(TestClassifier.MAX) c = simulator.get_value(TestClassifier.COUNT) waveforms.append([clk] + o + m + c) names = ["CLK"] for i in range(b): names.append("IDX[{}]".format(i)) for i in range(n_bc_adder): names.append("MAX[{}]".format(i)) for i in range(n_bc_adder): names.append("CNT[{}]".format(i)) from magma.waveform import waveform waveform(waveforms, names) class Classifier2(Circuit): IO = ['I', In(Bits(n_bc_adder)), 'IDX', In(Bits(b)), 'CLK', In(Clock), 'O', Out(Bits(b))] @classmethod def definition(io): comparison = mantle.UGT(n_bc_adder) reg_count = mantle.Register(n_bc_adder, has_ce=True) reg_idx = mantle.Register(b, has_ce=True) wire(io.I, comparison.I0) wire(reg_count.O, comparison.I1) wire(comparison.O, reg_count.CE) wire(comparison.O, reg_idx.CE) wire(io.CLK, reg_count.CLK) wire(io.CLK, 
reg_idx.CLK) wire(io.I, reg_count.I) wire(io.IDX, reg_idx.I) wire(reg_idx.O, io.O) ``` ## Pipeline Module ``` class TestPipeline(Circuit): IO = ['CLK', In(Clock), 'O', Out(Bits(b)), 'IDX', Out(Bits(b))] @classmethod def definition(io): # IF - get cycle_id, label_index_id controller = Controller() reg_1_cycle = mantle.Register(n) reg_1_control = mantle.DFF(init=1) wire(io.CLK, controller.CLK) wire(io.CLK, reg_1_cycle.CLK) wire(io.CLK, reg_1_control.CLK) reg_1_idx = controller.IDX wire(controller.CYCLE, reg_1_cycle.I) wire(1, reg_1_control.I) # RR - get weight block, image block of N bits readROM = ReadROM() wire(reg_1_idx, readROM.IDX) wire(reg_1_cycle.O, readROM.CYCLE) reg_2 = mantle.Register(N + b + n) reg_2_control = mantle.DFF() reg_2_weight = readROM.WEIGHT wire(io.CLK, reg_2.CLK) wire(io.CLK, readROM.CLK) wire(io.CLK, reg_2_control.CLK) wire(readROM.IMAGE, reg_2.I[:N]) wire(reg_1_idx, reg_2.I[N:N + b]) wire(reg_1_cycle.O, reg_2.I[N + b:]) wire(reg_1_control.O, reg_2_control.I) # EX - NXOr for multiplication, pop count and accumulate the result for activation multiplier = mantle.NXOr(height=2, width=N) bit_counter = DefineBitCounter(N)() adder = mantle.Add(n_bc_adder, cin=False, cout=False) mux_for_adder_0 = mantle.Mux(height=2, width=n_bc_adder) mux_for_adder_1 = mantle.Mux(height=2, width=n_bc_adder) reg_3_1 = mantle.Register(n_bc_adder) reg_3_2 = mantle.Register(b + n) wire(io.CLK, reg_3_1.CLK) wire(io.CLK, reg_3_2.CLK) wire(reg_2_weight, multiplier.I0) wire(reg_2.O[:N], multiplier.I1) wire(multiplier.O, bit_counter.I) wire(bits(0, n_bc_adder), mux_for_adder_0.I0) wire(bit_counter.O, mux_for_adder_0.I1[:n_bc]) if n_bc_adder > n_bc: wire(bits(0, n_bc_adder - n_bc), mux_for_adder_0.I1[n_bc:]) # only when data read is ready (i.e. 
control signal is high), accumulate the pop count result wire(reg_2_control.O, mux_for_adder_0.S) wire(reg_3_1.O, mux_for_adder_1.I0) wire(bits(0, n_bc_adder), mux_for_adder_1.I1) if n == 4: comparison_3 = SB_LUT4(LUT_INIT=int('0'*15+'1', 2)) wire(reg_2.O[N+b:], bits([comparison_3.I0, comparison_3.I1, comparison_3.I2, comparison_3.I3])) else: comparison_3 = mantle.EQ(n) wire(reg_2.O[N+b:], comparison_3.I0) wire(bits(0, n), comparison_3.I1) wire(comparison_3.O, mux_for_adder_1.S) wire(mux_for_adder_0.O, adder.I0) wire(mux_for_adder_1.O, adder.I1) wire(adder.O, reg_3_1.I) wire(reg_2.O[N:], reg_3_2.I) # CF - classify the image classifier = Classifier2() reg_4 = mantle.Register(n + b) reg_4_idx = classifier.O wire(io.CLK, classifier.CLK) wire(io.CLK, reg_4.CLK) wire(reg_3_1.O, classifier.I) wire(reg_3_2.O[:b], classifier.IDX) wire(reg_3_2.O, reg_4.I) # WB - wait to show the result until the end reg_5 = mantle.Register(b, has_ce=True) comparison_5_1 = mantle.EQ(b) comparison_5_2 = mantle.EQ(n) and_gate = mantle.And() wire(io.CLK, reg_5.CLK) wire(reg_4_idx, reg_5.I) wire(reg_4.O[:b], comparison_5_1.I0) wire(bits(num_classes - 1, b), comparison_5_1.I1) wire(reg_4.O[b:], comparison_5_2.I0) wire(bits(num_cycles - 1, n), comparison_5_2.I1) wire(comparison_5_1.O, and_gate.I0) wire(comparison_5_2.O, and_gate.I1) wire(and_gate.O, reg_5.CE) wire(reg_5.O, io.O) wire(classifier.O, io.IDX) simulator = PythonSimulator(TestPipeline, clock=TestPipeline.CLK) waveforms = [] for i in range(300): simulator.step() simulator.evaluate() clk = simulator.get_value(TestPipeline.CLK) o = simulator.get_value(TestPipeline.O) i = simulator.get_value(TestPipeline.IDX) waveforms.append([clk] + o + i) names = ["CLK"] for i in range(b): names.append("O[{}]".format(i)) for i in range(b): names.append("I[{}]".format(i)) from magma.waveform import waveform waveform(waveforms, names) ```
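The hierarchical pop count built above (two 4-bit LUT counters feeding adders to form an 8-bit counter, then two 8-bit counters forming a 16-bit one) can be checked against a software reference model. This is a plain-Python sketch for validating expected outputs, not Magma code; `popcount4` and `popcount` are names introduced here:

```python
def popcount4(x):
    # reference for BitCounter4: count set bits in a 4-bit value
    return bin(x & 0xF).count("1")

def popcount(x, width):
    """Hierarchical pop count mirroring BitCounter8/BitCounter16:
    split the word in half, count each half, add the partial counts."""
    if width <= 4:
        return popcount4(x)
    half = width // 2
    low = x & ((1 << half) - 1)   # lower half of the word
    high = x >> half              # upper half of the word
    return popcount(low, half) + popcount(high, half)

# spot-check the model against Python's own bit counting
assert popcount(0b1011_0110_1100_0001, 16) == 8
assert all(popcount(v, 16) == bin(v).count("1") for v in range(2**12))
print("pop count model matches")
```

A model like this is handy when reading the simulator waveforms: the `COUNT` register should accumulate exactly these per-block pop counts over the 16 cycles of a row.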
github_jupyter
# Learning to Control Incompressible Fluids with Differentiable Physics

This notebook will walk you through data generation, supervised network initialization and end-to-end training using our differentiable PDE solver, [Φ<sub>Flow</sub>](https://github.com/tum-pbs/PhiFlow). The code below replicates the shape transitions (experiment 2 from the ICLR 2020 paper [Learning to Control PDEs with Differentiable Physics](https://ge.in.tum.de/publications/2020-iclr-holl/)). The original experiment was performed on an older version of the solver, the code for which can be found under `/legacy`. The experiment is described in detail in section D.2 of the [appendix](https://openreview.net/pdf?id=HyeSin4FPB).

If you haven't already, check out the notebook on controlling Burgers' Equation. It covers the basics in more detail.

```
import sys; sys.path.append('../PhiFlow'); sys.path.append('../src')
from shape_utils import load_shapes, distribute_random_shape
from control.pde.incompressible_flow import IncompressibleFluidPDE
from control.control_training import ControlTraining
from control.sequences import StaggeredSequence, RefinedSequence
import matplotlib.pyplot as plt
from phi.flow import *
```

## Data Generation

```
domain = Domain([64, 64])  # 2D grid resolution and physical size
step_count = 16            # how many solver steps to perform
dt = 1.0                   # time increment per solver step
example_count = 1000
batch_size = 100
data_path = 'shape-transitions'
pretrain_data_path = 'moving-squares'
shape_library = load_shapes('shapes')
```

The following cell creates the dataset we want to train our model on. Each example consists of a start and target (end) frame, which are generated by placing a random shape somewhere within the domain.
```
for scene in Scene.list(data_path):
    scene.remove()

for _ in range(example_count // batch_size):
    scene = Scene.create(data_path, count=batch_size, copy_calling_script=False)
    print(scene)
    start = distribute_random_shape(domain.resolution, batch_size, shape_library)
    end__ = distribute_random_shape(domain.resolution, batch_size, shape_library)
    [scene.write_sim_frame([start], ['density'], frame=f) for f in range(step_count)]
    scene.write_sim_frame([end__], ['density'], frame=step_count)
```

Since this dataset does not contain any intermediate frames, it does not allow for supervised pretraining. This is because to pretrain a CFE, two consecutive frames are required, and to pretrain an $OP_n$, three frames with distance $n/2$ are needed.

Instead, we create a second dataset which contains intermediate frames. This does not need to look like the actual dataset since it's only used for network initialization. Here, we linearly move a rectangle around the domain.

```
for scene in Scene.list(pretrain_data_path):
    scene.remove()

for scene_index in range(example_count // batch_size):
    scene = Scene.create(pretrain_data_path, count=batch_size, copy_calling_script=False)
    print(scene)
    pos0 = np.random.randint(10, 56, (batch_size, 2))  # start position
    pose = np.random.randint(10, 56, (batch_size, 2))  # end position
    size = np.random.randint(6, 10, (batch_size, 2))
    for frame in range(step_count + 1):
        time = frame / float(step_count + 1)
        pos = np.round(pos0 * (1 - time) + pose * time).astype(int)
        density = AABox(lower=pos - size // 2, upper=pos - size // 2 + size).value_at(domain.center_points())
        scene.write_sim_frame([density], ['density'], frame=frame)
```

# Supervised Initialization

```
test_range = range(100)
val_range = range(100, 200)
train_range = range(200, 1000)
```

The following cell trains the $OP_2$, $OP_4$, $OP_8$, $OP_{16}$ networks from scratch. You can skip it and load the checkpoints by running the cell after.
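As a side note, the frame-spacing requirement stated above can be made concrete with a small helper (`op_frames` and `cfe_frames` are hypothetical names for this sketch, not part of Φ<sub>Flow</sub>): an $OP_n$ observer needs the two endpoint frames plus the midpoint, while a CFE only needs two consecutive frames.

```python
def op_frames(n, t=0):
    """Frame indices needed to pretrain OP_n: the two endpoints and the midpoint."""
    assert n % 2 == 0, "n must be even so the center frame t + n/2 exists"
    return (t, t + n // 2, t + n)

def cfe_frames(t=0):
    """Pretraining a CFE only needs two consecutive frames."""
    return (t, t + 1)

# With step_count = 16, the largest observer relates frames 0 and 16 to frame 8.
print(op_frames(16))  # (0, 8, 16)
print(cfe_frames())   # (0, 1)
```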
```
supervised_checkpoints = {}

for n in [2, 4, 8, 16]:
    app = ControlTraining(n, IncompressibleFluidPDE(domain, dt),
                          datapath=pretrain_data_path, val_range=val_range, train_range=train_range,
                          trace_to_channel=lambda _: 'density',
                          obs_loss_frames=[n // 2], trainable_networks=['OP%d' % n],
                          sequence_class=None).prepare()
    for i in range(1000):
        app.progress()  # Run optimization for one batch
    supervised_checkpoints['OP%d' % n] = app.save_model()

supervised_checkpoints
# supervised_checkpoints = {'OP%d' % n: '../networks/shapes/supervised/OP%d_1000' % n for n in [2, 4, 8, 16]}
```

# CFE Pretraining with Differentiable Physics

To pretrain the CFE, we set up a simulation with a single step of the differentiable solver. The following cell trains the CFE network from scratch. You can skip it and load the checkpoint by running the cell after.

```
app = ControlTraining(1, IncompressibleFluidPDE(domain, dt),
                      datapath=pretrain_data_path, val_range=val_range, train_range=train_range,
                      trace_to_channel=lambda _: 'density',
                      obs_loss_frames=[1], trainable_networks=['CFE']).prepare()
for i in range(1000):
    app.progress()  # Run optimization for one batch
supervised_checkpoints['CFE'] = app.save_model()
# supervised_checkpoints['CFE'] = '../networks/shapes/CFE/CFE_2000'
```

# End-to-end Training with Differentiable Physics

Now, we jointly train the $CFE$ and all $OP_n$ networks. The following cell builds the computational graph with `step_count` solver steps without initializing the network weights.

```
staggered_app = ControlTraining(step_count, IncompressibleFluidPDE(domain, dt),
                                datapath=data_path, val_range=val_range, train_range=train_range,
                                trace_to_channel=lambda _: 'density',
                                obs_loss_frames=[step_count],
                                trainable_networks=['CFE', 'OP2', 'OP4', 'OP8', 'OP16'],
                                sequence_class=StaggeredSequence, learning_rate=5e-4).prepare()
```

The next cell initializes the networks using the supervised checkpoints and then trains all networks jointly.
You can skip it and load the checkpoint by running the cell after.

```
staggered_app.load_checkpoints(supervised_checkpoints)
for i in range(1000):
    staggered_app.progress()  # Run staggered optimization for one batch
staggered_checkpoint = staggered_app.save_model()
# staggered_checkpoint = {net: '../networks/shapes/staggered/all_53750' for net in ['CFE', 'OP2', 'OP4', 'OP8', 'OP16']}
# staggered_app.load_checkpoints(staggered_checkpoint)
```

Now that the network is trained, we can infer some trajectories from the test set. This corresponds to Fig 5b and 18b from the [paper](https://openreview.net/pdf?id=HyeSin4FPB).

```
states = staggered_app.infer_all_frames(test_range)

import pylab
batches = [0, 1, 2]
pylab.subplots(len(batches), 9, sharey='row', sharex='col', figsize=(12, 7))
pylab.tight_layout(w_pad=0)
for i, batch in enumerate(batches):
    for t in range(9):
        pylab.subplot(len(batches), 9, t + 1 + i * 9)
        pylab.title('t=%d' % (t * 2))  # parenthesized so the frame index, not the string, is multiplied
        pylab.imshow(states[t * 2].density.data[batch, ..., 0], origin='lower')
```

Using the same procedure as with the Burgers example, we could use a `RefinedSequence` and train with the prediction refinement scheme. The results are already looking rather nice, so we'll leave it up to the reader ;-)
# Westeros Tutorial - Introducing `addon` technologies

This tutorial shows how to establish an inter-dependency between two technologies by configuring one of them as an `addon` to the other, i.e., the parent technology. This can be used to add technology features such as carbon-capture-and-storage (CCS) retrofits, passout-turbines (for optional heat cogeneration) or cooling technologies to existing technologies.

There are several ways to tackle this issue. Let's take the example of a coal power plant (`coal_ppl`). All of the above-mentioned additional features could be implemented by introducing different *modes* of operation for `coal_ppl`. For example, heat cogeneration could be implemented as a separate operation `mode` of `coal_ppl`, where instead of just generating electricity, heat can also be produced at the cost of reducing the amount of electricity generated. Another approach would make use of the generic `relations` in MESSAGEix, therefore linking the newly added technology representing the passout-turbine with the activity of `coal_ppl`.

Both of these approaches have some downsides. Using a separate `mode` will not permit explicitly modelling investment costs and lifetime associated with the asset being added to `coal_ppl`. Generic relations are very flexible, but if too many of them are added, the model becomes very hard to understand.

MESSAGEix offers an explicit `addon` formulation for tackling this issue. The additional technology options are explicitly modelled as separate technologies, classified as `addon` technologies and linked to the activity of the technology to which they serve as additional configuration options, i.e., the parent technology. Through an `addon_conversion` factor, the activity of an `addon` technology can further be restricted to a minimum or maximum share of the activity of the parent technology.

**Pre-requisites for running this tutorial**

- You have the *MESSAGEix* framework installed and working.
- You have run the Westeros baseline scenario (``westeros_baseline.ipynb``) and solved it successfully.

```
# Importing required software packages
import pandas as pd
import ixmp
import message_ix
from message_ix.utils import make_df

%matplotlib inline

mp = ixmp.Platform()
```

## Making a clone of the existing scenario '*baseline*'

```
model = 'Westeros Electrified'
base = message_ix.Scenario(mp, model=model, scenario='baseline')
scen = base.clone(model, 'addon_technology', 'illustration of addon formulation', keep_solution=False)
scen.check_out()
```

### i. Setting up parameters

```
year_df = scen.vintage_and_active_years()
vintage_years, act_years = year_df['year_vtg'], year_df['year_act']
model_horizon = scen.set('year')
country = 'Westeros'

gdp_profile = pd.Series([1., 1.5, 1.9], index=pd.Index([700, 710, 720], name='Time'))
```

### ii. Define helper dataframes used for subsequent operations

```
base_input = {
    'node_loc': country,
    'year_vtg': vintage_years,
    'year_act': act_years,
    'mode': 'standard',
    'node_origin': country,
    'commodity': 'electricity',
    'time': 'year',
    'time_origin': 'year',
}

base_output = {
    'node_loc': country,
    'year_vtg': vintage_years,
    'year_act': act_years,
    'mode': 'standard',
    'node_dest': country,
    'time': 'year',
    'time_dest': 'year',
    'unit': '-',
}

base_capacity_factor = {
    'node_loc': country,
    'year_vtg': vintage_years,
    'year_act': act_years,
    'time': 'year',
    'unit': '-',
}

base_technical_lifetime = {
    'node_loc': country,
    'year_vtg': model_horizon,
    'unit': 'y',
}

base_inv_cost = {
    'node_loc': country,
    'year_vtg': model_horizon,
    'unit': 'USD/kW',
}

base_fix_cost = {
    'node_loc': country,
    'year_vtg': vintage_years,
    'year_act': act_years,
    'unit': 'USD/kW',
}

base_var_cost = {
    'node_loc': country,
    'year_vtg': vintage_years,
    'year_act': act_years,
    'mode': 'standard',
    'time': 'year',
    'unit': 'USD/kWa',
}
```

## `addon` technology in MESSAGEix

This tutorial will extend the current reference-energy-system to include a demand for heat and the necessary
technologies to meet this demand. Heat will be generated via a `passout-turbine` which will be linked to the `coal_ppl` using the `addon` formulation.

<img src='_static/addon_technologies_res.png' width='700'>

We will therefore carry out the following three steps:

1. Define a new commodity and demand for heat:
   - Define a new `commodity` `heat`.
   - Parametrize `demand` for `heat`.
2. Add new technologies:
   - Add a new technology to generate heat: `passout-turbine`.
   - Add a new district heat network technology to transport heat to the end-use technology: `dh_grid`.
   - Add a new end-use technology, an in-house district heat connection which is linked to `demand`: `hs_house`.
3. Link the passout-turbine to the coal_ppl using the `addon` feature.

### 1: Define a new commodity and demand

We therefore add a new `commodity` *heat* and a corresponding demand, which will rise at the same rate as electricity demand.

```
# Define a new commodity `heat`
scen.add_set("commodity", ["heat"])

# Add heat demand at the useful level
heat_demand = pd.DataFrame({
    'node': country,
    'commodity': 'heat',
    'level': 'useful',
    'year': [700, 710, 720],
    'time': 'year',
    'value': (50 * gdp_profile).round(),
    'unit': 'GWa',
})
scen.add_par("demand", heat_demand)
```

### 2: Define new technologies

i. Heat will be generated via a pass-out turbine:

Passout-turbine (`po_turbine`) characteristics: The passout-turbine requires one unit of electricity to generate five units of heat. The lifetime is assumed to be 30 years, 10 years longer than that of `coal_ppl`. Investment costs are 150\\$/kW compared to 500\\$/kW for `coal_ppl`. A coal heatplant would have higher investment costs, approximately double those of `po_turbine`. Lastly, `po_turbine` represents an alternative production mode of `coal_ppl`, hence in order to produce heat, the electricity output of `coal_ppl` is reduced. Thus, electricity is parametrized as an input to `po_turbine`; for each unit of electricity, 5 units of heat can be produced.
This will later also be used for establishing a "link" between `coal_ppl` and `po_turbine`.

ii. Heat will be transported via a district heating grid:

District heat (`dh_grid`) network characteristics: District heating networks have only very low losses as these cover only short distances (within city perimeters). We will assume the district heating network to have an efficiency of 97%.

iii. Heat demand will be linked to an end-use technology:

`hs_house` will represent the end-use technology, which distributes heat within the buildings. Similar to previous tutorials, we work our way backwards, starting from the `heat` demand defined at the `useful` energy level and connecting this to the `final` energy level via a technology, `hs_house`, representing the in-house heat distribution system.

```
tec = 'hs_house'
scen.add_set('technology', tec)

hs_house_out = make_df(base_output, technology=tec, commodity='heat',
                       level='useful', value=1.0)
scen.add_par('output', hs_house_out)

hs_house_in = make_df(base_input, technology=tec, commodity='heat',
                      level='final', value=1.0, unit='-')
scen.add_par('input', hs_house_in)
```

Next, we add the information for the district heating network.

```
tec = 'dh_grid'
scen.add_set('technology', tec)

dh_grid_out = make_df(base_output, technology='dh_grid', commodity='heat',
                      level='final', value=1.0)
scen.add_par('output', dh_grid_out)

dh_grid_in = make_df(base_input, technology='dh_grid', commodity='heat',
                     level='secondary', value=1.03, unit='-')
scen.add_par('input', dh_grid_in)
```

Last, we add `po_turbine` as a technology.
```
tec = 'po_turbine'
scen.add_set('technology', tec)

po_out = make_df(base_output, technology=tec, commodity='heat',
                 level='secondary', value=1.0)
scen.add_par('output', po_out)

po_in = make_df(base_input, technology=tec, commodity='electricity',
                level='secondary', value=0.2, unit='-')
scen.add_par('input', po_in)

po_tl = make_df(base_technical_lifetime, technology=tec, value=30)
scen.add_par('technical_lifetime', po_tl)

po_inv = make_df(base_inv_cost, technology=tec, value=150)
scen.add_par('inv_cost', po_inv)

po_fix = make_df(base_fix_cost, technology=tec, value=15)
scen.add_par('fix_cost', po_fix)
```

### 3: Link `po_turbine` with `coal_ppl`

`po_turbine` could already operate, as all required parameters are defined, yet without a link to the activity of `coal_ppl`, `po_turbine` has the possibility of using electricity generated from either `coal_ppl` or `wind_ppl`. But because `po_turbine` is an addon component to `coal_ppl`, a distinct linkage needs to be established.

First, the newly added technology `po_turbine` needs to be classified as an `addon` technology.

```
scen.add_set('addon', 'po_turbine')
```

Next, we need a new `type_addon`, which we will name `cogeneration_heat`. We will classify the `po_turbine` via the *category* `addon` as one of the addon technologies belonging to this specific `type_addon`. In some cases, for example when modelling cooling technologies, multiple technologies can be classified within a single `type_addon`. Via the set `map_tec_addon` we map the electricity generation technology, `coal_ppl`, to the `addon` technology, `po_turbine`. Multiple technologies, for example further fossil powerplants, could also be added to this `type_addon` so as to be able to produce heat via `po_turbine`.

Note: the `addon` technology as well as the parent technology must have the same `mode`.
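Before filling in the sets and parameters, it helps to sketch the arithmetic that the linkage implies. The following is a simplified back-of-envelope illustration only, not the actual MESSAGEix GAMS formulation, and the function name is made up for this sketch. It uses the values from this tutorial: a conversion factor of 5 (one unit of electricity yields five units of heat) and an upper share of 0.15 on how much of `coal_ppl`'s activity `po_turbine` may draw:

```python
def max_addon_heat(parent_activity, addon_up_share, addon_conversion):
    """Illustrative upper bound on heat from the addon: the addon may draw at most
    `addon_up_share` of the parent's activity and converts it with the given factor.
    (Back-of-envelope only -- not the MESSAGEix constraint itself.)"""
    electricity_to_addon = addon_up_share * parent_activity
    return electricity_to_addon * addon_conversion

# If coal_ppl produces 100 GWa of electricity, po_turbine may draw at most
# 15 GWa of it, producing up to ~75 GWa of heat.
print(max_addon_heat(100, 0.15, 5))
```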
```
type_addon = 'cogeneration_heat'
addon = 'po_turbine'
tec = 'coal_ppl'

scen.add_cat('addon', type_addon, addon)
scen.add_set('map_tec_addon', pd.DataFrame({'technology': tec, 'type_addon': [type_addon]}))
```

The last step required in order to link the `coal_ppl` is to define the `addon_conversion` factor between the `coal_ppl` and the `type_addon`. This is important, because the `coal_ppl` generates electricity while the `po_turbine` generates heat. Therefore, we can use the inverse of the `input` coefficient from the `po_turbine`.

```
df = pd.DataFrame({'node': country,
                   'technology': tec,
                   'year_vtg': vintage_years,
                   'year_act': act_years,
                   'mode': 'standard',
                   'time': 'year',
                   'type_addon': type_addon,
                   'value': 5,
                   'unit': '-'})
scen.add_par('addon_conversion', df)
```

Although not necessary for this specific example, it is also possible to limit the activity of `po_turbine` to a specific share of `coal_ppl` activity. In the example below, `po_turbine` is limited to using 15% of `coal_ppl` activity. Likewise, a constraint on the minimum amount of electricity used from `po_turbine` can be applied by using the parameter `addon_lo`.

```
# Index for `addon_up` is ['node', 'technology', 'year_act',
#                          'mode', 'time', 'type_addon',
#                          'value', 'unit']
df = pd.DataFrame({'node': country,
                   'technology': tec,
                   'year_act': act_years,
                   'mode': 'standard',
                   'time': 'year',
                   'type_addon': type_addon,
                   'value': .15,
                   'unit': '-'})
scen.add_par('addon_up', df)
```

### Commit and solve

```
scen.commit(comment='define parameters for addon formulation')
scen.set_as_default()
scen.solve()

scen.var('OBJ')['lvl']
```

# Plotting Results

```
# Create a Reporter object to describe and carry out reporting
# calculations and operations (like plotting) based on `scenario`
from message_ix.reporting import Reporter

rep_bl = Reporter.from_scenario(base)
rep_addon = Reporter.from_scenario(scen)

# Add keys like "plot activity" to describe reporting operations.
# See tutorial/utils/plotting.py
from message_ix.util.tutorial import prepare_plots

prepare_plots(rep_bl)
prepare_plots(rep_addon)
```

## Activity

***

### Scenario: '*baseline*'

```
rep_bl.set_filters(t=["coal_ppl", "wind_ppl"])
rep_bl.get("plot activity")
```

### Scenario: '*addon_technology*'

```
rep_addon.set_filters(t=["coal_ppl", "wind_ppl"])
rep_addon.get("plot activity")

rep_addon.set_filters(t=["po_turbine"])
rep_addon.get("plot activity")
```

### Question

Comparing the electricity generation of the wind power plant in *baseline* and in this scenario shows that wind is generating more. Can you explain the reason? You can find the answer at the end of this tutorial.

## Capacity

***

The behavior observed for the activity of the two electricity generation technologies is reflected in the capacity.

### Scenario: '*baseline*'

```
rep_bl.set_filters(t=["coal_ppl", "wind_ppl"])
rep_bl.get("plot capacity")
```

### Scenario: '*addon_technology*'

```
rep_addon.set_filters(t=["coal_ppl", "wind_ppl"])
rep_addon.get("plot capacity")
```

## Prices

***

The resulting impact on the electricity price, though, is negligible. Yet we can see that the prices of heat are significantly lower than that of light.

### Scenario: '*baseline*'

```
rep_bl.set_filters(c=["light"])
rep_bl.get("plot prices")
```

### Scenario: '*addon_technology*'

```
rep_addon.set_filters(c=["light"])
rep_addon.get("plot prices")

rep_addon.set_filters(c=["heat"])
rep_addon.get("plot prices")
```

### Answer to the question:

In the new scenario ('*addon_technology*'), the effects of the addon technology can be seen when comparing the activity to the baseline scenario ('*baseline*'). From 700 onwards, the activity of the `wind_ppl` has increased to compensate for the electricity required from the `coal_ppl` for use in the `po_turbine`. In 720, when the `wind_ppl` is phased out, more electricity is required to be produced by the `coal_ppl`.

```
mp.close_db()
```
# Exercise 4: Optimizing Redshift Table Design

```
%load_ext sql

from time import time
import configparser
import matplotlib.pyplot as plt
import pandas as pd

config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))

KEY = config.get('AWS', 'key')
SECRET = config.get('AWS', 'secret')

DWH_DB = config.get("DWH", "DWH_DB")
DWH_DB_USER = config.get("DWH", "DWH_DB_USER")
DWH_DB_PASSWORD = config.get("DWH", "DWH_DB_PASSWORD")
DWH_PORT = config.get("DWH", "DWH_PORT")
```

# STEP 1: Get the params of the created redshift cluster

- We need:
  - The redshift cluster <font color='red'>endpoint</font>
  - The <font color='red'>IAM role ARN</font> that gives Redshift access to read from S3

```
# FILL IN THE REDSHIFT ENDPOINT HERE
# e.g. DWH_ENDPOINT="redshift-cluster-1.csmamz5zxmle.us-west-2.redshift.amazonaws.com"
DWH_ENDPOINT="dwhcluster.croszzrwyk0g.us-west-2.redshift.amazonaws.com"

# FILL IN THE IAM ROLE ARN you got in step 2.2 of the previous exercise
# e.g DWH_ROLE_ARN="arn:aws:iam::988332130976:role/dwhRole"
DWH_ROLE_ARN="arn:aws:iam::471896449959:role/dwhRole"
```

# STEP 2: Connect to the Redshift Cluster

```
import os

conn_string = "postgresql://{}:{}@{}:{}/{}".format(DWH_DB_USER, DWH_DB_PASSWORD, DWH_ENDPOINT, DWH_PORT, DWH_DB)
print(conn_string)

%sql $conn_string
```

# STEP 3: Create Tables

- We are going to use a benchmarking data set common for benchmarking star schemas in data warehouses.
- The data is pre-loaded in a public bucket on the `us-west-2` region
- Our examples will be based on the Amazon Redshift tutorial but in a scripted environment in our workspace.
![afa](https://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-ssb-data-model.png)

## 3.1 Create tables (no distribution strategy) in the `nodist` schema

```
%%sql
CREATE SCHEMA IF NOT EXISTS nodist;
SET search_path TO nodist;

DROP TABLE IF EXISTS part cascade;
DROP TABLE IF EXISTS supplier;
DROP TABLE IF EXISTS customer;
DROP TABLE IF EXISTS dwdate;
DROP TABLE IF EXISTS lineorder;

CREATE TABLE part (
  p_partkey   INTEGER NOT NULL,
  p_name      VARCHAR(22) NOT NULL,
  p_mfgr      VARCHAR(6) NOT NULL,
  p_category  VARCHAR(7) NOT NULL,
  p_brand1    VARCHAR(9) NOT NULL,
  p_color     VARCHAR(11) NOT NULL,
  p_type      VARCHAR(25) NOT NULL,
  p_size      INTEGER NOT NULL,
  p_container VARCHAR(10) NOT NULL
);

CREATE TABLE supplier (
  s_suppkey INTEGER NOT NULL,
  s_name    VARCHAR(25) NOT NULL,
  s_address VARCHAR(25) NOT NULL,
  s_city    VARCHAR(10) NOT NULL,
  s_nation  VARCHAR(15) NOT NULL,
  s_region  VARCHAR(12) NOT NULL,
  s_phone   VARCHAR(15) NOT NULL
);

CREATE TABLE customer (
  c_custkey    INTEGER NOT NULL,
  c_name       VARCHAR(25) NOT NULL,
  c_address    VARCHAR(25) NOT NULL,
  c_city       VARCHAR(10) NOT NULL,
  c_nation     VARCHAR(15) NOT NULL,
  c_region     VARCHAR(12) NOT NULL,
  c_phone      VARCHAR(15) NOT NULL,
  c_mktsegment VARCHAR(10) NOT NULL
);

CREATE TABLE dwdate (
  d_datekey          INTEGER NOT NULL,
  d_date             VARCHAR(19) NOT NULL,
  d_dayofweek        VARCHAR(10) NOT NULL,
  d_month            VARCHAR(10) NOT NULL,
  d_year             INTEGER NOT NULL,
  d_yearmonthnum     INTEGER NOT NULL,
  d_yearmonth        VARCHAR(8) NOT NULL,
  d_daynuminweek     INTEGER NOT NULL,
  d_daynuminmonth    INTEGER NOT NULL,
  d_daynuminyear     INTEGER NOT NULL,
  d_monthnuminyear   INTEGER NOT NULL,
  d_weeknuminyear    INTEGER NOT NULL,
  d_sellingseason    VARCHAR(13) NOT NULL,
  d_lastdayinweekfl  VARCHAR(1) NOT NULL,
  d_lastdayinmonthfl VARCHAR(1) NOT NULL,
  d_holidayfl        VARCHAR(1) NOT NULL,
  d_weekdayfl        VARCHAR(1) NOT NULL
);

CREATE TABLE lineorder (
  lo_orderkey   INTEGER NOT NULL,
  lo_linenumber INTEGER NOT NULL,
  lo_custkey    INTEGER NOT NULL,
  lo_partkey    INTEGER NOT NULL,
  lo_suppkey    INTEGER NOT NULL,
  lo_orderdate       INTEGER NOT NULL,
  lo_orderpriority   VARCHAR(15) NOT NULL,
  lo_shippriority    VARCHAR(1) NOT NULL,
  lo_quantity        INTEGER NOT NULL,
  lo_extendedprice   INTEGER NOT NULL,
  lo_ordertotalprice INTEGER NOT NULL,
  lo_discount        INTEGER NOT NULL,
  lo_revenue         INTEGER NOT NULL,
  lo_supplycost      INTEGER NOT NULL,
  lo_tax             INTEGER NOT NULL,
  lo_commitdate      INTEGER NOT NULL,
  lo_shipmode        VARCHAR(10) NOT NULL
);
```

## 3.2 Create tables (with a distribution strategy) in the `dist` schema

```
%%sql
CREATE SCHEMA IF NOT EXISTS dist;
SET search_path TO dist;

DROP TABLE IF EXISTS part cascade;
DROP TABLE IF EXISTS supplier;
DROP TABLE IF EXISTS customer;
DROP TABLE IF EXISTS dwdate;
DROP TABLE IF EXISTS lineorder;

CREATE TABLE part (
  p_partkey   integer not null sortkey distkey,
  p_name      varchar(22) not null,
  p_mfgr      varchar(6) not null,
  p_category  varchar(7) not null,
  p_brand1    varchar(9) not null,
  p_color     varchar(11) not null,
  p_type      varchar(25) not null,
  p_size      integer not null,
  p_container varchar(10) not null
);

CREATE TABLE supplier (
  s_suppkey integer not null sortkey,
  s_name    varchar(25) not null,
  s_address varchar(25) not null,
  s_city    varchar(10) not null,
  s_nation  varchar(15) not null,
  s_region  varchar(12) not null,
  s_phone   varchar(15) not null
) diststyle all;

CREATE TABLE customer (
  c_custkey    integer not null sortkey,
  c_name       varchar(25) not null,
  c_address    varchar(25) not null,
  c_city       varchar(10) not null,
  c_nation     varchar(15) not null,
  c_region     varchar(12) not null,
  c_phone      varchar(15) not null,
  c_mktsegment varchar(10) not null
) diststyle all;

CREATE TABLE dwdate (
  d_datekey        integer not null sortkey,
  d_date           varchar(19) not null,
  d_dayofweek      varchar(10) not null,
  d_month          varchar(10) not null,
  d_year           integer not null,
  d_yearmonthnum   integer not null,
  d_yearmonth      varchar(8) not null,
  d_daynuminweek   integer not null,
  d_daynuminmonth  integer not null,
  d_daynuminyear   integer not null,
  d_monthnuminyear integer not null,
  d_weeknuminyear  integer not null,
  d_sellingseason    varchar(13) not null,
  d_lastdayinweekfl  varchar(1) not null,
  d_lastdayinmonthfl varchar(1) not null,
  d_holidayfl        varchar(1) not null,
  d_weekdayfl        varchar(1) not null
) diststyle all;

CREATE TABLE lineorder (
  lo_orderkey        integer not null,
  lo_linenumber      integer not null,
  lo_custkey         integer not null,
  lo_partkey         integer not null distkey,
  lo_suppkey         integer not null,
  lo_orderdate       integer not null sortkey,
  lo_orderpriority   varchar(15) not null,
  lo_shippriority    varchar(1) not null,
  lo_quantity        integer not null,
  lo_extendedprice   integer not null,
  lo_ordertotalprice integer not null,
  lo_discount        integer not null,
  lo_revenue         integer not null,
  lo_supplycost      integer not null,
  lo_tax             integer not null,
  lo_commitdate      integer not null,
  lo_shipmode        varchar(10) not null
);
```

# STEP 4: Copying tables

Our intent here is to run 5 COPY operations for the 5 tables respectively, as shown below. However, we want to accomplish the following:

- Make sure that the `DWH_ROLE_ARN` is substituted with the correct value in each query
- Perform the data loading twice, once for each schema (dist and nodist)
- Collect timing statistics to compare the insertion times

Thus, we have scripted the insertion as found below in the function `loadTables`, which returns a pandas dataframe containing timing statistics for the copy operations.

```sql
copy customer from 's3://awssampledbuswest2/ssbgz/customer'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';

copy dwdate from 's3://awssampledbuswest2/ssbgz/dwdate'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';

copy lineorder from 's3://awssampledbuswest2/ssbgz/lineorder'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';

copy part from 's3://awssampledbuswest2/ssbgz/part'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';

copy supplier from 's3://awssampledbuswest2/ssbgz/supplier'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';
```

## 4.1 Automate the copying
```
def loadTables(schema, tables):
    loadTimes = []
    SQL_SET_SCHEMA = "SET search_path TO {};".format(schema)
    %sql $SQL_SET_SCHEMA

    for table in tables:
        SQL_COPY = """
copy {} from 's3://awssampledbuswest2/ssbgz/{}'
credentials 'aws_iam_role={}'
gzip region 'us-west-2';
        """.format(table, table, DWH_ROLE_ARN)

        print("======= LOADING TABLE: ** {} ** IN SCHEMA ==> {} =======".format(table, schema))
        print(SQL_COPY)

        t0 = time()
        %sql $SQL_COPY
        loadTime = time() - t0
        loadTimes.append(loadTime)

        print("=== DONE IN: {0:.2f} sec\n".format(loadTime))
    return pd.DataFrame({"table": tables, "loadtime_" + schema: loadTimes}).set_index('table')

#-- List of the tables to be loaded
tables = ["customer", "dwdate", "supplier", "part", "lineorder"]

#-- Insertion twice for each schema (WARNING!! EACH CAN TAKE MORE THAN 10 MINUTES!!!)
nodistStats = loadTables("nodist", tables)
distStats = loadTables("dist", tables)
```

## 4.2 Compare the load performance results

```
#-- Plotting of the timing results
stats = distStats.join(nodistStats)
stats.plot.bar()
plt.show()
```

# STEP 5: Compare Query Performance

```
oneDim_SQL = """
set enable_result_cache_for_session to off;
SET search_path TO {};

select sum(lo_extendedprice*lo_discount) as revenue
from lineorder, dwdate
where lo_orderdate = d_datekey
and d_year = 1997
and lo_discount between 1 and 3
and lo_quantity < 24;
"""

twoDim_SQL = """
set enable_result_cache_for_session to off;
SET search_path TO {};

select sum(lo_revenue), d_year, p_brand1
from lineorder, dwdate, part, supplier
where lo_orderdate = d_datekey
and lo_partkey = p_partkey
and lo_suppkey = s_suppkey
and p_category = 'MFGR#12'
and s_region = 'AMERICA'
group by d_year, p_brand1
"""

drill_SQL = """
set enable_result_cache_for_session to off;
SET search_path TO {};

select c_city, s_city, d_year, sum(lo_revenue) as revenue
from customer, lineorder, supplier, dwdate
where lo_custkey = c_custkey
and lo_suppkey = s_suppkey
and lo_orderdate = d_datekey
and (c_city='UNITED KI1' or c_city='UNITED KI5')
and (s_city='UNITED KI1' or s_city='UNITED KI5')
and d_yearmonth = 'Dec1997'
group by c_city, s_city, d_year
order by d_year asc, revenue desc;
"""

oneDimSameDist_SQL = """
set enable_result_cache_for_session to off;
SET search_path TO {};

select lo_orderdate, sum(lo_extendedprice*lo_discount) as revenue
from lineorder, part
where lo_partkey = p_partkey
group by lo_orderdate
order by lo_orderdate
"""

def compareQueryTimes(schema):
    queryTimes = []
    for i, query in enumerate([oneDim_SQL, twoDim_SQL, drill_SQL, oneDimSameDist_SQL]):
        t0 = time()
        q = query.format(schema)
        %sql $q
        queryTime = time() - t0
        queryTimes.append(queryTime)
    return pd.DataFrame({"query": ["oneDim", "twoDim", "drill", "oneDimSameDist"],
                         "queryTime_" + schema: queryTimes}).set_index('query')

noDistQueryTimes = compareQueryTimes("nodist")
distQueryTimes = compareQueryTimes("dist")

queryTimeDF = noDistQueryTimes.join(distQueryTimes)
queryTimeDF.plot.bar()
plt.show()

# Percentage improvement of the dist schema over nodist, per query
queryTimeDF["distImprovement"] = 100.0 * (queryTimeDF['queryTime_nodist'] - queryTimeDF['queryTime_dist']) / queryTimeDF['queryTime_nodist']
improvementDF = queryTimeDF["distImprovement"]
improvementDF.plot.bar(title="% dist Improvement by query")
plt.show()
```
# K-means clustering analysis for stocks compiled in the Hang Seng Index

The goal of this demo is to show how we can apply k-means clustering to identify market patterns and analyze their relationships, not only among the clusters themselves but also with the market's trend. Specifically, we first cluster 48 different stocks, similar to building a dictionary in Bag-of-Words. Then, we construct a matrix of the transitions between the clusters and relate the trending behavior of the clusters to the next $n$-day trend.

```
import os
import glob
import numpy as np
import pandas as pd
import itertools
import matplotlib.pyplot as plt

import utils
import ta_utils as ta
import candlesticks_plot as cp

%matplotlib inline

from collections import Counter
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
```

Defining a 6-day window on the daily OHLC prices for 48 different stocks

```
window = 6
path = glob.glob('./stockData/*')
data = [pd.read_csv(filename).drop(columns=['Volume', 'Date']) for filename in path]
lag_data = [utils.n_day_lag(d, window).dropna() for d in data]
trend_data = [utils.get_window_trend(d, window).dropna() for d in data]
# data = pd.concat(data).reset_index(drop=True)
```

Whitening data and initializing clusters. Our goal is to identify 24 different clusters.
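The `n_day_lag` helper above lives in this project's `utils` module and is not shown here. A minimal sketch of the idea — stacking each row together with the previous days' values via `pandas.shift` so every row holds a full window — might look like this (the column naming and function name are assumptions, not the project's actual implementation):

```python
import pandas as pd

def n_day_lag_sketch(df, n):
    """Concatenate df with its 1..n-1 day lags, so each row holds an n-day window."""
    lagged = [df] + [df.shift(k).add_suffix(f'_lag{k}') for k in range(1, n)]
    return pd.concat(lagged, axis=1)

ohlc = pd.DataFrame({'Open': [1., 2., 3., 4.], 'Close': [1.5, 2.5, 3.5, 4.5]})
windowed = n_day_lag_sketch(ohlc, 3).dropna()  # the first n-1 rows lack history
print(windowed.shape)  # (2, 6): 2 columns x 3 lags, first 2 rows dropped
```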
```
K = 24
pca = PCA()
X = pca.fit_transform(pd.concat(lag_data, axis=0))

centroids = np.random.normal(size=(K, X.shape[1]))
centroids = (centroids - centroids.mean()) / centroids.std()

km = KMeans(n_clusters=K, init=centroids)
km.fit(X)
```

Visualizing distribution of clusters

```
counts = Counter(km.labels_)
plt.style.use('ggplot')

labels, values = zip(*counts.items())

# sort your values in descending order
indSort = np.argsort(values)[::-1]

# rearrange your data
labels = np.array(labels)[indSort]
values = np.array(values)[indSort]

indexes = np.arange(len(labels))
bar_width = 0.35

plt.bar(indexes, values)

# add labels
plt.xticks(indexes + bar_width, labels)
plt.show()
```

After identifying our clusters, we apply them to selected stocks to analyze the patterns we have identified. As examples, we will use the daily prices for HSBC and MTR

```
hsbc_ohlc = pd.read_csv('./data/0005.HK.csv')
hsbc_ohlc = hsbc_ohlc.set_index('Date')
hsbc_ohlc = hsbc_ohlc.drop(columns=['Volume'])
hsbc_ohlc_lag = utils.n_day_lag(hsbc_ohlc, window).dropna()
hsbc_trend = utils.get_window_trend(hsbc_ohlc, window).dropna()

# Project onto the PCA basis fitted above, so the new samples live in the
# same space as the cluster centroids.
hsbc_clusters = km.predict(pca.transform(hsbc_ohlc_lag.to_numpy()))
hsbc_clusters = pd.Series(hsbc_clusters, name='Cluster', index=hsbc_trend.index)
cluster_trend = pd.concat([hsbc_clusters, hsbc_trend], axis=1)

mtr_ohlc = pd.read_csv('./data/0066.HK.csv')
mtr_ohlc = mtr_ohlc.set_index('Date')
mtr_ohlc = mtr_ohlc.drop(columns=['Volume'])
mtr_ohlc_lag = utils.n_day_lag(mtr_ohlc, window).dropna()
mtr_trend = utils.get_window_trend(mtr_ohlc, window)

mtr_clusters = km.predict(pca.transform(mtr_ohlc_lag.to_numpy()))
mtr_clusters = pd.Series(mtr_clusters, name='Cluster', index=mtr_trend.index)
mtr_cluster_trend = pd.concat([mtr_clusters, mtr_trend], axis=1)
```

Using the prices of HSBC, we identify the transitions from one cluster to another.
Specifically, the transition probabilities are represented by the matrix $\mathbb{P}=\{p_{i,j}\}_{i,j\in\mathbb{N}}$, where $p_{i,j}$ is the probability of transitioning from cluster $i$ to cluster $j$. Given cluster $i$ at time $t$, denoted $c_t^i$, the transition probability is $p_{i,j}=p(c_{t+5}^j|c_t^i)$ for $t=1,2,3,...$.

```
plt.style.use('default')
trans_mat = utils.transition_matrix(hsbc_clusters, K, future_step=window)
cp.plot_heatmap(trans_mat, K, show_prob=False)
```

We are interested in whether there is a relationship between clusters and the window each is defined in. Here, we plot the probability of a cluster being defined as a trend: bullish (1), no-trend (0), and bearish (-1)

```
sig_dist = utils.trend_cluster_distribution(cluster_trend, K)
plt.figure(figsize=(10,10))
cp.plot_signal_distribution(sig_dist)
```

Similarly for MTR, we plot the same heatmap and compare the two:

```
sig_dist = utils.trend_cluster_distribution(mtr_cluster_trend.dropna(), K)
plt.figure(figsize=(10,10))
cp.plot_signal_distribution(sig_dist)
```

We can further examine the market trend for the next $n$-day window. Here we make similar plots as above for the next 3-day trend for HSBC:

```
window = 3
hsbc_trend = utils.get_n_day_trend(hsbc_ohlc, window).dropna()
cluster_trend = pd.concat([hsbc_clusters, hsbc_trend], axis=1)
sig_dist = utils.trend_cluster_distribution(cluster_trend.dropna(), K)
plt.figure(figsize=(10,10))
cp.plot_signal_distribution(sig_dist)
```

and similarly for MTR:

```
mtr_trend = utils.get_n_day_trend(mtr_ohlc, window).dropna()
mtr_cluster_trend = pd.concat([mtr_clusters, mtr_trend], axis=1)
sig_dist = utils.trend_cluster_distribution(mtr_cluster_trend.dropna(), K)
plt.figure(figsize=(10,10))
cp.plot_signal_distribution(sig_dist)

# output demo
import os
os.system('jupyter nbconvert --to html kmeans-demo.ipynb')
```
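The `utils.transition_matrix` helper is not shown in this notebook; the construction it describes can be sketched directly from the definition of $p_{i,j}$ (a hypothetical stand-in, not the project's implementation):

```python
import numpy as np

# Count transitions from the cluster at time t to the cluster at time
# t + future_step over a label sequence, then row-normalize so each row
# of P is a conditional distribution p(. | cluster i).
def transition_matrix(labels, K, future_step=1):
    counts = np.zeros((K, K))
    for i, j in zip(labels[:-future_step], labels[future_step:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

P = transition_matrix([0, 1, 0, 1, 1], K=2)
print(P)  # rows sum to 1 wherever cluster i was observed
```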
github_jupyter
```
%load_ext autoreload
%autoreload 2
%load_ext tensorboard

import sys
import os
module_path = os.path.abspath(os.path.join(os.pardir))
if module_path not in sys.path:
    sys.path.append(module_path)

from datetime import datetime
import pandas as pd
import numpy as np
import joblib
from pathlib import Path

from sklearn import model_selection

from pytorch_lightning import Trainer, seed_everything
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning.loggers.tensorboard import TensorBoardLogger

from project.datasets import Dataset, CTRPDataModule
from project.models import FiLMNetwork, ConcatNetwork, ConditionalNetwork

import pyarrow.dataset as ds
import pyarrow.feather as feather

def prepare(exp, subset=True):
    data_path = Path("../../film-gex-data/processed/")
    input_cols = joblib.load(data_path.joinpath("gene_cols.pkl"))
    if exp=='id':
        cpd_id = "master_cpd_id"
        cond_cols = np.array([cpd_id, 'cpd_conc_umol'])
    else:
        fp_cols = joblib.load(data_path.joinpath("fp_cols.pkl"))
        cond_cols = np.append(fp_cols, ['cpd_conc_umol'])
    if subset:
        dataset = ds.dataset(data_path.joinpath("train_sub.feather"), format='feather')
    else:
        dataset = ds.dataset(data_path.joinpath("train.feather"), format='feather')
    return dataset, input_cols, cond_cols

def cv(name, exp, gpus, nfolds, dataset, input_cols, cond_cols, batch_size):
    seed_everything(2299)
    cols = list(np.concatenate((input_cols, cond_cols, ['cpd_avg_pv'])))
    for fold in np.arange(0, nfolds):
        start = datetime.now()
        train = dataset.to_table(columns=cols, filter=ds.field('fold') != fold).to_pandas()
        val = dataset.to_table(columns=cols, filter=ds.field('fold') == fold).to_pandas()
        # DataModule
        dm = CTRPDataModule(train, val, input_cols, cond_cols, target='cpd_avg_pv', batch_size=batch_size)
        print("Completed dataloading in {}".format(str(datetime.now() - start)))
        # Model
        start = datetime.now()
        if exp=='film':
model = FiLMNetwork(len(input_cols), len(cond_cols)) else: model = ConcatNetwork(len(input_cols), len(cond_cols)) # Callbacks logger = TensorBoardLogger(save_dir=os.getcwd(), version="{}_{}_fold_{}".format(name, exp, fold), name='lightning_logs') early_stop = EarlyStopping(monitor='val_loss', min_delta=0.01) # Trainer start = datetime.now() trainer = Trainer(auto_lr_find=True, auto_scale_batch_size=False, max_epochs=25, gpus=[1,3], logger=logger, early_stop_callback=False, distributed_backend='dp') print("Completed loading in {}".format(str(datetime.now() - start))) trainer.fit(model, dm) print("Completed fold {} in {}".format(fold, str(datetime.now() - start))) return print("/done") dataset, input_cols, cond_cols = prepare('id', subset=True) name = 'test' exp = 'id' gpus = 3 nfolds = 1 model = ConditionalNetwork(exp, len(input_cols), len(cond_cols), batch_size=256) model.hparams #cv(name, exp, gpus, nfolds, dataset, input_cols, cond_cols, batch_size=256) name logger = TensorBoardLogger(save_dir=os.getcwd(), version="{}_{}_fold_{}", name='lightning_logs') logger.log_dir trainer = Trainer(logger=logger) trainer.default_root_dir trainer.logger.log_dir FiLMNetwork.log() ```
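The `FiLMNetwork` above conditions one set of features on another via feature-wise linear modulation (FiLM). Its internals are not shown here, but the core operation, independent of this project's code, is a per-feature scale and shift predicted from the conditioning input; a numpy-only sketch:

```python
import numpy as np

# FiLM: a conditioning branch predicts a scale (gamma) and shift (beta)
# per feature, and the input features are modulated as gamma * x + beta.
# Here the conditioning "network" is a fixed random linear map, purely
# for illustration; dimensions are made up.
rng = np.random.default_rng(0)
n_features, n_cond = 4, 3
x = rng.normal(size=(2, n_features))     # input features (e.g. expression)
cond = rng.normal(size=(2, n_cond))      # conditioning features (e.g. compound)

W_gamma = rng.normal(size=(n_cond, n_features))
W_beta = rng.normal(size=(n_cond, n_features))
gamma, beta = cond @ W_gamma, cond @ W_beta

film_out = gamma * x + beta              # feature-wise linear modulation
print(film_out.shape)                    # (2, 4)
```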
github_jupyter
Numba 0.52.0 Release Demo ======================= This notebook contains a demonstration of new features present in the 0.52.0 release of Numba. Whilst release notes are produced as part of the [`CHANGE_LOG`](https://github.com/numba/numba/blob/release0.52/CHANGE_LOG), there's nothing like seeing code in action! This release contains a few new features, but it's mainly internals that have changed, with a particular focus on increasing run time performance! In this notebook the new CPU target features are demonstrated. The [CUDA target](https://numba.pydata.org/numba-doc/latest/cuda/index.html) also gained a lot of new features in 0.52.0 and [@gmarkall](https://github.com/gmarkall) has created a [demo notebook](https://mybinder.org/v2/gh/numba/numba-examples?filepath=notebooks%2FNumba_052_CUDA_Release_Demo.ipynb) especially for these! Key internal changes: * Intel kindly sponsored the development of an LLVM level reference count pruning compiler pass. This reduces pressure on the atomic locks used for reference counting in Numba and exposes a lot more inlining/optimisation opportunities ([@sklam](https://github.com/sklam)). This change has a large impact on performance and so has [its own notebook](https://mybinder.org/v2/gh/numba/numba-examples?filepath=notebooks%2FNumba_052_refpruner.ipynb) to help users understand what it's doing! * Intel also sponsored work to improve the performance of the ``numba.typed.List`` container ([@stuartarchibald](https://github.com/stuartarchibald)). * The optimisers in Numba have been lightly tuned and can now do more ([@stuartarchibald](https://github.com/stuartarchibald)). Highlights of core feature changes: * The ``inspect_cfg`` method on the JIT dispatcher object has been significantly enhanced ([@stuartarchibald](https://github.com/stuartarchibald)). * NumPy 1.19 support is added ([@stuartarchibald](https://github.com/stuartarchibald)). * A few new NumPy features have been added along with some extensions to existing support. 
Demonstrations of new features/changes: * [Performance improvement demonstration](#Performance-improvement-demonstration) * [``inspect_cfg`` enhancements](#CFG-inspection-enhancements) * [NumPy enhancements](#Newly-supported-NumPy-functions/features) First, import the necessary from Numba and NumPy... ``` from numba import jit, njit, config, __version__, errors from numba.extending import overload import numba import numpy as np assert numba.version_info.short >= (0, 52) ``` Performance improvement demonstration ================================== The performance of Numba JIT compiled functions is improved in quite a few important cases in 0.52. First, as mentioned above, [this notebook](https://mybinder.org/v2/gh/numba/numba-examples?filepath=notebooks%2FNumba_052_refpruner.ipynb) demonstrates the impact of the reference count pruning compiler pass, alternatively, just try 0.52.0 with your existing code and see if it makes a difference! Second, there have been some specific improvements, demonstrating a couple of them: #### Calling `str(<int>)` ``` @njit def str_on_int(n): c = 0 for i in range(n): c += len(str(n)) return c sz = 100000 str_on_int(sz) %timeit str_on_int.py_func(sz) # python function %timeit str_on_int(sz) # jit function ``` #### Reductions/`__getitem__` on `typed.List` ``` # Reductions on typed.List from numba.typed import List n = 1000 py_list = [float(x) for x in range(n)] nb_list = List(py_list) def sum_list(lst): acc = 0.0 for item in lst: acc += item return acc jit_sum_list = njit(sum_list) fastmath_jit_sum_list = njit(fastmath=True)(sum_list) %timeit sum_list(py_list) # python function on a python list %timeit jit_sum_list(nb_list) # JIT function on typed list %timeit fastmath_jit_sum_list(nb_list) # "fastmath" JIT function on typed list ``` CFG inspection enhancements ========================= The Numba dispatcher's [`inspect_cfg()` method](https://numba.readthedocs.io/en/stable/reference/jit-compilation.html#Dispatcher.inspect_cfg) has been 
enhanced with colorized output and support for Python code interleaving to provide a more visual way to debug/tune code. For a more advanced demonstration, this feature is used in [the notebook](https://mybinder.org/v2/gh/numba/numba-examples?filepath=notebooks%2FNumba_052_refpruner.ipynb) explaining the new reference count pruning pass. A quick demonstration of this feature: ``` @njit(debug=True) # Switch on debug to make python source available. def foo(n): acc = 0. for i in range(n): acc += np.sqrt(i) if acc > 1000: raise ValueError("Error!") else: return acc foo(10) # Take a look at the docstring for all the options, the ones used here are: # strip_ir = remove LLVM IR apart from calls # interleave = add Python source into the LLVM CFG! foo.inspect_cfg(foo.signatures[0], strip_ir=True, interleave=True) ``` Newly supported NumPy functions/features ==================================== This release contains some updates to Numba's NumPy support, mostly contributed by the Numba community (with thanks!): * NumPy 1.19 ([@stuartarchibald](https://github.com/stuartarchibald)). * ``np.asfarray`` ([@guilhermeleobas](https://github.com/guilhermeleobas)). * "subtyping" in record arrays ([@luk-f-a](https://github.com/luk-f-a)). * ``np.split`` and ``np.array_split`` ([@ivirshup](https://github.com/ivirshup)). * ``operator.contains`` with ``ndarray`` ([@mugoh](https://github.com/mugoh)). * ``np.asarray_chkfinite`` ([@rishabhvarshney14](https://github.com/rishabhvarshney14)). * the ``ndarray`` allocators, ``empty``, ``ones`` and ``zeros``, accept a ``dtype`` specified as a string literal ([@stuartarchibald](https://github.com/stuartarchibald)). 
``` @njit def demo_numpy(): # np.asfarray farray = np.asfarray(np.zeros(4,), dtype=np.int8) # np.split/np.array_split split = np.split(np.arange(10), 5) arr_split = np.array_split(np.arange(10), 3) arr_contains = 4 in np.arange(10), 11 in np.arange(10) # asarray_chkfinite caught = False try: np.asarray_chkfinite((0., np.inf, 1., np.nan,)) except Exception: # inf and nan not accepted caught = True # String literal dtypes ones, zeros, empty = (np.ones((5,), 'int8'), np.zeros((3,), 'complex128'), np.empty((0,), 'float32')) return farray, split, arr_split, arr_contains, caught, ones, zeros, empty farray, split, arr_split, arr_contains, caught, ones, zeros, empty = demo_numpy() print((f"farray: {farray}\n" f"split: {split}\n" f"array_split: {arr_split}\n" f"array contains: {arr_contains}\n" f"caught: {caught}\n" f"ones: {ones}\n" f"zeros: {zeros}\n" f"empty: {empty}\n")) ```
github_jupyter
```
%matplotlib inline
```

The eigenfaces example: chaining PCA and SVMs
=============================================

The goal of this example is to show how an unsupervised method and a supervised one can be chained for better prediction. It starts with a didactic but lengthy way of doing things, and finishes with the idiomatic approach to pipelining in scikit-learn.

Here we'll take a look at a simple facial recognition example. Ideally, we would use a dataset consisting of a subset of the `Labeled Faces in the Wild <http://vis-www.cs.umass.edu/lfw/>`__ data that is available with :func:`sklearn.datasets.fetch_lfw_people`. However, this is a relatively large download (~200MB) so we will do the tutorial on a simpler, less rich dataset. Feel free to explore the LFW dataset.

```
from sklearn import datasets
faces = datasets.fetch_olivetti_faces()
faces.data.shape
```

Let's visualize these faces to see what we're working with

```
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(8, 6))
# plot several images
for i in range(15):
    ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
    ax.imshow(faces.images[i], cmap=plt.cm.bone)
```

.. tip::

   Note that these faces have already been localized and scaled to a common size. This is an important preprocessing step for facial recognition, and is a process that can require a large collection of training data. This can be done in scikit-learn, but the challenge is gathering a sufficient amount of training data for the algorithm to work. Fortunately, this piece is common enough that it has been done. One good resource is `OpenCV <https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html>`__, the *Open Computer Vision Library*.

We'll perform a Support Vector classification of the images.
We'll do a typical train-test split on the images: ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, random_state=0) print(X_train.shape, X_test.shape) ``` Preprocessing: Principal Component Analysis ------------------------------------------- 1850 dimensions is a lot for SVM. We can use PCA to reduce these 1850 features to a manageable size, while maintaining most of the information in the dataset. ``` from sklearn import decomposition pca = decomposition.PCA(n_components=150, whiten=True) pca.fit(X_train) ``` One interesting part of PCA is that it computes the "mean" face, which can be interesting to examine: ``` plt.imshow(pca.mean_.reshape(faces.images[0].shape), cmap=plt.cm.bone) ``` The principal components measure deviations about this mean along orthogonal axes. ``` print(pca.components_.shape) ``` It is also interesting to visualize these principal components: ``` fig = plt.figure(figsize=(16, 6)) for i in range(30): ax = fig.add_subplot(3, 10, i + 1, xticks=[], yticks=[]) ax.imshow(pca.components_[i].reshape(faces.images[0].shape), cmap=plt.cm.bone) ``` The components ("eigenfaces") are ordered by their importance from top-left to bottom-right. We see that the first few components seem to primarily take care of lighting conditions; the remaining components pull out certain identifying features: the nose, eyes, eyebrows, etc. With this projection computed, we can now project our original training and test data onto the PCA basis: ``` X_train_pca = pca.transform(X_train) X_test_pca = pca.transform(X_test) print(X_train_pca.shape) print(X_test_pca.shape) ``` These projected components correspond to factors in a linear combination of component images such that the combination approaches the original face. 
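How much of the data's variance those 150 components retain can be quantified with the explained-variance ratio (scikit-learn exposes it as `pca.explained_variance_ratio_`); a numpy-only sketch of the computation on synthetic data:

```python
import numpy as np

# Synthetic data whose features have decreasing spread, so the leading
# principal components carry most of the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10)) @ np.diag(np.linspace(3.0, 0.1, 10))

Xc = X - X.mean(axis=0)                  # center, as PCA does internally
s = np.linalg.svd(Xc, compute_uv=False)  # singular values, descending
ratio = s**2 / np.sum(s**2)              # explained-variance ratio per component
print(ratio.cumsum())                    # cumulative fraction of variance kept
```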
Doing the Learning: Support Vector Machines ------------------------------------------- Now we'll perform support-vector-machine classification on this reduced dataset: ``` from sklearn import svm clf = svm.SVC(C=5., gamma=0.001) clf.fit(X_train_pca, y_train) ``` Finally, we can evaluate how well this classification did. First, we might plot a few of the test-cases with the labels learned from the training set: ``` import numpy as np fig = plt.figure(figsize=(8, 6)) for i in range(15): ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[]) ax.imshow(X_test[i].reshape(faces.images[0].shape), cmap=plt.cm.bone) y_pred = clf.predict(X_test_pca[i, np.newaxis])[0] color = ('black' if y_pred == y_test[i] else 'red') ax.set_title(y_pred, fontsize='small', color=color) ``` The classifier is correct on an impressive number of images given the simplicity of its learning model! Using a linear classifier on 150 features derived from the pixel-level data, the algorithm correctly identifies a large number of the people in the images. Again, we can quantify this effectiveness using one of several measures from :mod:`sklearn.metrics`. First we can do the classification report, which shows the precision, recall and other measures of the "goodness" of the classification: ``` from sklearn import metrics y_pred = clf.predict(X_test_pca) print(metrics.classification_report(y_test, y_pred)) ``` Another interesting metric is the *confusion matrix*, which indicates how often any two items are mixed-up. The confusion matrix of a perfect classifier would only have nonzero entries on the diagonal, with zeros on the off-diagonal: ``` print(metrics.confusion_matrix(y_test, y_pred)) ``` Pipelining ---------- Above we used PCA as a pre-processing step before applying our support vector machine classifier. 
Plugging the output of one estimator directly into the input of a second estimator is a commonly used pattern; for this reason scikit-learn provides a ``Pipeline`` object which automates this process. The above problem can be re-expressed as a pipeline as follows:

```
from sklearn.pipeline import Pipeline
clf = Pipeline([('pca', decomposition.PCA(n_components=150, whiten=True)),
                ('svm', svm.LinearSVC(C=1.0))])

clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(metrics.confusion_matrix(y_pred, y_test))
plt.show()
```

A Note on Facial Recognition
----------------------------

Here we have used PCA "eigenfaces" as a pre-processing step for facial recognition. We chose this approach because PCA is a broadly applicable technique, which can be useful for a wide array of data types. Research in the field of facial recognition in particular, however, has shown that other, more specific feature extraction methods can be much more effective.
github_jupyter
```
#TODO
# 1. Ask the TA whether new training and testing data will be used; if so, the data
#    cleaning needs to be more thorough, since the testing data already contains one
#    record with a missing (blank) field
# 2. Ask whether we should save a single model, run it on the TA's testing data,
#    and aim for the highest profit over the 20 days
# 3. Re-examine and confirm the relationship between the high, low, and close prices
# 4. Tune model hyperparameters: LSTM 256->64, batch_size, learning_rate, epochs,
#    train_test_split_ratio, and cross-test past_day & future_day
# 5. Run requirement.txt
# 6. Run the Pipfile
# 7. Write the README
# 8. Consolidate ipynb -> py, with execute arguments defined
# 9. TestingCorrector

#TOFIX
# 1. Every time the csv is read, the first record gets missed; find out why
#    Fixed: passing header=None to read_csv stops the API from treating the first
#    record as the column header
# 2. Fill missing records with the average of the preceding and following data;
#    currently the previous record is used directly

# Train two models: the first learns the candlestick (K-line) series,
# the second learns buy/sell decisions by comparing the first model's
# predicted candlesticks against the actual values

!nvidia-smi

from google.colab import drive
drive.mount('/content/drive')

import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import Adam
from sklearn.preprocessing import MinMaxScaler
from statistics import mean
from numpy import newaxis
import csv

# Train on the closing price, with the opening price as the target label
def split_dataset(df, past_day, future_day):
    X, Y = [], []
    for i in range(len(df) - future_day - past_day):
        X.append(np.array(df[i:i+past_day, 0]))
        Y.append(np.array(df[i+past_day:i+past_day+future_day, 0]))
    return np.array(X), np.array(Y)

def build_model(shape):
    model = Sequential()
    model.add(LSTM(64, input_shape=(shape[1], shape[2]), return_sequences=True))
    print(shape[1], shape[2])
    model.add(Dropout(0.2))
    model.add(LSTM(64, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(Dense(1))
    return model

def generate_csv():
    try:
        with open('output.csv', mode='w') as csv_file:
            writer = csv.writer(csv_file)
            writer.writerow([0])
            writer.writerow([-1])
            writer.writerow([1])
            csv_file.seek(0, os.SEEK_END)
            csv_file.truncate()
    except:
        raise

main_path = 'drive/My Drive/Colab Notebooks/DSAI_HW2'
print(os.listdir(main_path))

# Arguments
epochs = 100
batch_size = 32
past_day = 5
future_day = 1

train_df = pd.read_csv(os.path.join(main_path, 'training.csv'), header=None)
test_df = pd.read_csv(os.path.join(main_path, 'testing.csv'), header=None)

train_df.drop([1,2,3], inplace=True, axis=1)
test_df.drop([1,2,3], inplace=True, axis=1)

test_df = pd.DataFrame(np.insert(test_df.to_numpy(), 0, train_df.to_numpy()[-(past_day+1):], axis=0))
train_df = pd.DataFrame(train_df.to_numpy()[:-(past_day+1)])

# Scaling
sc = MinMaxScaler(feature_range=(-1, 1))
scaled_train_df = sc.fit_transform(train_df)
scaled_test_df = sc.transform(test_df)

# Generate training data and label
x_train, y_train = split_dataset(scaled_train_df, past_day, future_day)
x_test, y_test = split_dataset(scaled_test_df, past_day, future_day)

# Reshape the data into (Samples, Timestep, Features)
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], 1)

# Build model
model = build_model(x_train.shape)
model.summary()

# Compile and Fit
reduce_lr = tf.keras.callbacks.LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
early_stopping = EarlyStopping(monitor='loss', patience=10, verbose=1, mode='auto')
model.compile(optimizer=Adam(), loss='mean_squared_error')
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, y_test), shuffle=False, callbacks=[reduce_lr, early_stopping])
model.save('model.h5')

# Plotting
plt.figure(figsize=(20,8))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title("Model Loss")
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Train','Valid'])
plt.grid(True)

plt.figure(figsize=(20,8))
plt.plot(x_test[:,-1], color='blue')
plt.plot(y_test, color='red')
plt.title("Price")
plt.grid(True)
plt.legend(['close price','open price'])

#@title Default title text
predicted = model.predict(x_test)
predict = sc.inverse_transform(predicted.reshape(predicted.shape[0], predicted.shape[1]))

# Shift the predictions forward three days and use the predicted results to
# forecast the stock for the remaining last three days
last =
np.array([x_test[-1, 1:], x_test[-1, 2:], x_test[-1, 3:]], dtype=object) last[0] = np.array(np.concatenate((np.array(last[0]), np.array(predicted[newaxis, -3, -1])))) last[1] = np.concatenate((last[1], predicted[newaxis, -3, -1])) last[1] = np.array(np.concatenate((last[1], predicted[newaxis, -2, -1]))) last[2] = np.concatenate((last[2], predicted[newaxis, -3, -1])) last[2] = np.concatenate((last[2], predicted[newaxis, -2, -1])) last[2] = np.array(np.concatenate((last[2], predicted[newaxis, -1, -1]))) last[0] = pd.DataFrame(last[0]) last[1] = pd.DataFrame(last[1]) last[2] = pd.DataFrame(last[2]) X = [] X.append(np.array(last[0])) X.append(np.array(last[1])) X.append(np.array(last[2])) X = np.array(X) predicted_last = model.predict(X) predicted_last = sc.inverse_transform(predicted_last.reshape(predicted_last.shape[0], predicted_last.shape[1])) nu = [] for i in range(20): nu.append(predict[i, -1]) nu = nu[3:] nu.append(predicted_last[0, -1]) nu.append(predicted_last[1, -1]) nu.append(predicted_last[2, -1]) print(nu) ground_truth = sc.inverse_transform(y_test.reshape(-1,1)) plt.figure(figsize=(20,8)) plt.plot(ground_truth) plt.plot(nu) plt.title('Open Price') plt.legend(['y_test','predict']) plt.grid(True) print(nu) # 1: buy # 0: hold #-1: sell status = 0; flag = 0; revenue = 0; with open('output.csv', mode='w') as csv_file: writer = csv.writer(csv_file) writer.writerow(['actoin table']) for i in range(19): if (status == 1): if (nu[i+1]<nu[i]): writer.writerow(['-1']) status = 0 revenue = revenue+nu[i] else: writer.writerow(['0']) elif (status == 0): if (nu[i+1]>nu[i]): writer.writerow(['1']) status = 1 revenue = revenue-nu[i] elif (nu[i+1]<nu[i]): writer.writerow(['-1']) status = -1 revenue = revenue+nu[i] else: writer.writerow(['0']) else : if (nu[i+1]>nu[i]): writer.writerow(['1']) status = 0 revenue = revenue-nu[i] else: writer.writerow(['0']) if (status==1) : revenue = revenue + nu[19] elif (status==-1) : revenue = revenue - nu[19] print(revenue) ```
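The buy/hold/sell loop above can be factored into a small pure function with the same three-state rules. A sketch, run here on a few hypothetical prices rather than the model's predictions:

```python
# status: 1 = holding a long position, 0 = no position, -1 = holding a short.
# Mirrors the CSV-writing loop above: actions are decided by comparing
# tomorrow's predicted price to today's, and revenue settles open positions
# at the final price.
def run_strategy(prices):
    status, revenue, actions = 0, 0.0, []
    for today, tomorrow in zip(prices[:-1], prices[1:]):
        if status == 1:        # long: sell if tomorrow is lower
            if tomorrow < today:
                actions.append(-1); status = 0; revenue += today
            else:
                actions.append(0)
        elif status == 0:      # flat: buy on a rise, short on a fall
            if tomorrow > today:
                actions.append(1); status = 1; revenue -= today
            elif tomorrow < today:
                actions.append(-1); status = -1; revenue += today
            else:
                actions.append(0)
        else:                  # short: buy back if tomorrow is higher
            if tomorrow > today:
                actions.append(1); status = 0; revenue -= today
            else:
                actions.append(0)
    if status == 1:
        revenue += prices[-1]
    elif status == -1:
        revenue -= prices[-1]
    return actions, revenue

actions, revenue = run_strategy([10.0, 12.0, 11.0, 11.0])
print(actions, revenue)  # [1, -1, 0] 2.0
```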
github_jupyter
``` import os import pandas as pd import numpy as np import subprocess from tqdm import tqdm import sys from surfboard.sound import Waveform from surfboard.feature_extraction import extract_features sys.path DATASET_PATH = "/Users/mazeyu/Desktop/CMU/20fall/18797/project/data" FEATURE_PATH = "/Users/mazeyu/Desktop/CMU/20fall/18797/project/features" DOC_PATH = 'alc_original/DOC/IS2011CHALLENGE' DATA_PATH = 'alc_original' TRAIN_TABLE = 'TRAIN.TBL' D1_TABLE = 'D1.TBL' D2_TABLE = 'D2.TBL' TEST_TABLE = 'TESTMAPPING.txt' components = ['mfcc', 'log_melspec', 'magnitude_spectrum', 'bark_spectrogram', 'morlet_cwt', 'chroma_stft', 'chroma_cqt', 'chroma_cens', 'spectral_slope', 'spectral_flux', 'spectral_entropy', 'spectral_centroid', 'spectral_spread', 'spectral_skewness', 'spectral_kurtosis', 'spectral_flatness', 'spectral_rolloff', 'loudness', 'loudness_slidingwindow', 'shannon_entropy', 'shannon_entropy_slidingwindow', 'zerocrossing', 'zerocrossing_slidingwindow', 'rms', 'intensity', 'crest_factor', 'f0_contour', 'f0_statistics', 'ppe', 'jitters', 'shimmers', 'hnr', 'dfa', 'lpc', 'lsf', 'formants', 'formants_slidingwindow', 'kurtosis_slidingwindow', 'log_energy', 'log_energy_slidingwindow', ] statistics = ['max', 'min', 'mean', 'std', 'skewness', 'kurtosis', 'first_derivative_mean', 'first_derivative_std', 'first_derivative_skewness', 'first_derivative_kurtosis', 'second_derivative_mean', 'second_derivative_std', 'second_derivative_skewness', 'second_derivative_kurtosis', 'first_quartile', 'second_quartile', 'third_quartile', 'q2_q1_range', 'q3_q2_range', 'q3_q1_range', 'percentile_1', 'percentile_99', 'percentile_1_99_range', 'linear_regression_offset', 'linear_regression_slope', 'linear_regression_mse', ] class ALCDataset: def __init__(self, path): self.dataset_path = path self.__load_meta_file() def __process_meta(self, meta): meta['file_name'] = meta['file_name'].map(lambda x: x[x.find('/') + 1:].lower()) meta['file_name'] = meta['file_name'].map(lambda x: x[:-8] + 'm' 
+ x[-7:]) meta['session'] = meta['file_name'].map(lambda x: x[:x.find('/')]) meta['label'] = meta['user_state'].map(lambda x: 1 if x == 'I' else 0) return meta def extract_feature(self, split, feature): split = split.lower() assert split in ('train', 'd1', 'd2', 'test') meta = getattr(self, f'{split}_meta') sounds = [] for file_name in tqdm(meta['file_name']): sound = Waveform(path=os.path.join(self.dataset_path, DATA_PATH, file_name)) sounds.append(sound) features_df = extract_features(sounds, [feature], statistics) features = features_df.to_numpy() path = os.path.join(FEATURE_PATH, feature) if not os.path.exists(path): os.makedirs(path) np.save(os.path.join(path, f'{split}_x.npy'), features) return features def __load_meta_file(self): """Load meta file. :return: None """ assert os.path.exists(self.dataset_path) doc_folder = os.path.join(self.dataset_path, DOC_PATH) print(doc_folder) train_meta_path = os.path.join(doc_folder, TRAIN_TABLE) self.train_meta = pd.read_csv(train_meta_path, sep='\t', names=['file_name', 'bac', 'user_state']) self.train_meta = self.__process_meta(self.train_meta) d1_meta_path = os.path.join(doc_folder, D1_TABLE) self.d1_meta = pd.read_csv(d1_meta_path, sep='\t', names=['file_name', 'bac', 'user_state']) self.d1_meta = self.__process_meta(self.d1_meta) d2_meta_path = os.path.join(doc_folder, D2_TABLE) self.d2_meta = pd.read_csv(d2_meta_path, sep='\t', names=['file_name', 'bac', 'user_state']) self.d2_meta = self.__process_meta(self.d2_meta) test_meta_path = os.path.join(doc_folder, TEST_TABLE) self.test_meta = pd.read_csv(test_meta_path, sep='\t', names=['file_name', 'bac', 'user_state', 'test_file_name']) self.test_meta = self.test_meta[['file_name', 'bac', 'user_state']] self.test_meta = self.__process_meta(self.test_meta) dataset = ALCDataset(DATASET_PATH) feature = 'mfcc' f_train = dataset.extract_feature("train", feature) f_d1 = dataset.extract_feature("d1", feature) f_d2 = dataset.extract_feature("d2", feature) f_test = 
dataset.extract_feature("test", feature) ```
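The `statistics` list above reduces each time-varying component (an MFCC track, an f0 contour, etc.) to a fixed-length descriptor. A numpy sketch of a few of those reductions on a toy contour (illustrative only; surfboard's own implementations may differ in detail):

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # e.g. a short f0 contour

stats = {
    "mean": x.mean(),
    "std": x.std(),
    "first_quartile": np.percentile(x, 25),
    "third_quartile": np.percentile(x, 75),
    "q3_q1_range": np.percentile(x, 75) - np.percentile(x, 25),
    # statistics of the first difference capture how the contour moves
    "first_derivative_mean": np.diff(x).mean(),
}
print(stats)
```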
github_jupyter
## Reptile Meta-Learning Based on the following paper: <i>On First-Order Meta-Learning Algorithms</i>. ``` import numpy as np import tensorflow as tf import tensorflow_hub as hub import matplotlib.pyplot as plt from tensorflow.keras import backend as K from tensorflow.keras.layers import Input,Dense,Dropout,Activation from tensorflow.keras.models import Model,Sequential from tensorflow.keras.optimizers import Adam from tensorflow.keras.losses import BinaryCrossentropy,MeanSquaredError from sklearn.model_selection import train_test_split from sklearn.metrics import balanced_accuracy_score,mean_squared_error from sklearn.utils import shuffle ``` ### Gather data ``` universal_embed = hub.load("../other/universal-sentence-encoder_4") def get_data(file_name,data_dir="../data/sentiment/"): """ Gather and process data """ all_text = [] all_labels = [] with open(data_dir+file_name) as infile: lines = infile.readlines() for line in lines: all_labels.append(int(line[0])) all_text.append(line[2:]) all_text = all_text[:2000] all_labels = all_labels[:2000] data_x = universal_embed(all_text).numpy() data_y = np.array(all_labels) return data_x,data_y data_files = ["amazon_electronics_reviews/reviews.txt","amazon_kitchen_reviews/reviews.txt","amazon_toys_reviews/reviews.txt","imdb_reviews/reviews.txt","yelp_reviews/reviews.txt"] all_data = [get_data(fn) for fn in data_files] t_x,t_y = all_data[0] # task we are optimizing for train_x,train_y = t_x[:100],t_y[:100] test_x,test_y = t_x[500:],t_y[500:] all_data = all_data[1:] # all training data all_train_data = [(df[0][:1000],df[1][:1000]) for df in all_data] all_test_data = [(df[0][1000:],df[1][1000:]) for df in all_data] ``` ### Modeling ``` def get_model(): """ Model instantiation """ x = Input(shape=(512)) h = Dense(100,activation="relu")(x) o = Dense(1,activation=None)(h) model = Model(inputs=x,outputs=o) return model n_epochs = 1000 batch_size = 50 optimizer = Adam(0.001) meta_model = get_model() n_tasks = len(all_test_data) 
n_inner_epochs=3 for epoch_i in range(n_epochs): all_train_data = [shuffle(df[0],df[1]) for df in all_train_data] all_test_data = [shuffle(df[0],df[1]) for df in all_test_data] losses = [] for i in range(0,len(all_train_data[0]),batch_size*n_inner_epochs): task_train_x = [df[0][i:i+batch_size*n_inner_epochs] for df in all_train_data] task_train_y = [df[1][i:i+batch_size*n_inner_epochs] for df in all_train_data] task_losses = [] for t_i in range(n_tasks): model_copy = get_model() model_copy.set_weights(meta_model.get_weights()) for j in range(0,batch_size*n_inner_epochs,batch_size): # K>1 steps for task T with tf.GradientTape() as tape: task_train_pred = model_copy(task_train_x[t_i][j:j+batch_size]) task_train_loss = BinaryCrossentropy()(task_train_y[t_i][j:j+batch_size],task_train_pred) gradients = tape.gradient(task_train_loss, model_copy.trainable_variables) optimizer.apply_gradients(zip(gradients,model_copy.trainable_variables)) task_losses.append(float(task_train_loss)) new_weights = [] # update the meta model parameters for i in range(1,len(meta_model.layers)): # first layer is input new_weights.append(meta_model.layers[i].kernel-0.001*(meta_model.layers[i].kernel-model_copy.layers[i].kernel)) new_weights.append(meta_model.layers[i].bias-0.001*(meta_model.layers[i].bias-model_copy.layers[i].bias)) meta_model.set_weights(new_weights) losses.append(sum(task_losses)/len(task_losses)) if sum(losses)/len(losses)<0.7: break ``` ### Validation ``` # meta-learner model = get_model() model.set_weights(meta_model.get_weights()) model.compile(loss=BinaryCrossentropy(),optimizer=Adam(lr=0.001)) epoch_acc_meta = [] for _ in range(50): model.train_on_batch(train_x,train_y) test_pred = model(test_x).numpy() test_pred[test_pred<0.5]=0 test_pred[test_pred>=0.5]=1 epoch_acc_meta.append(balanced_accuracy_score(test_y,test_pred)) # Normal learner model = get_model() model.compile(loss=BinaryCrossentropy(),optimizer=Adam(lr=0.001)) epoch_acc = [] for _ in range(50): 
    model.train_on_batch(train_x, train_y)
    test_pred = model(test_x).numpy()
    test_pred[test_pred < 0.5] = 0
    test_pred[test_pred >= 0.5] = 1
    epoch_acc.append(balanced_accuracy_score(test_y, test_pred))

plt.plot(range(len(epoch_acc)), epoch_acc, label="normal")
plt.plot(range(len(epoch_acc_meta)), epoch_acc_meta, label="meta")
plt.title("Test performance by model")
plt.xlabel("Epoch")
plt.ylabel("Balanced accuracy")
plt.legend()
plt.show()
```
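The meta-update above — nudging the meta-weights a small step toward each task's adapted weights — is the entire Reptile algorithm. A minimal NumPy sketch on synthetic linear-regression tasks (all names and constants here are illustrative, not taken from the notebook above):

```python
import numpy as np

rng = np.random.default_rng(0)
w_common = rng.normal(size=5)            # structure shared by all tasks

def make_task():
    # Each task: a linear regression whose weights are a noisy copy of w_common.
    w = w_common + 0.1 * rng.normal(size=5)
    x = rng.normal(size=(50, 5))
    return x, x @ w

def adapt(theta, x, y, lr=0.02, steps=10):
    # K > 1 inner steps of plain gradient descent on mean squared error.
    for _ in range(steps):
        theta = theta - lr * 2 * x.T @ (x @ theta - y) / len(y)
    return theta

theta = np.zeros(5)                      # meta-parameters
for _ in range(500):
    x, y = make_task()
    phi = adapt(theta.copy(), x, y)      # task-adapted weights
    theta = theta + 0.1 * (phi - theta)  # Reptile meta-update

# theta drifts toward the shared structure, so adaptation starts close to
# every task's optimum.
print(np.linalg.norm(theta - w_common))
```

The only difference from ordinary multi-task training is that the meta-step uses the *displacement* `phi - theta` after several inner steps, rather than a single gradient.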
##### Copyright 2018 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Generalized Linear Models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Generalized_Linear_Models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Generalized_Linear_Models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Generalized_Linear_Models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Generalized_Linear_Models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In this notebook we introduce Generalized Linear Models via a worked example. 
We solve this example in two different ways, using two algorithms for efficiently fitting GLMs in TensorFlow Probability: Fisher scoring for dense data, and coordinatewise proximal gradient descent for sparse data. We compare the fitted coefficients to the true coefficients and, in the case of coordinatewise proximal gradient descent, to the output of R's similar `glmnet` algorithm. Finally, we provide further mathematical details and derivations of several key properties of GLMs.

# Background

A generalized linear model (GLM) is a linear model ($\eta = x^\top \beta$) wrapped in a transformation (link function) and equipped with a response distribution from an exponential family. The choice of link function and response distribution is very flexible, which lends great expressivity to GLMs. The full details, including a sequential presentation of all the definitions and results building up to GLMs in unambiguous notation, are found in "Derivation of GLM Facts" below. We summarize:

In a GLM, a predictive distribution for the response variable $Y$ is associated with a vector of observed predictors $x$. The distribution has the form:

\begin{align*}
p(y \, |\, x)
  &= m(y, \phi)
     \exp\left(\frac{\theta\, T(y) - A(\theta)}{\phi}\right) \\
\theta &:= h(\eta) \\
\eta &:= x^\top \beta
\end{align*}

Here $\beta$ are the parameters ("weights"), $\phi$ is a hyperparameter representing dispersion ("variance"), and $m$, $h$, $T$, $A$ are characterized by the user-specified model family.

The mean of $Y$ depends on $x$ by composition of the **linear response** $\eta$ and the (inverse) link function, i.e.:

$$
\mu := g^{-1}(\eta)
$$

where $g$ is the so-called **link function**. In TFP the choice of link function and model family are jointly specified by a `tfp.glm.ExponentialFamily` subclass.
Examples include:

- `tfp.glm.Normal`, aka "linear regression"
- `tfp.glm.Bernoulli`, aka "logistic regression"
- `tfp.glm.Poisson`, aka "Poisson regression"
- `tfp.glm.BernoulliNormalCDF`, aka "probit regression".

TFP prefers to name model families according to the distribution over `Y` rather than the link function since `tfp.Distribution`s are already first-class citizens. If the `tfp.glm.ExponentialFamily` subclass name contains a second word, this indicates a [non-canonical link function](https://en.wikipedia.org/wiki/Generalized_linear_model#Link_function).

GLMs have several remarkable properties which permit efficient implementation of the maximum likelihood estimator. Chief among these properties are simple formulas for the gradient of the log-likelihood $\ell$, and for the Fisher information matrix, which is the expected value of the Hessian of the negative log-likelihood under a re-sampling of the response under the same predictors. I.e.:

\begin{align*}
\nabla_\beta\, \ell(\beta\, ;\, \mathbf{x}, \mathbf{y})
&=
  \mathbf{x}^\top \,\text{diag}\left(\frac{
    {\textbf{Mean}_T}'(\mathbf{x} \beta)
  }{
    {\textbf{Var}_T}(\mathbf{x} \beta)
  }\right)
  \left(\mathbf{T}(\mathbf{y}) - {\textbf{Mean}_T}(\mathbf{x} \beta)\right)
\\
\mathbb{E}_{Y_i \sim \text{GLM} | x_i}
\left[
  \nabla_\beta^2\, \ell(\beta\, ;\, \mathbf{x}, \mathbf{Y})
\right]
&=
  -\mathbf{x}^\top \,\text{diag}\left(
  \frac{
    \phi\, {\textbf{Mean}_T}'(\mathbf{x} \beta)^2
  }{
    {\textbf{Var}_T}(\mathbf{x} \beta)
  }\right)\,
  \mathbf{x}
\end{align*}

where $\mathbf{x}$ is the matrix whose $i$th row is the predictor vector for the $i$th data sample, and $\mathbf{y}$ is the vector whose $i$th coordinate is the observed response for the $i$th data sample. Here (loosely speaking), ${\text{Mean}_T}(\eta) := \mathbb{E}[T(Y)\,|\,\eta]$ and ${\text{Var}_T}(\eta) := \text{Var}[T(Y)\,|\,\eta]$, and boldface denotes vectorization of these functions.
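To make these formulas concrete, consider the canonical Bernoulli (logit) case: $T(y) = y$, ${\text{Mean}_T}(\eta) = \sigma(\eta)$, and ${\text{Var}_T}(\eta) = {\text{Mean}_T}'(\eta) = \sigma(\eta)(1-\sigma(\eta))$ with $\phi = 1$, so the gradient collapses to $\mathbf{x}^\top(\mathbf{y} - \sigma(\mathbf{x}\beta))$. A from-scratch NumPy sketch (synthetic data, all names illustrative) that plugs both formulas into a Newton-style update — essentially what Fisher scoring does:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 3))
beta_true = np.array([0.5, -1.0, 0.25])

def sigmoid(eta):
    return 1 / (1 + np.exp(-eta))

y = (rng.uniform(size=n) < sigmoid(x @ beta_true)).astype(float)

beta = np.zeros(3)
for _ in range(15):
    mu = sigmoid(x @ beta)             # Mean_T(x beta)
    var = mu * (1 - mu)                # Var_T = Mean_T' for the logit link
    grad = x.T @ (y - mu)              # x^T diag(Mean'/Var) (T(y) - Mean) = x^T (y - mu)
    fisher = x.T @ (x * var[:, None])  # x^T diag(phi Mean'^2 / Var) x
    beta = beta + np.linalg.solve(fisher, grad)

# At the MLE the gradient of the log-likelihood vanishes.
print(np.linalg.norm(x.T @ (y - sigmoid(x @ beta))))
```

The expensive objects — one matrix-vector product and one weighted Gram matrix per iteration — are exactly the two quantities the formulas above provide.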
Full details of what distributions these expectations and variances are over can be found in "Derivation of GLM Facts" below. # An Example In this section we briefly describe and showcase two built-in GLM fitting algorithms in TensorFlow Probability: Fisher scoring (`tfp.glm.fit`) and coordinatewise proximal gradient descent (`tfp.glm.fit_sparse`). ## Synthetic Data Set Let's pretend to load some training data set. ``` import numpy as np import pandas as pd import scipy import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp tfd = tfp.distributions def make_dataset(n, d, link, scale=1., dtype=np.float32): model_coefficients = tfd.Uniform( low=-1., high=np.array(1, dtype)).sample(d, seed=42) radius = np.sqrt(2.) model_coefficients *= radius / tf.linalg.norm(model_coefficients) mask = tf.random.shuffle(tf.range(d)) < int(0.5 * d) model_coefficients = tf.where( mask, model_coefficients, np.array(0., dtype)) model_matrix = tfd.Normal( loc=0., scale=np.array(1, dtype)).sample([n, d], seed=43) scale = tf.convert_to_tensor(scale, dtype) linear_response = tf.linalg.matvec(model_matrix, model_coefficients) if link == 'linear': response = tfd.Normal(loc=linear_response, scale=scale).sample(seed=44) elif link == 'probit': response = tf.cast( tfd.Normal(loc=linear_response, scale=scale).sample(seed=44) > 0, dtype) elif link == 'logit': response = tfd.Bernoulli(logits=linear_response).sample(seed=44) else: raise ValueError('unrecognized true link: {}'.format(link)) return model_matrix, response, model_coefficients, mask ``` ### Note: Connect to a local runtime. In this notebook, we share data between Python and R kernels using local files. To enable this sharing, please use runtimes on the same machine where you have permission to read and write local files. 
``` x, y, model_coefficients_true, _ = [t.numpy() for t in make_dataset( n=int(1e5), d=100, link='probit')] DATA_DIR = '/tmp/glm_example' tf.io.gfile.makedirs(DATA_DIR) with tf.io.gfile.GFile('{}/x.csv'.format(DATA_DIR), 'w') as f: np.savetxt(f, x, delimiter=',') with tf.io.gfile.GFile('{}/y.csv'.format(DATA_DIR), 'w') as f: np.savetxt(f, y.astype(np.int32) + 1, delimiter=',', fmt='%d') with tf.io.gfile.GFile( '{}/model_coefficients_true.csv'.format(DATA_DIR), 'w') as f: np.savetxt(f, model_coefficients_true, delimiter=',') ``` ## Without L1 Regularization The function `tfp.glm.fit` implements Fisher scoring, which takes as some of its arguments: * `model_matrix` = $\mathbf{x}$ * `response` = $\mathbf{y}$ * `model` = callable which, given argument $\boldsymbol{\eta}$, returns the triple $\left( {\textbf{Mean}_T}(\boldsymbol{\eta}), {\textbf{Var}_T}(\boldsymbol{\eta}), {\textbf{Mean}_T}'(\boldsymbol{\eta}) \right)$. We recommend that `model` be an instance of the `tfp.glm.ExponentialFamily` class. There are several pre-made implementations available, so for most common GLMs no custom code is necessary. ``` @tf.function(autograph=False) def fit_model(): model_coefficients, linear_response, is_converged, num_iter = tfp.glm.fit( model_matrix=x, response=y, model=tfp.glm.BernoulliNormalCDF()) log_likelihood = tfp.glm.BernoulliNormalCDF().log_prob(y, linear_response) return (model_coefficients, linear_response, is_converged, num_iter, log_likelihood) [model_coefficients, linear_response, is_converged, num_iter, log_likelihood] = [t.numpy() for t in fit_model()] print(('is_converged: {}\n' ' num_iter: {}\n' ' accuracy: {}\n' ' deviance: {}\n' '||w0-w1||_2 / (1+||w0||_2): {}' ).format( is_converged, num_iter, np.mean((linear_response > 0.) == y), 2. * np.mean(log_likelihood), np.linalg.norm(model_coefficients_true - model_coefficients, ord=2) / (1. 
+ np.linalg.norm(model_coefficients_true, ord=2)) )) ``` ### Mathematical Details Fisher scoring is a modification of Newton's method to find the maximum-likelihood estimate $$ \hat\beta := \underset{\beta}{\text{arg max}}\ \ \ell(\beta\ ;\ \mathbf{x}, \mathbf{y}). $$ Vanilla Newton's method, searching for zeros of the gradient of the log-likelihood, would follow the update rule $$ \beta^{(t+1)}_{\text{Newton}} := \beta^{(t)} - \alpha \left( \nabla^2_\beta\, \ell(\beta\ ;\ \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}}^{-1} \left( \nabla_\beta\, \ell(\beta\ ;\ \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} $$ where $\alpha \in (0, 1]$ is a learning rate used to control the step size. In Fisher scoring, we replace the Hessian with the negative Fisher information matrix: \begin{align*} \beta^{(t+1)} &:= \beta^{(t)} - \alpha\, \mathbb{E}_{ Y_i \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(x_i^\top \beta^{(t)}), \phi) } \left[ \left( \nabla^2_\beta\, \ell(\beta\ ;\ \mathbf{x}, \mathbf{Y}) \right)_{\beta = \beta^{(t)}} \right]^{-1} \left( \nabla_\beta\, \ell(\beta\ ;\ \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} \\[3mm] \end{align*} [Note that here $\mathbf{Y} = (Y_i)_{i=1}^{n}$ is random, whereas $\mathbf{y}$ is still the vector of observed responses.] By the formulas in "Fitting GLM Parameters To Data" below, this simplifies to \begin{align*} \beta^{(t+1)} &= \beta^{(t)} + \alpha \left( \mathbf{x}^\top \text{diag}\left( \frac{ \phi\, {\textbf{Mean}_T}'(\mathbf{x} \beta^{(t)})^2 }{ {\textbf{Var}_T}(\mathbf{x} \beta^{(t)}) }\right)\, \mathbf{x} \right)^{-1} \left( \mathbf{x}^\top \text{diag}\left(\frac{ {\textbf{Mean}_T}'(\mathbf{x} \beta^{(t)}) }{ {\textbf{Var}_T}(\mathbf{x} \beta^{(t)}) }\right) \left(\mathbf{T}(\mathbf{y}) - {\textbf{Mean}_T}(\mathbf{x} \beta^{(t)})\right) \right). 
\end{align*} ## With L1 Regularization `tfp.glm.fit_sparse` implements a GLM fitter more suited to sparse data sets, based on the algorithm in [Yuan, Ho and Lin 2012](#1). Its features include: * L1 regularization * No matrix inversions * Few evaluations of the gradient and Hessian. We first present an example usage of the code. Details of the algorithm are further elaborated in "Algorithm Details for `tfp.glm.fit_sparse`" below. ``` model = tfp.glm.Bernoulli() model_coefficients_start = tf.zeros(x.shape[-1], np.float32) @tf.function(autograph=False) def fit_model(): return tfp.glm.fit_sparse( model_matrix=tf.convert_to_tensor(x), response=tf.convert_to_tensor(y), model=model, model_coefficients_start=model_coefficients_start, l1_regularizer=800., l2_regularizer=None, maximum_iterations=10, maximum_full_sweeps_per_iteration=10, tolerance=1e-6, learning_rate=None) model_coefficients, is_converged, num_iter = [t.numpy() for t in fit_model()] coefs_comparison = pd.DataFrame({ 'Learned': model_coefficients, 'True': model_coefficients_true, }) print(('is_converged: {}\n' ' num_iter: {}\n\n' 'Coefficients:').format( is_converged, num_iter)) coefs_comparison ``` Note that the learned coefficients have the same sparsity pattern as the true coefficients. ``` # Save the learned coefficients to a file. with tf.io.gfile.GFile('{}/model_coefficients_prox.csv'.format(DATA_DIR), 'w') as f: np.savetxt(f, model_coefficients, delimiter=',') ``` ### Compare to R's `glmnet` We compare the output of coordinatewise proximal gradient descent to that of R's `glmnet`, which uses a similar algorithm. #### NOTE: To execute this section, you must switch to an R colab runtime. 
``` suppressMessages({ library('glmnet') }) data_dir <- '/tmp/glm_example' x <- as.matrix(read.csv(paste(data_dir, '/x.csv', sep=''), header=FALSE)) y <- as.matrix(read.csv(paste(data_dir, '/y.csv', sep=''), header=FALSE, colClasses='integer')) fit <- glmnet( x = x, y = y, family = "binomial", # Logistic regression alpha = 1, # corresponds to l1_weight = 1, l2_weight = 0 standardize = FALSE, intercept = FALSE, thresh = 1e-30, type.logistic = "Newton" ) write.csv(as.matrix(coef(fit, 0.008)), paste(data_dir, '/model_coefficients_glmnet.csv', sep=''), row.names=FALSE) ``` #### Compare R, TFP and true coefficients (Note: back to Python kernel) ``` DATA_DIR = '/tmp/glm_example' with tf.io.gfile.GFile('{}/model_coefficients_glmnet.csv'.format(DATA_DIR), 'r') as f: model_coefficients_glmnet = np.loadtxt(f, skiprows=2 # Skip column name and intercept ) with tf.io.gfile.GFile('{}/model_coefficients_prox.csv'.format(DATA_DIR), 'r') as f: model_coefficients_prox = np.loadtxt(f) with tf.io.gfile.GFile( '{}/model_coefficients_true.csv'.format(DATA_DIR), 'r') as f: model_coefficients_true = np.loadtxt(f) coefs_comparison = pd.DataFrame({ 'TFP': model_coefficients_prox, 'R': model_coefficients_glmnet, 'True': model_coefficients_true, }) coefs_comparison ``` # Algorithm Details for `tfp.glm.fit_sparse` We present the algorithm as a sequence of three modifications to Newton's method. In each one, the update rule for $\beta$ is based on a vector $s$ and a matrix $H$ which approximate the gradient and Hessian of the log-likelihood. In step $t$, we choose a coordinate $j^{(t)}$ to change, and we update $\beta$ according to the update rule: \begin{align*} u^{(t)} &:= \frac{ \left( s^{(t)} \right)_{j^{(t)}} }{ \left( H^{(t)} \right)_{j^{(t)},\, j^{(t)}} } \\[3mm] \beta^{(t+1)} &:= \beta^{(t)} - \alpha\, u^{(t)} \,\text{onehot}(j^{(t)}) \end{align*} This update is a Newton-like step with learning rate $\alpha$. 
Except for the final piece (L1 regularization), the modifications below differ only in how they update $s$ and $H$. ## Starting point: Coordinatewise Newton's method In coordinatewise Newton's method, we set $s$ and $H$ to the true gradient and Hessian of the log-likelihood: \begin{align*} s^{(t)}_{\text{vanilla}} &:= \left( \nabla_\beta\, \ell(\beta \,;\, \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} \\ H^{(t)}_{\text{vanilla}} &:= \left( \nabla^2_\beta\, \ell(\beta \,;\, \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} \end{align*} ## Fewer evaluations of the gradient and Hessian The gradient and Hessian of the log-likelihood are often expensive to compute, so it is often worthwhile to approximate them. We can do so as follows: * Usually, approximate the Hessian as locally constant and approximate the gradient to first order using the (approximate) Hessian: \begin{align*} H_{\text{approx}}^{(t+1)} &:= H^{(t)} \\ s_{\text{approx}}^{(t+1)} &:= s^{(t)} + H^{(t)} \left( \beta^{(t+1)} - \beta^{(t)} \right) \end{align*} * Occasionally, perform a "vanilla" update step as above, setting $s^{(t+1)}$ to the exact gradient and $H^{(t+1)}$ to the exact Hessian of the log-likelihood, evaluated at $\beta^{(t+1)}$. 
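The first-order update of $s$ is exact whenever the log-likelihood is quadratic in $\beta$, which is approximately true near the optimum; that is what makes the cheap updates safe between occasional vanilla steps. A NumPy check of the identity $s + H(\beta^{(t+1)} - \beta^{(t)}) = \nabla\ell(\beta^{(t+1)})$ for a quadratic $\ell$ (a synthetic sketch, not TFP's implementation):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4
a = rng.normal(size=(d, d))
H = -(a @ a.T + np.eye(d))       # Hessian of a concave quadratic "log-likelihood"
b = rng.normal(size=d)

def grad(beta):
    # Gradient of l(beta) = b . beta + 0.5 * beta^T H beta
    return b + H @ beta

beta_t = rng.normal(size=d)
beta_t1 = beta_t + 0.1 * rng.normal(size=d)

# First-order update of the gradient approximation: s + H (beta^(t+1) - beta^(t))
s_approx = grad(beta_t) + H @ (beta_t1 - beta_t)
print(np.allclose(s_approx, grad(beta_t1)))   # → True: exact for a quadratic l
```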
## Substitute negative Fisher information for Hessian To further reduce the cost of the vanilla update steps, we can set $H$ to the negative Fisher information matrix (efficiently computable using the formulas in "Fitting GLM Parameters to Data" below) rather than the exact Hessian: \begin{align*} H_{\text{Fisher}}^{(t+1)} &:= \mathbb{E}_{Y_i \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(x_i^\top \beta^{(t+1)}), \phi)} \left[ \left( \nabla_\beta^2\, \ell(\beta\, ;\, \mathbf{x}, \mathbf{Y}) \right)_{\beta = \beta^{(t+1)}} \right] \\ &= -\mathbf{x}^\top \,\text{diag}\left( \frac{ \phi\, {\textbf{Mean}_T}'(\mathbf{x} \beta^{(t+1)})^2 }{ {\textbf{Var}_T}(\mathbf{x} \beta^{(t+1)}) }\right)\, \mathbf{x} \\ s_{\text{Fisher}}^{(t+1)} &:= s_{\text{vanilla}}^{(t+1)} \\ &= \left( \mathbf{x}^\top \,\text{diag}\left(\frac{ {\textbf{Mean}_T}'(\mathbf{x} \beta^{(t+1)}) }{ {\textbf{Var}_T}(\mathbf{x} \beta^{(t+1)}) }\right) \left(\mathbf{T}(\mathbf{y}) - {\textbf{Mean}_T}(\mathbf{x} \beta^{(t+1)})\right) \right) \end{align*} ## L1 Regularization via Proximal Gradient Descent To incorporate L1 regularization, we replace the update rule $$ \beta^{(t+1)} := \beta^{(t)} - \alpha\, u^{(t)} \,\text{onehot}(j^{(t)}) $$ with the more general update rule \begin{align*} \gamma^{(t)} &:= -\frac{\alpha\, r_{\text{L1}}}{\left(H^{(t)}\right)_{j^{(t)},\, j^{(t)}}} \\[2mm] \left(\beta_{\text{reg}}^{(t+1)}\right)_j &:= \begin{cases} \beta^{(t+1)}_j &\text{if } j \neq j^{(t)} \\ \text{SoftThreshold} \left( \beta^{(t)}_j - \alpha\, u^{(t)} ,\ \gamma^{(t)} \right) &\text{if } j = j^{(t)} \end{cases} \end{align*} where $r_{\text{L1}} > 0$ is a supplied constant (the L1 regularization coefficient) and $\text{SoftThreshold}$ is the soft thresholding operator, defined by $$ \text{SoftThreshold}(\beta, \gamma) := \begin{cases} \beta + \gamma &\text{if } \beta < -\gamma \\ 0 &\text{if } -\gamma \leq \beta \leq \gamma \\ \beta - \gamma &\text{if } \beta > \gamma. 
\end{cases} $$ This update rule has the following two inspirational properties, which we explain below: 1. In the limiting case $r_{\text{L1}} \to 0$ (i.e., no L1 regularization), this update rule is identical to the original update rule. 1. This update rule can be interpreted as applying a proximity operator whose fixed point is the solution to the L1-regularized minimization problem $$ \underset{\beta - \beta^{(t)} \in \text{span}\{ \text{onehot}(j^{(t)}) \}}{\text{arg min}} \left( -\ell(\beta \,;\, \mathbf{x}, \mathbf{y}) + r_{\text{L1}} \left\lVert \beta \right\rVert_1 \right). $$ ### Degenerate case $r_{\text{L1}} = 0$ recovers the original update rule To see (1), note that if $r_{\text{L1}} = 0$ then $\gamma^{(t)} = 0$, hence \begin{align*} \left(\beta_{\text{reg}}^{(t+1)}\right)_{j^{(t)}} &= \text{SoftThreshold} \left( \beta^{(t)}_{j^{(t)}} - \alpha\, u^{(t)} ,\ 0 \right) \\ &= \beta^{(t)}_{j^{(t)}} - \alpha\, u^{(t)}. \end{align*} Hence \begin{align*} \beta_{\text{reg}}^{(t+1)} &= \beta^{(t)} - \alpha\, u^{(t)} \,\text{onehot}(j^{(t)}) \\ &= \beta^{(t+1)}. \end{align*} ### Proximity operator whose fixed point is the regularized MLE To see (2), first note (see [Wikipedia](#3)) that for any $\gamma > 0$, the update rule $$ \left(\beta_{\text{exact-prox}, \gamma}^{(t+1)}\right)_{j^{(t)}} := \text{prox}_{\gamma \lVert \cdot \rVert_1} \left( \beta^{(t)}_{j^{(t)}} + \frac{\gamma}{r_{\text{L1}}} \left( \left( \nabla_\beta\, \ell(\beta \,;\, \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} \right)_{j^{(t)}} \right) $$ satisfies (2), where $\text{prox}$ is the proximity operator (see [Yu](#4), where this operator is denoted $\mathsf{P}$). 
The right-hand side of the above equation is computed [here](#2): $$ \left(\beta_{\text{exact-prox}, \gamma}^{(t+1)}\right)_{j^{(t)}} = \text{SoftThreshold} \left( \beta^{(t)}_{j^{(t)}} + \frac{\gamma}{r_{\text{L1}}} \left( \left( \nabla_\beta\, \ell(\beta \,;\, \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} \right)_{j^{(t)}} ,\ \gamma \right). $$ In particular, setting $\gamma = \gamma^{(t)} = -\frac{\alpha\, r_{\text{L1}}}{\left(H^{(t)}\right)_{j^{(t)}, j^{(t)}}}$ (note that $\gamma^{(t)} > 0$ as long as the negative log-likelihood is convex), we obtain the update rule $$ \left(\beta_{\text{exact-prox}, \gamma^{(t)}}^{(t+1)}\right)_{j^{(t)}} = \text{SoftThreshold} \left( \beta^{(t)}_{j^{(t)}} - \alpha \frac{ \left( \left( \nabla_\beta\, \ell(\beta \,;\, \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}} \right)_{j^{(t)}} }{ \left(H^{(t)}\right)_{j^{(t)}, j^{(t)}} } ,\ \gamma^{(t)} \right). $$ We then replace the exact gradient $\left( \nabla_\beta\, \ell(\beta \,;\, \mathbf{x}, \mathbf{y}) \right)_{\beta = \beta^{(t)}}$ with its approximation $s^{(t)}$, obtaining \begin{align*} \left(\beta_{\text{exact-prox}, \gamma^{(t)}}^{(t+1)}\right)_{j^{(t)}} &\approx \text{SoftThreshold} \left( \beta^{(t)}_{j^{(t)}} - \alpha \frac{ \left(s^{(t)}\right)_{j^{(t)}} }{ \left(H^{(t)}\right)_{j^{(t)}, j^{(t)}} } ,\ \gamma^{(t)} \right) \\ &= \text{SoftThreshold} \left( \beta^{(t)}_{j^{(t)}} - \alpha\, u^{(t)} ,\ \gamma^{(t)} \right). \end{align*} Hence $$ \beta_{\text{exact-prox}, \gamma^{(t)}}^{(t+1)} \approx \beta_{\text{reg}}^{(t+1)}. $$ # Derivation of GLM Facts In this section we state in full detail and derive the results about GLMs that are used in the preceding sections. Then, we use TensorFlow's `gradients` to numerically verify the derived formulas for gradient of the log-likelihood and Fisher information. 
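Before the derivations, one concrete aside: the $\text{SoftThreshold}$ operator used in the L1 section above is a one-liner in code. A NumPy sketch, using the standard vectorized form equivalent to the three-case definition:

```python
import numpy as np

def soft_threshold(beta, gamma):
    # Shrink beta toward zero by gamma; values in [-gamma, gamma] map to 0.
    return np.sign(beta) * np.maximum(np.abs(beta) - gamma, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.0, 0.3, 2.0]), 0.5))
```

With $\gamma = 0$ the operator is the identity, matching the degenerate case discussed above.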
## Score and Fisher information Consider a family of probability distributions parameterized by parameter vector $\theta$, having probability densities $\left\{p(\cdot | \theta)\right\}_{\theta \in \mathcal{T}}$. The **score** of an outcome $y$ at parameter vector $\theta_0$ is defined to be the gradient of the log likelihood of $y$ (evaluated at $\theta_0$), that is, $$ \text{score}(y, \theta_0) := \left[\nabla_\theta\, \log p(y | \theta)\right]_{\theta=\theta_0}. $$ ### Claim: Expectation of the score is zero Under mild regularity conditions (permitting us to pass differentiation under the integral), $$ \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[\text{score}(Y, \theta_0)\right] = 0. $$ #### Proof We have \begin{align*} \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[\text{score}(Y, \theta_0)\right] &:=\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[\left(\nabla_\theta \log p(Y|\theta)\right)_{\theta=\theta_0}\right] \\ &\stackrel{\text{(1)}}{=} \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[\frac{\left(\nabla_\theta p(Y|\theta)\right)_{\theta=\theta_0}}{p(Y|\theta=\theta_0)}\right] \\ &\stackrel{\text{(2)}}{=} \int_{\mathcal{Y}} \left[\frac{\left(\nabla_\theta p(y|\theta)\right)_{\theta=\theta_0}}{p(y|\theta=\theta_0)}\right] p(y | \theta=\theta_0)\, dy \\ &= \int_{\mathcal{Y}} \left(\nabla_\theta p(y|\theta)\right)_{\theta=\theta_0}\, dy \\ &\stackrel{\text{(3)}}{=} \left[\nabla_\theta \left(\int_{\mathcal{Y}} p(y|\theta)\, dy\right) \right]_{\theta=\theta_0} \\ &\stackrel{\text{(4)}}{=} \left[\nabla_\theta\, 1 \right]_{\theta=\theta_0} \\ &= 0, \end{align*} where we have used: (1) chain rule for differentiation, (2) definition of expectation, (3) passing differentiation under the integral sign (using the regularity conditions), (4) the integral of a probability density is 1. 
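A quick Monte Carlo illustration of this claim for the family $N(\theta, 1)$, whose score at $\theta_0$ is simply $y - \theta_0$ (a synthetic sketch; numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
theta0 = 1.7
y = rng.normal(loc=theta0, scale=1.0, size=200_000)

# For N(theta, 1): log p(y|theta) = -(y - theta)^2 / 2 + const,
# so score(y, theta0) = y - theta0.
score = y - theta0
print(abs(score.mean()))   # ~0, up to Monte Carlo error of order 1/sqrt(n)
```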
### Claim (Fisher information): Variance of the score equals negative expected Hessian of the log likelihood Under mild regularity conditions (permitting us to pass differentiation under the integral), $$ \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[ \text{score}(Y, \theta_0) \text{score}(Y, \theta_0)^\top \right] = -\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[ \left(\nabla_\theta^2 \log p(Y | \theta)\right)_{\theta=\theta_0} \right] $$ where $\nabla_\theta^2 F$ denotes the Hessian matrix, whose $(i, j)$ entry is $\frac{\partial^2 F}{\partial \theta_i \partial \theta_j}$. The left-hand side of this equation is called the **Fisher information** of the family $\left\{p(\cdot | \theta)\right\}_{\theta \in \mathcal{T}}$ at parameter vector $\theta_0$. #### Proof of claim We have \begin{align*} \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[ \left(\nabla_\theta^2 \log p(Y | \theta)\right)_{\theta=\theta_0} \right] &\stackrel{\text{(1)}}{=} \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[ \left(\nabla_\theta^\top \frac{ \nabla_\theta p(Y | \theta) }{ p(Y|\theta) }\right)_{\theta=\theta_0} \right] \\ &\stackrel{\text{(2)}}{=} \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[ \frac{ \left(\nabla^2_\theta p(Y | \theta)\right)_{\theta=\theta_0} }{ p(Y|\theta=\theta_0) } - \left(\frac{ \left(\nabla_\theta\, p(Y|\theta)\right)_{\theta=\theta_0} }{ p(Y|\theta=\theta_0) }\right) \left(\frac{ \left(\nabla_\theta\, p(Y|\theta)\right)_{\theta=\theta_0} }{ p(Y|\theta=\theta_0) }\right)^\top \right] \\ &\stackrel{\text{(3)}}{=} \mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[ \frac{ \left(\nabla^2_\theta p(Y | \theta)\right)_{\theta=\theta_0} }{ p(Y|\theta=\theta_0) } - \text{score}(Y, \theta_0) \,\text{score}(Y, \theta_0)^\top \right], \end{align*} where we have used (1) chain rule for differentiation, (2) quotient rule for differentiation, (3) chain rule again, in reverse. 
To complete the proof, it suffices to show that

$$
\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[
  \frac{
    \left(\nabla^2_\theta p(Y | \theta)\right)_{\theta=\theta_0}
  }{
    p(Y|\theta=\theta_0)
  }
\right]
\stackrel{\text{?}}{=} 0.
$$

To do that, we pass differentiation under the integral sign twice:

\begin{align*}
\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}\left[
  \frac{
    \left(\nabla^2_\theta p(Y | \theta)\right)_{\theta=\theta_0}
  }{
    p(Y|\theta=\theta_0)
  }
\right]
&=
\int_{\mathcal{Y}}
\left[
  \frac{
    \left(\nabla^2_\theta p(y | \theta)\right)_{\theta=\theta_0}
  }{
    p(y|\theta=\theta_0)
  }
\right]
\, p(y | \theta=\theta_0)\, dy \\
&=
\int_{\mathcal{Y}}
  \left(\nabla^2_\theta p(y | \theta)\right)_{\theta=\theta_0}
\, dy \\
&=
\left[
  \nabla_\theta^2
  \left(
    \int_{\mathcal{Y}} p(y|\theta)\, dy
  \right)
\right]_{\theta=\theta_0} \\
&= \left[ \nabla_\theta^2 \, 1 \right]_{\theta=\theta_0} \\
&= 0.
\end{align*}

### Lemma about the derivative of the log partition function

If $a$, $b$ and $c$ are scalar-valued functions, $c$ twice differentiable, such that the family of distributions $\left\{p(\cdot | \theta)\right\}_{\theta \in \mathcal{T}}$ defined by

$$
p(y|\theta) = a(y) \exp\left(b(y)\, \theta - c(\theta)\right)
$$

satisfies the mild regularity conditions that permit passing differentiation with respect to $\theta$ under an integral with respect to $y$, then

$$
\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)}
\left[ b(Y) \right]
= c'(\theta_0)
$$

and

$$
\text{Var}_{Y \sim p(\cdot | \theta=\theta_0)}
\left[ b(Y) \right]
= c''(\theta_0).
$$

(Here $'$ denotes differentiation, so $c'$ and $c''$ are the first and second derivatives of $c$.)

#### Proof

For this family of distributions, we have $\text{score}(y, \theta_0) = b(y) - c'(\theta_0)$. The first equation then follows from the fact that $\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)} \left[ \text{score}(Y, \theta_0) \right] = 0$.
Next, we have

\begin{align*}
\text{Var}_{Y \sim p(\cdot | \theta=\theta_0)} \left[ b(Y) \right]
&=
\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)} \left[
  \left(b(Y) - c'(\theta_0)\right)^2
\right] \\
&=
\text{the one entry of }
\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)} \left[
  \text{score}(Y, \theta_0) \text{score}(Y, \theta_0)^\top
\right] \\
&=
\text{the one entry of }
-\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)} \left[
  \left(\nabla_\theta^2 \log p(Y | \theta)\right)_{\theta=\theta_0}
\right] \\
&=
-\mathbb{E}_{Y \sim p(\cdot | \theta=\theta_0)} \left[
  -c''(\theta_0)
\right] \\
&=
c''(\theta_0).
\end{align*}

## Overdispersed Exponential Family

A (scalar) **overdispersed exponential family** is a family of distributions whose densities take the form

$$
p_{\text{OEF}(m, T)}(y\, |\, \theta, \phi) =
m(y, \phi)
\exp\left(\frac{\theta\, T(y) - A(\theta)}{\phi}\right),
$$

where $m$ and $T$ are known scalar-valued functions, and $\theta$ and $\phi$ are scalar parameters.

*\[Note that $A$ is overdetermined: for any $\phi_0$, the function $A$ is completely determined by the constraint that $\int p_{\text{OEF}(m, T)}(y\ |\ \theta, \phi=\phi_0)\, dy = 1$ for all $\theta$. The $A$'s produced by different values of $\phi_0$ must all be the same, which places a constraint on the functions $m$ and $T$.\]*

### Mean and variance of the sufficient statistic

Under the same conditions as "Lemma about the derivative of the log partition function," we have

$$
\mathbb{E}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta, \phi)}
\left[ T(Y) \right]
= A'(\theta)
$$

and

$$
\text{Var}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta, \phi)}
\left[ T(Y) \right]
= \phi A''(\theta).
$$ #### Proof By "Lemma about the derivative of the log partition function," we have $$ \mathbb{E}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta, \phi)} \left[ \frac{T(Y)}{\phi} \right] = \frac{A'(\theta)}{\phi} $$ and $$ \text{Var}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta, \phi)} \left[ \frac{T(Y)}{\phi} \right] = \frac{A''(\theta)}{\phi}. $$ The result then follows from the fact that expectation is linear ($\mathbb{E}[aX] = a\mathbb{E}[X]$) and variance is degree-2 homogeneous ($\text{Var}[aX] = a^2 \,\text{Var}[X]$). ## Generalized Linear Model In a generalized linear model, a predictive distribution for the response variable $Y$ is associated with a vector of observed predictors $x$. The distribution is a member of an overdispersed exponential family, and the parameter $\theta$ is replaced by $h(\eta)$ where $h$ is a known function, $\eta := x^\top \beta$ is the so-called **linear response**, and $\beta$ is a vector of parameters (regression coefficients) to be learned. In general the dispersion parameter $\phi$ could be learned too, but in our setup we will treat $\phi$ as known. So our setup is $$ Y \sim p_{\text{OEF}(m, T)}(\cdot\, |\, \theta = h(\eta), \phi) $$ where the model structure is characterized by the distribution $p_{\text{OEF}(m, T)}$ and the function $h$ which converts linear response to parameters. Traditionally, the mapping from linear response $\eta$ to mean $\mu := \mathbb{E}_{Y \sim p_{\text{OEF}(m, T)}(\cdot\, |\, \theta = h(\eta), \phi)}\left[ Y\right]$ is denoted $$ \mu = g^{-1}(\eta). $$ This mapping is required to be one-to-one, and its inverse, $g$, is called the **link function** for this GLM. Typically, one describes a GLM by naming its link function and its family of distributions -- e.g., a "GLM with Bernoulli distribution and logit link function" (also known as a logistic regression model). In order to fully characterize the GLM, the function $h$ must also be specified. 
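For instance, Poisson regression uses this OEF with $T(y) = y$, $A(\theta) = e^\theta$, $\phi = 1$, and $h$ the identity (so $g = \log$). The mean/variance claim above then says $\mathbb{E}[Y] = \text{Var}[Y] = e^\theta$, which a quick Monte Carlo check confirms (a synthetic sketch; numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
theta = 0.8                 # natural parameter
lam = np.exp(theta)         # A'(theta) = A''(theta) = e^theta
y = rng.poisson(lam=lam, size=400_000)

# Both should approach e^0.8 = A'(theta) = phi * A''(theta) with phi = 1.
print(y.mean(), y.var())
```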
If $h$ is the identity, then $g$ is said to be the **canonical link function**. ### Claim: Expressing $h'$ in terms of the sufficient statistic Define $$ {\text{Mean}_T}(\eta) := \mathbb{E}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(\eta), \phi)} \left[ T(Y) \right] $$ and $$ {\text{Var}_T}(\eta) := \text{Var}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(\eta), \phi)} \left[ T(Y) \right]. $$ Then we have $$ h'(\eta) = \frac{\phi\, {\text{Mean}_T}'(\eta)}{{\text{Var}_T}(\eta)}. $$ #### Proof By "Mean and variance of the sufficient statistic," we have $$ {\text{Mean}_T}(\eta) = A'(h(\eta)). $$ Differentiating with the chain rule, we obtain $$ {\text{Mean}_T}'(\eta) = A''(h(\eta))\, h'(\eta), $$ and by "Mean and variance of the sufficient statistic," $$ \cdots = \frac{1}{\phi} {\text{Var}_T}(\eta)\ h'(\eta). $$ The conclusion follows. ## Fitting GLM Parameters to Data The properties derived above lend themselves very well to fitting GLM parameters $\beta$ to a data set. Quasi-Newton methods such as Fisher scoring rely on the gradient of the log likelihood and the Fisher information, which we now show can be computed especially efficiently for a GLM. Suppose we have observed predictor vectors $x_i$ and associated scalar responses $y_i$. In matrix form, we'll say we have observed predictors $\mathbf{x}$ and response $\mathbf{y}$, where $\mathbf{x}$ is the matrix whose $i$th row is $x_i^\top$ and $\mathbf{y}$ is the vector whose $i$th element is $y_i$. The log likelihood of parameters $\beta$ is then $$ \ell(\beta\, ;\, \mathbf{x}, \mathbf{y}) = \sum_{i=1}^{N} \log p_{\text{OEF}(m, T)}(y_i\, |\, \theta = h(x_i^\top \beta), \phi). $$ ### For a single data sample To simplify the notation, let's first consider the case of a single data point, $N=1$; then we will extend to the general case by additivity. 
#### Gradient

We have

\begin{align*}
\ell(\beta\, ;\, x, y)
&= \log p_{\text{OEF}(m, T)}(y\, |\, \theta = h(x^\top \beta), \phi) \\
&= \log m(y, \phi) + \frac{\theta\, T(y) - A(\theta)}{\phi},
\quad\text{where}\ \theta = h(x^\top \beta).
\end{align*}

Hence by the chain rule,

$$
\nabla_\beta \ell(\beta\, ; \, x, y)
= \frac{T(y) - A'(\theta)}{\phi}\, h'(x^\top \beta)\, x.
$$

Separately, by "Mean and variance of the sufficient statistic," we have $A'(\theta) = {\text{Mean}_T}(x^\top \beta)$. Hence, by "Claim: Expressing $h'$ in terms of the sufficient statistic," we have

$$
\cdots
= \left(T(y) - {\text{Mean}_T}(x^\top \beta)\right)
  \frac{{\text{Mean}_T}'(x^\top \beta)}{{\text{Var}_T}(x^\top \beta)}
  \,x.
$$

#### Hessian

Differentiating a second time, by the product rule we obtain

\begin{align*}
\nabla_\beta^2 \ell(\beta\, ;\, x, y)
&= \frac{1}{\phi} \left(
     -A''(h(x^\top \beta))\, h'(x^\top \beta)^2
     + \left[ T(y) - A'(h(x^\top \beta)) \right] h''(x^\top \beta)
   \right) x x^\top \\
&= \frac{1}{\phi} \left(
     -{\text{Mean}_T}'(x^\top \beta)\, h'(x^\top \beta)
     + \left[ T(y) - A'(h(x^\top \beta)) \right] h''(x^\top \beta)
   \right) x x^\top,
\end{align*}

where the second line uses ${\text{Mean}_T}'(\eta) = A''(h(\eta))\, h'(\eta)$.

#### Fisher information

By "Mean and variance of the sufficient statistic," we have

$$
\mathbb{E}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(x^\top \beta), \phi)}
\left[ T(Y) - A'(h(x^\top \beta)) \right] = 0.
$$

Hence

\begin{align*}
\mathbb{E}_{Y \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(x^\top \beta), \phi)}
\left[ \nabla_\beta^2 \ell(\beta\, ;\, x, Y) \right]
&= -\frac{1}{\phi}\, {\text{Mean}_T}'(x^\top \beta)\, h'(x^\top \beta)\, x x^\top \\
&= -\frac{{\text{Mean}_T}'(x^\top \beta)^2}{{\text{Var}_T}(x^\top \beta)}\, x x^\top,
\end{align*}

where the last step substitutes $h'(\eta) = \frac{\phi\, {\text{Mean}_T}'(\eta)}{{\text{Var}_T}(\eta)}$.

### For multiple data samples

We now extend the $N=1$ case to the general case. Let $\boldsymbol{\eta} := \mathbf{x} \beta$ denote the vector whose $i$th coordinate is the linear response from the $i$th data sample. Let $\mathbf{T}$ (resp. ${\textbf{Mean}_T}$, resp. ${\textbf{Var}_T}$) denote the broadcasted (vectorized) function which applies the scalar-valued function $T$ (resp. ${\text{Mean}_T}$, resp. ${\text{Var}_T}$) to each coordinate. Then we have

\begin{align*}
\nabla_\beta \ell(\beta\, ;\, \mathbf{x}, \mathbf{y})
&= \sum_{i=1}^{N} \nabla_\beta \ell(\beta\, ;\, x_i, y_i) \\
&= \sum_{i=1}^{N}
   \left(T(y_i) - {\text{Mean}_T}(x_i^\top \beta)\right)
   \frac{{\text{Mean}_T}'(x_i^\top \beta)}{{\text{Var}_T}(x_i^\top \beta)}
   \, x_i \\
&= \mathbf{x}^\top
   \,\text{diag}\left(\frac{ {\textbf{Mean}_T}'(\mathbf{x} \beta) }{ {\textbf{Var}_T}(\mathbf{x} \beta) }\right)
   \left(\mathbf{T}(\mathbf{y}) - {\textbf{Mean}_T}(\mathbf{x} \beta)\right)
\end{align*}

and

\begin{align*}
\mathbb{E}_{Y_i \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(x_i^\top \beta), \phi)}
\left[ \nabla_\beta^2 \ell(\beta\, ;\, \mathbf{x}, \mathbf{Y}) \right]
&= \sum_{i=1}^{N}
   \mathbb{E}_{Y_i \sim p_{\text{OEF}(m, T)}(\cdot | \theta = h(x_i^\top \beta), \phi)}
   \left[ \nabla_\beta^2 \ell(\beta\, ;\, x_i, Y_i) \right] \\
&= \sum_{i=1}^{N}
   -\frac{{\text{Mean}_T}'(x_i^\top \beta)^2}{{\text{Var}_T}(x_i^\top \beta)}\, x_i x_i^\top \\
&= -\mathbf{x}^\top
   \,\text{diag}\left( \frac{ {\textbf{Mean}_T}'(\mathbf{x} \beta)^2 }{ {\textbf{Var}_T}(\mathbf{x} \beta) }\right)\,
   \mathbf{x},
\end{align*}

where the fractions denote element-wise division.

## Verifying the Formulas Numerically

We now verify the above formula for gradient of the log likelihood numerically using `tf.gradients`, and verify the formula for Fisher information with a Monte Carlo estimate using `tf.hessians`:

``` import numpy as np import scipy.stats import tensorflow as tf import tensorflow_probability as tfp def VerifyGradientAndFIM(): model = tfp.glm.BernoulliNormalCDF() model_matrix = np.array([[1., 5, -2], [8, -1, 8]]) def _naive_grad_and_hessian_loss_fn(x, response): # Computes gradient and Hessian of negative log likelihood using autodiff.
predicted_linear_response = tf.linalg.matvec(model_matrix, x) log_probs = model.log_prob(response, predicted_linear_response) grad_loss = tf.gradients(-log_probs, [x])[0] hessian_loss = tf.hessians(-log_probs, [x])[0] return [grad_loss, hessian_loss] def _grad_neg_log_likelihood_and_fim_fn(x, response): # Computes gradient of negative log likelihood and Fisher information matrix # using the formulas above. predicted_linear_response = tf.linalg.matvec(model_matrix, x) mean, variance, grad_mean = model(predicted_linear_response) v = (response - mean) * grad_mean / variance grad_log_likelihood = tf.linalg.matvec(model_matrix, v, adjoint_a=True) w = grad_mean**2 / variance fisher_info = tf.linalg.matmul( model_matrix, w[..., tf.newaxis] * model_matrix, adjoint_a=True) return [-grad_log_likelihood, fisher_info] @tf.function(autograph=False) def compute_grad_hessian_estimates(): # Monte Carlo estimate of E[Hessian(-LogLikelihood)], where the expectation is # as written in "Claim (Fisher information)" above. 
num_trials = 20 trial_outputs = [] np.random.seed(10) model_coefficients_ = np.random.random(size=(model_matrix.shape[1],)) model_coefficients = tf.convert_to_tensor(model_coefficients_) for _ in range(num_trials): # Sample from the distribution of `model` response = np.random.binomial( 1, scipy.stats.norm().cdf(np.matmul(model_matrix, model_coefficients_)) ).astype(np.float64) trial_outputs.append( list(_naive_grad_and_hessian_loss_fn(model_coefficients, response)) + list( _grad_neg_log_likelihood_and_fim_fn(model_coefficients, response)) ) naive_grads = tf.stack( list(naive_grad for [naive_grad, _, _, _] in trial_outputs), axis=0) fancy_grads = tf.stack( list(fancy_grad for [_, _, fancy_grad, _] in trial_outputs), axis=0) average_hess = tf.reduce_mean(tf.stack( list(hess for [_, hess, _, _] in trial_outputs), axis=0), axis=0) [_, _, _, fisher_info] = trial_outputs[0] return naive_grads, fancy_grads, average_hess, fisher_info naive_grads, fancy_grads, average_hess, fisher_info = [ t.numpy() for t in compute_grad_hessian_estimates()] print("Coordinatewise relative error between naively computed gradients and" " formula-based gradients (should be zero):\n{}\n".format( (naive_grads - fancy_grads) / naive_grads)) print("Coordinatewise relative error between average of naively computed" " Hessian and formula-based FIM (should approach zero as num_trials" " -> infinity):\n{}\n".format( (average_hess - fisher_info) / average_hess)) VerifyGradientAndFIM() ```
## Relation Extraction Experiment

> Tutorial author: 余海阳 (yuhaiyang@zju.edu.cn)

In this demo, we use the `pcnn` model to extract relations. We hope this demo helps you understand the process of constructing a knowledge graph, as well as the principles and common methods of triplet extraction. This demo uses Python 3.

### Dataset

In this example, we extract triples from a few Chinese sentences.

sentence|relation|head|tail
:---:|:---:|:---:|:---:
孔正锡在2005年以一部温馨的爱情电影《长腿叔叔》敲开电影界大门。|导演|长腿叔叔|孔正锡
《伤心的树》是吴宗宪的音乐作品,收录在《你比从前快乐》专辑中。|所属专辑|伤心的树|你比从前快乐
2000年8月,「天坛大佛」荣获「香港十大杰出工程项目」第四名。|所在城市|天坛大佛|香港

- train.csv: It contains 6 training triples; each line represents one triple, ordered as sentence, relation, head entity and tail entity, separated by `,`.
- valid.csv: It contains 3 validation triples, in the same format as train.csv.
- test.csv: It contains 3 test triples, in the same format as train.csv.
- relation.csv: It contains the 4 relation types; each line describes one relation, giving its head entity type, tail entity type, relation name and index, separated by `,`.

### PCNN

![PCNN](img/PCNN.jpg)

The sentence representation combines word embeddings and position embeddings. After the convolution layer, the feature map is divided into three segments according to the positions of the head and tail entities; each segment is max-pooled separately, and a fully connected layer then produces the relation prediction for the sentence.
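To make the piece-wise pooling step concrete, here is a minimal NumPy sketch (hypothetical names and shapes, independent of the full PyTorch implementation that follows): the convolutional feature map of a sentence is split at the head and tail entity positions into three segments, and each segment is max-pooled separately.

```python
import numpy as np

def piecewise_max_pool(conv_out, head_pos, tail_pos):
    """conv_out: (seq_len, channels) feature map.
    Splits positions into [0..e0], (e0..e1), [e1..end) like the
    `entities_pos` mask built in the preprocessing code, then max-pools
    each segment per channel. Assumes all three segments are non-empty."""
    lo, hi = sorted((head_pos, tail_pos))
    segments = [conv_out[:lo + 1], conv_out[lo + 1:hi], conv_out[hi:]]
    pooled = [seg.max(axis=0) for seg in segments]
    return np.concatenate(pooled)  # shape (3 * channels,)

# Toy example: 6 positions, 2 channels, entities at positions 1 and 4.
feats = np.arange(12, dtype=float).reshape(6, 2)
vec = piecewise_max_pool(feats, head_pos=1, tail_pos=4)
# vec -> [2., 3., 6., 7., 10., 11.]
```

The resulting `3 * channels` vector is what the fully connected layer consumes; a plain CNN would instead max-pool once over the whole sequence, losing where features occur relative to the entities.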
``` # The model runs on PyTorch; confirm the dependencies are installed before running !pip install torch !pip install matplotlib !pip install transformers # Import all required modules import os import csv import math import pickle import logging import torch import torch.nn as nn import torch.nn.functional as F import numpy as np import matplotlib.pyplot as plt from torch import optim from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence from torch.utils.data import Dataset,DataLoader from sklearn.metrics import precision_recall_fscore_support from typing import List, Tuple, Dict, Any, Sequence, Optional, Union from transformers import BertTokenizer, BertModel logger = logging.getLogger(__name__) # Configuration of model parameters # The use_pcnn parameter controls whether piece-wise pooling is used class Config(object): model_name = 'cnn' # ['cnn', 'gcn', 'lm'] use_pcnn = True min_freq = 1 pos_limit = 20 out_path = 'data/out' batch_size = 2 word_dim = 10 pos_dim = 5 dim_strategy = 'sum' # ['sum', 'cat'] out_channels = 20 intermediate = 10 kernel_sizes = [3, 5, 7] activation = 'gelu' pooling_strategy = 'max' dropout = 0.3 epoch = 10 num_relations = 4 learning_rate = 3e-4 lr_factor = 0.7 # decay factor of the learning rate lr_patience = 3 # epochs to wait before decaying the learning rate weight_decay = 1e-3 # L2 regularization early_stopping_patience = 6 train_log = True log_interval = 1 show_plot = True only_comparison_plot = False plot_utils = 'matplot' lm_file = 'bert-base-chinese' lm_num_hidden_layers = 2 rnn_layers = 2 cfg = Config() # Vocab builds a word-to-index dictionary; the indices are fed to the embedding layer to obtain the word representation matrix # 0 is pad by default and 1 is unknown class Vocab(object): def __init__(self, name: str = 'basic', init_tokens = ["[PAD]", "[UNK]"]): self.name = name self.init_tokens = init_tokens self.trimed = False self.word2idx = {} self.word2count = {} self.idx2word = {} self.count = 0 self._add_init_tokens() def _add_init_tokens(self): for token in
self.init_tokens: self._add_word(token) def _add_word(self, word: str): if word not in self.word2idx: self.word2idx[word] = self.count self.word2count[word] = 1 self.idx2word[self.count] = word self.count += 1 else: self.word2count[word] += 1 def add_words(self, words: Sequence): for word in words: self._add_word(word) def trim(self, min_freq=2, verbose: Optional[bool] = True): assert min_freq == int(min_freq), f'min_freq must be integer, can\'t be {min_freq}' min_freq = int(min_freq) if min_freq < 2: return if self.trimed: return self.trimed = True keep_words = [] new_words = [] for k, v in self.word2count.items(): if v >= min_freq: keep_words.append(k) new_words.extend([k] * v) if verbose: before_len = len(keep_words) after_len = len(self.word2idx) - len(self.init_tokens) logger.info('vocab after be trimmed, keep words [{} / {}] = {:.2f}%'.format(before_len, after_len, before_len / after_len * 100)) # Reinitialize dictionaries self.word2idx = {} self.word2count = {} self.idx2word = {} self.count = 0 self._add_init_tokens() self.add_words(new_words) # Functions required for preprocessing Path = str def load_csv(fp: Path, is_tsv: bool = False, verbose: bool = True) -> List: if verbose: logger.info(f'load csv from {fp}') dialect = 'excel-tab' if is_tsv else 'excel' with open(fp, encoding='utf-8') as f: reader = csv.DictReader(f, dialect=dialect) return list(reader) def load_pkl(fp: Path, verbose: bool = True) -> Any: if verbose: logger.info(f'load data from {fp}') with open(fp, 'rb') as f: data = pickle.load(f) return data def save_pkl(data: Any, fp: Path, verbose: bool = True) -> None: if verbose: logger.info(f'save data in {fp}') with open(fp, 'wb') as f: pickle.dump(data, f) def _handle_relation_data(relation_data: List[Dict]) -> Dict: rels = dict() for d in relation_data: rels[d['relation']] = { 'index': int(d['index']), 'head_type': d['head_type'], 'tail_type': d['tail_type'], } return rels def _add_relation_data(rels: Dict,data: List) -> None: for d in data: 
d['rel2idx'] = rels[d['relation']]['index'] d['head_type'] = rels[d['relation']]['head_type'] d['tail_type'] = rels[d['relation']]['tail_type'] def _convert_tokens_into_index(data: List[Dict], vocab): unk_str = '[UNK]' unk_idx = vocab.word2idx[unk_str] for d in data: d['token2idx'] = [vocab.word2idx.get(i, unk_idx) for i in d['tokens']] def _add_pos_seq(train_data: List[Dict], cfg): for d in train_data: d['head_offset'], d['tail_offset'], d['lens'] = int(d['head_offset']), int(d['tail_offset']), int(d['lens']) entities_idx = [d['head_offset'], d['tail_offset']] if d['head_offset'] < d['tail_offset'] else [d['tail_offset'], d['head_offset']] d['head_pos'] = list(map(lambda i: i - d['head_offset'], list(range(d['lens'])))) d['head_pos'] = _handle_pos_limit(d['head_pos'], int(cfg.pos_limit)) d['tail_pos'] = list(map(lambda i: i - d['tail_offset'], list(range(d['lens'])))) d['tail_pos'] = _handle_pos_limit(d['tail_pos'], int(cfg.pos_limit)) if cfg.use_pcnn: d['entities_pos'] = [1] * (entities_idx[0] + 1) + [2] * (entities_idx[1] - entities_idx[0] - 1) +\ [3] * (d['lens'] - entities_idx[1]) def _handle_pos_limit(pos: List[int], limit: int) -> List[int]: for i, p in enumerate(pos): if p > limit: pos[i] = limit if p < -limit: pos[i] = -limit return [p + limit + 1 for p in pos] def seq_len_to_mask(seq_len: Union[List, np.ndarray, torch.Tensor], max_len=None, mask_pos_to_true=True): if isinstance(seq_len, list): seq_len = np.array(seq_len) if isinstance(seq_len, np.ndarray): seq_len = torch.from_numpy(seq_len) if isinstance(seq_len, torch.Tensor): assert seq_len.dim() == 1, logger.error(f"seq_len can only have one dimension, got {seq_len.dim()} != 1.") batch_size = seq_len.size(0) max_len = int(max_len) if max_len else seq_len.max().long() broad_cast_seq_len = torch.arange(max_len).expand(batch_size, -1).to(seq_len.device) if mask_pos_to_true: mask = broad_cast_seq_len.ge(seq_len.unsqueeze(1)) else: mask = broad_cast_seq_len.lt(seq_len.unsqueeze(1)) else: raise 
logger.error("Only support 1-d list or 1-d numpy.ndarray or 1-d torch.Tensor.") return mask # Preprocess logger.info('load raw files...') train_fp = os.path.join('data/train.csv') valid_fp = os.path.join('data/valid.csv') test_fp = os.path.join('data/test.csv') relation_fp = os.path.join('data/relation.csv') train_data = load_csv(train_fp) valid_data = load_csv(valid_fp) test_data = load_csv(test_fp) relation_data = load_csv(relation_fp) for d in train_data: d['tokens'] = eval(d['tokens']) for d in valid_data: d['tokens'] = eval(d['tokens']) for d in test_data: d['tokens'] = eval(d['tokens']) logger.info('convert relation into index...') rels = _handle_relation_data(relation_data) _add_relation_data(rels, train_data) _add_relation_data(rels, valid_data) _add_relation_data(rels, test_data) logger.info('verify whether use pretrained language models...') logger.info('build vocabulary...') vocab = Vocab('word') train_tokens = [d['tokens'] for d in train_data] valid_tokens = [d['tokens'] for d in valid_data] test_tokens = [d['tokens'] for d in test_data] sent_tokens = [*train_tokens, *valid_tokens, *test_tokens] for sent in sent_tokens: vocab.add_words(sent) vocab.trim(min_freq=cfg.min_freq) logger.info('convert tokens into index...') _convert_tokens_into_index(train_data, vocab) _convert_tokens_into_index(valid_data, vocab) _convert_tokens_into_index(test_data, vocab) logger.info('build position sequence...') _add_pos_seq(train_data, cfg) _add_pos_seq(valid_data, cfg) _add_pos_seq(test_data, cfg) logger.info('save data for backup...') os.makedirs(cfg.out_path, exist_ok=True) train_save_fp = os.path.join(cfg.out_path, 'train.pkl') valid_save_fp = os.path.join(cfg.out_path, 'valid.pkl') test_save_fp = os.path.join(cfg.out_path, 'test.pkl') save_pkl(train_data, train_save_fp) save_pkl(valid_data, valid_save_fp) save_pkl(test_data, test_save_fp) vocab_save_fp = os.path.join(cfg.out_path, 'vocab.pkl') vocab_txt = os.path.join(cfg.out_path, 'vocab.txt') save_pkl(vocab, 
vocab_save_fp) logger.info('save vocab in txt file, for watching...') with open(vocab_txt, 'w', encoding='utf-8') as f: f.write(os.linesep.join(vocab.word2idx.keys())) # pytorch construct Dataset def collate_fn(cfg): def collate_fn_intra(batch): batch.sort(key=lambda data: int(data['lens']), reverse=True) max_len = int(batch[0]['lens']) def _padding(x, max_len): return x + [0] * (max_len - len(x)) def _pad_adj(adj, max_len): adj = np.array(adj) pad_len = max_len - adj.shape[0] for i in range(pad_len): adj = np.insert(adj, adj.shape[-1], 0, axis=1) for i in range(pad_len): adj = np.insert(adj, adj.shape[0], 0, axis=0) return adj x, y = dict(), [] word, word_len = [], [] head_pos, tail_pos = [], [] pcnn_mask = [] adj_matrix = [] for data in batch: word.append(_padding(data['token2idx'], max_len)) word_len.append(int(data['lens'])) y.append(int(data['rel2idx'])) if cfg.model_name != 'lm': head_pos.append(_padding(data['head_pos'], max_len)) tail_pos.append(_padding(data['tail_pos'], max_len)) if cfg.model_name == 'gcn': head = eval(data['dependency']) adj = head_to_adj(head, directed=True, self_loop=True) adj_matrix.append(_pad_adj(adj, max_len)) if cfg.use_pcnn: pcnn_mask.append(_padding(data['entities_pos'], max_len)) x['word'] = torch.tensor(word) x['lens'] = torch.tensor(word_len) y = torch.tensor(y) if cfg.model_name != 'lm': x['head_pos'] = torch.tensor(head_pos) x['tail_pos'] = torch.tensor(tail_pos) if cfg.model_name == 'gcn': x['adj'] = torch.tensor(adj_matrix) if cfg.model_name == 'cnn' and cfg.use_pcnn: x['pcnn_mask'] = torch.tensor(pcnn_mask) return x, y return collate_fn_intra class CustomDataset(Dataset): def __init__(self, fp): self.file = load_pkl(fp) def __getitem__(self, item): sample = self.file[item] return sample def __len__(self): return len(self.file) # embedding layer class Embedding(nn.Module): def __init__(self, config): super(Embedding, self).__init__() # self.xxx = config.xxx self.vocab_size = config.vocab_size self.word_dim = 
config.word_dim self.pos_size = config.pos_limit * 2 + 2 self.pos_dim = config.pos_dim if config.dim_strategy == 'cat' else config.word_dim self.dim_strategy = config.dim_strategy self.wordEmbed = nn.Embedding(self.vocab_size,self.word_dim,padding_idx=0) self.headPosEmbed = nn.Embedding(self.pos_size,self.pos_dim,padding_idx=0) self.tailPosEmbed = nn.Embedding(self.pos_size,self.pos_dim,padding_idx=0) def forward(self, *x): word, head, tail = x word_embedding = self.wordEmbed(word) head_embedding = self.headPosEmbed(head) tail_embedding = self.tailPosEmbed(tail) if self.dim_strategy == 'cat': return torch.cat((word_embedding,head_embedding, tail_embedding), -1) elif self.dim_strategy == 'sum': # in this case pos_dim == word_dim return word_embedding + head_embedding + tail_embedding else: raise Exception('dim_strategy must choose from [sum, cat]') # GELU activation function, used in Transformer models; often works better than ReLU class GELU(nn.Module): def __init__(self): super(GELU, self).__init__() def forward(self, x): return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) # cnn model class CNN(nn.Module): def __init__(self, config): super(CNN, self).__init__() if config.dim_strategy == 'cat': self.in_channels = config.word_dim + 2 * config.pos_dim else: self.in_channels = config.word_dim self.out_channels = config.out_channels self.kernel_sizes = config.kernel_sizes self.activation = config.activation self.pooling_strategy = config.pooling_strategy self.dropout = config.dropout for kernel_size in self.kernel_sizes: assert kernel_size % 2 == 1, "kernel size has to be an odd number."
self.convs = nn.ModuleList([ nn.Conv1d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=1, padding=k // 2, dilation=1, groups=1, bias=False) for k in self.kernel_sizes ]) assert self.activation in ['relu', 'lrelu', 'prelu', 'selu', 'celu', 'gelu', 'sigmoid', 'tanh'], \ 'activation function must choose from [relu, lrelu, prelu, selu, celu, gelu, sigmoid, tanh]' self.activations = nn.ModuleDict([ ['relu', nn.ReLU()], ['lrelu', nn.LeakyReLU()], ['prelu', nn.PReLU()], ['selu', nn.SELU()], ['celu', nn.CELU()], ['gelu', GELU()], ['sigmoid', nn.Sigmoid()], ['tanh', nn.Tanh()], ]) # pooling assert self.pooling_strategy in ['max', 'avg', 'cls'], 'pooling strategy must choose from [max, avg, cls]' self.dropout = nn.Dropout(self.dropout) def forward(self, x, mask=None): x = torch.transpose(x, 1, 2) act_fn = self.activations[self.activation] x = [act_fn(conv(x)) for conv in self.convs] x = torch.cat(x, dim=1) if mask is not None: mask = mask.unsqueeze(1) x = x.masked_fill_(mask, 1e-12) if self.pooling_strategy == 'max': xp = F.max_pool1d(x, kernel_size=x.size(2)).squeeze(2) elif self.pooling_strategy == 'avg': x_len = mask.squeeze().eq(0).sum(-1).unsqueeze(-1).to(torch.float).to(device=mask.device) xp = torch.sum(x, dim=-1) / x_len else: xp = x[:, :, 0] x = x.transpose(1, 2) x = self.dropout(x) xp = self.dropout(xp) return x, xp # pcnn model class PCNN(nn.Module): def __init__(self, cfg): super(PCNN, self).__init__() self.use_pcnn = cfg.use_pcnn self.embedding = Embedding(cfg) self.cnn = CNN(cfg) self.fc1 = nn.Linear(len(cfg.kernel_sizes) * cfg.out_channels, cfg.intermediate) self.fc2 = nn.Linear(cfg.intermediate, cfg.num_relations) self.dropout = nn.Dropout(cfg.dropout) if self.use_pcnn: self.fc_pcnn = nn.Linear(3 * len(cfg.kernel_sizes) * cfg.out_channels, len(cfg.kernel_sizes) * cfg.out_channels) self.pcnn_mask_embedding = nn.Embedding(4, 3) masks = torch.tensor([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]]) 
self.pcnn_mask_embedding.weight.data.copy_(masks) self.pcnn_mask_embedding.weight.requires_grad = False def forward(self, x): word, lens, head_pos, tail_pos = x['word'], x['lens'], x['head_pos'], x['tail_pos'] mask = seq_len_to_mask(lens) inputs = self.embedding(word, head_pos, tail_pos) out, out_pool = self.cnn(inputs, mask=mask) if self.use_pcnn: out = out.unsqueeze(-1) # [B, L, Hs, 1] pcnn_mask = x['pcnn_mask'] pcnn_mask = self.pcnn_mask_embedding(pcnn_mask).unsqueeze(-2) # [B, L, 1, 3] out = out + pcnn_mask # [B, L, Hs, 3] out = out.max(dim=1)[0] - 100 # [B, Hs, 3] out_pool = out.view(out.size(0), -1) # [B, 3 * Hs] out_pool = F.leaky_relu(self.fc_pcnn(out_pool)) # [B, Hs] out_pool = self.dropout(out_pool) output = self.fc1(out_pool) output = F.leaky_relu(output) output = self.dropout(output) output = self.fc2(output) return output # p,r,f1 measurement class PRMetric(): def __init__(self): self.y_true = np.empty(0) self.y_pred = np.empty(0) def reset(self): self.y_true = np.empty(0) self.y_pred = np.empty(0) def update(self, y_true:torch.Tensor, y_pred:torch.Tensor): y_true = y_true.cpu().detach().numpy() y_pred = y_pred.cpu().detach().numpy() y_pred = np.argmax(y_pred,axis=-1) self.y_true = np.append(self.y_true, y_true) self.y_pred = np.append(self.y_pred, y_pred) def compute(self): p, r, f1, _ = precision_recall_fscore_support(self.y_true,self.y_pred,average='macro',warn_for=tuple()) _, _, acc, _ = precision_recall_fscore_support(self.y_true,self.y_pred,average='micro',warn_for=tuple()) return acc,p,r,f1 # Iteration in training process def train(epoch, model, dataloader, optimizer, criterion, cfg): model.train() metric = PRMetric() losses = [] for batch_idx, (x, y) in enumerate(dataloader, 1): optimizer.zero_grad() y_pred = model(x) loss = criterion(y_pred, y) loss.backward() optimizer.step() metric.update(y_true=y, y_pred=y_pred) losses.append(loss.item()) data_total = len(dataloader.dataset) data_cal = data_total if batch_idx == len(dataloader) else 
batch_idx * len(y) if (cfg.train_log and batch_idx % cfg.log_interval == 0) or batch_idx == len(dataloader): acc,p,r,f1 = metric.compute() print(f'Train Epoch {epoch}: [{data_cal}/{data_total} ({100. * data_cal / data_total:.0f}%)]\t' f'Loss: {loss.item():.6f}') print(f'Train Epoch {epoch}: Acc: {100. * acc:.2f}%\t' f'macro metrics: [p: {p:.4f}, r:{r:.4f}, f1:{f1:.4f}]') if cfg.show_plot and not cfg.only_comparison_plot: if cfg.plot_utils == 'matplot': plt.plot(losses) plt.title(f'epoch {epoch} train loss') plt.show() return losses[-1] # Iteration in testing process def validate(epoch, model, dataloader, criterion,verbose=True): model.eval() metric = PRMetric() losses = [] for batch_idx, (x, y) in enumerate(dataloader, 1): with torch.no_grad(): y_pred = model(x) loss = criterion(y_pred, y) metric.update(y_true=y, y_pred=y_pred) losses.append(loss.item()) loss = sum(losses) / len(losses) acc,p,r,f1 = metric.compute() data_total = len(dataloader.dataset) if verbose: print(f'Valid Epoch {epoch}: [{data_total}/{data_total}](100%)\t Loss: {loss:.6f}') print(f'Valid Epoch {epoch}: Acc: {100. * acc:.2f}%\tmacro metrics: [p: {p:.4f}, r:{r:.4f}, f1:{f1:.4f}]\n\n') return f1,loss # Load dataset train_dataset = CustomDataset(train_save_fp) valid_dataset = CustomDataset(valid_save_fp) test_dataset = CustomDataset(test_save_fp) train_dataloader = DataLoader(train_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg)) valid_dataloader = DataLoader(valid_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg)) test_dataloader = DataLoader(test_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg)) # After the preprocessed data is loaded, vocab_size is known vocab = load_pkl(vocab_save_fp) vocab_size = vocab.count cfg.vocab_size = vocab_size # main entry, define optimization function, loss function and so on # start epoch # Use the loss of the valid dataset to make an early stop judgment. 
When it stops declining, the model's generalization is at its best. model = PCNN(cfg) print(model) optimizer = optim.Adam(model.parameters(), lr=cfg.learning_rate, weight_decay=cfg.weight_decay) scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=cfg.lr_factor, patience=cfg.lr_patience) criterion = nn.CrossEntropyLoss() best_f1, best_epoch = -1, 0 es_loss, es_f1, es_epoch, es_patience, best_es_epoch, best_es_f1 = 1000, -1, 0, 0, 0, -1 train_losses, valid_losses = [], [] logger.info('=' * 10 + ' Start training ' + '=' * 10) for epoch in range(1, cfg.epoch + 1): train_loss = train(epoch, model, train_dataloader, optimizer, criterion, cfg) valid_f1, valid_loss = validate(epoch, model, valid_dataloader, criterion) scheduler.step(valid_loss) train_losses.append(train_loss) valid_losses.append(valid_loss) if best_f1 < valid_f1: best_f1 = valid_f1 best_epoch = epoch # use valid loss as the early-stopping criterion if es_loss > valid_loss: es_loss = valid_loss es_f1 = valid_f1 best_es_f1 = valid_f1 es_epoch = epoch best_es_epoch = epoch es_patience = 0 else: es_patience += 1 if es_patience >= cfg.early_stopping_patience: best_es_epoch = es_epoch best_es_f1 = es_f1 if cfg.show_plot: if cfg.plot_utils == 'matplot': plt.plot(train_losses, 'x-') plt.plot(valid_losses, '+-') plt.legend(['train', 'valid']) plt.title('train/valid comparison loss') plt.show() print(f'best(valid loss quota) early stopping epoch: {best_es_epoch}, ' f'this epoch macro f1: {best_es_f1:0.4f}') print(f'total {cfg.epoch} epochs, best(valid macro f1) epoch: {best_epoch}, ' f'this epoch macro f1: {best_f1:.4f}') test_f1, _ = validate(0, model, test_dataloader, criterion,verbose=False) print(f'after {cfg.epoch} epochs, final test data macro f1: {test_f1:.4f}') ``` This demo does not include hyperparameter tuning. Interested readers can visit the [deepke](http://openkg.cn/tool/deepke) repository to download and try more models :)
## Hook callbacks

This provides both a standalone class and a callback for registering and automatically deregistering [PyTorch hooks](https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks), along with some pre-defined hooks. Hooks can be attached to any [`nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), for either the forward or the backward pass.

We'll start by looking at the pre-defined hook [`ActivationStats`](/callbacks.hooks.html#ActivationStats), then we'll see how to create our own.

```
from fastai.gen_doc.nbdoc import *
from fastai.callbacks.hooks import *
from fastai.train import *
from fastai.vision import *

show_doc(ActivationStats)
```

[`ActivationStats`](/callbacks.hooks.html#ActivationStats) saves the layer activations in `self.stats` for all `modules` passed to it. By default it will save activations for *all* modules. For instance:

```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
#learn = cnn_learner(data, models.resnet18, callback_fns=ActivationStats)
learn = Learner(data, simple_cnn((3,16,16,2)), callback_fns=ActivationStats)
learn.fit(1)
```

The saved `stats` is a `FloatTensor` of shape `(2,num_modules,num_batches)`. The first axis is `(mean,stdev)`.

```
len(learn.data.train_dl),len(learn.activation_stats.modules)
learn.activation_stats.stats.shape
```

So this shows the standard deviation (`axis0==1`) of the 2nd-to-last layer (`axis1==-2`) for each batch (`axis2`):

```
plt.plot(learn.activation_stats.stats[1][-2].numpy());
```

### Internal implementation

```
show_doc(ActivationStats.hook)
```

### Callback methods

You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
``` show_doc(ActivationStats.on_train_begin) show_doc(ActivationStats.on_batch_end) show_doc(ActivationStats.on_train_end) show_doc(Hook) ``` Registers and manually deregisters a [PyTorch hook](https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks). Your `hook_func` will be called automatically when forward/backward (depending on `is_forward`) for your module `m` is run, and the result of that function is placed in `self.stored`. ``` show_doc(Hook.remove) ``` Deregister the hook, if not called already. ``` show_doc(Hooks) ``` Acts as a `Collection` (i.e. `len(hooks)` and `hooks[i]`) and an `Iterator` (i.e. `for hook in hooks`) of a group of hooks, one for each module in `ms`, with the ability to remove all as a group. Use `stored` to get all hook results. `hook_func` and `is_forward` behavior is the same as [`Hook`](/callbacks.hooks.html#Hook). See the source code for [`HookCallback`](/callbacks.hooks.html#HookCallback) for a simple example. ``` show_doc(Hooks.remove) ``` Deregister all hooks created by this class, if not previously called. ## Convenience functions for hooks ``` show_doc(hook_output) ``` Function that creates a [`Hook`](/callbacks.hooks.html#Hook) for `module` that simply stores the output of the layer. ``` show_doc(hook_outputs) ``` Function that creates a [`Hook`](/callbacks.hooks.html#Hook) for all passed `modules` that simply stores the output of the layers. For example, the (slightly simplified) source code of [`model_sizes`](/callbacks.hooks.html#model_sizes) is: ```python def model_sizes(m, size): x = m(torch.zeros(1, in_channels(m), *size)) return [o.stored.shape for o in hook_outputs(m)] ``` ``` show_doc(model_sizes) show_doc(model_summary) ``` This method only works on a [`Learner`](/basic_train.html#Learner) object with `train_ds` in it. 
If it was created as a result of [`load_learner`](/basic_train.html#load_learner), there is no [`data`](/vision.data.html#vision.data) to run through the model and therefore it's not possible to create such summary. A sample `summary` looks like: ``` ====================================================================== Layer (type) Output Shape Param # Trainable ====================================================================== Conv2d [64, 176, 176] 9,408 False ______________________________________________________________________ BatchNorm2d [64, 176, 176] 128 True ______________________________________________________________________ ReLU [64, 176, 176] 0 False ______________________________________________________________________ MaxPool2d [64, 88, 88] 0 False ______________________________________________________________________ Conv2d [64, 88, 88] 36,864 False ... ``` Column definition: 1. **Layer (type)** is the name of the corresponding [`nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module). 2. **Output Shape** is the shape of the output of the corresponding layer (minus the batch dimension, which is always the same and has no impact on the model params). 3. **Param #** is the number of weights (and optionally bias), and it will vary for each layer. The number of params is calculated differently for each layer type. Here is how it's calculated for some of the most common layer types: * Conv: `kernel_size*kernel_size*ch_in*ch_out` * Linear: `(n_in+bias) * n_out` * Batchnorm: `2 * n_out` * Embeddings: `n_embed * emb_sz` 4. **Trainable** indicates whether a layer is trainable or not. * Layers with `0` parameters are always Untrainable (e.g., `ReLU` and `MaxPool2d`). * Other layers are either Trainable or not, usually depending on whether they are frozen or not. See [Discriminative layer training](https://docs.fast.ai/basic_train.html#Discriminative-layer-training). 
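The counting formulas above can be sketched as tiny helpers (hypothetical function names, not part of the fastai API), which reproduce the figures shown in the sample summary:

```python
def conv2d_params(kernel_size, ch_in, ch_out, bias=False):
    # Conv: kernel_size * kernel_size * ch_in * ch_out (+ ch_out if bias)
    return kernel_size * kernel_size * ch_in * ch_out + (ch_out if bias else 0)

def linear_params(n_in, n_out, bias=True):
    # Linear: (n_in + bias) * n_out
    return (n_in + (1 if bias else 0)) * n_out

def batchnorm_params(n_out):
    # Batchnorm: 2 * n_out (one scale and one shift per channel)
    return 2 * n_out

print(conv2d_params(7, 3, 64))   # Conv2d(3, 64, kernel_size=(7, 7), bias=False) -> 9408
print(batchnorm_params(64))      # BatchNorm2d(64) -> 128
print(linear_params(512, 37))    # Linear(in_features=512, out_features=37, bias=True) -> 18981
```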
To better understand this summary it helps to also execute `learn.model` and correlate the two outputs. Example: Let's feed to a [`Learner`](/basic_train.html#Learner) a dataset of 3-channel images size 352x352 and look at the model and its summary: ``` data.train_ds[0][0].data.shape learn = cnn_learner(data, models.resnet34, ...) print(learn.model) print(learn.summary()) ``` Here are the outputs with everything but the relevant to the example lines removed: ``` torch.Size([3, 352, 352]) [...] (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [...] (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) [...] (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (8): Linear(in_features=512, out_features=37, bias=True) ====================================================================== Layer (type) Output Shape Param # Trainable ====================================================================== Conv2d [64, 176, 176] 9,408 False ______________________________________________________________________ BatchNorm2d [64, 176, 176] 128 True ______________________________________________________________________ [...] MaxPool2d [64, 88, 88] 0 False ______________________________________________________________________ Conv2d [64, 88, 88] 36,864 False [...] ______________________________________________________________________ Linear [37] 18,981 True ``` **So let's calculate some params:** For the `Conv2d` layers, multiply the first 4 numbers from the corresponding layer definition: ``` Conv2d(3, 64, kernel_size=(7, 7), ...) 3*64*7*7 = 9,408 Conv2d(64, 64, kernel_size=(3, 3), ...) 64*64*3*3 = 36,864 ``` For the `BatchNorm2d` layer, multiply the first number by 2: ``` BatchNorm2d(64, ...) 
64*2 = 128
```

For `Linear` we multiply the first 2 numbers and include the bias if it's `True`:

```
Linear(in_features=512, out_features=37, bias=True)

(512+1)*37 = 18,981
```

**Now let's calculate some output shapes:**

We started with a 3x352x352 image and ran it through this layer:

`Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)`

How did we get `[64, 176, 176]`? The number of output channels is `64`; that's the first dimension in the numbers above. Our `352x352` image then got convolved down to `176x176` because of the `2x2` stride (`352/2`).

Then we had:

`MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)`

which reduced `[64, 176, 176]` to `[64, 88, 88]`, again because of stride 2.

And so on, finishing with:

`Linear(in_features=512, out_features=37, bias=True)`

which reduced everything to just `[37]`.

```
show_doc(num_features_model)
```

It can be useful to get the size of each layer of a model (e.g. for printing a summary, or for generating cross-connections for a [`DynamicUnet`](/vision.models.unet.html#DynamicUnet)); however, these sizes depend on the size of the input. This function calculates the layer sizes by passing in a minimal tensor of `size`.

```
show_doc(dummy_batch)
show_doc(dummy_eval)
show_doc(HookCallback)
```

For all `modules`, uses a callback to automatically register a method `self.hook` (that you must define in an inherited class) as a hook. This method must have the signature:

```python
def hook(self, m:Model, input:Tensors, output:Tensors)
```

If `do_remove` then the hook is automatically deregistered at the end of training. See [`ActivationStats`](/callbacks.hooks.html#ActivationStats) for a simple example of inheriting from this class.

### Callback methods

You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
``` show_doc(HookCallback.on_train_begin) show_doc(HookCallback.on_train_end) ``` ## Undocumented Methods - Methods moved below this line will intentionally be hidden ``` show_doc(HookCallback.remove) show_doc(Hook.hook_fn) ``` ## New Methods - Please document or move to the undocumented section
```
%matplotlib inline
```

Autograd: Automatic Differentiation
===================================

Central to all neural networks in PyTorch is the ``autograd`` package. Let's take a brief look at it first, and then go on to train our first neural network.

The ``autograd`` package provides automatic differentiation for all operations on Tensors. It is a define-by-run framework, which means that backpropagation is defined by how your code is run, and that it can differ at every step of training.

Let's look at a few examples in simpler terms.

Tensor
--------

``torch.Tensor`` is the central class of the package. If you set its ``.requires_grad`` attribute to ``True``, it starts to track all operations on it. When the computation is finished you can call ``.backward()`` to have all the gradients computed automatically. The gradients for this tensor are accumulated into the ``.grad`` attribute.

To stop a tensor from tracking history, you can call ``.detach()`` to detach it from the computation history and prevent future operations from being tracked.

To prevent tracking history (and using memory), you can also wrap a code block in ``with torch.no_grad():``. This is particularly useful when evaluating a model, because the model may have trainable parameters with ``requires_grad=True`` for which no gradients are needed.

There is one more class that is very important for the autograd implementation: the ``Function`` class. ``Tensor`` and ``Function`` are interconnected and encode a complete history of computation as an acyclic graph. Each tensor has a ``.grad_fn`` attribute that references the ``Function`` that created it (except for Tensors created by the user, whose ``grad_fn`` is ``None``).

To compute derivatives, call ``.backward()`` on a ``Tensor``. If the ``Tensor`` is a scalar (i.e. it holds a single element), you don't need to specify any arguments to ``backward()``; if it has more elements, however, you need to pass a ``gradient`` argument that is a tensor of matching shape.

```
import torch
```

Create a tensor and set requires_grad=True to track computations on it.

```
x = torch.ones(2, 2, requires_grad=True)
print(x)
```

Perform an operation on the tensor:

```
y = x + 2
print(y)
```

``y`` was created as the result of an operation, so it has a ``grad_fn``.

```
print(y.grad_fn)
```

Do more operations on y.

```
z = y * y * 3
out = z.mean()

print(z, out)
```

``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad`` flag in-place. If no value is given, it defaults to ``True``.
```
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
```

Gradients
-----------------

Let's do backpropagation now. Because ``out`` contains a single scalar value, ``out.backward()`` is equivalent to ``out.backward(torch.tensor(1))``.

```
out.backward()
```

Print the gradients d(out)/dx.

```
print(x.grad)
```

You should see a matrix of ``4.5``. Calling the ``out`` *Tensor* "$o$", we have $o = \frac{1}{4}\sum_i z_i$, $z_i = 3(x_i+2)^2$, and $z_i\bigr\rvert_{x_i=1} = 27$. Therefore, $\frac{\partial o}{\partial x_i} = \frac{3}{2}(x_i+2)$, hence $\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5$.

You can do many crazy things with autograd!

```
x = torch.randn(3, requires_grad=True)

y = x * 2
while y.data.norm() < 1000:
    y = y * 2

print(y)

gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)

print(x.grad)
```

You can stop autograd from tracking history on Tensors with requires_grad=True by wrapping a code block in ``with torch.no_grad():``.

```
print(x.requires_grad)
print((x ** 2).requires_grad)

with torch.no_grad():
    print((x ** 2).requires_grad)
```

**Read later:** documentation of ``Variable`` and ``Function`` is at http://pytorch.org/docs/autograd
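The hand derivation above can also be double-checked numerically without autograd, using a centered finite difference on the same function $o$ (a plain-Python sketch):

```python
# o(x) = (1/4) * sum_i 3*(x_i + 2)^2, the same function as in the derivation
def o(xs):
    return sum(3 * (x + 2) ** 2 for x in xs) / len(xs)

eps = 1e-6
xs = [1.0, 1.0, 1.0, 1.0]

# centered finite difference for d o / d x_0 at x_i = 1
plus = [xs[0] + eps] + xs[1:]
minus = [xs[0] - eps] + xs[1:]
numeric = (o(plus) - o(minus)) / (2 * eps)

print(numeric)  # ~4.5, matching (3/2)*(x_i + 2) at x_i = 1
```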
### Inline OMEX and COMBINE archives

Tellurium provides a way to easily edit the contents of COMBINE archives in a human-readable format called inline OMEX. To create a COMBINE archive, simply create a string containing all models (in Antimony format) and all simulations (in PhraSEDML format). Tellurium will transparently convert the Antimony to SBML and PhraSEDML to SED-ML, then execute the resulting SED-ML. The following example will work in either Jupyter or the [Tellurium notebook viewer](http://tellurium.readthedocs.io/en/latest/installation.html#front-end-1-tellurium-notebook). The Tellurium notebook viewer allows you to create specialized cells for inline OMEX, which contain correct syntax-highlighting for the format.

```
import tellurium as te, tempfile, os
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline

antimony_str = '''
model myModel
  S1 -> S2; k1*S1
  S1 = 10; S2 = 0
  k1 = 1
end
'''

phrasedml_str = '''
  model1 = model "myModel"
  sim1 = simulate uniform(0, 5, 100)
  task1 = run sim1 on model1
  plot "Figure 1" time vs S1, S2
'''

# create an inline OMEX (inline representation of a COMBINE archive)
# from the antimony and phrasedml strings
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# execute the inline OMEX
te.executeInlineOmex(inline_omex)

# export to a COMBINE archive
workingDir = tempfile.mkdtemp(suffix="_omex")
te.exportInlineOmex(inline_omex, os.path.join(workingDir, 'archive.omex'))
```

### Forcing Functions

A common task in modeling is to represent the influence of an external, time-varying input on the system. In SED-ML, this can be accomplished using a repeated task to run a simulation for a short amount of time and update the forcing function between simulations. In the example, the forcing function is a pulse represented with a `piecewise` directive, but it can be any arbitrarily complex time-varying function, as shown in the second example.
```
import tellurium as te, tempfile, os

antimony_str = '''
// Created by libAntimony v2.9
model *oneStep()

// Compartments and Species:
compartment compartment_;
species S1 in compartment_, S2 in compartment_, $X0 in compartment_, $X1 in compartment_;
species $X2 in compartment_;

// Reactions:
J0: $X0 => S1; J0_v0;
J1: S1 => $X1; J1_k3*S1;
J2: S1 => S2; (J2_k1*S1 - J2_k_1*S2)*(1 + J2_c*S2^J2_q);
J3: S2 => $X2; J3_k2*S2;

// Species initializations:
S1 = 0;
S2 = 1;
X0 = 1;
X1 = 0;
X2 = 0;

// Compartment initializations:
compartment_ = 1;

// Variable initializations:
J0_v0 = 8;
J1_k3 = 0;
J2_k1 = 1;
J2_k_1 = 0;
J2_c = 1;
J2_q = 3;
J3_k2 = 5;

// Other declarations:
const compartment_, J0_v0, J1_k3, J2_k1, J2_k_1, J2_c, J2_q, J3_k2;
end
'''

phrasedml_str = '''
model1 = model "oneStep"
stepper = simulate onestep(0.1)
task0 = run stepper on model1
task1 = repeat task0 for local.x in uniform(0, 10, 100), J0_v0 = piecewise(8, x<4, 0.1, 4<=x<6, 8)
task2 = repeat task0 for local.index in uniform(0, 10, 1000), local.current = index -> abs(sin(1 / (0.1 * index + 0.1))), model1.J0_v0 = current : current
plot "Forcing Function (Pulse)" task1.time vs task1.S1, task1.S2, task1.J0_v0
plot "Forcing Function (Custom)" task2.time vs task2.S1, task2.S2, task2.J0_v0
'''

# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# export to a COMBINE archive
workingDir = tempfile.mkdtemp(suffix="_omex")
archive_name = os.path.join(workingDir, 'archive.omex')
te.exportInlineOmex(inline_omex, archive_name)

# convert the COMBINE archive back into an
# inline OMEX (transparently) and execute it
te.convertAndExecuteCombineArchive(archive_name)
```

### 1d Parameter Scan

This example shows how to perform a one-dimensional parameter scan using Antimony/PhraSEDML and convert the study to a COMBINE archive. The example uses a PhraSEDML repeated task `task1` to run a timecourse simulation `task0` on a model for different values of the parameter `J0_v0`.
```
import tellurium as te

antimony_str = '''
// Created by libAntimony v2.9
model *parameterScan1D()

// Compartments and Species:
compartment compartment_;
species S1 in compartment_, S2 in compartment_, $X0 in compartment_, $X1 in compartment_;
species $X2 in compartment_;

// Reactions:
J0: $X0 => S1; J0_v0;
J1: S1 => $X1; J1_k3*S1;
J2: S1 => S2; (J2_k1*S1 - J2_k_1*S2)*(1 + J2_c*S2^J2_q);
J3: S2 => $X2; J3_k2*S2;

// Species initializations:
S1 = 0;
S2 = 1;
X0 = 1;
X1 = 0;
X2 = 0;

// Compartment initializations:
compartment_ = 1;

// Variable initializations:
J0_v0 = 8;
J1_k3 = 0;
J2_k1 = 1;
J2_k_1 = 0;
J2_c = 1;
J2_q = 3;
J3_k2 = 5;

// Other declarations:
const compartment_, J0_v0, J1_k3, J2_k1, J2_k_1, J2_c, J2_q, J3_k2;
end
'''

phrasedml_str = '''
model1 = model "parameterScan1D"
timecourse1 = simulate uniform(0, 20, 1000)
task0 = run timecourse1 on model1
task1 = repeat task0 for J0_v0 in [8, 4, 0.4], reset=true
plot task1.time vs task1.S1, task1.S2
'''

# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# execute the inline OMEX
te.executeInlineOmex(inline_omex)
```

### 2d Parameter Scan

There are multiple ways to specify the set of values that should be swept over. This example uses two repeated tasks instead of one. It sweeps through a discrete set of values for the parameter `J1_KK2`, and then sweeps through a uniform range for another parameter `J4_KK5`.
```
import tellurium as te

antimony_str = '''
// Created by libAntimony v2.9
model *parameterScan2D()

// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;

// Reactions:
J0: MKKK => MKKK_P; (J0_V1*MKKK)/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; (J1_V2*MKKK_P)/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; (J2_k3*MKKK_P*MKK)/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; (J3_k4*MKKK_P*MKK_P)/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; (J4_V5*MKK_PP)/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; (J5_V6*MKK_P)/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; (J6_k7*MKK_PP*MAPK)/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; (J7_k8*MKK_PP*MAPK_P)/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; (J8_V9*MAPK_PP)/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; (J9_V10*MAPK_P)/(J9_KK10 + MAPK_P);

// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;

// Compartment initializations:
compartment_ = 1;

// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;

// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
'''

phrasedml_str = '''
model_3 = model "parameterScan2D"
sim_repeat = simulate uniform(0,3000,100)
task_1 = run sim_repeat on model_3
repeatedtask_1 = repeat task_1 for J1_KK2 in [1, 5, 10, 50, 60, 70, 80, 90, 100], reset=true
repeatedtask_2 = repeat repeatedtask_1 for J4_KK5 in uniform(1, 40, 10), reset=true
plot repeatedtask_2.J4_KK5 vs repeatedtask_2.J1_KK2
plot repeatedtask_2.time vs repeatedtask_2.MKK, repeatedtask_2.MKK_P
'''

# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# execute the inline OMEX
te.executeInlineOmex(inline_omex)
```

### Stochastic Simulation and RNG Seeding

It is possible to programmatically set the RNG seed of a stochastic simulation in PhraSEDML using the `<simulation-name>.algorithm.seed = <value>` directive. Simulations run with the same seed are identical. If the seed is not specified, a different value is used each time, leading to different results.

```
# -*- coding: utf-8 -*-
"""
phrasedml repeated stochastic test
"""
import tellurium as te

antimony_str = '''
// Created by libAntimony v2.9
model *repeatedStochastic()

// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;

// Reactions:
J0: MKKK => MKKK_P; (J0_V1*MKKK)/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; (J1_V2*MKKK_P)/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; (J2_k3*MKKK_P*MKK)/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; (J3_k4*MKKK_P*MKK_P)/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; (J4_V5*MKK_PP)/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; (J5_V6*MKK_P)/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; (J6_k7*MKK_PP*MAPK)/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; (J7_k8*MKK_PP*MAPK_P)/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; (J8_V9*MAPK_PP)/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; (J9_V10*MAPK_P)/(J9_KK10 + MAPK_P);

// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;

// Compartment initializations:
compartment_ = 1;

// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;

// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
'''

phrasedml_str = '''
model1 = model "repeatedStochastic"
timecourse1 = simulate uniform_stochastic(0, 4000, 1000)
timecourse1.algorithm.seed = 1003
timecourse2 = simulate uniform_stochastic(0, 4000, 1000)
task1 = run timecourse1 on model1
task2 = run timecourse2 on model1
repeat1 = repeat task1 for local.x in uniform(0, 10, 10), reset=true
repeat2 = repeat task2 for local.x in uniform(0, 10, 10), reset=true
plot "Repeats with same seed" repeat1.time vs repeat1.MAPK, repeat1.MAPK_P, repeat1.MAPK_PP, repeat1.MKK, repeat1.MKK_P, repeat1.MKKK, repeat1.MKKK_P
plot "Repeats without seeding" repeat2.time vs repeat2.MAPK, repeat2.MAPK_P, repeat2.MAPK_PP, repeat2.MKK, repeat2.MKK_P, repeat2.MKKK, repeat2.MKKK_P
'''

# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# execute the inline OMEX
te.executeInlineOmex(inline_omex)
```

### Resetting Models

This example is another parameter scan which shows the effect of resetting the model or not after each simulation. When using the repeated task directive in PhraSEDML, you can pass the `reset=true` argument to reset the model to its initial conditions after each repeated simulation. Leaving this argument off causes the model to retain its current state between simulations. In this case, the time value is not reset.
```
import tellurium as te

antimony_str = """
model case_02
    J0: S1 -> S2; k1*S1;
    S1 = 10.0; S2=0.0;
    k1 = 0.1;
end
"""

phrasedml_str = """
    model0 = model "case_02"
    model1 = model model0 with S1=5.0
    sim0 = simulate uniform(0, 6, 100)
    task0 = run sim0 on model1

    # reset the model after each simulation
    task1 = repeat task0 for k1 in uniform(0.0, 5.0, 5), reset = true
    # show the effect of not resetting for comparison
    task2 = repeat task0 for k1 in uniform(0.0, 5.0, 5)
    plot "Repeated task with reset" task1.time vs task1.S1, task1.S2
    plot "Repeated task without reset" task2.time vs task2.S1, task2.S2
"""

# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# execute the inline OMEX
te.executeInlineOmex(inline_omex)
```

### 3d Plotting

This example shows how to use PhraSEDML to perform 3d plotting. The syntax is `plot <x> vs <y> vs <z>`, where `<x>`, `<y>`, and `<z>` are references to model state variables used in specific tasks.

```
import tellurium as te

antimony_str = '''
// Created by libAntimony v2.9
model *case_09()

// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;

// Reactions:
J0: MKKK => MKKK_P; (J0_V1*MKKK)/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; (J1_V2*MKKK_P)/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; (J2_k3*MKKK_P*MKK)/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; (J3_k4*MKKK_P*MKK_P)/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; (J4_V5*MKK_PP)/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; (J5_V6*MKK_P)/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; (J6_k7*MKK_PP*MAPK)/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; (J7_k8*MKK_PP*MAPK_P)/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; (J8_V9*MAPK_PP)/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; (J9_V10*MAPK_P)/(J9_KK10 + MAPK_P);

// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;

// Compartment initializations:
compartment_ = 1;

// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;

// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
'''

phrasedml_str = '''
mod1 = model "case_09"
# sim1 = simulate uniform_stochastic(0, 4000, 1000)
sim1 = simulate uniform(0, 4000, 1000)
task1 = run sim1 on mod1
repeat1 = repeat task1 for local.x in uniform(0, 10, 10), reset=true
plot "MAPK oscillations" repeat1.MAPK vs repeat1.time vs repeat1.MAPK_P, repeat1.MAPK vs repeat1.time vs repeat1.MAPK_PP, repeat1.MAPK vs repeat1.time vs repeat1.MKK
# report repeat1.MAPK vs repeat1.time vs repeat1.MAPK_P, repeat1.MAPK vs repeat1.time vs repeat1.MAPK_PP, repeat1.MAPK vs repeat1.time vs repeat1.MKK
'''

# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])

# execute the inline OMEX
te.executeInlineOmex(inline_omex)
```
# Gradient Checking Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking". Let's do it! ``` # Packages import numpy as np from testCases import * from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector ``` ## 1) How does gradient checking work? Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function. Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient): $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small." We know the following: - $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. 
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.

Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!

## 2) 1-dimensional gradient checking

Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.

You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.

<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **1D linear model**<br> </center></caption>

The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").

**Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.

```
# GRADED FUNCTION: forward_propagation

def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    J -- the value of function J, computed using the formula J(theta) = theta * x
    """

    ### START CODE HERE ### (approx. 1 line)
    J = np.dot(theta, x)
    ### END CODE HERE ###

    return J

x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
```

**Expected Output**:

<table>
    <tr>
        <td> ** J ** </td>
        <td> 8</td>
    </tr>
</table>

**Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.

```
# GRADED FUNCTION: backward_propagation

def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """

    ### START CODE HERE ### (approx. 1 line)
    dtheta = x
    ### END CODE HERE ###

    return dtheta

x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
```

**Expected Output**:

<table>
    <tr>
        <td> ** dtheta ** </td>
        <td> 2 </td>
    </tr>
</table>

**Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.

**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
    1. $\theta^{+} = \theta + \varepsilon$
    2. $\theta^{-} = \theta - \varepsilon$
    3. $J^{+} = J(\theta^{+})$
    4. $J^{-} = J(\theta^{-})$
    5. $gradapprox = \frac{J^{+} - J^{-}}{2  \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
   - 1'. compute the numerator using np.linalg.norm(...)
   - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
   - 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.

```
# GRADED FUNCTION: gradient_check

def gradient_check(x, theta, epsilon = 1e-7):
    """
    Implement the backward propagation presented in Figure 1.

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
    ### START CODE HERE ### (approx. 5 lines)
    thetaplus = theta + epsilon                               # Step 1
    thetaminus = theta - epsilon                              # Step 2
    J_plus = forward_propagation(x, thetaplus)                # Step 3
    J_minus = forward_propagation(x, thetaminus)              # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)           # Step 5
    ### END CODE HERE ###

    # Check if gradapprox is close enough to the output of backward_propagation()
    ### START CODE HERE ### (approx. 1 line)
    grad = backward_propagation(x, theta)
    ### END CODE HERE ###

    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                      # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)    # Step 2'
    difference = numerator / denominator                               # Step 3'
    ### END CODE HERE ###

    if difference < 1e-7:
        print ("The gradient is correct!")
    else:
        print ("The gradient is wrong!")

    return difference

x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
```

**Expected Output**:
The gradient is correct!
<table> <tr> <td> ** difference ** </td> <td> 2.9193358103083e-10 </td> </tr> </table> Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! ## 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. <img src="images/NDgrad_kiank.png" style="width:600px;height:400px;"> <caption><center> <u> **Figure 2** </u>: **deep neural network**<br>*LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*</center></caption> Let's look at your implementations for forward propagation and backward propagation. ``` def forward_propagation_n(X, Y, parameters): """ Implements the forward propagation (and computes the cost) presented in Figure 3. 
Arguments: X -- training set for m examples Y -- labels for m examples parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": W1 -- weight matrix of shape (5, 4) b1 -- bias vector of shape (5, 1) W2 -- weight matrix of shape (3, 5) b2 -- bias vector of shape (3, 1) W3 -- weight matrix of shape (1, 3) b3 -- bias vector of shape (1, 1) Returns: cost -- the cost function (logistic cost for one example) """ # retrieve parameters m = X.shape[1] W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] W3 = parameters["W3"] b3 = parameters["b3"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1 = relu(Z1) Z2 = np.dot(W2, A1) + b2 A2 = relu(Z2) Z3 = np.dot(W3, A2) + b3 A3 = sigmoid(Z3) # Cost logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y) cost = 1./m * np.sum(logprobs) cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) return cost, cache ``` Now, run backward propagation. ``` def backward_propagation_n(X, Y, cache): """ Implement the backward propagation presented in figure 2. Arguments: X -- input datapoint, of shape (input size, 1) Y -- true "label" cache -- cache output from forward_propagation_n() Returns: gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables. 
""" m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) * 2 db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients ``` You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**. As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still: $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them. The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary. <img src="images/dictionary_to_vector.png" style="width:600px;height:400px;"> <caption><center> <u> **Figure 2** </u>: **dictionary_to_vector() and vector_to_dictionary()**<br> You will need these functions in gradient_check_n()</center></caption> We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that. **Exercise**: Implement gradient_check_n(). 
**Instructions**: Here is pseudo-code that will help you implement the gradient check. For each i in num_parameters: - To compute `J_plus[i]`: 1. Set $\theta^{+}$ to `np.copy(parameters_values)` 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`. - To compute `J_minus[i]`: do the same thing with $\theta^{-}$ - Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$ Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$ ``` # GRADED FUNCTION: gradient_check_n def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7): """ Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n Arguments: parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. x -- input datapoint, of shape (input size, 1) y -- true "label" epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient """ # Set-up variables parameters_values, _ = dictionary_to_vector(parameters) grad = gradients_to_vector(gradients) num_parameters = parameters_values.shape[0] J_plus = np.zeros((num_parameters, 1)) J_minus = np.zeros((num_parameters, 1)) gradapprox = np.zeros((num_parameters, 1)) # Compute gradapprox for i in range(num_parameters): # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]". 
# "_" is used because the function you have to outputs two parameters but we only care about the first one ### START CODE HERE ### (approx. 3 lines) thetaplus = np.copy(parameters_values) # Step 1 thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2 J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3 ### END CODE HERE ### # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]". ### START CODE HERE ### (approx. 3 lines) thetaminus = np.copy(parameters_values) # Step 1 thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2 J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3 ### END CODE HERE ### # Compute gradapprox[i] ### START CODE HERE ### (approx. 1 line) gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon) ### END CODE HERE ### # Compare gradapprox to backward propagation gradients by computing difference. ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad - gradapprox) # Step 1' denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2' difference = numerator / denominator # Step 3' ### END CODE HERE ### if difference > 2e-7: print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m") else: print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m") return difference X, Y, parameters = gradient_check_n_test_case() cost, cache = forward_propagation_n(X, Y, parameters) gradients = backward_propagation_n(X, Y, cache) difference = gradient_check_n(parameters, gradients, X, Y) ``` **Expected output**: <table> <tr> <td> ** There is a mistake in the backward propagation!** </td> <td> difference = 0.285093156781 </td> </tr> </table> It seems that there were errors in the `backward_propagation_n` code we gave you! Good that you've implemented the gradient check. 
Go back to `backward_propagation` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code. Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. **Note** - Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. - Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) <font color='blue'> **What you should remember from this notebook**: - Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation). - Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
github_jupyter
# Naive Bayes Classifier Building **We will use three datasets for demonstrating Naive Bayes Algorithm, namely:** 1. A **dummy dataset** with three columns: weather, temperature, and play. The first two are features (weather, temperature) and the other is the label. 2. [Wine Dataset](https://archive.ics.uci.edu/ml/datasets/wine) from sklearn. 2. [mushrooms.csv](https://www.kaggle.com/uciml/mushroom-classification) from kaggle. ## 1. Using a Dummy Dataset ## Defining Dataset ``` # Assigning features and label variables weather=['Sunny','Sunny','Overcast','Rainy','Rainy','Rainy','Overcast','Sunny','Sunny', 'Rainy','Sunny','Overcast','Overcast','Rainy'] temp=['Hot','Hot','Hot','Mild','Cool','Cool','Cool','Mild','Cool','Mild','Mild','Mild','Hot','Mild'] play=['No','No','Yes','Yes','Yes','No','Yes','No','Yes','Yes','Yes','Yes','Yes','No'] ``` ## Encoding Features First, we need to convert these string labels into numbers. For example: 'Overcast', 'Rainy', 'Sunny' as 0, 1, 2. This is known as **label encoding**. <br> Scikit-learn provides **LabelEncoder** library for encoding labels with a value between 0 and one less than the number of discrete classes. ``` # Import LabelEncoder from sklearn import preprocessing #creating labelEncoder le = preprocessing.LabelEncoder() # Converting string labels into numbers. weather_encoded=le.fit_transform(weather) print (weather_encoded) ``` Similarly, we can also encode **temp** and **play** columns. ``` # Converting string labels into numbers temp_encoded=le.fit_transform(temp) label=le.fit_transform(play) print ("Temp:",temp_encoded) print ("Play:",label) ``` Now combine both the features (weather and temp) in a single variable (list of tuples). 
``` #Combinig weather and temp into single listof tuples features = list(zip(weather_encoded,temp_encoded)) print (features) ``` ## Generating Model Generate a model using naive bayes classifier in the following steps: - Create naive bayes classifier - Fit the dataset on classifier - Perform prediction ``` #Import Gaussian Naive Bayes model from sklearn.naive_bayes import GaussianNB #Create a Gaussian Classifier model = GaussianNB() # Train the model using the training sets model.fit(features,label) #Predict Output predicted = model.predict([[0,2]]) # 0:Overcast, 2:Mild print ("Predicted Value:", predicted) ``` Here, **1** indicates that players can **'play'**. ## 2. Using Wine Dataset ## Naive Bayes with Multiple Labels Till now we have learned Naive Bayes classification with binary labels. Now we will learn about multiple class classification in Naive Bayes. Which is known as **Multinomial Naive Bayes classification**. For example, if you want to classify a news article about technology, entertainment, politics, or sports. In model building part, we can use **wine dataset** which is a very famous multi-class classification problem. <br> **This dataset is the result of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars.** Dataset comprises of **13 features** (alcohol, malic_acid, ash, alcalinity_of_ash, magnesium, total_phenols, flavanoids, nonflavanoid_phenols, proanthocyanins, color_intensity, hue, od280/od315_of_diluted_wines, proline) and **type of wine cultivar**. This data has **three types of wine Class_0, Class_1, and Class_3**. Here we can build a model to classify the type of wine. The dataset is available in the **scikit-learn library**. ## Loading Data Let's first load the required wine dataset from scikit-learn datasets. 
``` #Import scikit-learn dataset library from sklearn import datasets #Load dataset wine = datasets.load_wine() ``` ## Exploring Data We can print the target and feature names, to make sure we have the right dataset, as such: ``` # print the names of the 13 features print ("FEATURES: ", wine.feature_names) # print the label type of wine(class_0, class_1, class_2) print ("\nLABELS: ", wine.target_names) ``` It's a good idea to always **explore the data a bit**, so we know what we're working with. <br> Here, we can see the first five rows of the dataset are printed, as well as the target variable for the whole dataset. ``` # print data(feature)shape wine.data.shape # print the wine data features (top 5 records) print (wine.data[0:5]) # print the wine labels (0:Class_0, 1:class_2, 2:class_2) print (wine.target) ``` ## Splitting Data First, we separate the columns into dependent and independent variables(or features and label). Then we **split those variables into train and test set**. <img src = https://res.cloudinary.com/dyd911kmh/image/upload/f_auto,q_auto:best/v1543836883/image_6_cfpjpr.png width="50%"> ``` # Import train_test_split function from sklearn.model_selection import train_test_split # Split dataset into training set and test set X_train, X_test, y_train, y_test = train_test_split(wine.data, wine.target, test_size=0.3,random_state=109) # 70% training and 30% test ``` ## Model Generation After splitting, we will generate a **naive bayes classifier model** on the training set and perform prediction on test set features. ``` #Import Gaussian Naive Bayes model from sklearn.naive_bayes import GaussianNB #Create a Gaussian Classifier gnb = GaussianNB() #Train the model using the training sets gnb.fit(X_train, y_train) #Predict the response for test dataset y_pred = gnb.predict(X_test) ``` ## Evaluating Model After model generation, check the accuracy using actual and predicted values. 
``` #Import scikit-learn metrics module for accuracy calculation from sklearn import metrics # Model Accuracy, how often is the classifier correct? print("Accuracy:", metrics.accuracy_score(y_test, y_pred)) ``` We got a classification rate of **90.74%**, considered as good accuracy. ### Confusion Matrix for wine dataset ``` from sklearn.metrics import confusion_matrix import seaborn as sns import matplotlib.pyplot as plt cm = confusion_matrix(y_test, y_pred) x_axis_labels = ['class_0', 'class_1', 'class_2'] y_axis_labels = ['class_0', 'class_1', 'class_2'] f, ax = plt.subplots(figsize =(7,7)) sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Reds", xticklabels=x_axis_labels, yticklabels=y_axis_labels) plt.xlabel("PREDICTED LABEL") plt.ylabel("TRUE LABEL") plt.title('Confusion Matrix for Naive Bayes Classifier (Wine Dataset)') plt.show() ``` ## 3. Using mushrooms.csv ## Importing Required Libraries Let's first load the required libraries. ``` import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import os from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.metrics import classification_report, confusion_matrix ``` ## Reading the csv file of the dataset Pandas read_csv() function imports a CSV file (in our case, ‘mushrooms.csv’) to DataFrame format. ``` df = pd.read_csv("mushrooms.csv") ``` ## Examining the Data After importing the data, to learn more about the dataset, we'll use .head() .info() and .describe() methods. 
``` df.head() df.info() df.describe() ``` ## Shape of the dataset ``` print("Dataset shape:", df.shape) ``` ## Visualizing the count of edible and poisonous mushrooms ``` df['class'].value_counts() df["class"].unique() count = df['class'].value_counts() plt.figure(figsize=(8,7)) sns.barplot(count.index, count.values, alpha=0.8, palette="plasma") plt.ylabel('Count', fontsize=12) plt.xlabel('Class', fontsize=12) plt.title('Number of poisonous/edible mushrooms') plt.show() ``` #### The dataset is balanced. ## Data Manipulation The data is **categorical** so we’ll use **LabelEncoder to** convert it to **ordinal**. <br> **LabelEncoder converts each value in a column to a number.** <br> This approach requires the category column to be of ‘category’ datatype. By default, a non-numerical column is of ‘object’ datatype. From the df.describe() method, we saw that our columns are of ‘object’ datatype. So we will have to change the type to ‘category’ before using this approach. ``` df = df.astype('category') df.dtypes labelencoder=LabelEncoder() for column in df.columns: df[column] = labelencoder.fit_transform(df[column]) df.head() ``` The column "veil-type" is 0 and not contributing to the data so we remove it. ``` df['veil-type'] df=df.drop(["veil-type"],axis=1) ``` ## Quick look at the characteristics of the data The violin plot below represents the distribution of the classification characteristics. It is possible to see that "gill-color" property of the mushroom breaks to two parts, one below 3 and one above 3, that may contribute to the classification. 
``` df_div = pd.melt(df, "class", var_name="Characteristics") fig, ax = plt.subplots(figsize=(16,6)) p = sns.violinplot(ax = ax, x="Characteristics", y="value", hue="class", split = True, data=df_div, inner = 'quartile', palette = 'prism') df_no_class = df.drop(["class"],axis = 1) p.set_xticklabels(rotation = 90, labels = list(df_no_class.columns)); ``` ## Let's look at the correlation between the variables ``` plt.figure(figsize=(14,12)) sns.heatmap(df.corr(),linewidths=.1,cmap="Reds", annot=True, annot_kws={"size": 7}) plt.yticks(rotation=0); ``` ## Preparing the Data Setting X and y axis and splitting the data into train and test respectively. ``` X = df.drop(['class'], axis=1) y = df["class"] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.1) ``` ## Naive Bayes Classifier ``` from sklearn.naive_bayes import GaussianNB nb = GaussianNB() nb.fit(X_train, y_train) print("Test Accuracy: {}%".format(round(nb.score(X_test, y_test)*100, 2))) ``` ### Classification report of Naive Bayes Classifier ``` y_pred_nb = nb.predict(X_test) print("Naive Bayes Classifier report: \n\n", classification_report(y_test, y_pred_nb)) ``` ### Confusion Matrix for Naive Bayes Classifier ``` cm = confusion_matrix(y_test, y_pred_nb) x_axis_labels = ["Edible", "Poisonous"] y_axis_labels = ["Edible", "Poisonous"] f, ax = plt.subplots(figsize =(7,7)) sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Reds", xticklabels=x_axis_labels, yticklabels=y_axis_labels) plt.xlabel("PREDICTED LABEL") plt.ylabel("TRUE LABEL") plt.title('Confusion Matrix for Naive Bayes Classifier (Mushroom Dataset)') plt.show() ``` ## Predictions Predicting some of the X_test results and matching it with true i.e. y_test values using Naive Bayes Classifier. ``` preds = nb.predict(X_test) print(preds[:36]) print(y_test[:36].values) # 0 - Edible # 1 - Poisonous ``` As we can see the predicted and the true values almost are the same. 
## Conclusion From the confusion matrix, we saw that our train and test data is balanced. <br> Naive Bayes Classifier hit **90.74%** accuracy with the wine dataset and **92.62%** accuracy with the mushroom dataset.
github_jupyter
# Deep Learning With Python: Ch. 7 **Advanced deep-learning best practices** <br> --- ## Tópicos * Técnicas de Programação - Funções _Callback_ - Programação Orientada a Objetos: Classes e Herança * API de _callback_ do Keras * Boas Práticas em Deep Learning * Aplicações - Implementação do Concrete Dropout usando Keras - Fitsbook (projeto próprio) --- # Callbacks ### 1. No Python, toda função é um objeto de primeira ordem ``` def q(x): return x ** 2 def p(x, func): print('O quadrado de', x, 'é', func(x)) p(2, q) ``` ### 2. Fluxo de um programa ``` import time def fit(lr): print('Inicio', lr) time.sleep(3) print('Fim', lr) lr = 3 fit(lr) lr = 5 fit(lr) ``` ### 3. Implementação de um callback ``` def fit(func): model = { 'lr': 100, 'epoch': 0 } print('Estado Inicial', model) for i in range(10): lr = func(model) time.sleep(2) model['epoch'] = model['epoch'] + 1 print(model) def dynamic_lr(m): m['lr'] = m['lr'] / 2. fit(dynamic_lr) ``` # Programação Orientada a Objetos ## Classes: Visão Geral Estrutura de dados que associa um conjunto de funções a um estado. ### Exemplo: Definindo e criando instâncias de classes ``` class Bolo: def __init__(self): self.ingrediente = [] self.cobertura = '' self.assado = False self.restante = 1 def assar(self): self.assado = True print('Bolo Assado') def colocar_cobertura(self, cobertura): self.cobertura = cobertura def comer(self): if (self.restante >= .5): self.restante = self.restante - .5 print('Hmmmm..') else: print('O bolo Acabou!!') bolo1 = Bolo() bolo1.cobertura = 'chocolate' bolo1.ingredientes = ['farinha', 'ovo'] bolo1.comer() print(bolo1.restante) bolo2 = Bolo() bolo2.comer() bolo2.comer() print(bolo2.restante) ``` ## Classes: Herança Uma forma de incorporar métodos e atributos de uma classe para outras classes. Garante a padronização do código. 
### Exemplo: Herança ``` class Animal: def __init__(self, pes, voa): self.pes = pes self.voa = voa def comer(self): print('Comendo...') def beber(self): print('Bebendo...') class Cachorro(Animal): def __init__(self): super(Cachorro, self).__init__(4, False) def comer(self): print('Ração') def latir(self): print('Au Au') c = Cachorro() c.comer() c.beber() ``` # API de Callbacks do Keras Quando se trabalha com um grande conjunto de dados e uma rede neural complexa treinando por um número alto de épocas, esperar o término da execução dos métodos **compile** e **fit** (principalmente) torna-se improdutivo, pelo alto investimento de tempo. Os callbacks ajudam a ter maior controle sobre o fluxo do programa. ## Aplicação 1 - Fitsbook [Link do site](https://nmcardoso.github.io/fitsbook) ### Treinamento de exemplo: identificar digitos do MNIST ``` %tensorflow_version 1.x import random import tensorflow as tf import numpy as np from keras import backend as K import os from keras import layers from keras import models from keras.datasets import mnist from keras.utils import to_categorical from keras import callbacks import matplotlib.pyplot as plt !pip3 install git+https://github.com/nmcardoso/fitsbook-python import fitsbook as fb dense_model = models.Sequential() dense_model.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,))) dense_model.add(layers.Dense(10, activation='softmax')) dense_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) dense_model.summary() (dense_train_img, dense_train_lbl), (dense_test_img, dense_test_lbl) = mnist.load_data() dense_train_img = dense_train_img.reshape((60000, 28 * 28)) dense_train_img = dense_train_img.astype('float32') / 255 dense_test_img = dense_test_img.reshape((10000, 28 * 28)) dense_test_img = dense_test_img.astype('float32') / 255 dense_train_lbl = to_categorical(dense_train_lbl) dense_test_lbl = to_categorical(dense_test_lbl) %%time dense_history = 
dense_model.fit(dense_train_img, dense_train_lbl, epochs=10, batch_size=128, validation_split=0.17, callbacks=[fb.callbacks.FitsbookCallback()], validation_data=(dense_test_img, dense_test_lbl)) def plot_figure(history): epochs = history.epoch plt.figure(figsize=(10, 6)) plt.plot(epochs, history.history['acc'], linestyle='--', color='blue', label='Train Acc') plt.plot(epochs, history.history['val_acc'], color='blue', label='Validation Acc') plt.plot(epochs, history.history['loss'], linestyle='--', color='green', label='Train Loss') plt.plot(epochs, history.history['val_loss'], color='green', label='Validation Loss') plt.grid(b=True) plt.title('Dense Layers Metrics', fontsize=17) plt.ylabel('Metrics', fontsize=13) plt.xlabel('Epoch', fontsize=13) plt.legend(fontsize='large') plt.show() plot_figure(dense_history) ``` ### Eventos * on_batch_begin * on_batch_end * on_epoch_begin * on_epoch_end * on_train_begin * on_train_end ### Implementações * [Keras - Módulo Callback](https://github.com/keras-team/keras/blob/master/keras/callbacks/callbacks.py#L118-L269) * [Keras - Função train-loop](https://github.com/keras-team/keras/blob/1cf5218edb23e575a827ca4d849f1d52d21b4bb0/keras/engine/training_arrays.py#L144-L218) * [FitsbookPython - Classe Callback](https://github.com/nmcardoso/fitsbook-python/blob/master/fitsbook/callbacks.py) ## Aplicação 2 - Concrete Dropout * [Artigo teórico](http://papers.neurips.cc/paper/6949-concrete-dropout.pdf) * [Artigo com implementação](https://arxiv.org/pdf/1705.07832.pdf) * [Keras - Classe Wrapper](https://github.com/keras-team/keras/blob/5be4ed3d9e7548dfa9d51d1d045a3f951d11c2b1/keras/layers/wrappers.py#L19-L113) # Deep-learning best practices ### 1. Batch Normalization Normalizar seus dados de entrada para um intervalo e centrar os dados em zero. ### 2. 
Depthwise separable convolution Se as localizações da imagem estão correlacionadas, mas os canais puderem ser interpretados idependentemente, faz sentido separar as camadas de convolução por canal e juntá-las depois. ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAiYAAAELCAIAAABidtpkAAAgAElEQVR4Aeydd18TW9f3n5cSEORSMCJFBEEFQZCqWCjSISRKU0poonQEOYCgNBWMVKlHpASRIggiVaWHnhBG0IMoSA3k+dxn7nuumIQQkAwJ7PmHyczee631WzP7y7S9/x8TLEABoABQACgAFEBFgf+HihVgBCgAFAAKAAWAAkyAHHAQAAWAAkABoABKCgDkoCQ0MAMUAAoABYACADngGAAKAAWAAkABlBQAyEFJaGAGKAAUAAoABQBywDEAFAAKAAWAAigpAJCDktDADFAAKAAUAAoA5IBjACgAFAAKAAVQUgAgByWhgRmgAFAAKAAUAMjZL8fA0NDQysrKfokWxAkUAAoIpQIAOUKZFgE4lZyc/OHDBwE0DJoECgAFgAL8KgCQw69Sol7OwsIiJiZG1KMA/gMFgAIirQBAjkinj1/nFxcXpaSkjI2N+a0AygEFgAJAAQEoAJAjAFGFr8l3795hMJgDBw78/PlT+LwDHgEFgAL7RQGAnH2R6Xv37mH+XSorK/dFwCBIoABQQCgVAMgRyrTstFN6enowcgICAna6bdAeUAAoABTgVwGAHH6VEt1yMzMzYmJiMHI0NDRENxDgOVAAKCDqCgDkiHoGN/d/enq6qKjo6tWrRCKxuLh4bW1t8zqgBFAAKAAUEIACADkCEFUomyQSiTk5OULpGnAKKAAU2C8KAOTsl0wD5OyXTIM4gQJCrABAjhAnZ0ddA8jZUTlBY0ABoMB2FADI2Y5qolgHIEcUswZ8BgrsMQUAcvZYQjcMByBnQ2nADqAAUAAtBQBy0FJ6t+0A5Ox2BoB9oABQgAmQs18OAoCc/ZJpECdQQIgVAMgR4uTsqGsAOTsqJ2gMKAAU2I4CADnbUU0U6wDkiGLWgM9AgT2mAEDOHkvohuEA5GwoDdgBFAAKoKUAQA5aSu+2HYCc3c4AsA8UAAqA1wf2zTEAkLNvUg0CBQoIrwLgKmcXcsNgMGpra7OystJRXC5evOji4oKiwfScnJzOzs5d0BeYBAoABYRVAYActDOTlpamrKxsYGBw69YtLxSXmzdvuru7o2jQy9XVVU1NTU9Pr7q6Gm2VgT2gAFBAKBUAyEE1LaGhodra2nV1ddD+WCYnJwsKChQUFF69eoWq0MAYUAAoIJQKAOSgl5acnJxz584NDAzsD9z8N8p3797Jycn19PSgpzWwBBQACgilAgA56KXl7NmzZWVl/+2J99PavXv3PD090dMaWAIKAAWEUgGAHJTS8vHjxzNnzuwnyvwWa19f36FDh1ZWVlCSG5gBCgAFhFIBgByU0lJZWWltbf1bN7zPfsjJyU1PT6MkNzADFAAKCKUCADkopaWsrMzOzm6fUea3cI8fPz45OYmS3MAMUAAoIJQKAOSglBaAHIAclA41YAYoIMQKAOSglJw/RA6dTn/+/Pnbt29/u3DY7AeFQnn48CGFQtmsIBr7AXJQOtSAGaCAECsAkINScv4EOTQa7dGjR0ePHs3NzeUfDj09PdevXz916tTnz5/5ryW4kgA5KB1qwAxQQIgVAMhBKTl/gpyIiAhra2sjIyM25Hz69IlAILx//767u9vFxeXdu3cIMAYHBy0sLG7evGlgYMCJnLdv3wYHB8fFxcG7uru7o6KifH19S0pK6HT6u3fvcnNzExMT/f39q6qqqFRqWlpaa2srBEEjIyNPnz7t7u5GDPG/
ApCD0qEGzAAFhFgBgByUkvMnyBkaGqLT6XZ2dmzIodPpSUlJOjo6RkZGISEhNBoNAcDExMTw8HB7e7uxsTEbcurq6vT09OLj4yMiImxtbTs7O01MTIKDg3NycgwNDZ89e0YikRQVFUNDQ2NiYrS1tdvb2/39/UNCQiAIqqioMDc3HxoaQgzxvwKQg9KhBswABYRYAYAclJLzJ8iBu3VO5EAQ1NXVpaurq6Ki0tjYyNn7c0VOVFRUUFAQBEFUKrW+vj4vL8/CwmJ0dBSCoMTEREtLSxKJZGFhMTExMTAwcPXq1devX5PJ5KtXr1IoFH9//5iYGE5D/GwByEHpUANmgAJCrABADkrJEQRyenp6rl27FhgYGB8fr6+v39LSwtb1c0UOkUhkxUZWVpatrS2dTocgqKSkxNramkQi4fF4CIKGhobMzMzIZPLIyIilpWV2drapqWlTUxObFT5/AuSgdKgBM0ABIVYAIAel5AgCOWNjY6WlpTQajU6nk8lkztHbuCInOTmZQCBQqdT29nZLS8v8/PwLFy60t7dDEBQeHk4kEjmRA0EQTDUnJyfW23d8wgYuBpCD0qEGzAAFhFgBgByUkiMI5Gza43NFzsDAAB6PV1dX19DQiI2NpdPpMTExOjo6ly5dsrKy6uzs5Iqc5uZmVVXVrKysTY1uVAAgB6VDDZgBCgixAgA5KCXnz5GzUVe+ve1DQ0Pw8xu4OpVKHR4e3l5TfNYCyEHpUANmgAJCrABADkrJETbk8MmJHSwGkIPSoQbMAAWEWAGAHJSSA5ADkIPSoQbMAAWEWAGAHJSSA5ADkIPSoQbMAAWEWAGAHJSSwxU5ra2tjx8/5ufmVU1NzdOnT7mW7Ojo4PEW2ZMnT7jWEtzG4eHhxMREzidDADkoHWrADFBAiBUAyEEpOZzIoVKp3t7epaWl/PT+9fX1GRkZnCWrqqosLS05+3ek5PPnz5F11FaSkpIyMzPZzAHkoHSoATNAASFWACAHpeRwIufNmzd2dnYjIyPv3r3LysqKi4sLCQlBPrR88+bN7du3Q0JC2traIAh69+5dTk4OhUIhkUh5eXm+vr5paWmDg4PBwcGnTp169OjR48ePe3t7JyYm0tLS4AGnKyoqysrK4Neax8bGUlNTvby8kpOTYT719/fHxMT4+vq+evWKjQ1/PgJbS0uLnZ1db28va8sAOSgdasAMUECIFQDIQSk5nMhJSEiABy4jkUjHjh17+PDh06dPjY2N29vby8vLdXR00tPT4+Li9PT0Wltb4W9l2tvb1dTUCARCenr6+fPn09LSnjx5oqenBzeem5v76dMnNTU1b2/viYkJR0fHvLw8HA4Hj2Rja2tbVFRkZ2cXFBQ0NDR07do1Ly+v3NzcS5cu5eTkIGzYkRHYaDQaDocrKytDmoUgCCAHpUMNmAEKCLECADkoJYcTOe7u7vCDHBKJZGxsPDIyAkGQs7MziUQiEonR0dHwMGgWFhbJyckIcnR1dZubmyEIunv3bmhoKJlMvnbt2vDwcEJCgq+vb3Fx8dWrV62srBobG01NTT99+gQjJyUlRUdHJzk5ub29fWJioqysTFNTk0Qi5efne3t729vbI2zYqRHYbt26xfaYCiAHpUMNmAEKCLECADkoJYcTOba2tiQSCYIgEonk6uoKd/pEIjExMZFAICAvC/j6+sbFxSHIuXjxYkdHBwRBoaGhwcHBCHLq6uosLCwCAgISEhIsLS2joqJcXFzodDqMHBqNlpWV5eDgoKioGBwc/OLFi1OnTvn5+QX8u6SkpCDI2akR2Hx8fO7du4c0C65yUDrOgBmggHArAJCDUn44kePu7h4REQEjx9DQsL+/n0KhmJmZlZWV3b9/39XVlUajDQ4OmpqaFhUVbYScyspKMzOzoaGh4eHha9eu6ejo1NfX+/j4qKmpwTxDrnJghhUVFV25cqWqqgouCd9zS05ORtiwIyOw0el0a2trtpflwFUOSocaMAMUEGIFAHJQ
Sg4nckgkkpOTE4wcJSUlHR0dAwODsLCwiYmJ/v5+R0dHQ0NDAwODoKAgKpW6EXLa29u1tLRwONzo6GhQUJC+vv7g4GBWVpaqqio8qRqMnMbGRnhaHR0dnadPn9Lp9Pj4eDU1NSMjI3Nzc9YhqHdkBLb+/n5TU9O6ujqEZOAqB6XjDJgBCgi3AgA5KOWHEzk9PT22trbv37+HcTL+78LaRw8PD1OpVNYtXNcnJibGxsa47mLdODExAb/Shmyk0WgbTbb2hyOw5efn37x5k815cJWD0qEGzAAFhFgBgByUksOJHAiCMjIy7t27ByMHIYGor4yNjd26dQt+UZs1FoAclA41YAYoIMQKAOSglByuyKFSqR8+fBgdHe3v72ftnUV6fXx8/MOHD5whAOSgdKgBM0ABIVYAIAel5HBFDme/vIe3AOSgdKgBM0ABIVYAIAel5ADkAOSgdKgBM0ABIVYAIAel5JDJZGtr6z18EbNpaHJyctPT0yjJDcwABYACQqkAQA5Kaeno6NDQ0Ni0X96rBfr7+w8dOrS8vIyS3MAMUAAoIJQKAOSglJb19XV1dXUymbxXocI7rvv377u7u6OkNTADFAAKCKsCADnoZSYjI0NXV3ejT2F4d9kivbe5uVleXr6rqws9rYEloABQQCgVAMhBNS0BAQH6+vrIDAUiDRI+nS8pKTl+/HhBQQGqQgNjQAGggFAqAJCDdloSExMVFBQuX77s7e3tv6cXT0/Ps2fPampqlpeXo60ysAcUAAoIpQIAObuQlpWVFTKZnJaWlozicv/+/aSkJNQMpqSkPHv27P3797ugLzAJFAAKCKsCADnCmpmd9otEIoGnKTstKmgPKAAU2JoCADlb00t0S+NwuKSkJNH1H3gOFAAK7AEFAHL2QBI3D4HBYBw5cuTatWubFwUlgAJAAaCAwBQAyBGYtMLUcFtbGwaDkZKSWlxcFCa/gC9AAaDA/lIAIGdf5PvBgweYf5f6+vp9ETAIEigAFBBKBQByhDItO+3UlStXYORERETsdNugPaAAUAAowK8CADn8KiW65ebn5yUkJGDk6OnpiW4gwHOgAFBA1BUAyBH1DG7uP51Oj46OPn/+vJ2dXUxMzNra2uZ1QAmgAFAAKCAABQByBCCqUDZJJBJzcnKE0jXgFFAAKLBfFADI2S+ZBsjZL5kGcQIFhFgBgBwhTs6OugaQs6NygsaAAkCB7SgAkLMd1USxDkCOKGYN+AwU2GMKAOTssYRuGA5AzobSgB1AAaAAWgrsGHJqwSLcClhaWgYFBQm3j/vduw8fPqB14gM7QIHdUWDHkIPBYKzAIsQKKCkpaWlpCbGD+921c+fOKSgo7E43AKwCBdBSYMeQg8Vi0fIZ2NmOAuDG2nZUQ7HO0NCQuro6igaBKaDALigAkLMLou+KSYCcXZGdf6MAOfxrBUqKrgIAOaKbu615DpCzNb1QLw2Qg7rkwOAuKACQswui74pJgJxdkZ1/owA5/GsFSoquAgA5opu7rXkOkLM1vVAvDZCDuuTA4C4oAJCzC6LvikmAnF2RnX+jADn8awVKiq4CADmim7uteQ6QszW9UC8NkIO65MDgLigAkLMLou+KSYCcXZGdf6MAOfxrBUqKrgIAOaKbu615DpCzNb1QLw2Qg7rkwOAuKACQswui74pJgJxdkZ1/owA5/GsFSoquAgA5u5C7lZUVMpmclpaWguJiaGh4/fp1FA2mPHv2rLm5eRf0FU2TADmimTfg9dYUAMjZml5/Xjo5OVlRUfHSpUve3t5+KC6+/y4oGvTz9PTU0NDQ0tIik8l/rtuebwEgZ8+nGATIZDIBclA9DO7cuaOvr9/U1ATtm6WkpOT48eOFhYWoCi2CxgByRDBpwOUtKwCQs2XJtl0hIyPj/PnzQ0ND+wY3/xtoc3OzvLx8V1fXtqXbDxUBcvZDlkGMADkoHQPr6+vq6uoVFRX7jTdwvPfv33d3d0dJa9E0A5AjmnkDXm9NAYCc
rem17dIdHR0aGhr7kzcQBPX39x86dGh5eXnbAu75igA5ez7FIEDwLAe9Y4BMJltbW+9b5EAQJCcnNz09jZ7iomYJIEfUMgb83Y4C4CpnO6pto05ZWZmdnd1+Rs7x48cnJye3Id0+qQKQs08Svc/DBMhB6QAAyAHI4X2oAeTw1gfs3RsKAOSglMctIae/v//x48cuLi6enp5VVVU7eG3U09NDoVC21+Do6OinT5+2VxeCIIAc3ocaQA5vfcDevaEAQA5KeeQfOT09PZaWljdu3MjJyYmKitLW1n779u22O3rWikNDQ5aWlttmWHh4eEREBGuDW1oHyOF9qAHk8NYH7N0bCgDkoJRH/pETFRXl6Og4NjYGd+jwEDUQBHV3d0dFRfn6+paUlNDpdAqFQiKR8vLyfH1909LSqFQqBEFv374NDg6Oi4v7/PkzBEGdnZ2RkZFEIvHJkydjY2OFhYVqamq+vr49PT39/f0xMTG+vr6vXr2CIOjdu3e5ubmJiYn+/v4Ik8rLywMCAvz9/cvLyzs6OkxNTa9evVpdXU2j0bKzs318fJKTk0dGRvgED0AO70MNIIe3PmDv3lAAIAelPPKJnImJCTs7u6dPn7L14/39/SYmJsHBwTk5OYaGhs+ePWtvb1dTUyMQCOnp6efPnyeRSHV1dXp6evHx8REREba2tu3t7cbGxlFRUZmZmTo6OikpKfX19Xp6eklJSd3d3deuXfPy8srNzb106VJOTg6JRFJUVAwNDY2JidHW1m5vb3/58qWenl5mZmZ0dLSGhkZ1dbWzs7OLi8vHjx+joqIuX76ck5Pj7u7u4eHB5upGPwFyeB9qADm89QF794YCADko5ZFP5IyNjZmZmWVkZLB13MXFxRYWFqOjoxAEJSYmWlpatre36+rqNjc3QxB09+7d0NDQqKiooKAgCIKoVGp9ff3AwEBzc3N7e3tRUZG5uXlISMjg4KC5uXlVVVVZWZmmpiaJRMrPz/f29ra3tyeRSBYWFhMTEwMDA1evXn39+nVvb29ra+u7d+/S0tI0NTUrKyuDg4MjIiL6+/uNjIzCw8Pz8/OTk5O1tbXZXN3oJ0AO70MNIIe3PmDv3lAAIAelPPKJHAiCPD09w8LCkI6bTCYnJiZmZWXZ2trS6XQIgkpKSqytrdvb2y9evNjR0QFBUGhoaHBwMJFIjImJQSr29PRYW1sbGxv7+vpaWloGBwcjyMnPzz916pSfn1/Av0tKSgqJRMLj8RAEDQ0NmZmZkcnk8vJybW1tGxubwMBAHR0dMpkMI6erq0tHR8fZ2Rmuy+oqYprrCkAO70MNIIe3PmDv3lAAIAelPPKPnFevXp07dw5+oPL58+cLFy4EBwfX1NRcuHChvb0dgqDw8HAikciJnOTkZAKBQKVS29vbLS0tnz59evnyZQqFMjIycu3atcDAQBg5lZWVDQ0NOjo69fX18DVTcnIyJ3I8PT2Dg4MhCKqvrz99+nRFRUVISEhoaOjY2JiFhUV8fDwEQTU1Nb6+vlwBw7kRIIf3oQaQw1sfsHdvKACQg1Ie+UcOBEHp6ena2tpnzpzBYrE3b96kUCh0Oj0mJkZHR+fSpUtWVladnZ2cyBkYGMDj8erq6hoaGrGxsfBVzqVLl0xNTa2srFxcXGg0mouLy9mzZ+vq6uLj49XU1IyMjMzNzVtaWjiRU1ZWpqura25ubm1traurm5OTk5mZqaiomJSUVF1draOjo6ura2BgkJWVxUkXrlsAcngfagA5vPUBe/eGAgA5KOVxS8iBu+yBgQH44Q3Sg1Op1OHhYeQn15WhoSHWWkNDQxMTE6wlkXfMaDQa72GtJyYm2AqMj4/TaDS4tcHBQfhGH2vjPNYBcngfagA5vPUBe/eGAgA5KOVxG8jh0X2L4i6AHN6HGkAOb33A3r2hAEAOSnkEyAHI4X2oAeTw1gfs3RsKAOSglEeAHIAc3ocaQA5vfcDevaEAQA5KeSSTyTY2NqJ4Q2ynfJaX
### 3. Hyperparameter Optimization

When designing a neural network, many seemingly arbitrary decisions have to be made. _How many layers should be stacked? How many units or filters should go in each layer? Should I use ``relu`` or another activation function? How much dropout should I use?_ In practice, good intuition for hyperparameter optimization is built up through many experiments over time. Initial choices will always be suboptimal until you develop a good sense of how to tune the hyperparameters.

The tools below optimize hyperparameters automatically:

* Hyperopt - https://github.com/hyperopt/hyperopt
* Hyperas - https://github.com/maxpumperla/hyperas

### 4. Model Ensembling

Compose the final prediction from the partial predictions of several models.
```python
preds_a = model_a.predict(x_val)
preds_b = model_b.predict(x_val)
preds_c = model_c.predict(x_val)
preds_d = model_d.predict(x_val)

final_preds = 0.5 * preds_a + 0.25 * preds_b + 0.1 * preds_c + 0.15 * preds_d
```
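The snippet above assumes four already-trained models. A self-contained sketch of the same weighted average, using random stand-in prediction arrays (the weights come from the snippet; the data and model names are made up):

```python
import numpy as np

# Stand-in validation predictions for four hypothetical models
# (in a real pipeline these would come from model.predict(x_val)).
rng = np.random.default_rng(42)
preds_a, preds_b, preds_c, preds_d = (rng.random(8) for _ in range(4))

weights = (0.5, 0.25, 0.1, 0.15)
assert abs(sum(weights) - 1.0) < 1e-9  # ensemble weights should sum to 1

final_preds = (weights[0] * preds_a + weights[1] * preds_b
               + weights[2] * preds_c + weights[3] * preds_d)

# A convex combination of probabilities stays in [0, 1].
assert final_preds.shape == (8,)
assert ((final_preds >= 0) & (final_preds <= 1)).all()
```

The weights are typically tuned on a held-out validation set, with the stronger models receiving the larger weights.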
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

columns = ['Age', 'Gender', 'Total Bilirubin', 'Direct Bilirubin',
           'Alkaline Phosphotase', 'Alamine Aminotransferase',
           'Aspartate Aminotransferase', 'Total Protiens', 'Albumin',
           'Albumin and Globulin Ratio', 'Selector']
data = pd.read_csv('/content/Indian Liver Patient Dataset (ILPD).csv', names=columns)
data.head()
data.info()

## Dealing with missing values.
# Out of 583 rows, only 4 have missing values in the Albumin and Globulin
# Ratio column. That is only 0.68% of the data, so if these rows are
# omitted the dataset should not be affected.

## Convert the Gender values from strings to integers
from sklearn.preprocessing import LabelEncoder
data.loc[:, 'Gender'] = LabelEncoder().fit_transform(data['Gender'])
data.head()

## Drop the NaN rows
data = data.dropna(how='any', axis=0)
data.head()
data.info()
data.describe()

sns.pairplot(data)
sns.heatmap(data.corr(), annot=True)

from sklearn.model_selection import train_test_split

X = data.drop('Selector', axis=1)
y = data['Selector']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

## Scaling the features.
# Fit the scaler on the training split only, then transform both splits,
# so that no information about the test set leaks into the scaling.
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

# Using unscaled data
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("k-NN Classifier on unscaled test data:")
print("Accuracy:", knn.score(X_test, y_test))
print("Precision:", precision_score(y_test, knn.predict(X_test)))
print("Recall:", recall_score(y_test, knn.predict(X_test)))
print("F-1 score:", f1_score(y_test, knn.predict(X_test)))

# Using scaled data
knn_scaled = KNeighborsClassifier(n_neighbors=5)
knn_scaled.fit(X_train_scaled, y_train)
print("k-NN Classifier on scaled test data:")
print("Accuracy:", knn_scaled.score(X_test_scaled, y_test))
print("Precision:", precision_score(y_test, knn_scaled.predict(X_test_scaled)))
print("Recall:", recall_score(y_test, knn_scaled.predict(X_test_scaled)))
print("F-1 score:", f1_score(y_test, knn_scaled.predict(X_test_scaled)))

# Using unscaled data
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=20)
rfc.fit(X_train, y_train)
print("Random Forest Classifier on unscaled test data:")
print("Accuracy:", rfc.score(X_test, y_test))
print("Precision:", precision_score(y_test, rfc.predict(X_test)))
print("Recall:", recall_score(y_test, rfc.predict(X_test)))
print("F-1 score:", f1_score(y_test, rfc.predict(X_test)))

# Using scaled data
rfc_scaled = RandomForestClassifier(n_estimators=20)
rfc_scaled.fit(X_train_scaled, y_train)
print("Random Forest Classifier on scaled test data:")
print("Accuracy:", rfc_scaled.score(X_test_scaled, y_test))
print("Precision:", precision_score(y_test, rfc_scaled.predict(X_test_scaled)))
print("Recall:", recall_score(y_test, rfc_scaled.predict(X_test_scaled)))
print("F-1 score:", f1_score(y_test, rfc_scaled.predict(X_test_scaled)))

# Using unscaled data
from sklearn.svm import SVC
svc_clf = SVC(C=0.1, kernel='rbf').fit(X_train, y_train)
print("SVM Classifier on unscaled test data:")
print("Accuracy:", svc_clf.score(X_test, y_test))
print("Precision:", precision_score(y_test, svc_clf.predict(X_test)))
print("Recall:", recall_score(y_test, svc_clf.predict(X_test)))
print("F-1 score:", f1_score(y_test, svc_clf.predict(X_test)))

# Using scaled data
svc_clf_scaled = SVC(C=0.1, kernel='rbf').fit(X_train_scaled, y_train)
print("SVM Classifier on scaled test data:")
print("Accuracy:", svc_clf_scaled.score(X_test_scaled, y_test))
print("Precision:", precision_score(y_test, svc_clf_scaled.predict(X_test_scaled)))
print("Recall:", recall_score(y_test, svc_clf_scaled.predict(X_test_scaled)))
print("F-1 score:", f1_score(y_test, svc_clf_scaled.predict(X_test_scaled)))
```
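A tidier way to compare scaled pipelines is to put the scaler inside a `Pipeline`, so the scaler is refit on each training fold and the test fold never leaks into its statistics. A hedged sketch of the idea on synthetic data (the ILPD CSV path above is assumed unavailable here, so `make_classification` stands in for the real dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the liver dataset; shapes are illustrative only.
X, y = make_classification(n_samples=300, n_features=10, random_state=42)

# The scaler is fit on each training fold only, so the held-out fold
# never leaks into the MinMaxScaler statistics.
pipe = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(pipe, X, y, cv=5, scoring='accuracy')
mean_acc = scores.mean()
```

Cross-validation also gives a less noisy comparison between models than a single 70/30 split.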
github_jupyter
[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/06_legend.ipynb) [![image](https://mybinder.org/badge_logo.svg)](https://gishub.org/leafmap-binder)

**Adding custom legends to the map**

Uncomment the following line to install [leafmap](https://leafmap.org) if needed.

```
# !pip install leafmap

import leafmap
```

List the available built-in legends.

```
legends = leafmap.builtin_legends
for legend in legends:
    print(legend)
```

National Land Cover Database (NLCD): https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD

Create an interactive map.

```
Map = leafmap.Map()
```

Add a WMS layer and a built-in legend to the map.

```
url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2016_Land_Cover_L48/wms?"
Map.add_wms_layer(
    url,
    layers="NLCD_2016_Land_Cover_L48",
    name="NLCD 2016 CONUS Land Cover",
    format="image/png",
    transparent=True,
)
Map.add_legend(builtin_legend='NLCD')
Map
```

Add the National Wetlands Inventory to the map.

```
Map = leafmap.Map(google_map="HYBRID")

url1 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands/MapServer/WMSServer?"
Map.add_wms_layer(
    url1, layers="1", format='image/png', transparent=True, name="NWI Wetlands Vector"
)

url2 = "https://www.fws.gov/wetlands/arcgis/services/Wetlands_Raster/ImageServer/WMSServer?"
Map.add_wms_layer(
    url2, layers="0", format='image/png', transparent=True, name="NWI Wetlands Raster"
)

Map.add_legend(builtin_legend="NWI")
Map
```

**Add custom legends**

There are two ways you can add custom legends:

1. Define legend labels and colors
2. Define a legend dictionary

Define legend labels and colors.

```
Map = leafmap.Map()

labels = ['One', 'Two', 'Three', 'Four', 'etc']
# Colors can be defined using either hex codes or RGB tuples (0-255 per channel)
colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68, 123)]

Map.add_legend(title='Legend', labels=labels, colors=colors)
Map
```

Define a legend dictionary.

```
Map = leafmap.Map()

url = "https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2016_Land_Cover_L48/wms?"
Map.add_wms_layer(
    url,
    layers="NLCD_2016_Land_Cover_L48",
    name="NLCD 2016 CONUS Land Cover",
    format="image/png",
    transparent=True,
)

legend_dict = {
    '11 Open Water': '466b9f',
    '12 Perennial Ice/Snow': 'd1def8',
    '21 Developed, Open Space': 'dec5c5',
    '22 Developed, Low Intensity': 'd99282',
    '23 Developed, Medium Intensity': 'eb0000',
    '24 Developed High Intensity': 'ab0000',
    '31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
    '41 Deciduous Forest': '68ab5f',
    '42 Evergreen Forest': '1c5f2c',
    '43 Mixed Forest': 'b5c58f',
    '51 Dwarf Scrub': 'af963c',
    '52 Shrub/Scrub': 'ccb879',
    '71 Grassland/Herbaceous': 'dfdfc2',
    '72 Sedge/Herbaceous': 'd1d182',
    '73 Lichens': 'a3cc51',
    '74 Moss': '82ba9e',
    '81 Pasture/Hay': 'dcd939',
    '82 Cultivated Crops': 'ab6c28',
    '90 Woody Wetlands': 'b8d9eb',
    '95 Emergent Herbaceous Wetlands': '6c9fb8',
}

Map.add_legend(title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
```
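The two forms are interchangeable: a labels/colors pair can be zipped into the dictionary form. A small pure-Python sketch (the commented `add_legend` call assumes a `leafmap.Map` named `Map` as in the cells above; the NLCD example suggests the dictionary form takes bare hex strings without the leading `#`):

```python
labels = ['One', 'Two', 'Three', 'Four', 'Five']
colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']

# Build the dictionary form, stripping the leading '#' from each hex code
# to match the style of the NLCD legend_dict above.
legend_dict = {label: color.lstrip('#') for label, color in zip(labels, colors)}

# Map.add_legend(title='Legend', legend_dict=legend_dict)  # on a leafmap.Map
```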
github_jupyter
```
# Check whether a GPU is available
import tensorflow as tf
from tensorflow.python.client import device_lib
from warnings import filterwarnings
filterwarnings('ignore')

print(device_lib.list_local_devices())  # list of DeviceAttributes

tf.test.is_gpu_available()  # returns True/False
# Or only check for GPUs with CUDA support
tf.test.is_gpu_available(cuda_only=True)

import pandas as pd
import numpy as np
import os, time, cv2, datetime
import matplotlib.pyplot as plt
from tqdm import tqdm

SIZE = (224, 224)
POSITIVES_PATH_TRAIN = 'data/Train/Class1/'
NEGATIVES_PATH_TRAIN = 'data/Train/Class2/'
POSITIVES_PATH_VALID = 'data/Val/Class1/'
NEGATIVES_PATH_VALID = 'data/Val/Class2/'
# POSITIVES_PATH_TEST =
# NEGATIVES_PATH_TEST =

from keras.applications import VGG16

# Load the VGG model
vgg_conv = VGG16(weights='imagenet', include_top=False,
                 input_shape=(SIZE[0], SIZE[1], 3))

# Freeze all layers except the last 4
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False

# Check which layers are left trainable for fine-tuning
for layer in vgg_conv.layers:
    print(layer, layer.trainable)

from keras import models
from keras import layers
from keras import optimizers

# Create the model
def build_feat_extractor():
    model = models.Sequential()
    # Add the VGG convolutional base
    model.add(vgg_conv)
    # Add new layers
    model.add(layers.Flatten())
    model.add(layers.Dense(1024, activation='relu'))
    model.add(layers.Dropout(0.2))
    model.add(layers.Dense(256, activation='relu'))
    model.add(layers.Dense(2, activation='softmax'))
    return model

build_feat_extractor().summary()

from keras.preprocessing.image import ImageDataGenerator

train_batchsize = 64

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(
    'data/Train/', class_mode='categorical',
    batch_size=train_batchsize, target_size=SIZE)

val_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
val_generator = val_datagen.flow_from_directory(
    'data/Val/', class_mode='categorical',
    batch_size=train_batchsize, target_size=SIZE)

# Compile the model
model = build_feat_extractor()
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

# Train the model
model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples / train_generator.batch_size,
    validation_data=val_generator,
    validation_steps=val_generator.samples / val_generator.batch_size,
    epochs=10,
    verbose=2)

# Save the trained model to disk
model.save('weights/Feature_Extractor.h5')

from keras.models import Model
import keras.layers as L

# Cut the network at the 1024-unit dense layer and use it as a feature extractor
inp = model.input
out = model.layers[-4].output
feat_extractor = Model(inputs=[inp], outputs=[out])
feat_extractor.summary()
feat_extractor.compile(loss='categorical_crossentropy',
                       optimizer=optimizers.RMSprop(lr=1e-4),
                       metrics=['acc'])

import re
from keras.utils import to_categorical

LOOK_BACK = 4

def data_to_lstm_format(POSITIVES_PATH, NEGATIVES_PATH, look_back=4):
    data = np.array([])
    labels = np.array([])

    # POSITIVE LABELS
    numbers = []
    for value in os.listdir(POSITIVES_PATH):
        numbers.append(int(re.findall(r'\d+', value.split('_')[2])[0]))
    # filter by video
    for numb in np.unique(numbers):
        frames = []
        # collect the image names belonging to this video
        for value in os.listdir(POSITIVES_PATH):
            if int(re.findall(r'\d+', value.split('_')[2])[0]) == numb:
                frames.append(value)
        # sort the images by frame number
        frames = sorted(
            frames,
            key=lambda x: int(re.findall(r'\d+', x.split('_')[-1].split('.')[0])[0]))
        image_data = np.zeros((len(frames), 1024))
        # get the VGG16 feature vector for each frame and stack them
        for index, image in enumerate(frames):
            img = cv2.imread(POSITIVES_PATH + image)
            # match the 1./255 rescaling used during training
            img = img.astype('float32') / 255.
            vect = feat_extractor.predict(img.reshape(1, 224, 224, 3))
            image_data[index, :] = vect
        # for each frame, build a tensor with look-back frames
        stacked_data = np.zeros((len(frames), look_back, 1024))
        for index in range(len(frames)):
            labels = np.append(labels, [1])
            stacked_data[index, 0, :] = image_data[index]
            for lb in range(1, look_back):
                if index - lb >= 0:
                    stacked_data[index, lb, :] = image_data[index - lb]
                else:
                    stacked_data[index, lb, :] = np.zeros(1024)
        if data.shape[0] == 0:
            data = stacked_data
        else:
            data = np.concatenate([data, stacked_data], axis=0)

    # NEGATIVE LABELS (reset the list of video numbers collected above,
    # otherwise the positive video numbers would be reused here)
    numbers = []
    for value in os.listdir(NEGATIVES_PATH):
        numbers.append(int(re.findall(r'\d+', value.split('_')[2])[0]))
    # filter by video
    for numb in np.unique(numbers):
        frames = []
        # collect the image names belonging to this video
        for value in os.listdir(NEGATIVES_PATH):
            if int(re.findall(r'\d+', value.split('_')[2])[0]) == numb:
                frames.append(value)
        # sort the images by frame number
        frames = sorted(
            frames,
            key=lambda x: int(re.findall(r'\d+', x.split('_')[-1].split('.')[0])[0]))
        image_data = np.zeros((len(frames), 1024))
        # get the VGG16 feature vector for each frame and stack them
        for index, image in enumerate(frames):
            img = cv2.imread(NEGATIVES_PATH + image)
            # match the 1./255 rescaling used during training
            img = img.astype('float32') / 255.
            vect = feat_extractor.predict(img.reshape(1, 224, 224, 3))
            image_data[index, :] = vect
        # for each frame, build a tensor with look-back frames
        stacked_data = np.zeros((len(frames), look_back, 1024))
        for index in range(len(frames)):
            labels = np.append(labels, [0])
            stacked_data[index, 0, :] = image_data[index]
            for lb in range(1, look_back):
                if index - lb >= 0:
                    stacked_data[index, lb, :] = image_data[index - lb]
                else:
                    stacked_data[index, lb, :] = np.zeros(1024)
        if data.shape[0] == 0:
            data = stacked_data
        else:
            data = np.concatenate([data, stacked_data], axis=0)

    # one-hot labels
    labels = to_categorical(labels)
    return data, labels

tr_data, tr_labels = data_to_lstm_format(
    POSITIVES_PATH_TRAIN, NEGATIVES_PATH_TRAIN, look_back=LOOK_BACK)
val_data, val_labels = data_to_lstm_format(
    POSITIVES_PATH_VALID, NEGATIVES_PATH_VALID, look_back=LOOK_BACK)

from keras.optimizers import RMSprop

num_features = 1024

def build_model():
    inp = L.Input(shape=(LOOK_BACK, num_features))
    # Use CuDNNLSTM if your machine supports CUDA;
    # training is significantly faster than with LSTM.
    # x = L.LSTM(64, return_sequences=True)(inp)
    x = L.CuDNNLSTM(64, return_sequences=True)(inp)
    x = L.Dropout(0.2)(x)
    # x = L.LSTM(16)(x)
    x = L.CuDNNLSTM(16)(x)
    out = L.Dense(2, activation='softmax')(x)
    model = Model(inputs=[inp], outputs=[out])
    model.compile(loss='categorical_crossentropy',
                  optimizer=RMSprop(lr=1e-4),
                  metrics=['acc'])
    return model

from keras.callbacks import TensorBoard
# https://www.tensorflow.org/tensorboard/get_started
log_dir = ("data/_training_logs/rnn/"
           + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = TensorBoard(log_dir=log_dir, histogram_freq=0)

model = build_model()
history = model.fit(tr_data, tr_labels,
                    validation_data=(val_data, val_labels),
                    callbacks=[tensorboard_callback],
                    verbose=2, epochs=20, batch_size=64)

# Save the trained model weights to disk
model.save('weights/RNN.h5')
```
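The look-back stacking inside `data_to_lstm_format` is easier to verify in isolation. A minimal sketch of the same windowing logic with a tiny feature matrix (the dimensions are illustrative, not the 1024-d VGG features):

```python
import numpy as np

def stack_lookback(features, look_back):
    """For each frame i, stack frames i, i-1, ..., i-look_back+1;
    positions before the first frame are zero-padded."""
    n, dim = features.shape
    stacked = np.zeros((n, look_back, dim))
    for i in range(n):
        for lb in range(look_back):
            if i - lb >= 0:
                stacked[i, lb] = features[i - lb]
    return stacked

feats = np.arange(12, dtype=float).reshape(4, 3)  # 4 frames, 3 features each
out = stack_lookback(feats, look_back=2)
```

This produces one `(look_back, dim)` tensor per frame, which is the shape the LSTM input layer above expects.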
github_jupyter
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)

# Ensemble Kalman Filters

```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```

> I am not well versed with ensemble filters. I have implemented one for this book, and made it work, but I have not used one in real life. Different sources use slightly different forms of these equations. If I implement the equations given in the sources the filter does not work. It is possible that I am doing something wrong. However, in various places on the web I have seen comments by people stating that they do the kinds of things I have done in my filter to make it work. In short, I do not understand this topic well, but choose to present my lack of knowledge rather than to gloss over the subject. I hope to master this topic in the future and to author a more definitive chapter. At the end of the chapter I document my current confusion and questions. In any case, if I got confused by the sources perhaps you will be too, so documenting my confusion may help you avoid the same.

The ensemble Kalman filter (EnKF) is very similar to the unscented Kalman filter (UKF) of the last chapter. If you recall, the UKF uses a set of deterministically chosen, weighted sigma points passed through nonlinear state and measurement functions. After the sigma points are passed through the function, we find the mean and covariance of the points and use this as the filter's new mean and covariance. It is only an approximation of the true value, and thus suboptimal, but in practice the filter is highly accurate. It has the advantage of often producing more accurate estimates than the EKF does, and also does not require you to analytically derive the linearization of the state and measurement equations.
The ensemble Kalman filter works in a similar way, except it uses a *Monte Carlo* method to choose a large number of sigma points. It came about from the geophysical sciences as an answer for the very large states and systems needed to model things such as the ocean and atmosphere. There is an interesting article on its development in weather modeling in *SIAM News* [1].

The filter starts by randomly generating a large number of points distributed about the filter's initial state. This distribution is proportional to the filter's covariance $\mathbf{P}$. In other words, 68% of the points will be within one standard deviation of the mean, 95% within two standard deviations, and so on. Let's look at this in two dimensions. We will use the `numpy.random.multivariate_normal()` function to randomly create points from a multivariate normal distribution drawn from the mean (5, 3) with the covariance

$$\begin{bmatrix} 32 & 15 \\ 15 & 40 \end{bmatrix}$$

I've drawn the covariance ellipse representing two standard deviations to illustrate how the points are distributed.

```
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import multivariate_normal
from filterpy.stats import (covariance_ellipse,
                            plot_covariance_ellipse)

mean = (5, 3)
P = np.array([[32, 15], [15., 40.]])

x, y = multivariate_normal(mean=mean, cov=P, size=2500).T
plt.scatter(x, y, alpha=0.3, marker='.')
plt.axis('equal')

plot_covariance_ellipse(mean=mean, cov=P,
                        variance=2.**2,
                        facecolor='none')
```

## The Algorithm

As I already stated, when the filter is initialized a large number of sigma points are drawn from the initial state ($\mathbf{x}$) and covariance ($\mathbf{P}$). From there the algorithm proceeds very similarly to the UKF. During the prediction step the sigma points are passed through the state transition function, and then perturbed by adding a bit of noise to account for the process noise.
During the update step the sigma points are translated into measurement space by passing them through the measurement function, and they are perturbed by a small amount to account for the measurement noise. The Kalman gain is computed from the cross covariance between the sigma points and the measurement sigmas, scaled by the inverse of the measurement covariance, and the state is updated by scaling the residual by the gain.

We already mentioned the main difference between the UKF and EnKF: the UKF chooses the sigma points deterministically. There is another difference, implied by the algorithm above. With the UKF we generate new sigma points during each predict step, and after passing the points through the nonlinear function we reconstitute them into a mean and covariance by using the *unscented transform*. The EnKF keeps propagating the originally created sigma points; we only need to compute a mean and covariance as outputs for the filter!

Let's look at the equations for the filter. As usual, I will leave out the typical subscripts and superscripts; I am expressing an algorithm, not mathematical functions. Here $N$ is the number of sigma points and $\chi$ is the set of sigma points.

### Initialize Step

$$\boldsymbol\chi \sim \mathcal{N}(\mathbf{x}_0, \mathbf{P}_0)$$

This says to select the sigma points from the filter's initial mean and covariance. In code this might look like

```python
N = 1000
sigmas = multivariate_normal(mean=x, cov=P, size=N)
```

### Predict Step

$$
\begin{aligned}
\boldsymbol\chi &= f(\boldsymbol\chi, \mathbf{u}) + v_Q \\
\mathbf{x} &= \frac{1}{N} \sum_1^N \boldsymbol\chi
\end{aligned}
$$

That is short and sweet, but perhaps not entirely clear. The first line passes all of the sigma points through a user-supplied state transition function and then adds some zero-mean noise distributed according to the $\mathbf{Q}$ matrix. In Python we might write

```python
for i, s in enumerate(sigmas):
    sigmas[i] = fx(x=s, dt=0.1, u=0.)

sigmas += multivariate_normal([0]*dim_x, Q, N)  # zero-mean process noise
```

The second line computes the mean from the sigmas. In Python we will take advantage of `numpy.mean` to do this very concisely and quickly.
```python
x = np.mean(sigmas, axis=0)
```

We can now optionally compute the covariance of the mean. The algorithm does not need to compute this value, but it is often useful for analysis. The equation is

$$\mathbf{P} = \frac{1}{N-1}\sum_1^N[\boldsymbol\chi-\mathbf{x}^-][\boldsymbol\chi-\mathbf{x}^-]^\mathsf{T}$$

$\boldsymbol\chi-\mathbf{x}^-$ is a one dimensional vector, so we will use `numpy.outer` to compute the $[\boldsymbol\chi-\mathbf{x}^-][\boldsymbol\chi-\mathbf{x}^-]^\mathsf{T}$ term. In Python we might write

```python
P = 0
for s in sigmas:
    P += outer(s-x, s-x)
P = P / (N-1)
```

### Update Step

In the update step we pass the sigma points through the measurement function, compute the mean and covariance of the sigma points, compute the Kalman gain from the covariance, and then update the Kalman state by scaling the residual by the Kalman gain. The equations are

$$
\begin{aligned}
\boldsymbol\chi_h &= h(\boldsymbol\chi, u)\\
\mathbf{z}_{mean} &= \frac{1}{N}\sum_1^N \boldsymbol\chi_h \\
\\
\mathbf{P}_{zz} &= \frac{1}{N-1}\sum_1^N [\boldsymbol\chi_h - \mathbf{z}_{mean}][\boldsymbol\chi_h - \mathbf{z}_{mean}]^\mathsf{T} + \mathbf{R} \\
\mathbf{P}_{xz} &= \frac{1}{N-1}\sum_1^N [\boldsymbol\chi - \mathbf{x}^-][\boldsymbol\chi_h - \mathbf{z}_{mean}]^\mathsf{T} \\
\\
\mathbf{K} &= \mathbf{P}_{xz} \mathbf{P}_{zz}^{-1}\\
\boldsymbol\chi &= \boldsymbol\chi + \mathbf{K}[\mathbf{z} -\boldsymbol\chi_h + \mathbf{v}_R] \\
\\
\mathbf{x} &= \frac{1}{N} \sum_1^N \boldsymbol\chi \\
\mathbf{P} &= \mathbf{P} - \mathbf{KP}_{zz}\mathbf{K}^\mathsf{T}
\end{aligned}
$$

This is very similar to the linear KF and the UKF. Let's just go line by line.

The first line,

$$\boldsymbol\chi_h = h(\boldsymbol\chi, u),$$

just passes the sigma points through the measurement function $h$. We name the resulting points $\chi_h$ to distinguish them from the sigma points. In Python we could write this as

```python
sigmas_h = h(sigmas, u)
```

The next line computes the mean of the measurement sigmas.
$$\mathbf{z}_{mean} = \frac{1}{N}\sum_1^N \boldsymbol\chi_h$$

In Python we write

```python
z_mean = np.mean(sigmas_h, axis=0)
```

Now that we have the mean of the measurement sigmas we can compute the covariance for every measurement sigma point, and the *cross variance* for the measurement sigma points vs the sigma points. That is expressed by these two equations

$$
\begin{aligned}
\mathbf{P}_{zz} &= \frac{1}{N-1}\sum_1^N [\boldsymbol\chi_h - \mathbf{z}_{mean}][\boldsymbol\chi_h - \mathbf{z}_{mean}]^\mathsf{T} + \mathbf{R} \\
\mathbf{P}_{xz} &= \frac{1}{N-1}\sum_1^N [\boldsymbol\chi - \mathbf{x}^-][\boldsymbol\chi_h - \mathbf{z}_{mean}]^\mathsf{T}
\end{aligned}$$

We can express this in Python with

```python
P_zz = 0
for sigma in sigmas_h:
    s = sigma - z_mean
    P_zz += outer(s, s)
P_zz = P_zz / (N-1) + R

P_xz = 0
for i in range(N):
    P_xz += outer(sigmas[i] - x, sigmas_h[i] - z_mean)
P_xz /= N-1
```

Computation of the Kalman gain is straightforward: $\mathbf{K} = \mathbf{P}_{xz} \mathbf{P}_{zz}^{-1}$. In Python this is

```python
K = np.dot(P_xz, inv(P_zz))
```

Next, we update the sigma points with

$$\boldsymbol\chi = \boldsymbol\chi + \mathbf{K}[\mathbf{z} -\boldsymbol\chi_h + \mathbf{v}_R]$$

Here $\mathbf{v}_R$ is the perturbation that we add to the sigmas. In Python we can implement this with

```python
v_r = multivariate_normal([0]*dim_z, R, N)
for i in range(N):
    sigmas[i] += dot(K, z + v_r[i] - sigmas_h[i])
```

Our final step is to recompute the filter's mean and covariance.

```python
x = np.mean(sigmas, axis=0)
P = P - dot3(K, P_zz, K.T)
```

## Implementation and Example

I have implemented an EnKF in the `FilterPy` library. It is in many ways a toy. Filtering with a large number of sigma points gives us very slow performance. Furthermore, there are many minor variations on the algorithm in the literature. I wrote this mostly because I was interested in learning a bit about the filter.
I have not used it for a real world problem, and I can give no advice on using the filter for the large problems for which it is suited. Therefore I will confine my comments to implementing a very simple filter. I will use it to track an object in one dimension, and compare the output to a linear Kalman filter. This is a filter we have designed many times already in this book, so I will design it with little comment. Our state vector will be

$$\mathbf{x} = \begin{bmatrix}x\\ \dot{x}\end{bmatrix}$$

The state transition function is

$$\mathbf{F} = \begin{bmatrix}1&1\\0&1\end{bmatrix}$$

and the measurement function is

$$\mathbf{H} = \begin{bmatrix}1&0\end{bmatrix}$$

The EnKF is designed for nonlinear problems, so instead of using matrices to implement the state transition and measurement functions you will need to supply Python functions. For this problem they can be written as:

```python
def hx(x):
    return np.array([x[0]])

def fx(x, dt):
    return np.dot(F, x)
```

One final thing: the EnKF code, like the UKF code, uses a single dimension for $\mathbf{x}$, not a two dimensional column matrix as used by the linear Kalman filter code.

Without further ado, here is the code.

```
from numpy.random import randn
from filterpy.kalman import EnsembleKalmanFilter as EnKF
from filterpy.kalman import KalmanFilter
from filterpy.common import Q_discrete_white_noise
import book_plots as bp

np.random.seed(1234)

def hx(x):
    return np.array([x[0]])

def fx(x, dt):
    return np.dot(F, x)

F = np.array([[1., 1.], [0., 1.]])

x = np.array([0., 1.])
P = np.eye(2) * 100.
enf = EnKF(x=x, P=P, dim_z=1, dt=1., N=20, hx=hx, fx=fx)

std_noise = 10.
enf.R *= std_noise**2
enf.Q = Q_discrete_white_noise(2, 1., .001)

kf = KalmanFilter(dim_x=2, dim_z=1)
kf.x = np.array([x]).T
kf.F = F.copy()
kf.P = P.copy()
kf.R = enf.R.copy()
kf.Q = enf.Q.copy()
kf.H = np.array([[1., 0.]])

measurements = []
results = []
ps = []
kf_results = []

zs = []
for t in range(0, 100):
    # create measurement = t plus white noise
    z = t + randn() * std_noise
    zs.append(z)

    enf.predict()
    enf.update(np.asarray([z]))

    kf.predict()
    kf.update(np.asarray([[z]]))

    # save data
    results.append(enf.x[0])
    kf_results.append(kf.x[0, 0])
    measurements.append(z)
    ps.append(3 * (enf.P[0, 0] ** .5))

results = np.asarray(results)
ps = np.asarray(ps)

plt.plot(results, label='EnKF')
plt.plot(kf_results, label='KF', c='b', lw=2)
bp.plot_measurements(measurements)
plt.plot(results - ps, c='k', linestyle=':', lw=1, label='3$\sigma$')
plt.plot(results + ps, c='k', linestyle=':', lw=1)
plt.fill_between(range(100), results - ps, results + ps, facecolor='y', alpha=.3)
plt.legend(loc='best');
```

It can be a bit difficult to see, but the KF and EnKF start off slightly differently, but soon converge to producing nearly the same values. The EnKF is a suboptimal filter, so it will not produce the optimal solution that the KF produces. However, I deliberately chose $N$ to be quite small (20) to guarantee that the EnKF output is quite suboptimal. If I chose a more reasonable number such as 2000 you would be unable to see the difference between the two filter outputs on this graph.

## Outstanding Questions

All of this should be considered as *my* questions, not lingering questions in the literature. However, I am copying equations directly from well known sources in the field, and they do not address the discrepancies.
First, in Brown [2] we have all sums multiplied by $\frac{1}{N}$, as in $$ \hat{x} = \frac{1}{N}\sum_{i=1}^N\chi_k^{(i)}$$ The same equation in Crassidis [3] reads (I'll use the same notation as in Brown, although Crassidis' is different) $$ \hat{x} = \frac{1}{N-1}\sum_{i=1}^N\chi_k^{(i)}$$ The same is true in both sources for the sums in the computation for the covariances. Crassidis, in the context of talking about the filter's covariance, states that $N-1$ is used to ensure an unbiased estimate. Given the following standard equations for the mean and standard deviation (p.2 of Crassidis), this makes sense for the covariance. $$ \begin{aligned} \mu &= \frac{1}{N}\sum_{i=1}^N[\tilde{z}(t_i) - \hat{z}(t_i)] \\ \sigma^2 &= \frac{1}{N-1}\sum_{i=1}^N\{[\tilde{z}(t_i) - \hat{z}(t_i)] - \mu\}^2 \end{aligned} $$ However, I see no justification or reason to use $N-1$ to compute the mean. If I use $N-1$ in the filter for the mean the filter does not converge and the state essentially follows the measurements without any filtering. However, I do see a reason to use it for the covariance as in Crassidis, in contrast to Brown. Again, I support my decision empirically - $N-1$ works in the implementation of the filter, $N$ does not. My second question relates to the use of the $\mathbf{R}$ matrix. In Brown $\mathbf{R}$ is added to $\mathbf{P}_{zz}$ whereas it isn't in Crassidis and other sources. I have read on the web notes by other implementers that adding R helps the filter, and it certainly seems reasonable and necessary to me, so this is what I do. My third question relates to the computation of the covariance $\mathbf{P}$. Again, we have different equations in Crassidis and Brown. I have chosen the implementation given in Brown as it seems to give me the behavior that I expect (convergence of $\mathbf{P}$ over time) and it closely compares to the form in the linear KF. In contrast I find the equations in Crassidis do not seem to converge much. 
My fourth question relates to the state estimate update. In Brown we have

$$\boldsymbol\chi = \boldsymbol\chi + \mathbf{K}[\mathbf{z} -\mathbf{z}_{mean} + \mathbf{v}_R]$$

whereas in Crassidis we have

$$\boldsymbol\chi = \boldsymbol\chi + \mathbf{K}[\mathbf{z} -\boldsymbol\chi_h + \mathbf{v}_R]$$

To me the Crassidis equation seems logical, and it produces a filter that performs like the linear KF for linear problems, so that is the formulation that I have chosen.

I am not comfortable saying either book is wrong; it is quite possible that I missed some point that makes each set of equations work. I can say that when I implemented them as written I did not get a filter that worked. I define "work" as performing essentially the same as the linear KF for linear problems. Between reading implementation notes on the web and reasoning about various issues I have chosen the implementation in this chapter, which does in fact seem to work correctly. I have yet to explore the significant amount of original literature that will likely definitively explain the discrepancies. I would like to leave this here in some form even if I do find an explanation that reconciles the various differences, since if I got confused by these books then probably others will be as well.

## References

- [1] Mackenzie, Dana. *Ensemble Kalman Filters Bring Weather Models Up to Date.* SIAM News, Volume 36, Number 8, October 2003. http://www.siam.org/pdf/news/362.pdf

- [2] Brown, Robert Grover, and Patrick Y. C. Hwang. *Introduction to Random Signals and Applied Kalman Filtering, with MATLAB Exercises and Solutions.* Wiley, 2012.

- [3] Crassidis, John L., and John L. Junkins. *Optimal Estimation of Dynamic Systems.* CRC Press, 2011.
github_jupyter
# Storing Particle Shape

## Overview

### Questions

* How can I store particle shape for use with visualization tools?

### Objectives

* Demonstrate logging **type_shapes** to a **GSD** file.
* Explain that OVITO can read this information.

## Boilerplate code

```
import gsd.hoomd
import hoomd
import os

fn = os.path.join(os.getcwd(), 'trajectory.gsd')
![ -e "$fn" ] && rm "$fn"
```

## Particle Shape

HPMC integrators and some anisotropic MD pair potentials model particles that have a well defined shape. You can save this shape definition to a **GSD** file for use in analysis and visualization workflows. In particular, [OVITO](https://www.ovito.org/) will read this shape information and render particles appropriately.

## Define the Simulation

This section executes the hard particle simulation from a previous tutorial. See [*Introducing HOOMD-blue*](../00-Introducing-HOOMD-blue/00-index.ipynb) for a complete description of this code.

```
cpu = hoomd.device.CPU()
sim = hoomd.Simulation(device=cpu, seed=2)
mc = hoomd.hpmc.integrate.ConvexPolyhedron()
mc.shape['octahedron'] = dict(vertices=[
    (-0.5, 0, 0),
    (0.5, 0, 0),
    (0, -0.5, 0),
    (0, 0.5, 0),
    (0, 0, -0.5),
    (0, 0, 0.5),
])
sim.operations.integrator = mc
sim.create_state_from_gsd(
    filename='../00-Introducing-HOOMD-blue/compressed.gsd')
sim.run(0)
```

## Logging particle shape to a GSD file

The **type_shapes** loggable quantity is a representation of the particle shape for each type following the [**type_shapes** specification](https://gsd.readthedocs.io/en/stable/shapes.html) for the **GSD** file format. In HPMC simulations, the integrator provides **type_shapes**:

```
mc.loggables

mc.type_shapes
```

Add the **type_shapes** quantity to a **Logger**.
```
logger = hoomd.logging.Logger()
logger.add(mc, quantities=['type_shapes'])
```

Write the simulation trajectory to a **GSD** file along with the logged quantities:

```
gsd_writer = hoomd.write.GSD(filename='trajectory.gsd',
                             trigger=hoomd.trigger.Periodic(10000),
                             mode='xb',
                             filter=hoomd.filter.All(),
                             log=logger)
sim.operations.writers.append(gsd_writer)
```

Run the simulation:

```
sim.run(20000)
```

As discussed in a previous section, delete the simulation so it is safe to open the GSD file for reading in the same process.

```
del sim, gsd_writer, logger, mc, cpu
```

## Reading logged shapes from a GSD file

You can access the shape from scripts using the `gsd` package:

```
traj = gsd.hoomd.open('trajectory.gsd', 'rb')
```

**type_shapes** is a special quantity available via `particles.type_shapes` rather than the `log` dictionary:

```
traj[0].particles.type_shapes
```

Open the file in OVITO and it will read the shape definition and render particles appropriately.

In this section, you have logged particle shape to a GSD file during a simulation so that visualization and analysis tools can access it. The next section shows how to write formatted output.

[Previous section](02-Saving-Array-Quantities.ipynb) / [Next Section](04-Writing-Formatted-Output.ipynb)
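As a footnote to this tutorial: what comes back from `particles.type_shapes` is a list of plain dictionaries following the GSD shape specification, so downstream scripts can inspect it without HOOMD installed. A pure-Python sketch (the `octahedron` entry below is hand-written to mirror the shape defined above; the exact keys HOOMD emits may include extras such as a rounding radius):

```python
# A ConvexPolyhedron entry as it might appear in particles.type_shapes.
octahedron = {
    'type': 'ConvexPolyhedron',
    'vertices': [
        [-0.5, 0, 0], [0.5, 0, 0],
        [0, -0.5, 0], [0, 0.5, 0],
        [0, 0, -0.5], [0, 0, 0.5],
    ],
}

def describe_shape(shape):
    """Summarize a type_shapes entry as (kind, vertex count)."""
    return shape['type'], len(shape.get('vertices', []))

kind, n_vertices = describe_shape(octahedron)
```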
# Monetary Economics: Chapter 3

From "Monetary Economics: An Integrated Approach to Credit, Money, Income, Production and Wealth, 2nd ed" by Wynne Godley and Marc Lavoie, 2012.

## The Simplest Model with Government Money, Model SIM

Assumptions

* No private money, only Government money (no private banks)
* No profits, *pure labor economy*
* Fixed price of labor, unlimited quantity of labor, thus the economy is not supply-constrained.

## Transactions matrix for Model SIM

||1.Households|2.Production|3.Government|&Sigma;|
|-------|:------:|:--------:|:--:|-----|
|1.Consumption|-C|+C||0|
|2.Govt expenditures||+G|-G|0|
|3.[Output]||[Y]|||
|4.Factor income (wages)|+WB|-WB||0|
|5.Taxes|-T||+T|0|
|6.Change in the stock of money|-&Delta;H||+&Delta;H|0|
|&Sigma;|0|0|0|0|

Definition of terms

* **C** : Consumption goods demand by households
* **G** : Government expenditures
* **Y** : National income
* **WB** : Wage bill
* **T** : Taxes
* **&Delta;H** : Change in cash money

In this model, people (as consumers and producers of income) have been separated.

## Behavioral (transactions) matrix for Model SIM

||1.Households|2.Production|3.Government|&Sigma;|
|----------|:----------:|:----------:|:---:|-----|
|1.Consumption|-Cd|+Cs||0|
|2.Govt expenditures||+Gs|-Gd|0|
|3.[Output]||[Y]|||
|4.Factor income (wages)|+W&bull;Ns|-W&bull;Nd||0|
|5.Taxes|-Ts||+Td|0|
|6.Change in the stock of money|-&Delta;Hh||+&Delta;Hs|0|
|&Sigma;|0|0|0|0|

Differences from previous matrix:

* Each transaction has a suffix, *s*, *d*, and *h*.
  * *s* supply
  * *d* demand
  * *h* household cash
* The Wage Bill (WB) has been separated into two parts.
* *W* Wage rate * *N* employment Definition of terms * **Cd** : Consumption goods demand by households * **Cs** : Consumption goods supply by firms * **Gs** : Services supplied by the government * **Gd** : Services demanded from government * **Y** : National income * **W** : Wage rate * **Ns** : Supply of labor * **Nd** : Demand for labor * **Ts** : Taxes supplied * **Td** : Taxes demanded by government * **&Delta;Hh** : Change in cash money held by households * **&Delta;Hs** : Change in cash money supplied by the central bank ## Model SIM > From here, I will be building the model in code. > Because this is the first model, the Python code will > be explained in more detail also. **Important:** Use sympy version 0.7.5 The following piece of code is necessary to show the graphics inline for iPython notebooks. To view the graphs, matplotlib is required. ``` # This line configures matplotlib to show figures embedded in the notebook, # If you are using an old version of IPython, try using '%pylab inline' instead. %matplotlib inline from pysolve3.model import Model from pysolve3.utils import is_close,round_solution import matplotlib.pyplot as plt ``` ###### Preliminaries In order to build the model, we must first start off by importing several modules that will be used to build the model. *pysolve* is a Python module that I have developed to make it easier to specify and solve linear models. The first line will import the main Model class. The second line imports several utility functions that will prove useful. ``` from pysolve3.model import Model from pysolve3.utils import is_close,round_solution ``` ###### Create the model The first step when developing a pysolve model is to create the model. This is just an empty model for now, but we will be adding the rest of the information to this. ``` model = Model() ``` ###### Define the variables The second step is to define the (endogenous) variables. 
These are the variables that we are allowed to manipulate within the model. This is pretty straightforward. As a useful step, I define the default value for all variables. This can be changed on an individual basis. This is the value that the variable will start off with if nothing is changed.

```
model.set_var_default(0)
```

Next, we create the variables used by the sim. Most of these have been explained above.

```
model.var('Cd', desc='Consumption goods demand by households')
model.var('Cs', desc='Consumption goods supply')
model.var('Gs', desc='Government goods, supply')
model.var('Hh', desc='Cash money held by households')
model.var('Hs', desc='Cash money supplied by the government')
model.var('Nd', desc='Demand for labor')
model.var('Ns', desc='Supply of labor')
model.var('Td', desc='Taxes, demand')
model.var('Ts', desc='Taxes, supply')
model.var('Y', desc='Income = GDP')
model.var('YD', desc='Disposable income of households');
```

As an aside, multiple variables can be created by the following code. But the above is more descriptive.

```python
model.vars('Y', 'YD', 'Ts', 'Td', 'Hs', 'Hh', 'Gs', 'Cs', 'Cd', 'Ns', 'Nd')
```

The value of the variables may also be changed mid-iteration. They will then be used to seed the value of the next iteration. For example

```python
varx = model.var('x')
# ... later
varx.value = 22

# this will also work
model.variables['x'].value = 22
```

Aside: the semicolon ';' at the end of the last line of code is an iPython artifact, and is not needed by the python code. It is used to suppress output by the iPython interpreter.

###### Define the parameters

The next step is to define the parameters. I do not differentiate between exogenous variables and parameters since both are set outside of the model. The solver will not be able to change these values. However, the user may change these values between calls to the solver.
Like the variables, there is a call that may be made to set a default value for all parameters, but I will be creating the parameters with their default values. The call would look like this

```python
model.set_parameter_initial(1.0)
```

In addition, the parameter values could be changed like this:

```python
Gd = model.param('Gd', initial=10)
# ...
# at some later time
Gd.value = 20

# or this would work also
model.parameters['Gd'].value = 20
```

Some of the parameters (alpha1, alpha2 and theta) have not been explained yet, but will be explained when we add the equations that use them.

```
model.param('Gd', desc='Government goods, demand', default=20.)
model.param('W', desc='Wage rate', default=1.)
model.param('alpha1', desc='Propensity to consume out of income', default=0.6)
model.param('alpha2', desc='Propensity to consume out of wealth', default=0.4)
model.param('theta', desc='Tax rate', default=0.2);
```

###### Define the equations

Adding an equation is just adding the textual form of the equation. There are some restrictions. Linear systems only.

```
model.add('Cs = Cd')
model.add('Gs = Gd')
model.add('Ts = Td')
model.add('Ns = Nd');
```

These four equations imply that demand equals supply for this period, with no supply constraints of any kind.

```
model.add('YD = (W*Ns) - Ts');
```

Disposable income (*YD*) is the wages earned by households minus taxes.

```
model.add('Td = theta * W * Ns');
```

Taxes are a fixed proportion (*theta*) of income. *theta* is decided by the government and is exogenous to the model.

```
model.add('Cd = alpha1*YD + alpha2*Hh(-1)');
```

This is a consumption function, the rate at which households consume. This is a combination of consumption of inherited wealth (*Hh(-1)*) and post-tax income (*YD*).

```
model.add('Hs - Hs(-1) = Gd - Td');
```

This comes from the transaction-flow matrix and represents the government's budget constraint.
Government expenditures that are not paid for by taxes (*Gd - Td*) must be covered by changes in the money supply.

```
model.add('Hh - Hh(-1) = YD - Cd');
```

The difference in the cash that households carry is the difference between their income and their consumption.

```
model.add('Y = Cs + Gs');
```

The determination of national income.

```
model.add('Nd = Y/W');
```

The determination of employment.

We now have 11 equations and 11 unknowns. **Each of the eleven unknowns has been set on the left-hand side of an equation.** (This implies that we can use the Gauss-Seidel algorithm to iterate to a solution; convergence is not guaranteed, but we can try.)

###### Solve

We have set the default for all of the variables to 0, and that will be used as an initial solution.

```
model.solve(iterations=100, threshold=1e-4);

prev = round_solution(model.solutions[-2], decimals=1)
solution = round_solution(model.solutions[-1], decimals=1)
print("Y : " + str(solution['Y']))
print("T : " + str(solution['Ts']))
print("YD : " + str(solution['YD']))
print("C : " + str(solution['Cs']))
print("Hs-Hs(-1) : " + str(solution['Hs'] - prev['Hs']))
print("Hh-Hh(-1) : " + str(solution['Hh'] - prev['Hh']))
print("H : " + str(solution['Hh']))
```

### The code for the full model

To make the model easier to manipulate, I will encapsulate model creation into a single function.
``` def create_sim_model(): model = Model() model.set_var_default(0) model.var('Cd', desc='Consumption goods demand by households') model.var('Cs', desc='Consumption goods supply') model.var('Gs', desc='Government goods, supply') model.var('Hh', desc='Cash money held by households') model.var('Hs', desc='Cash money supplied by the government') model.var('Nd', desc='Demand for labor') model.var('Ns', desc='Supply of labor') model.var('Td', desc='Taxes, demand') model.var('Ts', desc='Taxes, supply') model.var('Y', desc='Income = GDP') model.var('YD', desc='Disposable income of households') model.param('Gd', desc='Government goods, demand') model.param('W', desc='Wage rate') model.param('alpha1', desc='Propensity to consume out of income') model.param('alpha2', desc='Propensity to consume out of wealth') model.param('theta', desc='Tax rate') model.add('Cs = Cd') # 3.1 model.add('Gs = Gd') # 3.2 model.add('Ts = Td') # 3.3 model.add('Ns = Nd') # 3.4 model.add('YD = (W*Ns) - Ts') # 3.5 model.add('Td = theta * W * Ns') # 3.6, theta < 1.0 model.add('Cd = alpha1*YD + alpha2*Hh(-1)') # 3.7, 0 < alpha2 < alpha1 < 1 model.add('Hs - Hs(-1) = Gd - Td') # 3.8 model.add('Hh - Hh(-1) = YD - Cd') # 3.9 model.add('Y = Cs + Gs') # 3.10 model.add('Nd = Y/W') # 3.11 return model ``` Now we can run the simulation using the model. 
``` model = create_sim_model() model.set_values({'alpha1': 0.6, 'alpha2': 0.4, 'theta': 0.2, 'Gd': 20, 'W': 1}) model.solve(iterations=100, threshold=1e-5) prev = round_solution(model.solutions[-2], decimals=1) solution = round_solution(model.solutions[-1], decimals=1) print("Y : " + str(solution['Y'])) print("T : " + str(solution['Ts'])) print("YD : " + str(solution['YD'])) print("C : " + str(solution['Cs'])) print("Hs-Hs(-1) : " + str(solution['Hs'] - prev['Hs'])) print("Hh-Hh(-1) : " + str(solution['Hh'] - prev['Hh'])) print("H : " + str(solution['Hh'])) ``` ### Steady-state solution We now generate the steady-state solution by iterating until the solutions converge. ``` steady_state = create_sim_model() steady_state.set_values({'alpha1': 0.6, 'alpha2': 0.4, 'theta': 0.2, 'Gd': 20, 'W': 1}) for _ in range(100): steady_state.solve(iterations=100, threshold=1e-5) prev_soln = steady_state.solutions[-2] soln = steady_state.solutions[-1] if is_close(prev_soln, soln, atol=1e-4): break prev = round_solution(steady_state.solutions[-2], decimals=1) solution = round_solution(steady_state.solutions[-1], decimals=1) print("Y : " + str(solution['Y'])) print("T : " + str(solution['Ts'])) print("YD : " + str(solution['YD'])) print("C : " + str(solution['Cs'])) print("Hs-Hs(-1) : " + str(solution['Hs'] - prev['Hs'])) print("Hh-Hh(-1) : " + str(solution['Hh'] - prev['Hh'])) print("H : " + str(solution['Hh'])) ``` ###### Table 3.4 We can also generate table 3.4 ``` from IPython.display import HTML import numpy from pysolve3.utils import generate_html_table data = list() for var in [('Gd', 'G'), ('Y', 'Y'), ('Ts', 'T'), ('YD', 'YD'), ('Cs', 'C')]: rowdata = list() rowdata.append(var[1]) for i in [0, 1, 2, -1]: rowdata.append(str(numpy.round(steady_state.solutions[i][var[0]], decimals=1))) data.append(rowdata) for var in [('Hs', '&Delta;Hs'), ('Hh', '&Delta;Hh')]: rowdata = list() rowdata.append(var[1]) rowdata.append(str(numpy.round(steady_state.solutions[0][var[0]], decimals=1))) 
    for i in [1, 2, -1]:
        rowdata.append(str(numpy.round(steady_state.solutions[i][var[0]] -
                                       steady_state.solutions[i-1][var[0]],
                                       decimals=1)))
    data.append(rowdata)

for var in [('Hh', 'H')]:
    rowdata = list()
    rowdata.append(var[1])
    for i in [0, 1, 2, -1]:
        rowdata.append(str(numpy.round(steady_state.solutions[i][var[0]],
                                       decimals=1)))
    data.append(rowdata)

s = generate_html_table(['Period', '1', '2', '3', '&infin;'], data)
HTML(s)
```

### Scenario: Model SIM, increase government expenditures

```
step_model = create_sim_model()
step_model.set_values({'alpha1': 0.6, 'alpha2': 0.4, 'theta': 0.2, 'Gd': 20, 'W': 1})

# Use the steady state solution as a starting point
step_model.solutions = steady_state.solutions[-2:]

for i in range(45):
    step_model.solve(iterations=100, threshold=1e-5)
    if i == 2:
        step_model.parameters['Gd'].value += 5
```

###### Figure 3.1

Calculate the solution, but with a permanent increase in government expenditures (+5) and starting from the steady state solution.

```
caption = '''
    Figure 3.1  Impact on national income Y and the steady state solution Y*,
    following a permanent increase in government expenditures ($\\bigtriangleup$G = 5).'''
gdata = [s['Gd']/s['theta'] for s in step_model.solutions]
ydata = [s['Y'] for s in step_model.solutions]

# Now graph G/theta and Y
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(97, 129)
axes.plot(gdata, 'r')  # plot G/theta
axes.plot(ydata, linestyle='--', color='g')  # plot Y

# add labels
plt.text(10, 126, 'Steady-state solution Y*')
plt.text(15, 120, 'Income Y')
fig.text(.1, -.1, caption);
```

###### Figure 3.2

```
caption = '''
    Figure 3.2  Disposable income and consumption starting from scratch (Table 3.4)'''
cdata = [s['Cd'] for s in steady_state.solutions]
yddata = [s['YD'] for s in steady_state.solutions]

fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.0, 1.0])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0, 85)
axes.set_xlim(-2, 50)
axes.plot(cdata, linestyle=':', color='r')  # plot C
axes.plot(yddata, linestyle='--', color='g')  # plot YD
plt.axhline(y=80, color='k')

# add labels
plt.text(2, 72, 'Disposable')
plt.text(2, 68, 'Income YD')
plt.text(10, 60, 'Consumption C')
fig.text(0.1, 0, caption);
```

###### Figure 3.3

```
caption = '''
    Figure 3.3  Wealth change and wealth level starting from scratch (Table 3.4)'''
hdata = [s['Hh'] for s in steady_state.solutions]
deltahdata = [s['Hh'] for s in steady_state.solutions]
for i in range(1, len(steady_state.solutions)):
    deltahdata[i] -= hdata[i-1]

fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.0, 1.0])
axes.tick_params(top=False)
axes.set_ylim(0, 13)
axes.set_xlim(-2, 50)
axes.plot(deltahdata, linestyle='--', color='b')

axes2 = axes.twinx()
axes2.set_ylim(0, 85)
axes2.set_xlim(-2, 50)
axes2.plot(hdata, 'r')

# add labels
plt.text(20, 16, 'Household saving')
plt.text(20, 12, '(the change in the money stock)')
plt.text(20, 70, 'Wealth level H (money stock)')
fig.text(0.1, -0.05, caption);
```

###### Figure 3.4

```
caption = '''
    Figure 3.4  Evolution of wealth, target wealth, consumption and disposable income
    following an increase in government expenditures ($\\bigtriangleup$G = 5) Model SIM '''
hdata = [s['Hh'] for s in step_model.solutions]
cdata = [s['Cs'] for s in step_model.solutions]
vtdata = [s['YD']*(1.-s['alpha1'])/s['alpha2'] for s in step_model.solutions]

fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.0, 1.0])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(78, 102)
axes.set_xlim(-2, 50)
axes.plot(hdata, linestyle='-', color='g', label='Wealth')
axes.plot(cdata, linestyle=':', color='r', linewidth=2, label='Consumption')
axes.plot(vtdata, linestyle='--', color='b', label='Target wealth (and disposable income)')
plt.legend(loc=(0.35,0.2), frameon=False) fig.text(0.1, -0.05, caption); ``` ### Scenario: Model SIM, increase propensity to consume ``` alpha_model = create_sim_model() alpha_model.set_values({'alpha1': 0.6, 'alpha2': 0.4, 'theta': 0.2, 'Gd': 20, 'W': 1}) # Use the steady state solution as a starting point alpha_model.solutions = steady_state.solutions[-2:] for i in range(50): alpha_model.solve(iterations=100, threshold=1e-4) if i == 2: alpha_model.parameters['alpha1'].value = 0.7 ``` ###### Figure 3.8 We will need to generate solutions that involve a change in alpha1 (from 0.6 to 0.7). ``` caption = ''' Figure 3.8 Evolution of consumption, disposable income and wealth following an increase in the propensity to consume out of current income ($\\alpha_1$ moves from 0.6 to 0.7)''' hdata = [s['Hh'] for s in alpha_model.solutions] cdata = [s['Cs'] for s in alpha_model.solutions] vtdata = [s['YD'] for s in alpha_model.solutions] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 1.0, 1.0]) axes.tick_params(top=False, right=False) axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(58, 100) axes.set_xlim(-2, 50) axes.plot(hdata, linestyle='-', color='g') axes.plot(cdata, linestyle=':', color='r', linewidth=2) axes.plot(vtdata, linestyle='--', color='b') plt.text(6, 97, 'Consumption') plt.text(8, 79, 'Disposable income') plt.text(20, 62, 'Wealth') fig.text(0.1, -0.1, caption); ```
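As a cross-check on the pysolve results above, the eleven equations of Model SIM can also be iterated by hand with Gauss-Seidel. This is a minimal sketch, assuming the parameter values used throughout this chapter (alpha1 = 0.6, alpha2 = 0.4, theta = 0.2, Gd = 20, W = 1); it converges to the stationary state Y* = G/&theta; = 100 with a money stock H* = 80:

```python
# Hand-rolled Gauss-Seidel iteration of Model SIM, without pysolve.
alpha1, alpha2, theta, Gd, W = 0.6, 0.4, 0.2, 20.0, 1.0

Hh = Hs = 0.0                 # start "from scratch": no money stock yet
Y = YD = Cd = Nd = Td = 0.0
for period in range(200):     # step the model period by period
    Hh_prev, Hs_prev = Hh, Hs
    for _ in range(100):      # Gauss-Seidel sweeps within one period
        Y_old = Y
        Td = theta * W * Nd                  # taxes (Ts = Td, Ns = Nd)
        YD = W * Nd - Td                     # disposable income
        Cd = alpha1 * YD + alpha2 * Hh_prev  # consumption function
        Y = Cd + Gd                          # national income (Cs = Cd, Gs = Gd)
        Nd = Y / W                           # employment
        if abs(Y - Y_old) < 1e-10:
            break
    Hs = Hs_prev + Gd - Td    # government budget constraint
    Hh = Hh_prev + YD - Cd    # household money accumulation
    if abs(Hh - Hh_prev) < 1e-8:
        break

print(round(Y, 1), round(Hh, 1))  # converges to Y* = 100.0, H* = 80.0
```

In the first period, with no inherited wealth, income is only Y = G/(1 - alpha1(1 - theta)) &asymp; 38.5; the stock of money then accumulates period by period until Y reaches G/&theta;.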
ANALYSIS NOTEBOOK - DONNELLY 2019 PLOS ONE Patrick M. Donnelly University of Washington JULY 7, 2020 ``` # import necessary databases and libraries import pandas as pd import numpy as np from scipy import stats import seaborn as sns # plot inline figures %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as mpatches plt.style.use('seaborn-whitegrid') import numpy as np from scipy.stats import linregress from matplotlib import cm #from colorspacious import cspace_converter from collections import OrderedDict cmaps = OrderedDict() plt.rcParams['pdf.fonttype'] = 42 from numpy.polynomial.polynomial import polyfit # pull data from data folder in repository data = pd.read_csv('data/data.csv') # separate data into figure-specific dataframes passage_data = data[['record_id','pigs_casecontrol', 'int_session','study_name', 'first_acc', 'second_rate']] passage_diff_data = data[['pigs_casecontrol', 'study_name', 'first_acc_diff', 'second_rate_diff']] wordlist_data = data[['record_id','pigs_casecontrol', 'int_session','study_name','word_time', 'word_acc', 'word_rate', 'pseudo_time', 'pseudo_acc', 'pseudo_rate']] wordlist_acc_data = data[['record_id', 'int_session', 'pigs_casecontrol', 'study_name','pigs_word1_acc', 'pigs_word2_acc', 'pigs_pseudo1_acc', 'pigs_pseudo2_acc']] wordlist_acc_diff_data = data[['pigs_casecontrol', 'word_acc_diff', 'pseudo_acc_diff']] wordlist_rate_data = data[['pigs_casecontrol', 'study_name', 'word_rate','pseudo_rate']] matlab_data = data[['record_id', 'visit_age','int_session', 'pigs_casecontrol', 'study_name','pigs_word1_acc', 'pigs_word2_acc', 'pigs_pseudo1_acc', 'pigs_pseudo2_acc','word_acc', 'pseudo_acc', 'first_acc', 'second_rate', 'wj_brs', 'twre_index', 'ctopp_rapid', 'ctopp_pa', 'wasi_fs2', 'pigs_practice_numstories']] first_accuracy = data[['pigs_casecontrol', 'study_name','short_first_acc', 'long_first_acc', 'first_acc_diff']] second_rate = data[['pigs_casecontrol', 'study_name', 'short_second_rate', 
'long_second_rate', 'second_rate_diff']] predictor_data = data[['pigs_casecontrol', 'study_name','visit_age', 'word_acc_diff', 'pseudo_acc_diff', 'first_acc_diff', 'second_rate_diff', 'ctopp_pa', 'ctopp_pm','ctopp_rapid', 'wasi_fs2']] #create new variable for plotting longitudinal line plots on violin plots word = wordlist_data.drop_duplicates().reset_index() word['violin_axis'] = np.nan for record in range(0, len(word.record_id)): if word.pigs_casecontrol[record] == 0: if word.int_session[record] == 1: word.loc[record, 'violin_axis'] = -0.10 elif word.int_session[record] == 2: word.loc[record, 'violin_axis'] = 0.10 elif word.pigs_casecontrol[record] == 1: if word.int_session[record] == 1: word.loc[record, 'violin_axis'] = 0.90 elif word.int_session[record] == 2: word.loc[record, 'violin_axis'] = 1.10 passage = passage_data.drop_duplicates().reset_index() passage['violin_axis'] = np.nan for record in range(0, len(word.record_id)): if passage.pigs_casecontrol[record] == 0: if passage.int_session[record] == 1: passage.loc[record, 'violin_axis'] = -0.10 elif passage.int_session[record] == 2: passage.loc[record, 'violin_axis'] = 0.10 elif passage.pigs_casecontrol[record] == 1: if passage.int_session[record] == 1: passage.loc[record, 'violin_axis'] = 0.90 elif passage.int_session[record] == 2: passage.loc[record, 'violin_axis'] = 1.10 #Plot figure 2 fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2, figsize = (12,12)) wordlist_acc_diff_grouped = wordlist_acc_diff_data.groupby(['pigs_casecontrol'])['word_acc_diff'].mean() wl_acc_diff_grpd_error = wordlist_acc_diff_data.groupby(['pigs_casecontrol'])['word_acc_diff'].sem() fig1 = wordlist_acc_diff_grouped.plot(kind='bar', yerr=wl_acc_diff_grpd_error, legend=False, rot=0, color=['grey', 'green'], ax=ax1) ax1.set_title("Real word decoding", fontsize=18) ax1.set_xlabel('', fontsize=18) ax1.set_ylabel('Benefit (addl words read)', fontsize=18) ax1.set_xticklabels(['Control', 'Intervention'], fontsize=18), ax1.set_ylim([-2,3]) 
wordlist_acc_diff_grouped = wordlist_acc_diff_data.groupby(['pigs_casecontrol'])['pseudo_acc_diff'].mean() wl_acc_diff_grpd_error = wordlist_acc_diff_data.groupby(['pigs_casecontrol'])['pseudo_acc_diff'].sem() fig2 = wordlist_acc_diff_grouped.plot(kind='bar', yerr=wl_acc_diff_grpd_error, legend=False, color=['grey', 'green'], rot=0, ax=ax2) ax2.set_title("Pseudo word decoding", fontsize=18) ax2.set_xlabel('', fontsize=16) ax2.set_ylabel('Benefit (addl words read)', fontsize=18) ax2.set_xticklabels(['Control', 'Intervention'], fontsize=18) ax2.set_ylim([-2,3]) g = sns.violinplot(x="pigs_casecontrol", y="word_acc",hue='int_session', data = word, split=True, inner=None, color='darkgrey', ax=ax3) for record in range(0, len(word.record_id.unique())): data = word[word.record_id == word.record_id.unique()[record]] data.groupby(['violin_axis'])['word_acc'].mean().plot(kind='line', colormap='copper',linewidth=1,ax=ax3) white_patch = mpatches.Patch(facecolor='white', label='Session 1', edgecolor="black") gray_patch = mpatches.Patch(facecolor='darkgrey', label='Session 2', edgecolor="black") ax3.set_title("", fontsize=18) ax3.set_xlabel("Group", fontsize=18) ax3.set_ylabel("Number correct", fontsize=18) ax3.set_xticklabels(["Control", "Intervention"], fontsize=18) ax3.legend(handles=[white_patch, gray_patch], loc='lower center', fontsize=18) g = sns.violinplot(x="pigs_casecontrol", y="pseudo_acc",hue='int_session', data = word, split=True, inner=None, color='darkgrey', ax=ax4) for record in range(0, len(word.record_id.unique())): data = word[word.record_id == word.record_id.unique()[record]] data.groupby(['violin_axis'])['pseudo_acc'].mean().plot(kind='line', colormap='copper',linewidth=1,ax=ax4) white_patch = mpatches.Patch(facecolor='white', label='Session 1', edgecolor="black") gray_patch = mpatches.Patch(facecolor='darkgrey', label='Session 2', edgecolor="black") ax4.set_title("", fontsize=18) ax4.set_xlabel("Group", fontsize=18) ax4.set_ylabel("") 
ax4.set_xticklabels(["Control", "Intervention"], fontsize=18) ax4.legend(handles=[white_patch, gray_patch], loc='lower center', fontsize=16) # plot figure 3 fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2, figsize = (12,12)) acc_grouped = passage_diff_data.groupby(['pigs_casecontrol'])[ 'first_acc_diff'].mean() acc_grouped_error = passage_diff_data.groupby(['pigs_casecontrol'])[ 'first_acc_diff'].sem() acc_grouped.plot(kind='bar',color=['grey', 'green'], yerr=acc_grouped_error, rot=0, ax=ax1) ax1.set_title('Passage Reading Accuracy', fontsize=16) ax1.set_xlabel('Group', fontsize=18) ax1.set_ylabel('Change in Words', fontsize=18) ax1.set_xticklabels(['Control', 'Intervention'], fontsize=18) ax1.set_ylim([0,0.075]) rate_grouped = passage_diff_data.groupby(['pigs_casecontrol'])['second_rate_diff'].mean() rate_grouped_error = passage_diff_data.groupby(['pigs_casecontrol'])['second_rate_diff'].sem() rate_grouped.plot(kind='bar', yerr=rate_grouped_error, rot=0, color=['grey', 'green'], ax=ax2) ax2.set_title('Passage Reading Rate', fontsize=18) ax2.set_xlabel('Group', fontsize=18) ax2.set_ylabel('Change in Words per second', fontsize=18) ax2.set_xticklabels(['Control', 'Intervention'], fontsize=18) ax2.set_ylim([0,0.075]) g = sns.violinplot(x="pigs_casecontrol", y='first_acc',hue='int_session', data = passage, split=True, inner=None, color='darkgrey', ax=ax3) for record in range(0, len(passage.record_id.unique())): data = passage[passage.record_id == passage.record_id.unique()[record]] data.groupby(['violin_axis'])['first_acc'].mean().plot(kind='line', colormap='copper',ax=ax3) white_patch = mpatches.Patch(facecolor='white', label='Session 1', edgecolor="black") gray_patch = mpatches.Patch(facecolor='darkgrey', label='Session 2', edgecolor="black") ax3.set_title("", fontsize=18) ax3.set_xlabel("Group", fontsize=18) ax3.set_ylabel("Proportion correct", fontsize=18) ax3.set_xticklabels(["Control", "Intervention"], fontsize=18) ax3.legend(handles=[white_patch, gray_patch], 
loc='upper center', fontsize=14) g = sns.violinplot(x="pigs_casecontrol", y='second_rate',hue='int_session', data = passage, split=True, inner=None, color='darkgrey', ax=ax4) for record in range(0, len(passage.record_id.unique())): data = passage[passage.record_id == passage.record_id.unique()[record]] data.groupby(['violin_axis'])['second_rate'].mean().plot(kind='line', colormap='copper',ax=ax4) white_patch = mpatches.Patch(facecolor='white', label='Session 1', edgecolor="black") gray_patch = mpatches.Patch(facecolor='darkgrey', label='Session 2', edgecolor="black") ax4.set_title("", fontsize=18) ax4.set_xlabel("Group", fontsize=18) ax4.set_ylabel("Accurate words per second", fontsize=18) ax4.set_xticklabels(["Control", "Intervention"], fontsize=18) ax4.legend(handles=[white_patch, gray_patch], loc='upper center', fontsize=14) ```
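As an aside, the row-by-row loops used earlier to build `violin_axis` can be collapsed into a single vectorized expression: the group code gives the violin center (0 = control, 1 = intervention) and the session contributes a &plusmn;0.10 offset. A sketch on a hypothetical mini-frame with the same two columns (not the study data):

```python
import pandas as pd
import numpy as np

# Hypothetical mini-frame standing in for the real data columns.
df = pd.DataFrame({'pigs_casecontrol': [0, 0, 1, 1],
                   'int_session':      [1, 2, 1, 2]})

# Group center plus a -/+0.10 session offset reproduces the
# -0.10 / 0.10 / 0.90 / 1.10 positions assigned in the loops above.
df['violin_axis'] = (df['pigs_casecontrol']
                     + np.where(df['int_session'] == 1, -0.10, 0.10))
print(df['violin_axis'].tolist())
```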
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_2_Keras_gan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 7: Generative Adversarial Networks** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 7 Material * Part 7.1: Introduction to GANS for Image and Data Generation [[Video]](https://www.youtube.com/watch?v=0QnCH6tlZgc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_1_gan_intro.ipynb) * **Part 7.2: Implementing a GAN in Keras** [[Video]](https://www.youtube.com/watch?v=T-MCludVNn4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_2_Keras_gan.ipynb) * Part 7.3: Face Generation with StyleGAN and Python [[Video]](https://www.youtube.com/watch?v=s1UQPK2KoBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_3_style_gan.ipynb) * Part 7.4: GANS for Semi-Supervised Learning in Keras [[Video]](https://www.youtube.com/watch?v=ZPewmEu7644&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_4_gan_semi_supervised.ipynb) * Part 7.5: An Overview of GAN Research [[Video]](https://www.youtube.com/watch?v=cvCvZKvlvq4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_5_gan_research.ipynb) ``` # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) ``` # Part 7.2: Implementing DCGANs in Keras Paper that described the type of DCGAN that we will create in this module. 
[[Cite:radford2015unsupervised]](https://arxiv.org/abs/1511.06434)

This paper implements a DCGAN as follows:

* No pre-processing was applied to training images besides scaling to the range of the tanh activation function [-1, 1].
* All models were trained with mini-batch stochastic gradient descent (SGD) with a mini-batch size of 128.
* All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.
* In the LeakyReLU, the slope of the leak was set to 0.2 in all models.
* We used the Adam optimizer (Kingma & Ba, 2014) with tuned hyperparameters. We found the suggested learning rate of 0.001 to be too high, using 0.0002 instead.
* Additionally, we found leaving the momentum term $\beta_1$ at the suggested value of 0.9 resulted in training oscillation and instability, while reducing it to 0.5 helped stabilize training.

The paper also provides the following architecture guidelines for stable Deep Convolutional GANs:

* Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
* Use batchnorm in both the generator and the discriminator.
* Remove fully connected hidden layers for deeper architectures.
* Use ReLU activation in the generator for all layers except for the output, which uses Tanh.
* Use LeakyReLU activation in the discriminator for all layers.

While creating the material for this module I used a number of Internet resources; some of the most helpful were:

* [Deep Convolutional Generative Adversarial Network (TensorFlow 2.0 example code)](https://www.tensorflow.org/tutorials/generative/dcgan)
* [Keep Calm and train a GAN.
Pitfalls and Tips on training Generative Adversarial Networks](https://medium.com/@utk.is.here/keep-calm-and-train-a-gan-pitfalls-and-tips-on-training-generative-adversarial-networks-edd529764aa9)
* [Collection of Keras implementations of Generative Adversarial Networks GANs](https://github.com/eriklindernoren/Keras-GAN)
* [dcgan-facegenerator](https://github.com/platonovsimeon/dcgan-facegenerator), [Semi-Paywalled Article by GitHub Author](https://medium.com/datadriveninvestor/generating-human-faces-with-keras-3ccd54c17f16)

The program created next will generate faces similar to these. While these faces are not perfect, they demonstrate how we can construct and train a GAN on our own. Later we will see how to import very advanced weights from nVidia to produce high-resolution, realistic-looking faces. Figure 7.GAN-GRID shows images from GAN training.

**Figure 7.GAN-GRID: GAN Neural Network Training**
![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan-3.png "GAN Images")

As discussed in the previous module, the GAN is made up of two different neural networks: the discriminator and the generator. The generator generates the images, while the discriminator detects if a face is real or was generated. These two neural networks work as shown in Figure 7.GAN-EVAL:

**Figure 7.GAN-EVAL: Evaluating GANs**
![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_1.png "GAN")

The discriminator accepts an image as its input and produces a number that is the probability of the input image being real. The generator accepts a random seed vector and generates an image from that seed. An unlimited number of new images can be created by providing additional seeds.

I suggest running this code with a GPU; it will be very slow on a CPU alone. The following code mounts your Google Drive for use with Google CoLab. If you are not using CoLab, the following code will not work.
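Before the data-loading code below, it is worth seeing the DCGAN preprocessing in isolation. Scaling 8-bit pixels into the tanh range [-1, 1], as described in the paper summary above, is a simple linear map; this sketch mirrors the `training_data / 127.5 - 1.` step used later in the notebook:

```python
# Map 8-bit pixel values [0, 255] into [-1, 1], the output range of tanh.
def to_tanh_range(pixel):
    return pixel / 127.5 - 1.0

# Inverse map, useful when converting generated images back to 8-bit pixels.
def from_tanh_range(value):
    return (value + 1.0) * 127.5

print([to_tanh_range(p) for p in (0, 127.5, 255)])  # [-1.0, 0.0, 1.0]
```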
``` try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False ``` The following packages will be used to implement a basic GAN system in Python/Keras. ``` import tensorflow as tf from tensorflow.keras.layers import Input, Reshape, Dropout, Dense from tensorflow.keras.layers import Flatten, BatchNormalization from tensorflow.keras.layers import Activation, ZeroPadding2D from tensorflow.keras.layers import LeakyReLU from tensorflow.keras.layers import UpSampling2D, Conv2D from tensorflow.keras.models import Sequential, Model, load_model from tensorflow.keras.optimizers import Adam import numpy as np from PIL import Image from tqdm import tqdm import os import time import matplotlib.pyplot as plt ``` These are the constants that define how the GANs will be created for this example. The higher the resolution, the more memory that will be needed. Higher resolution will also result in longer run times. For Google CoLab (with GPU) 128x128 resolution is as high as can be used (due to memory). Note that the resolution is specified as a multiple of 32. So **GENERATE_RES** of 1 is 32, 2 is 64, etc. To run this you will need training data. The training data can be any collection of images. I have used various sources of data for this example over the years. Data sources sometimes become unavailable for copyright reasons. I have one sample source listed below. Simply unzip and combine to a common directory. This directory should be uploaded to Google Drive (if you are using CoLab). The constant **DATA_PATH** defines where these images are stored. One sample dataset of faces can be found here: * [Kaggle Faces Data New](https://www.kaggle.com/gasgallo/faces-data-new) ``` # Generation resolution - Must be square # Training data is also scaled to this. 
# Note GENERATE_RES 4 or higher
# will blow Google CoLab's memory and has not
# been tested extensively.
GENERATE_RES = 3 # Generation resolution factor
# (1=32, 2=64, 3=96, 4=128, etc.)
GENERATE_SQUARE = 32 * GENERATE_RES # rows/cols (should be square)
IMAGE_CHANNELS = 3

# Preview image
PREVIEW_ROWS = 4
PREVIEW_COLS = 7
PREVIEW_MARGIN = 16

# Size vector to generate images from
SEED_SIZE = 100

# Configuration
DATA_PATH = '/content/drive/My Drive/projects/faces'
EPOCHS = 50
BATCH_SIZE = 32
BUFFER_SIZE = 60000

print(f"Will generate {GENERATE_SQUARE}px square images.")
```

Next we will load and preprocess the images. This can take a while. Google CoLab took around an hour to process. Because of this we store the processed file as a binary. This way we can simply reload the processed training data and quickly use it. It is most efficient to only perform this operation once. The dimensions of the image are encoded into the filename of the binary file because we need to regenerate it if these change.

```
# Image set has 11,682 images. Can take over an hour
# for initial preprocessing.
# Because of this time needed, save a Numpy preprocessed file.
# Note, that file is large enough to cause problems for
# some versions of Pickle,
# so Numpy binary files are used.
training_binary_path = os.path.join(DATA_PATH,
        f'training_data_{GENERATE_SQUARE}_{GENERATE_SQUARE}.npy')

print(f"Looking for file: {training_binary_path}")

if not os.path.isfile(training_binary_path):
  start = time.time()
  print("Loading training images...")

  training_data = []
  faces_path = os.path.join(DATA_PATH,'face_images')
  for filename in tqdm(os.listdir(faces_path)):
      path = os.path.join(faces_path,filename)
      # Image.ANTIALIAS was removed in newer Pillow releases;
      # LANCZOS is the same filter under its current name.
      image = Image.open(path).resize((GENERATE_SQUARE,
            GENERATE_SQUARE),Image.LANCZOS)
      training_data.append(np.asarray(image))
  training_data = np.reshape(training_data,(-1,GENERATE_SQUARE,
            GENERATE_SQUARE,IMAGE_CHANNELS))
  training_data = training_data.astype(np.float32)
  training_data = training_data / 127.5 - 1.
  print("Saving training image binary...")
  np.save(training_binary_path,training_data)
  elapsed = time.time()-start
  print (f'Image preprocess time: {hms_string(elapsed)}')
else:
  print("Loading previous training binary...")
  training_data = np.load(training_binary_path)
```

We will use a TensorFlow **Dataset** object to actually hold the images. This allows the data to be quickly shuffled and divided into the appropriate batch sizes for training.

```
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(training_data) \
    .shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```

The code below builds the discriminator and the generator. Both will be trained with the Adam optimizer.

```
def build_generator(seed_size, channels):
    model = Sequential()

    model.add(Dense(4*4*256,activation="relu",input_dim=seed_size))
    model.add(Reshape((4,4,256)))

    model.add(UpSampling2D())
    model.add(Conv2D(256,kernel_size=3,padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))

    model.add(UpSampling2D())
    model.add(Conv2D(256,kernel_size=3,padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))

    # Output resolution, additional upsampling
    model.add(UpSampling2D())
    model.add(Conv2D(128,kernel_size=3,padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))

    if GENERATE_RES>1:
      model.add(UpSampling2D(size=(GENERATE_RES,GENERATE_RES)))
      model.add(Conv2D(128,kernel_size=3,padding="same"))
      model.add(BatchNormalization(momentum=0.8))
      model.add(Activation("relu"))

    # Final CNN layer
    model.add(Conv2D(channels,kernel_size=3,padding="same"))
    model.add(Activation("tanh"))

    return model


def build_discriminator(image_shape):
    model = Sequential()

    model.add(Conv2D(32, kernel_size=3, strides=2,
        input_shape=image_shape, padding="same"))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dropout(0.25))
    model.add(Conv2D(64, kernel_size=3, strides=2,
        padding="same"))
    model.add(ZeroPadding2D(padding=((0,1),(0,1))))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dropout(0.25))
    model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dropout(0.25))
    model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dropout(0.25))
    model.add(Conv2D(512, kernel_size=3, strides=1, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

    return model
```

As training progresses, images will be produced to show the progress. These images will contain a number of rendered faces that show how good the generator has become. These preview images are written to an **output** directory under **DATA_PATH**.

```
def save_images(cnt,noise):
  image_array = np.full((
      PREVIEW_MARGIN + (PREVIEW_ROWS * (GENERATE_SQUARE+PREVIEW_MARGIN)),
      PREVIEW_MARGIN + (PREVIEW_COLS * (GENERATE_SQUARE+PREVIEW_MARGIN)),
      IMAGE_CHANNELS),
      255, dtype=np.uint8)

  generated_images = generator.predict(noise)
  generated_images = 0.5 * generated_images + 0.5

  image_count = 0
  for row in range(PREVIEW_ROWS):
      for col in range(PREVIEW_COLS):
        r = row * (GENERATE_SQUARE+PREVIEW_MARGIN) + PREVIEW_MARGIN
        c = col * (GENERATE_SQUARE+PREVIEW_MARGIN) + PREVIEW_MARGIN
        image_array[r:r+GENERATE_SQUARE,c:c+GENERATE_SQUARE] \
            = generated_images[image_count] * 255
        image_count += 1

  output_path = os.path.join(DATA_PATH,'output')
  if not os.path.exists(output_path):
    os.makedirs(output_path)

  filename = os.path.join(output_path,f"train-{cnt}.png")
  im = Image.fromarray(image_array)
  im.save(filename)

generator = build_generator(SEED_SIZE, IMAGE_CHANNELS)

noise = tf.random.normal([1, SEED_SIZE])
generated_image = generator(noise, training=False)

plt.imshow(generated_image[0, :, :, 0])

image_shape = 
(GENERATE_SQUARE,GENERATE_SQUARE,IMAGE_CHANNELS)
discriminator = build_discriminator(image_shape)
decision = discriminator(generated_image)
print (decision)
```

Loss functions must be developed that allow the generator and discriminator to be trained in an adversarial way. Because these two neural networks are being trained independently, they must be trained in two separate passes. This requires two separate loss functions and also two separate updates to the gradients. When the discriminator's gradients are applied to decrease the discriminator's loss, it is important that only the discriminator's weights are updated. It is not fair, nor will it produce good results, to adversarially damage the weights of the generator to help the discriminator. A simple backpropagation would do this. It would simultaneously affect the weights of both generator and discriminator to lower whatever loss it was assigned to lower. Figure 7.TDIS shows how the discriminator is trained.

**Figure 7.TDIS: Training the Discriminator**

![Training the Discriminator](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_2.png "Training the Discriminator")

Here a training set is generated with an equal number of real and fake images. The real images are randomly sampled (chosen) from the training data. An equal number of random images are generated from random seeds. For the discriminator training set, $x$ contains the input images and $y$ contains a value of 1 for real images and 0 for generated ones. Likewise, Figure 7.TGEN shows how the generator is trained.
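The two label conventions just described (and used in the generator figure below) can be sketched in plain NumPy:

```python
import numpy as np

BATCH = 8

# Discriminator batch labels: 1 for real images, 0 for generated ones.
y_disc = np.concatenate([np.ones((BATCH, 1)), np.zeros((BATCH, 1))])

# Generator batch labels: always 1 -- the generator "wins" when the
# discriminator scores its fakes as real.
y_gen = np.ones((BATCH, 1))

print(y_disc.ravel())  # [1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
```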
**Figure 7.TGEN: Training the Generator**

![Training the Generator](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_3.png "Training the Generator")

For the generator training set, $x$ contains the random seeds used to generate images and $y$ always contains the value of 1, because the ideal outcome is for the generator to have generated such good images that the discriminator was fooled into assigning them a probability near 1.

```
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```

Both the generator and discriminator use Adam with the same learning rate and momentum. This does not need to be the case. If you use a **GENERATE_RES** greater than 3 you may need to tune these learning rates, as well as other training hyperparameters.

```
generator_optimizer = tf.keras.optimizers.Adam(1.5e-4,0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(1.5e-4,0.5)
```

The following function is where most of the training takes place for both the discriminator and the generator. This function was based on the GAN provided by the [TensorFlow Keras examples](https://www.tensorflow.org/tutorials/generative/dcgan) documentation. The first thing you should notice about this function is that it is annotated with the **tf.function** annotation. This causes the function to be precompiled, which improves performance. This function trains differently than the code we previously saw: it uses **GradientTape** to allow the discriminator and generator to be trained together, yet separately.
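To see why $y = 1$ is the right target for the generator, here is the binary cross-entropy that these losses compute, worked by hand in plain Python (not the TensorFlow implementation):

```python
import math

def bce(y_true, y_pred):
    # Binary cross-entropy for a single prediction.
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))

# The generator is rewarded when the discriminator scores its fakes near 1:
loss_fooled = bce(1.0, 0.9)   # discriminator fooled  -> small loss
loss_caught = bce(1.0, 0.1)   # discriminator not fooled -> large loss
assert loss_fooled < loss_caught
print(round(loss_fooled, 3), round(loss_caught, 3))  # 0.105 2.303
```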
```
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
  seed = tf.random.normal([BATCH_SIZE, SEED_SIZE])

  with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
    generated_images = generator(seed, training=True)

    real_output = discriminator(images, training=True)
    fake_output = discriminator(generated_images, training=True)

    gen_loss = generator_loss(fake_output)
    disc_loss = discriminator_loss(real_output, fake_output)

  gradients_of_generator = gen_tape.gradient(\
      gen_loss, generator.trainable_variables)
  gradients_of_discriminator = disc_tape.gradient(\
      disc_loss, discriminator.trainable_variables)

  generator_optimizer.apply_gradients(zip(
      gradients_of_generator, generator.trainable_variables))
  discriminator_optimizer.apply_gradients(zip(
      gradients_of_discriminator,
      discriminator.trainable_variables))
  return gen_loss,disc_loss

def train(dataset, epochs):
  fixed_seed = np.random.normal(0, 1, (PREVIEW_ROWS * PREVIEW_COLS,
                                       SEED_SIZE))
  start = time.time()

  for epoch in range(epochs):
    epoch_start = time.time()

    gen_loss_list = []
    disc_loss_list = []

    for image_batch in dataset:
      t = train_step(image_batch)
      gen_loss_list.append(t[0])
      disc_loss_list.append(t[1])

    g_loss = sum(gen_loss_list) / len(gen_loss_list)
    d_loss = sum(disc_loss_list) / len(disc_loss_list)

    epoch_elapsed = time.time()-epoch_start
    # Note: every part of the string must be an f-string, or the elapsed
    # time would be printed literally instead of being interpolated.
    print(f'Epoch {epoch+1}, gen loss={g_loss}, disc loss={d_loss}, '\
          f'{hms_string(epoch_elapsed)}')
    save_images(epoch,fixed_seed)

  elapsed = time.time()-start
  print(f'Training time: {hms_string(elapsed)}')

train(train_dataset, EPOCHS)

generator.save(os.path.join(DATA_PATH,"face_generator.h5"))
```
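The training data was scaled into [-1, 1] during preprocessing (to match the generator's tanh output), and save_images shifts it back with `0.5 * g + 0.5` before multiplying by 255; a quick round-trip check of that arithmetic:

```python
import numpy as np

# [0, 255] -> [-1, 1] (the preprocessing scaling) and back again
# via 0.5*g + 0.5 -> *255 (the shift save_images applies).
pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
scaled = pixels / 127.5 - 1.0
restored = (0.5 * scaled + 0.5) * 255
assert np.allclose(restored, pixels)
print(scaled)  # [-1.  0.  1.]
```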
# Citrus Leaves Classification Problem Using Adam Optimizer

## Team Salvator Brothers

## Assignment 4-5

**----------------------------------------------------------------------------------------------**

## Importing Libraries

```
# Imports
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.layers.experimental.preprocessing import Rescaling
```

## Training-Testing-Validation Dataset Splitting

The given training data is split into training and validation sets (3:1, i.e. 75:25), and the whole given validation folder is used as the test set, so the overall split is Tr:Te:Va = 3:1:1.

```
from keras.preprocessing.image import ImageDataGenerator as IDG
from sklearn.model_selection import train_test_split

idg_train = IDG(
    rescale=1./ 255,
    rotation_range=180,
    zoom_range=0.3,
    width_shift_range=0.3,
    height_shift_range=0.3,
    horizontal_flip=True,
    vertical_flip=True,
    validation_split=0.25)

idg_test = IDG(rescale=1./ 255)

ds_train=idg_train.flow_from_directory('../input/citrus-leaves-prepared/citrus_leaves_prepared/train',batch_size=32,shuffle=True,subset='training')
ds_valid=idg_train.flow_from_directory('../input/citrus-leaves-prepared/citrus_leaves_prepared/train',batch_size=8,shuffle=True,subset='validation')
ds_test=idg_test.flow_from_directory('../input/citrus-leaves-prepared/citrus_leaves_prepared/validation',batch_size=1,shuffle=True)
```

## Model Definition

```
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers as rg
from tensorflow.keras import initializers as it

model = tf.keras.Sequential([
    layers.Conv2D(16, (3,3), activation='relu', input_shape=(256, 256, 3),padding='same'),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(32, (3,3), activation='relu',padding='same'),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(64, (3,3), activation='relu',padding='same'),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(64,
                  (3,3), activation='relu',padding='same'),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(64, (3,3), activation='relu',padding='same'),
    layers.MaxPooling2D(2,2),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dense(4, activation='softmax')
])

model.summary()
```

## Model Training and Fitting

We use Adam as the optimizer and categorical cross-entropy as the loss.

Hyperparameters

* Learning Rate - 0.0001

```
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss='categorical_crossentropy',
    metrics=['accuracy','Precision','Recall']
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=35
)
```

## Plotting the Graphs for Loss, Accuracy, Precision, Recall

```
import pandas as pd

history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['accuracy', 'val_accuracy']].plot();
history_frame.loc[:, ['precision', 'val_precision']].plot();
history_frame.loc[:, ['recall', 'val_recall']].plot();
```

## Evaluating the Model using the Training Data

**Output**

* **loss:** 0.5202
* **accuracy:** 0.7849
* **precision:** 0.8049
* **recall:** 0.7374

```
model.evaluate(ds_train)
```
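The evaluation above reports precision and recall but no F1 score; for reference, the harmonic mean of the two reported values works out to:

```python
# F1 score implied by the precision/recall reported above.
precision, recall = 0.8049, 0.7374
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7697
```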
# 15 - Deep Learning using keras by [Alejandro Correa Bahnsen](albahnsen.com/) and [Jesus Solano](https://github.com/jesugome) version 1.6, June 2020 ## Part of the class [AdvancedMethodsDataAnalysisClass](https://github.com/albahnsen/AdvancedMethodsDataAnalysisClass/tree/master/notebooks) This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Valerio Maggio](https://mpba.fbk.eu), Fondazione Bruno Kessler <img src="./images/keras-logo-small.jpg" width="20%" /> ## Keras: Deep Learning library for Theano and TensorFlow >Keras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. >It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. ref: https://keras.io/ <a name="kaggle"></a> ### Boston Housing Data ``` from sklearn.datasets import load_boston boston_dataset = load_boston() print(boston_dataset.DESCR) ``` ##### For this section we will use the Boston Housing Data. # Single Layer Neural Network ## Data Preparation ``` import pandas as pd from sklearn.datasets import load_boston import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split boston_dataset = load_boston() boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names) X = boston.drop(boston.columns[-1],axis=1) Y = pd.DataFrame(np.array(boston_dataset.target), columns=['labels']) boston.head() # Split datasets. 
X_train, X_test , Y_train, Y_test = train_test_split(X,Y, test_size=0.3 ,random_state=22)

# Normalize Data
from sklearn.preprocessing import StandardScaler

# Define the preprocessing method and fit it to the training data only,
# so that no information from the test set leaks into the scaling.
scaler = StandardScaler()
scaler.fit(X_train)

# Make X_train the scaled version of the data.
# This scales the values in every feature column and replaces them with the new values.
X_train = pd.DataFrame(data=scaler.transform(X_train), columns=X_train.columns, index=X_train.index)
X_test = pd.DataFrame(data=scaler.transform(X_test), columns=X_test.columns, index=X_test.index)

X_train = np.array(X_train)
Y_train = np.array(Y_train)
X_test = np.array(X_test)
Y_test = np.array(Y_test)

# As it is a regression problem the output is a single neuron.
output_var = Y_train.shape[1]
print(output_var, ' output variables')
dims = X_train.shape[1]
print(dims, 'input variables')

Y_train.shape
```

---

# Using Keras

```
from keras.models import Sequential
from keras.layers import Dense, Activation
from livelossplot import PlotLossesKeras
from keras import backend as K

learning_rate = 0.01

K.clear_session()
print("Building model...")
print('Model variables: ', dims)

model = Sequential()
model.add(Dense(output_var, input_shape=(dims,)))

print(model.summary())

model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(X_train, Y_train, verbose=2,epochs=15)
```

### Be more specific with hyperparameters...

```
import keras.optimizers as opts

K.clear_session()
print("Building model...")
print('Model variables: ', dims)

model = Sequential()
model.add(Dense(output_var, input_shape=(dims,)))

op = opts.SGD(lr=learning_rate)

model.compile(loss = 'mean_squared_error', optimizer = op)
model.fit(X_train, Y_train, verbose=1, epochs=150, validation_data=[X_test,Y_test], callbacks=[PlotLossesKeras()])
```

Simplicity is pretty impressive, right? :)

Now let's understand:

<pre>The core data structure of Keras is a <b>model</b>, a way to organize layers.
The main type of model is the <b>Sequential</b> model, a linear stack of layers.</pre>

What we did here is stack a Fully Connected (<b>Dense</b>) layer of trainable weights from the input to the output and an <b>Activation</b> layer on top of the weights layer.

##### Dense

```python
from keras.layers.core import Dense

Dense(units, activation=None, use_bias=True,
      kernel_initializer='glorot_uniform', bias_initializer='zeros',
      kernel_regularizer=None, bias_regularizer=None,
      activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
```

* `units`: int > 0. Dimensionality of the output space.
* `activation`: name of activation function to use (see activations), or alternatively, an elementwise function. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).
* `use_bias`: whether to include a bias (i.e. make the layer affine rather than linear).
* `kernel_initializer`: initializer for the main weights matrix (see initializers).
* `bias_initializer`: initializer for the bias vector.
* `kernel_regularizer`: instance of WeightRegularizer (eg. L1 or L2 regularization), applied to the main weights matrix.
* `bias_regularizer`: instance of WeightRegularizer, applied to the bias.
* `activity_regularizer`: instance of ActivityRegularizer, applied to the network output.
* `kernel_constraint`: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
* `bias_constraint`: instance of the constraints module, applied to the bias.
## (some) others `keras.layers.core`

* `keras.layers.core.Flatten()`
* `keras.layers.core.Reshape(target_shape)`
* `keras.layers.core.Permute(dims)`

```python
model = Sequential()
model.add(Permute((2, 1), input_shape=(10, 64)))
# now: model.output_shape == (None, 64, 10)
# note: `None` is the batch dimension
```

* `keras.layers.core.Lambda(function, output_shape=None, arguments=None)`
* `keras.layers.core.ActivityRegularization(l1=0.0, l2=0.0)`

<img src="./images/dl_overview.png" >

Credits: Yam Peleg ([@Yampeleg](https://twitter.com/yampeleg))

##### Activation

```python
from keras.layers.core import Activation

Activation(activation)
```

**Supported Activations** : [https://keras.io/activations/]

**Advanced Activations**: [https://keras.io/layers/advanced-activations/]

##### Optimizer

If you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple, while allowing the user to be fully in control when they need to be (the ultimate control being the easy extensibility of the source code). Here we used <b>SGD</b> (stochastic gradient descent) as the optimization algorithm for our trainable weights.

<img src="http://sebastianruder.com/content/images/2016/09/saddle_point_evaluation_optimizers.gif" width="40%">

Source & Reference: http://sebastianruder.com/content/images/2016/09/saddle_point_evaluation_optimizers.gif

"Data Sciencing" this example a little bit more
=====

What we did here is nice; however, in the real world it is not usable because of overfitting. Let's try to solve it with a validation set.

##### Overfitting

In overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit has poor predictive performance, as it overreacts to minor fluctuations in the training data.
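Under the hood, the SGD optimizer configured above repeatedly applies the update w ← w − lr · ∂L/∂w; a toy illustration on a one-parameter loss L(w) = (w − 3)²:

```python
# Minimize L(w) = (w - 3)**2 by gradient descent; dL/dw = 2*(w - 3).
lr = 0.1
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad   # the SGD update rule
print(round(w, 6))  # 3.0
```

With lr = 0.1 the error shrinks by a factor of 0.8 per step, so 100 steps bring w within floating-point noise of the minimum; the real optimizer does exactly this, but with a gradient averaged over a mini-batch.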
<img src="https://raw.githubusercontent.com/leriomaggio/deep-learning-keras-tensorflow/master/imgs/overfitting.png">

<pre>To avoid overfitting, we will first split our data into a training set and a test set, and test our model on the test set.
Next: we will use two of Keras's callbacks, <b>EarlyStopping</b> and <b>ModelCheckpoint</b></pre>

---

Let's first see the model we implemented

```
model.summary()

from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping, ModelCheckpoint

X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.15, random_state=42)

fBestModel = 'best_model.h5'
early_stop = EarlyStopping(monitor='val_loss', patience=2, verbose=1)
best_model = ModelCheckpoint(fBestModel, verbose=0, save_best_only=True)

model.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=50,
          batch_size=128, verbose=True, callbacks=[best_model, early_stop])
```

# Multi-Layer Fully Connected Networks

<img src="./images/MLP.png" width="65%">

#### Forward and Backward Propagation

<img src="./images/backprop.png" width="80%">

**Q:** _How hard can it be to build a Multi-Layer Fully-Connected Network with keras?_

**A:** _It is basically the same, just add more layers!_

```
K.clear_session()
print("Building model...")

model = Sequential()
model.add(Dense(256, input_shape=(dims,),activation='relu'))
model.add(Dense(256,activation='relu'))
model.add(Dense(output_var))
model.add(Activation('relu'))
model.compile(optimizer='sgd', loss='mean_squared_error')
model.summary()

model.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=50, callbacks=[PlotLossesKeras()])
```

What does the behavior of the cost function over training in the above plot mean?

---

# Your Turn!

## Hands On - Keras Fully Connected

Take a couple of minutes and try to play with the number of layers and the number of parameters in the layers to get the best results.
```
K.clear_session()
print("Building model...")

model = Sequential()
model.add(Dense(256, input_shape=(dims,),activation='relu'))
# ...
# ...
# Play with it! Add as many layers as you want and try to get better results.
model.add(Dense(output_var))
model.add(Activation('relu'))
model.compile(optimizer='sgd', loss='mean_squared_error')
model.summary()

model.fit(X_train, Y_train, validation_data = (X_val, Y_val), epochs=50, callbacks=[PlotLossesKeras()])
```
# Reinforcement Learning in PyBullet (Keras)

```
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

import random, numpy, math, time
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import display, HTML

import pybullet as pb

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.optimizers import RMSprop
```

## Environment \reset \step \render

```
MAX_STEPS = 1000            # maximum number of simulation steps
STEPS_AFTER_TARGET = 30     # number of simulation steps after the target is reached
TARGET_DELTA = 0.2          # acceptable deviation around the target (absolute value)
FORCE_DELTA = 0.1           # force change per step (absolute value)

PB_BallMass = 1             # ball mass
PB_BallRadius = 0.2         # ball radius
PB_HEIGHT = 10              # maximum height the ball can be lifted to

MAX_FORCE = 20              # maximum vertical force applied to the ball
MIN_FORCE = 0               # minimum force applied to the ball
MAX_VEL = 14.2              # maximum vertical ball velocity
MIN_VEL = -14.2             # minimum vertical ball velocity

class Environment:
    def __init__(self):
        # current environment state
        self.pb_z = 0                   # current ball height
        self.pb_force = 0               # current force applied to the ball
        self.pb_velocity = 0            # current vertical velocity of the ball
        self.z_target = 0               # target height
        self.start_time = 0             # start time of a new game
        self.steps = 0                  # number of steps since the simulation started
        self.target_area = 0            # whether the target has been reached
        self.steps_after_target = 0     # number of steps after the target was reached

        # create the simulation
        self.pb_physicsClient = pb.connect(pb.DIRECT)

    def reset(self):
        # random ball height and random target height
        z_target = random.uniform(0.01, 0.99)
        self.z_target = PB_BallRadius + z_target*PB_HEIGHT
        z = random.uniform(0.05, 0.95)
        self.pb_z = PB_BallRadius + z*PB_HEIGHT

        # reset environment parameters
        pb.resetSimulation()
        self.target_area = 0
        self.start_time = time.time()
        self.steps = 0
        self.steps_after_target = 0

        # simulation step of 1/60 sec.
        pb.setTimeStep(1./60)

        # ground plane
        floorColShape = pb.createCollisionShape(pb.GEOM_PLANE)
        # for GEOM_PLANE the visualShape is not rendered, so a GEOM_BOX is used instead
        floorVisualShapeId = pb.createVisualShape(pb.GEOM_BOX,halfExtents=[100,100,0.0001], rgbaColor=[1,1,.98,1])
        self.pb_floorId = pb.createMultiBody(0,floorColShape,floorVisualShapeId, [0,0,0], [0,0,0,1])# (mass,collisionShape,visualShape)

        # ball
        ballPosition = [0,0,self.pb_z]
        ballOrientation=[0,0,0,1]
        ballColShape = pb.createCollisionShape(pb.GEOM_SPHERE,radius=PB_BallRadius)
        ballVisualShapeId = pb.createVisualShape(pb.GEOM_SPHERE,radius=PB_BallRadius, rgbaColor=[0.25, 0.75, 0.25,1])
        self.pb_ballId = pb.createMultiBody(PB_BallMass, ballColShape, ballVisualShapeId, ballPosition, ballOrientation) #(mass, collisionShape, visualShape, ballPosition, ballOrientation)
        #pb.changeVisualShape(self.pb_ballId,-1,rgbaColor=[1,0.27,0,1])

        # target marker (no CollisionShape, display only (VisualShape))
        targetPosition = [0,0,self.z_target]
        targetOrientation=[0,0,0,1]
        targetVisualShapeId = pb.createVisualShape(pb.GEOM_BOX,halfExtents=[1,0.025,0.025], rgbaColor=[0,0,0,1])
        self.pb_targetId = pb.createMultiBody(0,-1, targetVisualShapeId, targetPosition, targetOrientation)

        # gravity
        pb.setGravity(0,0,-10)

        # constrain the ball to move along the vertical axis only
        pb.createConstraint(self.pb_floorId, -1, self.pb_ballId, -1, pb.JOINT_PRISMATIC, [0,0,1], [0,0,0], [0,0,0])

        # set the force acting on the ball so that it compensates gravity
        self.pb_force = 10 * PB_BallMass
        pb.applyExternalForce(self.pb_ballId, -1, [0,0,self.pb_force], [0,0,0], pb.LINK_FRAME)

        # return values
        observation = self.getObservation()
        reward, done = self.getReward()
        info = self.getInfo()
        return [observation, reward, done, info]

    # Observations (returned normalized)
    def getObservation(self):
        # distance to the target
        d_target = 0.5 + (self.pb_z - self.z_target)/(2*PB_HEIGHT)
        # applied force
        force = (self.pb_force-MIN_FORCE)/(MAX_FORCE-MIN_FORCE)
        # current ball height
        z = (self.pb_z-PB_BallRadius)/PB_HEIGHT
        # current velocity
        z_velocity = (self.pb_velocity-MIN_VEL)/(MAX_VEL-MIN_VEL)

        state = [d_target, force, z_velocity]
        return state

    # compute the reward for an action
    def getReward(self):
        done = False
        z_reward = 0

        # Target reached; after that, wait STEPS_AFTER_TARGET steps and end the game.
        if (TARGET_DELTA >= math.fabs(self.z_target - self.pb_z)):
            self.target_area = 1
            z_reward = 1

        # Out of bounds
        if (self.pb_z > (PB_HEIGHT + PB_BallRadius) or self.pb_z < PB_BallRadius):
            done = True

        # End the game after the target has been reached
        if (self.target_area > 0):
            self.steps_after_target += 1
            if (self.steps_after_target>=STEPS_AFTER_TARGET):
                done = True

        # End the game on timeout
        if (self.steps >= MAX_STEPS):
            done = True

        return [z_reward, done]

    # Extra information for gathering statistics
    def getInfo(self):
        game_time = time.time() - self.start_time
        if game_time:
            fps = round(self.steps/game_time)
        return {'step': self.steps, 'fps': fps}

    # Run one simulation step according to the given action
    def step(self, action):
        self.steps += 1

        if action == 0:
            # 0 - decrease the applied force
            self.pb_force -= FORCE_DELTA
            if self.pb_force < MIN_FORCE:
                self.pb_force = MIN_FORCE
        else:
            # 1 - increase the applied force
            self.pb_force += FORCE_DELTA
            if self.pb_force > MAX_FORCE:
                self.pb_force = MAX_FORCE

        # apply the current force and run one simulation step
        pb.applyExternalForce(self.pb_ballId, -1, [0,0,self.pb_force], [0,0,0], pb.LINK_FRAME)
        pb.stepSimulation()

        # update the environment state (ball position and velocity)
        curPos, curOrient = pb.getBasePositionAndOrientation(self.pb_ballId)
        lin_vel, ang_vel= pb.getBaseVelocity(self.pb_ballId)
        self.pb_z = curPos[2]
        self.pb_velocity = lin_vel[2]

        # return the observation, reward, done flag, and extra info
        observation = self.getObservation()
        reward, done = self.getReward()
        info = self.getInfo()
        return [observation, reward,
        done, info]

    # Current camera image
    def render(self):
        camTargetPos = [0,0,5]      # camera target (focus) position
        camDistance = 10            # camera distance from the target
        yaw = 0                     # yaw angle relative to the target
        pitch = 0                   # camera pitch relative to the target
        roll=0                      # camera roll relative to the target
        upAxisIndex = 2             # camera up axis (z)
        fov = 60                    # camera field of view
        nearPlane = 0.01            # distance to the near clipping plane
        farPlane = 20               # distance to the far clipping plane
        pixelWidth = 320            # image width
        pixelHeight = 200           # image height
        aspect = pixelWidth/pixelHeight;    # image aspect ratio

        # view matrix
        viewMatrix = pb.computeViewMatrixFromYawPitchRoll(camTargetPos, camDistance, yaw, pitch, roll, upAxisIndex)
        # projection matrix
        projectionMatrix = pb.computeProjectionMatrixFOV(fov, aspect, nearPlane, farPlane);

        # render the camera image
        img_arr = pb.getCameraImage(pixelWidth, pixelHeight, viewMatrix, projectionMatrix, shadow=0, lightDirection=[0,1,1],renderer=pb.ER_TINY_RENDERER)
        w=img_arr[0]    #width of the image, in pixels
        h=img_arr[1]    #height of the image, in pixels
        rgb=img_arr[2]  #color data RGB
        dep=img_arr[3]  #depth data

        # return the rgb matrix
        return rgb
```

## Replay Memory for Training Samples

```
MEMORY_CAPACITY = 200000

class Memory:
    def __init__(self):
        self.samples = []   # stores tuples of the form ( s, a, r, s_ )

    def add(self, sample):
        self.samples.append(sample)
        if len(self.samples) > MEMORY_CAPACITY:
            self.samples.pop(0)

    def sample(self, n):
        n = min(n, len(self.samples))
        return random.sample(self.samples, n)
```

## Neural Network

```
LAYER_SIZE = 512    # layer size
STATE_CNT = 3       # number of input parameters (distance to target + applied force + velocity)
ACTION_CNT = 2      # number of outputs (value of decreasing and of increasing the force)

class Brain:
    def __init__(self):
        self.model = self._QNetwork()

    def _QNetwork(self):
        # Build the network using Keras
        model = Sequential()
        model.add(Dense(units=LAYER_SIZE, activation='relu',
input_dim=STATE_CNT)) model.add(Dense(units=LAYER_SIZE, activation='relu')) model.add(Dense(units=ACTION_CNT, activation='linear')) opt = RMSprop(lr=0.00025) model.compile(loss='mse', optimizer=opt) return model # обучение по одному пакету обучающих примеров def train(self, x, y, batch_size=32, epoch=1, verbose=0): self.model.fit(x, y, batch_size=batch_size, epochs=epoch, verbose=verbose) # предсказания сети по списку начальных состояний def predict(self, s): return self.model.predict(s) # предсказания сети по одном начальному состоянию def predictOne(self, s): s = numpy.array(s) predictions = self.predict(s.reshape(1, STATE_CNT)).flatten() return predictions ``` ## Агент ``` GAMMA = 0.98 # фактор дисконтирования MAX_EPSILON = 0.5 # максимальная вероятность выбора случайного действия MIN_EPSILON = 0.1 # минимальная вероятность выбора случайного действия LAMBDA = 0.001 # параметр определяющий скорость уменьшения вероятности выбора случайного действия BATCH_SIZE = 32 # размер обучающего пакета class Agent: def __init__(self): self.brain = Brain() # Нейронная сеть для обучения self.memory = Memory() # Хранилище обучающих примеров self.epsilon = MAX_EPSILON # Определяет вероятность выбора случайного действия # выбор действия def act(self, s): if random.random() < self.epsilon: return random.randint(0, ACTION_CNT - 1) # выбираем случайное действие else: return numpy.argmax(self.brain.predictOne(s)) # выбираем оптимальное действие # изменение состояния агента def observe(self, sample, game_num): # sample = (s, a, r, s_) self.memory.add(sample) self.epsilon = MIN_EPSILON + (MAX_EPSILON-MIN_EPSILON)*math.exp(-LAMBDA*game_num) # обучение по случайному пакету обучающих примеров (batch) def train(self): batch = self.memory.sample(BATCH_SIZE) batchLen = len(batch) if batchLen<BATCH_SIZE: # будем обучаться только если есть достаточное количество примеров в памяти return # начальные состояния из пакета states = numpy.array([ o[0] for o in batch ]) # начальные состояния из пакета 
states_ = numpy.array([ o[3] for o in batch ]) # выгоды для начальных состояний p = agent.brain.predict(states) # выгоды для конечных состояний p_ = agent.brain.predict(states_) # сформируем пустой обучающий пакет x = numpy.zeros((batchLen, STATE_CNT)) y = numpy.zeros((batchLen, ACTION_CNT)) # заполним пакет for i in range(batchLen): o = batch[i] s = o[0]; a = o[1]; r = o[2]; s_ = o[3] t = p[i] # выгоды действий для начального состояния # обновим выгоду только для совершенного действия, для неиспользованных действий выгоды останутся прежними t[a] = r + GAMMA * numpy.amax(p_[i]) # вычислим новую выгоду действия используя награду и максимальную выгоду конечного состояния # сохраним значения в batch x[i] = s y[i] = t # обучим сеть по данному пакету self.brain.train(x, y) ``` ## Статистика ``` class Stats(): def __init__(self): self.stats={"game_num": [],"rewards": [], "success_steps": [], "fps": [], "steps":[], "epsilon":[]} def save_stat(self, R, info, epsilon, game_num): self.stats["rewards"].append(R) self.stats["success_steps"].append(R/STEPS_AFTER_TARGET) self.stats["game_num"].append(game_num) self.stats["epsilon"].append(epsilon) self.stats["steps"].append(info["step"]) self.stats["fps"].append(info["fps"]) def show_stat(self): # отобраим процент удачных шагов за опыт plt.plot(self.stats["game_num"], self.stats["success_steps"], "b.") # отобразим сглаженный график x, y = self.fit_data(self.stats["game_num"], self.stats["success_steps"]) plt.plot(x, y, "r-") # второй вариант сглаживания # plt.plot(numpy.linspace(self.stats["game_num"][0], self.stats["game_num"][-1],50), numpy.average(numpy.array_split(self.stats["success_steps"][:-1], 50),1), "g-") plt.show() # Полиномиальное сглаживание def fit_data(self, x, y): z = numpy.polyfit(x, y, 3) f = numpy.poly1d(z) # новые данные размерностью 50 x_new = numpy.linspace(x[0], x[-1], 50) y_new = f(x_new) return [x_new, y_new] ``` ## MAIN ``` %matplotlib inline MAX_GAMES = 50000 # максимальное количество игр RENDER_PERIOD 
= 0 # период генерации видео с опытом (0 для отключения) env = Environment() agent = Agent() stats = Stats() for game_num in range(MAX_GAMES): print ("Game %d:" % game_num) render_imgs = [] observation, r, done, info = env.reset() s = observation R = r if RENDER_PERIOD and (game_num % RENDER_PERIOD == 0): plt.subplots() while True: # возьмем оптимальное действие на основе текущего состояния a = agent.act(s) # запустим шаг симуляции observation, r, done, info = env.step(a) s_ = observation # новое состояние # сохраним состояние агента agent.observe((s, a, r, s_), game_num) # обучим сеть по случайносу batch-у agent.train() s = s_ R += r # сохраним изображение, если необходимо if RENDER_PERIOD and game_num % RENDER_PERIOD == 0: rgb = env.render() render_imgs.append([plt.imshow(rgb, animated=True)]) if done: break #time.sleep(1./130) print("Total reward:", R, " FPS:", info['fps']) # сохраним статистику stats.save_stat(R, info, agent.epsilon, game_num) # сформируем анимацию игры и графики статистики обучения if len(render_imgs): render_start = time.time() # plt.rcParams['animation.ffmpeg_path'] = 'C:\FFmpeg\bin\ffmpeg.exe' ani = animation.ArtistAnimation(plt.gcf(), render_imgs, interval=10, blit=True,repeat_delay=1000) plt.close() anima = ani.to_html5_video() display(HTML(anima)) # статистика if game_num != 0: plt.subplots(figsize=(10,4)) stats.show_stat() plt.close() render_stop = time.time() print ("render time: %f sec.\n---\n" % (render_stop - render_start)) ```
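The Q-value targets that `Agent.train` builds row-by-row can also be computed in one vectorized step. A minimal sketch of the same update, without the Keras network — the function name and the toy numbers are illustrative, not from the notebook:

```python
import numpy as np

GAMMA = 0.98  # discount factor, as above

def q_targets(actions, rewards, q_now, q_next):
    # only the entry for the action actually taken is updated to
    # r + GAMMA * max_a' Q(s', a'); the other entries keep the network's
    # own predictions, so their training error is zero
    y = q_now.copy()
    y[np.arange(len(actions)), actions] = rewards + GAMMA * q_next.max(axis=1)
    return y

# toy batch: 2 transitions, 2 actions
q_now = np.array([[0.2, 0.5], [1.0, 0.0]])   # Q(s, .) predictions
q_next = np.array([[2.0, 1.0], [0.5, 3.0]])  # Q(s_, .) predictions
y = q_targets(np.array([1, 0]), np.array([1.0, -1.0]), q_now, q_next)

print(y[0])  # [0.2, 2.96]: 1.0 + 0.98 * 2.0 replaces the taken action's value
print(y[1])  # [1.94, 0.0]: -1.0 + 0.98 * 3.0 replaces the taken action's value
```

This is exactly the loop body `t[a] = r + GAMMA * numpy.amax(p_[i])` applied to the whole batch at once.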
# Getting info on the Priming experiment dataset that's needed for modeling

## Info:

* __Which gradient(s) to simulate?__
* For each gradient to simulate:
  * Infer the total richness of the starting community
  * Get the distribution of total OTU abundances per fraction
    * Number of sequences per sample
  * Infer the total abundance of each target taxon

# User variables

```
baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')

otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'

#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
```

# Init

```
import os
import glob

%load_ext rpy2.ipython

%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)

if not os.path.isdir(workDir):
    os.makedirs(workDir)
```

# Loading the OTU table (filtered to just bulk samples)

```
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>% select(ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]

%%R
tbl.h = tbl %>%
    gather('sample', 'count', 1:ncol(tbl)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
```

# Which gradient(s) to simulate?
```
%%R -w 900 -h 400
tbl.h.s = tbl.h %>%
    group_by(sample) %>%
    summarize(total_count = sum(count)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)

ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) +
    geom_point() +
    facet_grid(isotope ~ treatment) +
    theme(
        text = element_text(size=16)
    )

%%R
tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
```

#### Notes

Samples to simulate:

* Isotope:
  * 12C vs 13C
* Treatment:
  * 700
* Days:
  * 14
  * 28
  * 45

```
%%R
# bulk soil samples for the gradients to simulate
samples.to.use = c(
    "X12C.700.14.05.NA",
    "X12C.700.28.03.NA",
    "X12C.700.45.01.NA",
    "X13C.700.14.08.NA",
    "X13C.700.28.06.NA",
    "X13C.700.45.01.NA"
)
```

# Total richness of the starting (bulk-soil) community

Method:

* Total number of OTUs in the OTU table (i.e., gamma richness)
* Just looking at bulk soil samples

## Loading just bulk soil

```
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>% select(ends_with('.NA'))
tbl$OTUId = rownames(tbl)
tbl %>% ncol %>% print
tbl[1:4,1:4]

%%R
tbl.h = tbl %>%
    gather('sample', 'count', 1:(ncol(tbl)-1)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head

%%R -w 800
tbl.s = tbl.h %>%
    filter(count > 0) %>%
    group_by(sample, isotope, treatment, day, rep, fraction) %>%
    summarize(n_taxa = n())

ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) +
    geom_point() +
    facet_grid(isotope ~ treatment) +
    theme_bw() +
    theme(
        text = element_text(size=16),
        axis.text.x = element_blank()
    )

%%R -w 800 -h 350
# filter to just the target samples
tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use)

ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) +
    geom_bar(stat='identity') +
    facet_grid(. ~ isotope) +
    labs(y = 'Number of taxa') +
    theme_bw() +
    theme(
        text = element_text(size=16),
        axis.text.x = element_blank()
    )

%%R
message('Bulk soil total observed richness: ')
tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
```

### Number of taxa in all fractions corresponding to each bulk soil sample

* Trying to see the difference between the richness of bulk soil vs the gradients (veil line effect)

```
%%R -i otuTableFile
# loading the OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
    select(-ends_with('.NA'))

tbl.h = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head

%%R
# basename of the fraction samples
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))

for (n in names(fracs)){
    n.frac = length(fracs[[n]])
    cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}

%%R
# function for getting all OTUs in a sample
n.OTUs = function(samples, otu.long){
    otu.long.f = otu.long %>%
        filter(sample %in% samples, count > 0)
    n.OTUs = otu.long.f$OTUId %>% unique %>% length
    return(n.OTUs)
}

num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h)
num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame
colnames(num.OTUs) = c('n_taxa')
num.OTUs$sample = rownames(num.OTUs)
num.OTUs

%%R
tbl.s.f %>% as.data.frame

%%R
# joining with the bulk soil sample summary table
num.OTUs$data = 'fractions'
tbl.s.f$data = 'bulk_soil'
tbl.j = rbind(num.OTUs,
              tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>%
    mutate(isotope = gsub('X|\\..+', '', sample),
           sample = gsub('\\.[0-9]+\\.NA', '', sample))
tbl.j

%%R -h 300 -w 800
ggplot(tbl.j, aes(sample, n_taxa, fill=data)) +
    geom_bar(stat='identity', position='dodge') +
    facet_grid(. ~ isotope, scales='free_x') +
    labs(y = 'Number of OTUs') +
    theme(
        text = element_text(size=16)
        # axis.text.x = element_text(angle=90)
    )
```

# Distribution of total sequences per fraction

* Number of sequences per sample
* Using all samples to assess this one
  * Just fraction samples

__Method:__

* Total number of sequences (total abundance) per sample

### Loading the OTU table

```
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>% select(-ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]

%%R
tbl.h = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head

%%R -h 400
tbl.h.s = tbl.h %>%
    group_by(sample) %>%
    summarize(total_seqs = sum(count))

p = ggplot(tbl.h.s, aes(total_seqs)) +
    theme_bw() +
    theme(
        text = element_text(size=16)
    )
p1 = p + geom_histogram(binwidth=200)
p2 = p + geom_density()
grid.arrange(p1, p2, ncol=1)
```

### Distribution fitting

```
%%R -w 700 -h 350
plotdist(tbl.h.s$total_seqs)

%%R -w 450 -h 400
descdist(tbl.h.s$total_seqs, boot=1000)

%%R
f.n = fitdist(tbl.h.s$total_seqs, 'norm')
f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm')
f.ll = fitdist(tbl.h.s$total_seqs, 'logis')
#f.c = fitdist(tbl.s$count, 'cauchy')
f.list = list(f.n, f.ln, f.ll)
plot.legend = c('normal', 'log-normal', 'logistic')

par(mfrow = c(2,1))
denscomp(f.list, legendtext=plot.legend)
qqcomp(f.list, legendtext=plot.legend)

%%R
gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend)

%%R
summary(f.ln)
```

#### Notes:

* Best fit: log-normal
  * mean = 10.113
  * sd = 1.192

## Does sample size correlate to buoyant density?
### Loading the OTU table

```
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>%
    select(-ends_with('.NA')) %>%
    select(-starts_with('X0MC'))

tbl = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    mutate(sample = gsub('^X', '', sample))
tbl %>% head

%%R
# summarize
tbl.s = tbl %>%
    group_by(sample) %>%
    summarize(total_count = sum(count))
tbl.s %>% head(n=3)
```

### Loading metadata

```
%%R -i metaDataFile
tbl.meta = read.delim(metaDataFile, sep='\t')
tbl.meta %>% head(n=3)
```

### Determining association

```
%%R -w 700
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))

ggplot(tbl.j, aes(Density, total_count, color=rep)) +
    geom_point() +
    facet_grid(Treatment ~ Day)

%%R -w 600 -h 350
ggplot(tbl.j, aes(Density, total_count)) +
    geom_point(aes(color=Treatment)) +
    geom_smooth(method='lm') +
    labs(x='Buoyant density', y='Total sequences') +
    theme_bw() +
    theme(
        text = element_text(size=16)
    )
```

## Number of taxa along the gradient

```
%%R
tbl.s = tbl %>%
    filter(count > 0) %>%
    group_by(sample) %>%
    summarize(n_taxa = sum(count > 0))
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
tbl.j %>% head(n=3)

%%R -w 900 -h 600
ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) +
    #geom_area(stat='identity', alpha=0.5, position='dodge') +
    geom_point() +
    geom_line() +
    labs(x='Buoyant density', y='Number of taxa') +
    facet_grid(Treatment ~ Day) +
    theme_bw() +
    theme(
        text = element_text(size=16),
        legend.position = 'none'
    )
```

#### Notes:

* Many taxa out at the tails of the gradient.
* It seems that the DNA fragments were quite diffuse in the gradients.

# Total abundance of each target taxon: bulk soil approach

* Getting relative abundances from the bulk soil samples
* This has the caveat of likely undersampling richness vs using all gradient fraction samples.
  * i.e., the veil line effect

```
%%R -i otuTableFile
# loading the OTU table
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>% select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]

%%R
# long table format, selecting the samples of interest
tbl.h = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
    filter(sample %in% samples.to.use, count > 0)
tbl.h %>% head

%%R
message('Number of samples: ', tbl.h$sample %>% unique %>% length)
message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length)

%%R
tbl.hs = tbl.h %>%
    group_by(OTUId) %>%
    summarize(
        total_count = sum(count),
        mean_count = mean(count),
        median_count = median(count),
        sd_count = sd(count)
    ) %>%
    filter(total_count > 0)
tbl.hs %>% head
```

### For each sample, writing a table of OTU_ID and count

```
%%R -i workDir
setwd(workDir)

samps = tbl.h$sample %>% unique %>% as.vector

for(samp in samps){
    outFile = paste(c(samp, 'OTU.txt'), collapse='_')
    tbl.p = tbl.h %>% filter(sample == samp, count > 0)
    write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
    message('Table written: ', outFile)
    message('  Number of OTUs: ', tbl.p %>% nrow)
}
```

# Making directories for the simulations

```
p = os.path.join(workDir, '*_OTU.txt')
files = glob.glob(p)
baseDir = os.path.split(workDir)[0]

# note: str.rstrip() strips a *set* of characters, which could eat trailing
# letters of a sample name; strip the suffix explicitly instead
newDirs = [os.path.split(x)[1].replace('.NA_OTU.txt', '') for x in files]
newDirs = [os.path.join(baseDir, x) for x in newDirs]

for newDir, f in zip(newDirs, files):
    if not os.path.isdir(newDir):
        print('Making new directory: {}'.format(newDir))
        os.makedirs(newDir)
    else:
        print('Directory exists: {}'.format(newDir))
    # symlinking the file
    linkPath = os.path.join(newDir, os.path.split(f)[1])
    if not os.path.islink(linkPath):
        os.symlink(f, linkPath)
```

# Rank-abundance distribution for each sample

```
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>% select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]

%%R
# long table format, selecting the samples of interest
tbl.h = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
    filter(sample %in% samples.to.use, count > 0)
tbl.h %>% head

%%R
# ranks of relative abundances
tbl.r = tbl.h %>%
    group_by(sample) %>%
    mutate(perc_rel_abund = count / sum(count) * 100,
           rank = row_number(-perc_rel_abund)) %>%
    unite(day_rep, day, rep, sep='-')
tbl.r %>% as.data.frame %>% head(n=3)

%%R -w 900 -h 350
ggplot(tbl.r, aes(rank, perc_rel_abund)) +
    geom_point() +
#    labs(x='Buoyant density', y='Number of taxa') +
    facet_wrap(~ day_rep) +
    theme_bw() +
    theme(
        text = element_text(size=16),
        legend.position = 'none'
    )
```

## Taxon abundance range for each sample-fraction

```
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')

# filter
tbl = tbl %>%
    select(-ends_with('.NA')) %>%
    select(-starts_with('X0MC'))

tbl = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    mutate(sample = gsub('^X', '', sample))
tbl %>% head

%%R
tbl.ar = tbl %>%
    #mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>%
    #mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>%
    group_by(sample) %>%
    mutate(rel_abund = count / sum(count)) %>%
    summarize(abund_range = max(rel_abund) - min(rel_abund)) %>%
    ungroup() %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.ar %>% head(n=3)

%%R -w 800
tbl.ar = tbl.ar %>% mutate(fraction = as.numeric(fraction))

ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) +
    geom_point() +
    geom_line() +
    labs(x='Buoyant density', y='Relative abundance range') +
    facet_grid(treatment ~ day) +
    theme_bw() +
    theme(
        text = element_text(size=16),
        legend.position = 'none'
    )
```

# Total abundance of each target taxon: all fraction samples approach

* Getting relative abundances from all fraction samples for the gradient
* I will need to calculate (mean|max?) relative abundances for each taxon and then re-scale so that the cumulative sum = 1

```
%%R -i otuTableFile
# loading the OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
    select(-ends_with('.NA'))

tbl.h = tbl %>%
    gather('sample', 'count', 2:ncol(tbl)) %>%
    separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head

%%R
# basename of the fraction samples
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))

for (n in names(fracs)){
    n.frac = length(fracs[[n]])
    cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}

%%R
# function for getting the mean OTU abundance across all fractions
OTU.abund = function(samples, otu.long){
    otu.rel.abund = otu.long %>%
        filter(sample %in% samples, count > 0) %>%
        ungroup() %>%
        group_by(sample) %>%
        mutate(total_count = sum(count)) %>%
        ungroup() %>%
        mutate(perc_abund = count / total_count * 100) %>%
        group_by(OTUId) %>%
        summarize(mean_perc_abund = mean(perc_abund),
                  median_perc_abund = median(perc_abund),
                  max_perc_abund = max(perc_abund))
    return(otu.rel.abund)
}

## calling the function
otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h)
otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame
otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund))
otu.rel.abund %>% head

%%R -h 600 -w 900
# plotting
otu.rel.abund.l = otu.rel.abund %>%
    gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund)
otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value)

ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) +
    geom_point(shape='O', alpha=0.7) +
    scale_y_log10() +
    facet_grid(abund_stat ~ sample) +
    theme_bw() +
    theme(
        text = element_text(size=16),
        axis.text.x = element_blank(),
        legend.position = 'none'
    )
```

### For each sample, writing a table of OTU_ID and count

```
%%R -i workDir
setwd(workDir)

# each sample is a file
samps = otu.rel.abund.l$sample %>% unique %>% as.vector

for(samp in samps){
    outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_')
    tbl.p = otu.rel.abund %>% filter(sample == samp, mean_perc_abund > 0)
    write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
    cat('Table written: ', outFile, '\n')
    cat('  Number of OTUs: ', tbl.p %>% nrow, '\n')
}
```

# A broader taxon distribution with increased abundance?

* Overloading molecules at one spot in the gradient, leading to more diffusion?
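The re-scaling step mentioned in the "all fraction samples" section — per-taxon mean relative abundances normalized so that their cumulative sum is 1 — amounts to dividing each mean by the pooled total. A minimal Python sketch; the OTU names and abundance values below are made up for illustration:

```python
# hypothetical per-taxon mean percent abundances pooled across fractions;
# after averaging over fractions they no longer sum to 100%
mean_perc_abund = {'OTU.1': 12.0, 'OTU.2': 6.0, 'OTU.3': 2.0}

# divide by the pooled total so the values form a proper composition
total = sum(mean_perc_abund.values())
rel_abund = {otu: v / total for otu, v in mean_perc_abund.items()}

print(rel_abund['OTU.1'])  # 0.6
# the rescaled abundances now sum to 1 (up to floating-point error)
```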