# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# ### Imputation Methods and Resources
#
# One of the most common methods for working with missing values is imputation. Imputation means substituting a value for each value that was originally missing.
#
# It is very common to impute in the following ways:
# 1. Impute the **mean** of a column.<br><br>
#
# 2. If you are working with categorical data or a variable with outliers, then use the **mode** of the column.<br><br>
#
# 3. Impute 0, a very small number, or a very large number to differentiate missing values from other values.<br><br>
#
# 4. Use k-nearest neighbors (KNN) to impute values based on the features that are most similar.<br><br>
#
# In general, you should be careful with missing data: understand the real-world implications of the missingness and the reasons the values are missing. At the same time, these solutions are quick and enable you to get models off the ground. You can then iterate on your feature engineering as time permits.
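The first three strategies above can be sketched with pandas `fillna` on a toy column (the data here is made up for illustration, not the notebook's `df`):

```python
import numpy as np
import pandas as pd

# Toy numeric column with missing entries (illustrative data only)
col = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])

mean_filled = col.fillna(col.mean())     # 1. impute the mean
mode_filled = col.fillna(col.mode()[0])  # 2. impute the mode (ties resolve to the smallest value)
flag_filled = col.fillna(-999)           # 3. impute a sentinel value to flag "was missing"
# 4. KNN-based imputation is available separately via sklearn.impute.KNNImputer
```

Each variant returns a new Series; the original column keeps its NaN values unless you assign the result back.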
#
# Let's take a look at how some of them work. Chris' content is again very helpful for many of these items - and you can access it [here](https://chrisalbon.com/). He uses the [sklearn.preprocessing library](http://scikit-learn.org/stable/modules/preprocessing.html). There are also a ton of ways to fill in missing values directly using pandas, which can be found [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html)
#
# Create the dataset you will be using for this notebook using the code below.
#
# +
import pandas as pd
import numpy as np
import ImputationMethods as t
df = pd.DataFrame({'A': [np.nan, 2, np.nan, 0, 7, 10, 15],
                   'B': [3, 4, 5, 1, 2, 3, 5],
                   'C': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
                   'D': [np.nan, True, np.nan, False, True, False, np.nan],
                   'E': ['Yes', 'No', 'Maybe', np.nan, np.nan, 'Yes', np.nan]})
df
# -
# #### Question 1
#
# **1.** Use the dictionary below to label the columns as the appropriate data type.
# +
a = 'categorical'
b = 'quantitative'
c = 'we cannot tell'
d = 'boolean - can treat either way'
question1_solution = {'Column A is': b,
                      'Column B is': b,
                      'Column C is': c,
                      'Column D is': d,
                      'Column E is': a
                     }
# Check your answer
t.var_test(question1_solution)
# -
# #### Question 2
#
# **2.** Are there any columns or rows that you feel comfortable dropping in this dataframe?
# +
a = "Yes"
b = "No"
should_we_drop = a
#Check your answer
t.can_we_drop(should_we_drop)
# -
# Use this cell to drop any columns or rows you feel comfortable dropping based on the above
new_df = df.drop('C', axis=1)
# #### Question 3
#
# **3.** Using **new_df**, I wrote a lambda function that you can use to impute the mean for the columns of your dataframe using the **apply** method. Use as many cells as you need to correctly fill in the dictionary **impute_q3** to answer a few questions about your findings.
# +
fill_mean = lambda col: col.fillna(col.mean())
try:
    new_df.apply(fill_mean, axis=0)
except Exception:
    # column E holds strings, so taking its mean raises an error
    print('That broke...')
# +
# Check what you need to answer the questions below
# -
new_df[['A','B','D']].apply(fill_mean, axis=0)
# +
a = "fills with the mean, but that doesn't actually make sense in this case."
b = "gives an error."
c = "is no problem - it fills the NaN values with the mean as expected."
impute_q3 = {'Filling column A': c,
             'Filling column D': a,
             'Filling column E': b
            }
#Check your answer
t.impute_q3_check(impute_q3)
# -
# #### Question 4
#
# **4.** Given the results above, it might make more sense to fill some columns with the mode. Write your own function to fill a column with the mode value, and use it on the two columns that might benefit from this type of imputation. Use the dictionary **impute_q4** to answer some questions about your findings.
# +
#Similar to the above, write a function and apply it to compute the mode for each column
#If you get stuck, here is a helpful resource https://stackoverflow.com/questions/42789324/pandas-fillna-mode
# -
new_df
new_df.apply(lambda col: col.fillna(col.mode()[0]), axis=0)
# +
a = "Did not impute the mode."
b = "Imputes the mode."
impute_q4 = {'Filling column A': a,
             'Filling column D': a,
             'Filling column E': b
            }
#Check your answer
t.impute_q4_check(impute_q4)
# -
# You saw two of the most common ways to impute values in this notebook, and hopefully, you realized that even these methods have complications. Again, these methods can be a great first step to get your models off the ground, but there are potentially detrimental aspects to the bias introduced into your models using these methods.
# Source: lessons/CRISP_DM/07 Imputation Methods and Resources -.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import json
with open('resultados_definitivos/DepthTree.json') as file:
output = json.load(file)
threads = output['views'][0]['distrib']  # note (translated from Spanish): use 'distrib'
threads
sizes = []
maxThreads = []
for line in threads:
    if line['max thread'] != 0:
        sizes.append(line['size'])
        maxThreads.append(line['max thread'])

# Overview: size vs. max thread across all islands
fig = plt.figure()
sns.regplot(x=sizes, y=maxThreads, color='darkorange')
plt.xlabel('Size')
plt.ylabel('Max thread')
plt.xlim([0, 915])
plt.ylim([0, 915])
plt.title('Distribution of size and max thread per island')
fig.savefig('resultados_definitivos/correcciones/DepthTree.jpg')

# Zoomed-in views over successive size ranges, each saved as its own figure
views = [
    ([0, 50], [0, 50], 'From zero to fifty', 'DepthTree1.jpg'),
    ([51, 100], [0, 100], 'From fifty-one to one hundred', 'DepthTree2.jpg'),
    ([101, 200], [0, 150], 'From one hundred one to two hundred', 'DepthTree3.jpg'),
    ([201, 400], [0, 175], 'From two hundred one to four hundred', 'DepthTree4.jpg'),
    ([401, 600], [0, 175], 'From four hundred one to six hundred', 'DepthTree5.jpg'),
    ([601, 915], [0, 175], 'From six hundred one to nine hundred fifteen', 'DepthTree6.jpg'),
]
for xlim, ylim, title, fname in views:
    f = plt.figure()
    sns.regplot(x=sizes, y=maxThreads, x_estimator=np.mean, color='darkorange')
    plt.xlim(xlim)
    plt.ylim(ylim)
    plt.xlabel('Size')
    plt.ylabel('Max thread')
    plt.title(title)
    f.savefig('resultados_definitivos/DepthTree/' + fname)
# Source: DepthTree.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (Lab21)
# language: python
# name: lab21
# ---
# # RUSBOOST Classifier With ELMO
#
# **ROC-AUC:** 0.86849 <br>
# **F1-score:** 0.16953
import pandas as pd
import numpy as np
from imblearn.ensemble import RUSBoostClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
df = pd.read_csv('data/train.csv')
label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
df['none'] = 1-df[label_cols].max(axis=1)
x = np.loadtxt('data/toxic_elmo_matrix.out', delimiter=',')
y = df.iloc[:, 2:8]
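The `none` column computed above flags comments with no positive label; the logic can be checked on a tiny made-up frame (toy data, not the competition set):

```python
import pandas as pd

# Toy version of the 'none' indicator: 1 when no label column is positive
toy = pd.DataFrame({'toxic': [1, 0, 0], 'insult': [0, 1, 0]})
toy['none'] = 1 - toy[['toxic', 'insult']].max(axis=1)
```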
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size= 0.2, random_state=13)
# +
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(OneVsRestClassifier(RUSBoostClassifier()))
param_grid = {
    'onevsrestclassifier__estimator__algorithm': ['SAMME', 'SAMME.R'],
    'onevsrestclassifier__estimator__sampling_strategy': ['majority', 'not minority', 'not majority'],
    'onevsrestclassifier__estimator__n_estimators': [10, 50, 100, 250],
    'onevsrestclassifier__estimator__learning_rate': [0.25, 0.5, 0.75, 1]
}
grid = GridSearchCV(pipe, param_grid, cv=3, scoring='roc_auc', verbose=10, n_jobs=-2)
grid.fit(X_train, y_train)
# -
grid.best_params_
grid.score(X_test, y_test)
# +
from sklearn.metrics import f1_score, recall_score
y_pred = grid.predict(X_test)
# -
f1_score(y_test, y_pred, average = 'micro')
# Source: model notebooks/RUSBoost/RUSBoost_ELMO.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="EDpkz0GlRCpL"
# [](http://rpi.analyticsdojo.com)
# <center><h1>Introduction to Spark</h1></center>
# <center><h3><a href = 'http://rpi.analyticsdojo.com'>rpi.analyticsdojo.com</a></h3></center>
# + [markdown] id="RvDUNVYbRCpL"
# # Introduction to Spark
# Adapted from work by <NAME>:
# https://github.com/phelps-sg/python-bigdata
# This work is licensed under the Creative Commons Attribution 4.0 International license agreement.
#
# + [markdown] id="UXW_YfsuRCpL"
# ### Reference
# - [Spark Documentation](http://spark.apache.org/docs/latest/)
# - [Spark Programming Guide](http://spark.apache.org/docs/latest/programming-guide.html)
# - [DataBricks Login](https://community.cloud.databricks.com)
# - [Pyspark](https://github.com/jupyter/docker-stacks)
# To install pyspark
#
# ```
# # !pip install pyspark
# ```
# + id="RdF5pwrdRIPU" executionInfo={"status": "ok", "timestamp": 1607026957537, "user_tz": 300, "elapsed": 37803, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="20bbcd8b-4e89-46ac-d015-a6b4acc5c565" colab={"base_uri": "https://localhost:8080/"}
# !pip install pyspark
# + [markdown] id="2HomtYO-RCpL"
#
# ### Overview
# - History
# - Data Structures
# - Using Apache Spark with Python
#
# + [markdown] id="qICPQY4RRCpL"
# ## History
#
# - Apache Spark was first released in 2014.
#
# - It was originally developed by [<NAME>](http://people.csail.mit.edu/matei) as a class project, and later a PhD dissertation, at University of California, Berkeley.
#
# - In contrast to Hadoop, Apache Spark:
#
# - is easy to install and configure.
# - provides a much more natural *iterative* workflow
#
# + [markdown] id="FrVmm-gIRCpL"
# ## Resilient Distributed Datasets (RDD)
#
# - The fundamental abstraction of Apache Spark is a read-only, parallel, distributed, fault-tolerant collection called a resilient distributed dataset (RDD).
#
# - When working with Apache Spark we iteratively apply functions to every element of these collections in parallel to produce *new* RDDs.
#
# - For the most part, you can think/use RDDs like distributed dataframes.
#
# + [markdown] id="eDpPSboSRCpL"
# ## Resilient Distributed Datasets (RDD)
#
# - Properties of resilient distributed datasets (RDDs):
# - The data is distributed across nodes in a cluster of computers.
# - No data is lost if a single node fails.
# - Data is typically stored in HBase tables, or HDFS files.
# - The `map` and `reduce` functions can work in *parallel* across
# different keys, or different elements of the collection.
#
# - The underlying framework (e.g. Hadoop or Apache Spark) allocates data and processing to different nodes, without any intervention from the programmer.
# + [markdown] id="bso2HU5aRCpL"
# ## Word Count Example
#
# - In this simple example, the input is a set of URLs, each record is a document. <br> <br> <br>
#
# - **Problem: Compute how many times each word has occurred across the data set.**
# + [markdown] id="QxXj1N-_RCpL"
# ## Word Count: Map
#
#
# The input to $\operatorname{map}$ is a mapping:
# - Key: URL
# - Value: Contents of document <br>
# $\left< document1, to \; be \; or \; not \; to \; be \right>$
#
#
# - In this example, our $\operatorname{map}$ function will process a given URL and produce a mapping:
# - So our original data-set will be transformed to:
#
# $\left< to, 1 \right>$
# $\left< be, 1 \right>$
# $\left< or, 1 \right>$
# $\left< not, 1 \right>$
# $\left< to, 1 \right>$
# $\left< be, 1 \right>$
# + [markdown] id="GdfvZyLPRCpL"
# ## Word Count: Reduce
#
#
# - The reduce operation groups values according to their key, and then performs a reduce on each key.
#
# - The collections are partitioned across different storage units, so each partition is reduced where it is stored.
#
# - Map-Reduce will fold the data in such a way that it minimises data-copying across the cluster.
#
# - Data in different partitions are reduced separately in parallel.
#
# - The final result is a reduce of the reduced data in each partition.
#
# - Therefore it is very important that our operator *is both commutative and associative*.
#
# - In our case the function is the `+` operator
#
# $\left< be, 2 \right>$
# $\left< not, 1 \right>$
# $\left< or, 1 \right>$
# $\left< to, 2 \right>$
#
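The map and reduce-by-key stages above can be mimicked in plain Python (a sketch of the logic only, not Spark itself):

```python
from collections import defaultdict

words = "to be or not to be".split()

# map: each word becomes a (word, 1) pair
pairs = [(w, 1) for w in words]

# reduce by key: group values by word and fold them together with +
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n
```

Because `+` is commutative and associative, per-partition partial counts can be merged in any order and still give the same result.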
# + [markdown] id="pHF5txhqRCpL"
# ## Map-Reduce on a Cluster of Computers
#
# - The code we have written so far will *not* allow us to exploit parallelism from multiple computers in a [cluster](https://en.wikipedia.org/wiki/Computer_cluster).
#
# - Developing such a framework would be a very large software engineering project.
#
# - There are existing frameworks we can use:
# - [Apache Hadoop](https://hadoop.apache.org/)
# - [Apache Spark](https://spark.apache.org/)
#
# - This notebook covers Apache Spark.
# + [markdown] id="JLXN1NN6RCpL"
# ## Apache Spark
#
# - Apache Spark provides an object-oriented library for processing data on the cluster.
#
# - It provides objects which represent resilient distributed datasets (RDDs).
#
# - RDDs behave a bit like Python collections (e.g. lists).
#
# - However:
# - the underlying data is distributed across the nodes in the cluster, and
# - the collections are *immutable*.
# + [markdown] id="mYkLJa_jRCpL"
# ## Apache Spark and Map-Reduce
#
# - We process the data by using higher-order functions to map RDDs onto *new* RDDs.
#
# - Each instance of an RDD has at least two *methods* corresponding to the Map-Reduce workflow:
# - `map`
# - `reduceByKey`
#
# - These methods work in the same way as the corresponding functions we defined earlier to work with the standard Python collections.
#
# - There are also additional RDD methods in the Apache Spark API including ones for SQL.
#
# + [markdown] id="-m5AJ5ETRCpL"
# ## Word-count in Apache Spark
#
#
# + id="Y8rHLG8aRCpL" executionInfo={"status": "ok", "timestamp": 1607026982912, "user_tz": 300, "elapsed": 351, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="53326e98-9d49-42d3-95a4-06462875239d" colab={"base_uri": "https://localhost:8080/"}
words = "to be or not to be".split()
words
# + [markdown] id="Qfy0x3Y7RCpP"
# ### The `SparkContext` class
#
# - When working with Apache Spark we invoke methods on an object which is an instance of the `pyspark.context.SparkContext` class.
#
# - Typically, (such as when running on DataBricks) an instance of this object will be created automatically for you and assigned to the variable `sc`.
#
# - The `parallelize` method in `SparkContext` can be used to turn any ordinary Python collection into an RDD;
# + id="khysYuCMRCpP" executionInfo={"status": "ok", "timestamp": 1607027001208, "user_tz": 300, "elapsed": 6422, "user": {"displayName": "", "photoUrl": "", "userId": ""}}
#Don't Execute this on Databricks
#To be used if executing via docker
import pyspark
sc = pyspark.SparkContext('local[*]')
# + id="BAM8BEEtRCpP" executionInfo={"status": "ok", "timestamp": 1607027008594, "user_tz": 300, "elapsed": 336, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="0b9be4fe-306c-45cc-e077-a76b1a8d1cde" colab={"base_uri": "https://localhost:8080/"}
words_rdd = sc.parallelize(words)
words_rdd
# + [markdown] id="SJiwXwu1RCpP"
# ### Mapping an RDD
#
# - Now when we invoke the `map` or `reduceByKey` methods on `words_rdd` we can set up a parallel processing computation across the cluster.
# + id="Djk9lXXnRCpP" executionInfo={"status": "ok", "timestamp": 1607027008818, "user_tz": 300, "elapsed": 554, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="8ca4a870-de26-407f-c319-46dc4116f29b" colab={"base_uri": "https://localhost:8080/"}
word_tuples_rdd = words_rdd.map(lambda x: (x, 1))
word_tuples_rdd
# + [markdown] id="gX9bZgvTRCpP"
# ### Collecting the RDD
# - Notice that we do not have a result yet.
#
# - The computation is not performed until we request the final result to be *collected*.
#
# - We do this by invoking the `collect()` method.
#
# - Be careful with the `collect` method, as all data you are collecting must fit in memory.
#
# - The `take` method is similar to `collect`, but only returns the first $n$ elements.
#
# + id="43wnW8mKRCpP" executionInfo={"status": "ok", "timestamp": 1607027010367, "user_tz": 300, "elapsed": 2099, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="39ae4aab-3898-4661-886c-2d1d72dbddba" colab={"base_uri": "https://localhost:8080/"}
word_tuples_rdd.collect()
# + id="1oNwaBVgRCpP" executionInfo={"status": "ok", "timestamp": 1607027010750, "user_tz": 300, "elapsed": 2477, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="acde1875-79e8-4343-dc79-51d00c0ff2ff" colab={"base_uri": "https://localhost:8080/"}
word_tuples_rdd.take(4)
# + [markdown] id="NnBPbTIgRCpP"
# ### Reducing an RDD
#
# - However, we require additional processing to reduce the data using the word key.
# + id="Vf9SnfzMRCpP" executionInfo={"status": "ok", "timestamp": 1607027010751, "user_tz": 300, "elapsed": 2474, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="1d6aa030-f811-4146-84b3-538e8f2e3ffc" colab={"base_uri": "https://localhost:8080/"}
word_counts_rdd = word_tuples_rdd.reduceByKey(lambda x, y: x + y)
word_counts_rdd
# + [markdown] id="F11Je6MORCpP"
# - Now we request the final result:
# + id="w6SpERTpRCpP" executionInfo={"status": "ok", "timestamp": 1607027011901, "user_tz": 300, "elapsed": 3620, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="fae3ae6e-663d-4b30-9ed0-2b31f15a1198" colab={"base_uri": "https://localhost:8080/"}
word_counts = word_counts_rdd.collect()
word_counts
# + [markdown] id="63pGYWbdRCpQ"
# ### Lazy evaluation
#
# - It is only when we invoke `collect()` that the processing is performed on the cluster.
#
# - Invoking `collect()` will cause both the `map` and `reduceByKey` operations to be performed.
#
# - If the resulting collection is very large then this can be an expensive operation.
#
# + id="K-Q8v9EXRCpQ" executionInfo={"status": "ok", "timestamp": 1607027012067, "user_tz": 300, "elapsed": 3782, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="f010988a-823e-4ee0-8ff3-c04bf4d36aba" colab={"base_uri": "https://localhost:8080/"}
word_counts_rdd.take(2)
# + [markdown] id="T6oHciE-RCpQ"
# ### Connecting MapReduce in Single Command
# - Can string together `map` and `reduce` commands.
# - Not executed until it is collected.
# + id="XLCZsMwERCpQ" executionInfo={"status": "ok", "timestamp": 1607027012411, "user_tz": 300, "elapsed": 4120, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="348a5bb6-226c-42f3-c10d-bb856d3b46be" colab={"base_uri": "https://localhost:8080/"}
text = "to be or not to be".split()
rdd = sc.parallelize(text)
counts = rdd.map(lambda word: (word, 1)).reduceByKey(lambda x, y: x + y)
counts.collect()
# + [markdown] id="7kw0M1P2RCpQ"
# ## Additional RDD transformations
#
# - Apache Spark offers many more methods for operating on collections of tuples over and above the standard Map-Reduce framework:
#
# - Sorting: `sortByKey`, `sortBy`, `takeOrdered`
# - Mapping: `flatMap`
# - Filtering: `filter`
# - Counting: `count`
# - Set-theoretic: `intersection`, `union`
# - Many others: [see the Transformations section of the programming guide](https://spark.apache.org/docs/latest/programming-guide.html#transformations)
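As a rough mental model only (plain Python lists, not distributed RDDs, so the semantics differ at scale), these transformations correspond to familiar operations:

```python
# Rough plain-Python analogues of the RDD transformations listed above
data = [3, 1, 2, 1]

sorted_data = sorted(data)                      # sortBy
flat = [x for xs in [[1, 2], [3]] for x in xs]  # flatMap
evens = [x for x in data if x % 2 == 0]         # filter
n = len(data)                                   # count
common = set(data) & {1, 5}                     # intersection
merged = set(data) | {1, 5}                     # union
```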
#
# + [markdown] id="INvfPnBtRCpQ"
# ## Creating an RDD from a text file
#
# - In the previous example, we created an RDD from a Python collection.
#
# - This is *not* typically how we would work with big data.
#
# - More commonly we would create an RDD corresponding to data in an
# HBase table, or an HDFS file.
#
# - The following example creates an RDD from a text file on the native filesystem (ext4);
# - With bigger data, you would use an HDFS file, but the principle is the same.
#
# - Each element of the RDD corresponds to a single *line* of text.
# + id="CmJ8ysFjRCpQ" executionInfo={"status": "ok", "timestamp": 1607027012952, "user_tz": 300, "elapsed": 4658, "user": {"displayName": "", "photoUrl": "", "userId": ""}}
genome = sc.textFile('../input/iris.csv')
# + [markdown] id="z4EMdAynRCpQ"
# ## Calculating $\pi$ using Spark
#
# - We can estimate an approximate value for $\pi$ using the following Monte-Carlo method:
#
#
# 1. Inscribe a circle in a square
# 2. Randomly generate points in the square
# 3. Determine the number of points in the square that are also in the circle
# 4. Let $r$ be the number of points in the circle divided by the number of points in the square, then $\pi \approx 4 r$.
#
# - Note that the more points generated, the better the approximation
#
# See [this tutorial](https://computing.llnl.gov/tutorials/parallel_comp/#ExamplesPI).
# + id="FZ64dWB8RCpQ" executionInfo={"status": "ok", "timestamp": 1607027014823, "user_tz": 300, "elapsed": 6525, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="0806cef3-c7f1-4ab3-cf59-8ff341f667b0" colab={"base_uri": "https://localhost:8080/"}
import numpy as np
def sample(_):
    # draw a random point (x, y) uniformly in the unit square; the argument is unused
    x, y = np.random.random(), np.random.random()
    # the point lies inside the inscribed quarter-circle when x^2 + y^2 < 1
    return 1 if x*x + y*y < 1 else 0

NUM_SAMPLES = 1000000
count = sc.parallelize(range(0, NUM_SAMPLES)).map(sample) \
          .reduce(lambda a, b: a + b)
# r is the fraction of points inside the circle; pi is approximately 4 * r
r = float(count) / float(NUM_SAMPLES)
r
print("Pi is approximately %f" % (4.0 * r))
# + id="BxTUS3shRCpQ" executionInfo={"status": "ok", "timestamp": 1607027014823, "user_tz": 300, "elapsed": 6522, "user": {"displayName": "", "photoUrl": "", "userId": ""}}
# Source: site/_build/html/_sources/notebooks/10-big-data/02-intro-spark.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ttsTF
# language: python
# name: ttstf
# ---
# EDIT: path to training log folder
config_path = '~/logs/TransformerTTS/standard_my_session'
project_path = '~/workspace/TransformerTTS/'
# +
import sys
sys.path.append(project_path)
from utils.config_manager import ConfigManager
from utils.audio import reconstruct_waveform
import IPython.display as ipd
# -
config_loader = ConfigManager(config_path)
model = config_loader.load_model()
sentence = 'Scientists at the CERN laboratory, say they have discovered a new particle.'
out = model.predict(sentence)
# Convert spectrogram to wav (with griffin lim)
wav = reconstruct_waveform(out['mel'].numpy().T, config=config_loader.config)
ipd.display(ipd.Audio(wav, rate=config_loader.config['sampling_rate']))
# Export for WaveRNN
import numpy as np
from pathlib import Path
WaveRNN_path = Path('~/workspace/WaveRNN/').expanduser()  # expand ~ so np.save gets a real path
np.save(WaveRNN_path / 'scientists.npy', (out['mel'].numpy().T+4.)/8.)
# Source: notebooks/Prediction for WaveRNN.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] _execution_state="busy" _uuid="7235180c9df4e07d958acbf9273a1095d4846a38" _cell_guid="48f8772c-34cb-456b-946e-7172e63b0b58" id="fpFZtH6PIEo6" colab_type="text"
# ## US Adult Census data relating income to social factors such as Age, Education, race etc.
#
# ## Prepared by: <NAME>
#
# ### 27-02-2019
#
# The US Adult Income dataset was extracted by <NAME> from the 1994 US Census Database. The data set consists of anonymous information such as occupation, age, native country, race, capital gain, capital loss, education, work class and more. Each row is labelled as either having a salary greater than ">50K" or "<=50K".
#
# The goal here is to train a binary classifier on the training dataset to predict the column income_bracket which has two possible values ">50K" and "<=50K" and evaluate the accuracy of the classifier with the test dataset.
#
# Note that the dataset is made up of **categorical and continuous features.** It also contains missing values. The **categorical columns are: workclass, education, marital_status, occupation, relationship, race, gender, native_country**
#
# The continuous columns are: age, education_num, capital_gain, capital_loss, hours_per_week
#
# This Dataset was obtained from the UCI repository, it can be found on [link](http://mlr.cs.umass.edu/ml/machine-learning-databases/adult/)
#
# USAGE This dataset is well suited to developing and testing wide linear classifiers, deep neural network classifiers and a combination of both. For more info on Combined Deep and Wide Model classifiers, refer to the [Research Paper by Google](https://arxiv.org/abs/1606.07792)
#
# Refer to this kernel for sample usage : https://www.kaggle.com/johnolafenwa/wage-prediction
#
# + [markdown] id="ZpqLWEc7wCB_" colab_type="text"
# ## Loading Packages
# + id="l4bjBQhOIkbU" colab_type="code" outputId="38b08294-689f-4258-bebc-bab6953c3bcd" colab={"base_uri": "https://localhost:8080/", "height": 224}
# access kaggle datasets
# !pip install kaggle
# + _execution_state="idle" _uuid="064594d03d51cd31aca828fd8093137bcfe0b5ec" _cell_guid="728beaab-e129-4274-92a9-6aef745452d0" id="BwZUfx4sIEo8" colab_type="code" colab={}
# for data manipulation
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
import re
# + [markdown] id="8EIPHmnGwGXi" colab_type="text"
# ## Import Dataset
# + id="i-7AQdIlIl_B" colab_type="code" outputId="9c05e909-7318-43d0-ecb2-69497974c63c" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY> "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 71}
from google.colab import files
uploaded = files.upload()
# + id="Nh6VKSPdIqi1" colab_type="code" outputId="d49d4e81-b464-4b7d-a369-87607aab5d91" colab={"base_uri": "https://localhost:8080/", "height": 51}
# !ls
# + id="P6br6NITIrUl" colab_type="code" outputId="751b740f-2f9e-4223-8233-98c47cdfc2aa" colab={"base_uri": "https://localhost:8080/", "height": 68}
# !unzip us-census-data.zip
# + id="qWHWyg1pPjMe" colab_type="code" colab={}
# define the column names before using them to read the CSVs
columns = ['Age','Workclass','fnlgwt','Education','Education Num','Marital Status',
           'Occupation','Relationship','Race','Sex','Capital Gain','Capital Loss',
           'Hours/Week','Country','Above/Below 50K']
train = pd.read_csv('adult-training.csv', names = columns)
test = pd.read_csv('adult-test.csv', names = columns)
# + [markdown] id="ZSddSZtBg9CS" colab_type="text"
# ## Exploratory Data Analysis
# + id="Vr4m--USMP3_" colab_type="code" outputId="2aa04518-a17d-4b71-aae6-352e116c92ec" colab={"base_uri": "https://localhost:8080/", "height": 377}
# to see features
train.head()
# + _uuid="00e4e3b6476dabc905f2335e42d53fb7c1dd45c1" _cell_guid="8fc706d1-fe59-4f58-8cad-38e066882da7" id="4rvC6VLfIEpD" colab_type="code" colab={}
# defining function for estimating missing values in each columns
def missing_value(df):
    miss = []
    col_list = df.columns
    for i in col_list:
        missing = df[i].isnull().sum()
        miss.append(missing)
    list_of_missing = pd.DataFrame(list(zip(col_list, miss)))
    return list_of_missing
# + _uuid="4f64f169d5fd1e55c75125f2f8715b763d8e640a" id="rJaZtXB5IEpF" colab_type="code" outputId="59146119-9ba2-4eed-fd9d-37207aea759c" colab={"base_uri": "https://localhost:8080/", "height": 514}
missing_value(test)
missing_value(train)
# + id="6H0I3EtHSMtb" colab_type="code" outputId="2ed9a54b-6670-46ce-a8e4-81da3103dc3f" colab={"base_uri": "https://localhost:8080/", "height": 136}
# categorical feature
train['Relationship'].value_counts()
# + _execution_state="idle" _uuid="4003d845cb63bf97a6740a393a5866df11f2c84f" _cell_guid="04ea8eb3-d685-4d68-bc0d-95baa939cdfa" id="vvCd3vSXIEpL" colab_type="code" outputId="df9854e6-09df-4333-aa61-e153ac017568" colab={"base_uri": "https://localhost:8080/", "height": 289}
# categorical feature
test['Occupation'].value_counts()
# + [markdown] _execution_state="idle" _uuid="da278a3f5a16aec85c69edbe4e812dd3aafa701c" _cell_guid="01becc35-bd02-4c55-98ae-7e06ecb21c9f" id="tiByVbIzIEpO" colab_type="text"
# ### Data Cleaning
# + id="GzWFlBhsheAJ" colab_type="code" outputId="aa21ca45-bc69-4187-ee18-617fceb94dcf" colab={"base_uri": "https://localhost:8080/", "height": 377}
train.head()
# + _execution_state="idle" _uuid="18798000dd3a63c234733dfaec1eeeb3800ad3fb" _cell_guid="0e6c8ded-0141-4c93-8950-904b96159e47" id="8brvOXfHIEpP" colab_type="code" outputId="bea60a3a-f08c-4882-d7c5-1ffc04990b2c" colab={"base_uri": "https://localhost:8080/", "height": 51}
# drop the first row of each dataframe, since it is erroneous
train = train.drop(train.index[0])
test = test.drop(test.index[0])
print(train.shape)
print(test.shape)
# + _execution_state="idle" _uuid="b17e405b6a106b2dfa624c6240a6e87fb88c9b07" _cell_guid="6f6d353b-6a64-4305-9f78-512739f111ff" id="jEoLkc9eIEpR" colab_type="code" colab={}
# split the columns into separate lists of string (categorical) and numerical names
all_data = [train, test]
str_list = []
for data in all_data:
    for colname, colvalue in data.items():
        if type(colvalue[1]) == str:
            str_list.append(colname)
num_list = data.columns.difference(str_list)
# + _execution_state="idle" _uuid="11e318a3f8a981d546a0b174e840bac380b4b078" _cell_guid="22580197-d853-4b72-8bfd-3c50e9d1ddf2" id="vvmojQw-IEpX" colab_type="code" outputId="9710da45-6b16-4274-dde0-3fcaa46eafdc" colab={"base_uri": "https://localhost:8080/", "height": 289}
# no null values its good
print(test.isnull().sum())
# + [markdown] _execution_state="idle" _uuid="45f713e9b29a7801ad26240b73463aa2009a1272" _cell_guid="fb6d8371-b483-425c-92b7-cdd2e268c424" id="SW3M3f9-IEpa" colab_type="text"
# Looking at the unique values of the categorical columns, the special character ' ?'
# appears in place of null values. We will replace it with NaN and then drop the rows
# containing NaN. Missing values are concentrated in the Workclass and Occupation columns.
# + _execution_state="idle" _uuid="691f9138893529e334ca30e53d816b5a0a954bbe" _cell_guid="653e148c-cd3c-45d5-a7e1-dc6475c723c5" id="bhRwpwmLIEpb" colab_type="code" colab={}
# replace the special character ' ?' with NaN, then drop rows containing NaN
for data in all_data:
    for i in data.columns:
        data[i].replace(' ?', np.nan, inplace = True)
    data.dropna(inplace = True)
# + _uuid="e3ce1f14328835e256ba76b807e162e352eb0cbe" _cell_guid="d7319094-7774-4edc-8b08-2d5459c4ccc1" id="0Ilc4fjvIEpf" colab_type="code" outputId="924fe12f-437f-4c46-d1c1-b9c743dbe347" colab={"base_uri": "https://localhost:8080/", "height": 289}
test.isnull().sum()
# + [markdown] _uuid="db33030de9b83d16b40252ece310f3887ec1b6e7" _cell_guid="73025aa6-1039-4a81-a139-8c2efbc856a2" id="vZhTRrU4IEpi" colab_type="text"
# ### Feature Engineering
# + [markdown] _execution_state="idle" _uuid="53a06421a812798164c3620d4f0bc8ede18cf133" _cell_guid="27fe48b7-0ad0-4b90-8b2f-86975a15ba13" id="woZHyOPoIEpj" colab_type="text"
# Creating the target variable
# + _execution_state="idle" _uuid="69dac1f1a744e47bbd7d6fca6e6a0a9195f99d3b" _cell_guid="2b3cd09d-0206-4c4b-8a15-0fc88a5d29dc" id="aAIdNhTMIEpj" colab_type="code" colab={}
# defining the target variable
for data in all_data:
    data['target'] = data['Above/Below 50K'].apply(lambda x: x.replace('.', ''))
    data['target'] = data['target'].apply(lambda x: x.strip())
    data['target'] = data['target'].apply(lambda x: 1 if x == '>50K' else 0)
    data.drop(['Above/Below 50K'], axis=1, inplace=True)
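# The same cleanup chain, sketched on three hypothetical raw labels (in this dataset the test-set labels carry a trailing '.'):

```python
import pandas as pd

# Hypothetical raw labels: leading spaces, and a trailing '.' on test rows
raw = pd.Series([' <=50K', ' >50K.', ' >50K'])

# Strip the '.', strip whitespace, then map '>50K' to 1 and everything else to 0
target = (raw.str.replace('.', '', regex=False)
             .str.strip()
             .map(lambda x: 1 if x == '>50K' else 0))
```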
# + _execution_state="idle" _uuid="c26f8c02048037a5732fd963e2e5255750ba6583" _cell_guid="a4239239-159c-4478-b3a2-8fc66c221805" id="SLGd90AOIEpm" colab_type="code" outputId="993b9690-b527-4145-f4dc-cec12decbe19" colab={"base_uri": "https://localhost:8080/", "height": 34}
train.target.sum()/len(train)
# + [markdown] _uuid="e4d879d2d3319c36507b9eb0ecd54998574d26f2" _cell_guid="3b33bc9e-4f09-41a6-a3a5-339ad5ddc41c" id="921OwN_3IEpo" colab_type="text"
# Education and number of work hours per week look to be strong variables for predicting income. Let's create categories to further enhance their effect. We will create low, medium and high education groups. I will write a generic function for creating bins.
# + _execution_state="idle" _uuid="3f466543865982bb9517b12f6a54b21c6e3c5703" _cell_guid="b48debae-a79f-4975-9b8e-a9c8aa976408" id="XdYZF0pZIEpq" colab_type="code" colab={}
# data: the train or test dataframe
# var: the variable name, passed as a string
# bins: a list of numeric bin edges, e.g. [0, 6, 10, 11]
# group_names: a list of labels for the resulting bins
def bin_var(data, var, bins, group_names):
    # cut the passed dataframe's column (the original hard-coded train here, a bug)
    data[var + 'Cat'] = pd.cut(data[var], bins, labels=group_names)
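# A quick check of how `pd.cut` treats the bin edges (toy values; intervals are open on the left and closed on the right):

```python
import pandas as pd

# 3 falls in (0, 6] -> Low, 9 in (6, 11] -> Medium, 13 in (11, 16] -> High
years = pd.Series([3, 9, 13])
cats = pd.cut(years, [0, 6, 11, 16], labels=['Low', 'Medium', 'High'])
```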
# + [markdown] _uuid="0416f19a5dfe088d5fcc4b4a56ea539c981f4f6b" _cell_guid="b29bb91d-c404-4504-a5d4-2bff882e14a7" id="9wmBfVOeIEpt" colab_type="text"
# Education can be divided into three groups: 0-6 low, 7-11 medium, and above that high
# + _execution_state="idle" _uuid="fb83d23a9a208984efd19bda65fbc9fcc84e06d1" _cell_guid="3e03a85e-4e82-4860-95d4-17c8ee8f663b" id="EM3U-2BeIEpv" colab_type="code" colab={}
bin_var(train, 'Education Num', [0,6,11,16], ['Low', 'Medium', 'High'])
bin_var(test, 'Education Num', [0,6,11,16], ['Low', 'Medium', 'High'])
# + _execution_state="idle" _uuid="23dfe22381e9a2d4d921145d7dff9f3e5160ac65" _cell_guid="e24ab0a4-8f3b-4a80-9526-c65803a5fb1c" id="kyS20NshIEp0" colab_type="code" outputId="96399c72-321a-437b-eb59-8de3599cee4b" colab={"base_uri": "https://localhost:8080/", "height": 173}
pd.crosstab(train['Education NumCat'], train['target'])
# + [markdown] _uuid="5980ac3d92101597d758d054037a1ee75e9f78ab" _cell_guid="371b4873-d583-4f11-a7ce-b8083569e4e9" id="xHTXWNraIEp6" colab_type="text"
# The same way we can bin the Hours/Week variable. Initial exploration suggests that 40 hours is the most frequent value, which makes sense as it is 8 hr/day. Hence we will bin this variable around this value.
# + _execution_state="idle" _uuid="0522ff4e4ef9afe81ae0b712b56e699a8d6d544b" _cell_guid="69f373cb-9afd-4e94-860f-efed9ebcfb16" id="hhxmURTUIEp7" colab_type="code" colab={}
bin_var(train, 'Hours/Week', [0,35,40,60,100], ['Low', 'Medium', 'High','VeryHigh'])
bin_var(test, 'Hours/Week', [0,35,40,60,100], ['Low', 'Medium', 'High','VeryHigh'])
# + _execution_state="idle" _uuid="fc1aee522fbae7d32dff4827c30c1d00d3b53acf" _cell_guid="4b6080a1-30ac-4ff1-a415-9fe0bc2edfd0" id="_W6AlvDgIEp-" colab_type="code" outputId="ffce308c-6354-4065-c29f-e7609a720794" colab={"base_uri": "https://localhost:8080/", "height": 235}
pd.crosstab(train['Hours/WeekCat'], train['target'], margins = True)
# + [markdown] _uuid="5670a6f74cf03c538ac03a6965145ff5741c1f3f" _cell_guid="2017360d-9eed-4f73-a901-4e1a07b09510" id="VTzCtQuRIEqB" colab_type="text"
# Classifying the occupations into highly skilled and low skilled
# + _uuid="548a7da88827e78f653ed8674d2a3045adb0a48f" _cell_guid="3bfee7ef-b00e-47b7-95cc-1c833e99ed01" id="PpAiZRi2IEqC" colab_type="code" colab={}
occu = pd.crosstab(train['Occupation'], train['target'], margins = True).reset_index()
# + _uuid="4a338bcad5ca2cd318adab52cd53e69816e64afa" _cell_guid="381b18da-83f8-4391-adf4-7e52f7babf27" id="dx2ea9RMIEqI" colab_type="code" colab={}
import re  # needed for the re.search calls below

def occup(x):
    if re.search('managerial', x):
        return 'Highskill'
    elif re.search('specialty', x):
        return 'Highskill'
    else:
        return 'Lowskill'
# + _uuid="06246ac6c4fa77ba807fcfc7bb0db0944d6a303b" _cell_guid="94ac1958-4906-4fea-a29c-8db64f2cd3f6" id="KJwdwL4KIEqM" colab_type="code" colab={}
train['Occupa_cat'] = train.Occupation.apply(lambda x: x.strip()).apply(lambda x: occup(x))
test['Occupa_cat'] = test.Occupation.apply(lambda x: x.strip()).apply(lambda x: occup(x))
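# A hedged toy version of the same regex rule (the two branches can also be collapsed into one condition):

```python
import re

def skill(title):
    # substring search anywhere in the job title, as occup() does above
    if re.search('managerial', title) or re.search('specialty', title):
        return 'Highskill'
    return 'Lowskill'
```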
# + _uuid="d034a805595ae095daa38bce46ab90e246b3ce15" _cell_guid="2e6e8f40-0806-4481-bc00-1304163509ad" id="fusqSMv5IEqO" colab_type="code" outputId="c55d886a-4b22-429f-e8c7-8fc2a9e15ed4" colab={"base_uri": "https://localhost:8080/", "height": 68}
train['Occupa_cat'].value_counts()
# + [markdown] _uuid="4ed9aca78cfaec49e312d60add06ccbb0dcaf3c6" _cell_guid="21ce2839-8247-408e-aa4d-66d25d9164d3" id="20PLYVQbIEqQ" colab_type="text"
# In the same way we can bin the Age variable. The minimum age in train is 17 and the max is 90. We can categorize them as young, middle-aged and old.
# + _execution_state="idle" _uuid="f922988c9a19dbc2728789bc4e804f012fd30b0b" _cell_guid="54f005cc-a2c4-4543-ab05-680c3a0d3c2b" id="JIctFNxfIEqS" colab_type="code" colab={}
bin_var(test, 'Age', [17,30,55,100], ['Young', 'Middle_aged', 'Old'])
# + _execution_state="idle" _uuid="26a707c2af2447d8d54c57ab1c846351c997608c" _cell_guid="9d8b3776-61f7-432e-8640-3fa5ed0c151f" id="0DYjdtVvIEqU" colab_type="code" colab={}
bin_var(train, 'Age', [17,30,55,100], ['Young', 'Middle_aged', 'Old'])
# + [markdown] _uuid="0661ce0f8bb4c2640bfd9caad8b0f2ded8f6f4d4" _cell_guid="67b956cc-a256-45f8-8d2b-2abf9733e783" id="Co7YNNHAIEqW" colab_type="text"
# Marital status can also be binned
# + _execution_state="idle" _uuid="e9269a0074e656f5ff0aeb32d7921cc2e917fecb" _cell_guid="e4799804-ddea-42ee-b2fa-e9c63def77ce" id="D3U_4jV8IEqY" colab_type="code" colab={}
train['Marital Status_cat']=train['Marital Status'].apply(lambda x: 'married' if x.startswith('Married',1) else 'Single')
test['Marital Status_cat']=test['Marital Status'].apply(lambda x: 'married' if x.startswith('Married',1) else 'Single')
# + [markdown] _uuid="c6573db03ab289c578f9aba96f1add1e5ab8458d" _cell_guid="28c31d81-ca67-42f6-b46f-1fc28008b647" id="pLiJY2f4IEqd" colab_type="text"
# Race has been binned into White and others
# + _uuid="76537904b5c5c94ecf899952ea2860b3dcfa291c" _cell_guid="92d83b90-02b6-4266-b278-3f6c626c9287" id="Yf0UCrCLIEqd" colab_type="code" outputId="bf82864a-d722-4950-d748-b7792055e93b" colab={"base_uri": "https://localhost:8080/", "height": 266}
pd.crosstab(train['Race'], train['target'], margins = True)
# + _uuid="abce68a699f1cb4639b60d368bbcf644280997b7" _cell_guid="5cc005f9-ecc5-432b-98de-afb5ee4780fc" id="U-M-c7IzIEqh" colab_type="code" colab={}
train['Race_cat'] = train['Race'].apply(lambda x: x.strip())
train['Race_cat'] = train['Race_cat'].apply(lambda x: 'White' if x == 'White' else 'Other')
test['Race_cat'] = test['Race'].apply(lambda x: x.strip())
test['Race_cat'] = test['Race_cat'].apply(lambda x: 'White' if x == 'White' else 'Other')
# + [markdown] _uuid="a398748cf711816adca3f2c4f30100243ceb9e47" _cell_guid="73ae3f02-1c49-41d5-9d73-8ec649261f7e" id="AqoWzE_hIEql" colab_type="text"
# Work Class is divided into four categories: Private, Self-employed, gov and others
# + _uuid="3b92b5a8de348bb7c6a3a3d17314f819d1843b56" _cell_guid="fc57ecd6-3114-4770-8f68-735103dbe725" id="fXZCdvdOIEqm" colab_type="code" outputId="6e73c8fe-432c-42f9-cf83-71262efbbcd7" colab={"base_uri": "https://localhost:8080/", "height": 153}
train['Workclass'].value_counts()
# + _uuid="d5f31941207f681348d0eea583370461da4458c3" _cell_guid="59b45ea3-eb1c-40c6-af05-955b86a0b9d3" id="3KKUVAI3IEqr" colab_type="code" colab={}
import re  # needed for the re.search calls below

def workclas(x):
    if re.search('Private', x):
        return 'Private'
    elif re.search('Self', x):
        return 'selfempl'
    elif re.search('gov', x):
        return 'gov'
    else:
        return 'others'
# + _uuid="e533d1a67d96535a99cef087a889de70c889294d" _cell_guid="8abae262-efdf-4026-8269-4a68f3624ee3" id="89lQC_1CIEqw" colab_type="code" colab={}
train['WorfClass_cat'] = train.Workclass.apply(lambda x: x.strip()).apply(lambda x: workclas(x))
test['WorfClass_cat'] = test.Workclass.apply(lambda x: x.strip()).apply(lambda x: workclas(x))
# + _uuid="d76b56ef9ee15fc005ff2dbba8b7a4b8a03358f4" _cell_guid="cac99a07-b93d-4963-b2df-01fea345ae2a" id="QS8I9EVtIEqz" colab_type="code" outputId="470243a1-b5f2-4ce0-f4dd-054ea7d531f0" colab={"base_uri": "https://localhost:8080/", "height": 102}
train['WorfClass_cat'].value_counts()
# + [markdown] _execution_state="idle" _uuid="bbf80a112427fa34fb6e39ff7659cba81276d591" _cell_guid="c9f5c406-d2c8-4941-ab93-50722e8b32c8" id="oqNQXb-gIEq1" colab_type="text"
# * Assigning the target to the Y variable
# + _execution_state="idle" _uuid="2c231624f5e1bb2601bcf4197f760049a54fea53" _cell_guid="423f7734-839d-4440-b310-605a5dc19f56" id="DAl5MgbRIEq2" colab_type="code" colab={}
# assigning the target to Y variable
Y_tr = train['target']
Y_te = test['target']
# + _execution_state="idle" _uuid="541335780045d44489cc47d3d1dd249771cedc6d" _cell_guid="8a865048-5007-41c2-a5a4-854493c3d281" id="smEQFduTIEq4" colab_type="code" colab={}
# since the target is already assigned, I will drop it from train and test along with other unnecessary variables
train.drop(['Education','Occupation','Race','Education Num','Age', 'Hours/Week', 'Marital Status','target','fnlgwt','Workclass', 'Capital Gain','Capital Loss', 'Country'], axis = 1, inplace = True)
test.drop(['Education','Occupation','Race','Education Num','Age', 'Hours/Week', 'Marital Status','Workclass','target','fnlgwt', 'Capital Gain','Capital Loss', 'Country'], axis = 1, inplace = True)
# + [markdown] _execution_state="idle" _uuid="6587af079bda665eec8b795eaef946f998a37286" _cell_guid="a5a34ec6-8995-47be-99de-6e3d30820e7a" id="tcDnOIoBIEq7" colab_type="text"
# I will now create dummies for the categorical variables
# + _execution_state="idle" _uuid="4926d7dc152ce9ee2cac743da3be6de2f3bd218f" _cell_guid="bdf9f602-7ea2-4397-9966-a0e504630f28" id="oZtHT6kwIErA" colab_type="code" colab={}
str_list = ['WorfClass_cat','Education NumCat', 'AgeCat', 'Race_cat',
'Hours/WeekCat',
'Marital Status_cat',
'Occupa_cat',
'Relationship',
'Sex']
train_set = pd.get_dummies(train, columns=str_list)
test_set = pd.get_dummies(test, columns=str_list)
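# A minimal sketch of what `get_dummies` does to one categorical column (toy data, not the census frames):

```python
import pandas as pd

# One categorical column and one numeric column
demo = pd.DataFrame({'AgeCat': ['Young', 'Old'], 'Hours': [40, 20]})

# The categorical column is replaced by one indicator column per level
dummies = pd.get_dummies(demo, columns=['AgeCat'])
```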
# + [markdown] _uuid="4a3396de9625cf3944cbb4677d3c415796897490" _cell_guid="3f489ca6-bf06-448f-8ef7-322608b3f27d" id="7cRYCUV-IErF" colab_type="text"
# ### Feature Selection Using Variance Threshold
# + [markdown] _execution_state="idle" _uuid="07e406e5741344c24abbc17a74ccc44d0db312c3" _cell_guid="8290030e-a1df-4ff7-9dc9-854de848370d" id="wHLNfDBuIErG" colab_type="text"
# Variance Threshold is a univariate approach to feature selection. It removes all features whose variance doesn't meet some threshold. By default, it removes all zero-variance features, i.e. features that have the same value in all samples.
# As an example, suppose that we have a dataset with boolean features, and we want to remove all features that are either one or zero (on or off) in more than 80% of the samples. Boolean features are Bernoulli random variables, and the variance of such a variable is given by Var[X] = p(1 - p).
# The approach below therefore removes every variable in which more than 80% of the values are either 0 or 1, using the threshold 0.8 * (1 - 0.8) = 0.16.
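# The Bernoulli arithmetic can be checked directly with numpy (toy columns, not the real dummies):

```python
import numpy as np

col_a = np.array([1] * 9 + [0])      # p = 0.9 -> variance 0.9 * 0.1 = 0.09
col_b = np.array([1] * 5 + [0] * 5)  # p = 0.5 -> variance 0.5 * 0.5 = 0.25
X = np.column_stack([col_a, col_b])

thresh = 0.8 * (1 - 0.8)             # 0.16
keep = X.var(axis=0) > thresh        # only the balanced column survives
```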
# + _execution_state="idle" _uuid="e363c25ef23de68ad918887d6800158bbf4d608a" _cell_guid="6581304c-1f07-4505-a4d7-4ad9cdf2c6b3" id="O6-XALzbIErG" colab_type="code" outputId="40808c07-68f6-4902-ab79-508d64c512a7" colab={"base_uri": "https://localhost:8080/", "height": 221}
train_set.columns
# + _execution_state="idle" _uuid="ebd295d304b1a7314b879f8e1e562284fd7fe164" _cell_guid="49e531a6-ea78-4924-a513-57413bfb92b4" id="R8ty7gqQIErL" colab_type="code" colab={}
from sklearn.feature_selection import VarianceThreshold
def variance_threshold_select(df, thresh=0.0, na_replacement=-999):
    # work on a deep copy of the dataframe
    df1 = df.copy(deep=True)
    # build the selector with the chosen threshold
    selector = VarianceThreshold(thresh)
    # fill NA values, as VarianceThreshold cannot handle them
    selector.fit(df1.fillna(na_replacement))
    # keep only the columns whose variance exceeds the threshold
    df2 = df.loc[:, selector.get_support(indices=False)]
    return df2
# + _execution_state="idle" _uuid="3404a03229de1e00649b13e23f4ac5ab6ddd7c3a" _cell_guid="529e03bd-ed82-4aff-93a3-68f2a08149e5" id="Xd5cZ6xZIErQ" colab_type="code" colab={}
df2 = variance_threshold_select(train_set, thresh = .8* (1 - .8))
# + _uuid="434f767d3bcc4411e2cf05ea83cbcfe5db241660" _cell_guid="02777058-c5e9-4c4a-8725-b6fc2a043d91" id="eFMLON8zIErT" colab_type="code" outputId="7a01e606-4fc9-4c1f-92f2-464e857fe0ef" colab={"base_uri": "https://localhost:8080/", "height": 136}
print(df2.columns)
# + [markdown] _uuid="f5e98f2c09aa50660e1bdc311a601156d067d5b1" _cell_guid="c84c5ba0-e94d-4325-9009-52aa0eb9f756" id="cp-FCHsUIErV" colab_type="text"
# As you can see below, the number of columns has been reduced to 15 because of the variance threshold. The removed columns have the same value in more than 80% of the observations.
# + _execution_state="idle" _uuid="efaf8def22c83ce48a969481d228011d76f6ad30" _cell_guid="d1d33c1d-14a0-4620-8b50-27f54a488411" id="id2IT8N-IErW" colab_type="code" colab={}
# creates list of columns
col_tr = df2.columns
# creates list of columns for test
col_te = test_set.columns
# creates array of values of features
X_tr=df2.values
# subsetting the test dataset to get the same variables as train
X_te = test_set[col_tr].values
# + _execution_state="idle" _uuid="dc87167864072661b64ff0cbcf1c1a5e465c1153" _cell_guid="4c33e61c-3b88-4d76-9941-b64ea9d1ccac" id="fij_ZYyQIEra" colab_type="code" outputId="0d273124-da9a-4c70-cf48-ad3613eb54a3" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(col_tr)
# + [markdown] _execution_state="idle" _uuid="0020e034ebd95980e44ac2917a833ac717f8c536" _cell_guid="d6bd3d96-95d1-4a9c-9f4b-426f889831a9" id="mr3H8TBOIErd" colab_type="text"
# ### Modelling Process
# + _execution_state="idle" _uuid="5a8bf68d62506115b72d872a1422d15f8a6d347a" _cell_guid="d9119c73-b068-462f-8364-810bc59cb111" id="hpB7fTIUIErd" colab_type="code" colab={}
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    # else:
    #     print('Confusion matrix, without normalization')
    #     print(cm)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()

def show_data(cm, print_res=0):
    tp = cm[1, 1]
    fn = cm[1, 0]
    fp = cm[0, 1]
    tn = cm[0, 0]
    if print_res == 1:
        print('Precision = {:.3f}'.format(tp/(tp+fp)))
        print('Recall (TPR) = {:.3f}'.format(tp/(tp+fn)))
        print('Fallout (FPR) = {:.3e}'.format(fp/(fp+tn)))
    return tp/(tp+fp), tp/(tp+fn), fp/(fp+tn)
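# A worked check of the metric formulas on a hand-made confusion matrix (toy counts; rows = true label, columns = predicted label):

```python
import numpy as np

cm = np.array([[50, 10],
               [5, 35]])
tp, fn, fp, tn = cm[1, 1], cm[1, 0], cm[0, 1], cm[0, 0]

precision = tp / (tp + fp)  # 35 / 45
recall = tp / (tp + fn)     # 35 / 40 = 0.875
```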
# + _execution_state="idle" _uuid="91a1e8640683aecc32170e1f122afd69a4129432" _cell_guid="52a0100b-1b2c-4163-92c6-7fa3c725311b" id="YN-7UtRkIErg" colab_type="code" outputId="d00a10db-f8be-4bcd-f8a5-2ba18e2bf44f" colab={"base_uri": "https://localhost:8080/", "height": 520}
# liblinear is needed for the l1 penalty in current scikit-learn versions
lrn = LogisticRegression(penalty='l1', C=.001, class_weight='balanced', solver='liblinear')
lrn.fit(X_tr, Y_tr)
y_pred = lrn.predict(X_te)
cm = confusion_matrix(Y_te, y_pred)
if lrn.classes_[0] == 1:
    cm = np.array([[cm[1, 1], cm[1, 0]], [cm[0, 1], cm[0, 0]]])
plot_confusion_matrix(cm, ['0', '1'])
pr, tpr, fpr = show_data(cm, print_res=1)
# + _uuid="d85591f6a7e14c6c2f62f748845afeb97b745d20" _cell_guid="5cfe65a5-9126-4e62-aac4-2c7b5aab0ac9" id="Pj_gAW_DIEri" colab_type="code" outputId="8951320b-e935-48ea-9913-92a69aa12c2b" colab={"base_uri": "https://localhost:8080/", "height": 51}
from sklearn.metrics import precision_score, \
recall_score, confusion_matrix, classification_report, \
accuracy_score, f1_score
print('Accuracy:', accuracy_score(Y_te, y_pred))
print('F1 score:', f1_score(Y_te,y_pred))
# + [markdown] _uuid="f35a7224e074fd8459dbf5ce890e94024b316b94" _cell_guid="aac582b7-2f18-4e92-a607-fa2070065a6d" id="q8gru7nBIErm" colab_type="text"
# I achieved an accuracy of 72% with a recall of 74% on the test set, which is quite good.
# + [markdown] _uuid="58a774aa7c8893e44cf5319ed37312e15740b3be" _cell_guid="9ca6d8f2-4813-4961-b599-b4dbc374e61a" id="6XTsuBK5IErn" colab_type="text"
# ### Understanding Important features for High and Low Paying Jobs
# + _uuid="698ed4ff3453a1475178294ddaa21f1676839c32" _cell_guid="daf894f5-2431-4012-8b34-23ace27ea476" id="Nwa16DpQIEro" colab_type="code" outputId="a965dba6-2597-4d89-cdbd-704c0270a09b" colab={"base_uri": "https://localhost:8080/", "height": 527}
# understanding the coefficients
coff = pd.DataFrame(lrn.coef_).T
col = pd.DataFrame(col_tr).T
print(coff)
print(col)
# + [markdown] _uuid="757a669b46de847b45001a907703d68a233f8ec8" _cell_guid="316cb1bd-e0c1-42b5-a923-fa883648b333" id="RD3xCbrMIEru" colab_type="text"
# As you can see above, positive coefficients correspond to high pay and negative coefficients to low pay. A high number of years of education and a high number of work hours per week are important for getting a high salary, and vice versa.
# + _uuid="3e52ce41f0326666a096a7e66216070c11dfc497" _cell_guid="0f2c6bbb-1e3c-416d-95b9-2e2f82748ea7" id="MB0RSScpIErv" colab_type="code" outputId="af17a14c-ac3a-4ef1-e461-f1cdeac8076f" colab={"base_uri": "https://localhost:8080/", "height": 581}
from sklearn.feature_selection import RFE, f_regression
# recursively eliminate features until only the 10 best remain
rfe = RFE(lrn, n_features_to_select=10, verbose = 3)
rfe.fit(X_tr,Y_tr)
list(zip(map(lambda x: round(x, 4), rfe.ranking_), col_tr))
| Data Science Lit/US_Adult_Income_(Logistic_Regression).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # New Blog
# > Trying out fastai's fastpages blogging platform
#
# - toc: false
# - badges: false
# - comments: false
# - categories: [fastpages]
# - image: images/new_blog.jpg
# ## Obviously, this is a test
#
# I'm going to give [fastpages](https://github.com/fastai/fastpages) as a blogging platform a try. It sounds attractive: I can just save my jupyter notebooks in a folder, and fastpages takes care of the rest. Let's see how it works out.
| _notebooks/2021-04-20-New Blog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.12 64-bit (''speechbrain_ENV'': conda)'
# language: python
# name: python3
# ---
# # Test hugging face pretrained language id model
import torchaudio
from speechbrain.pretrained import EncoderClassifier
import random
import os
from tqdm import tqdm
classifier = EncoderClassifier.from_hparams(source="speechbrain/lang-id-commonlanguage_ecapa", savedir="/home/ruibin/speechbrain/data/pretrained_models/lang-id-commonlanguage_ecapa")
# +
# Italian Example
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-it.wav')
print(text_lab)
# French Example
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-fr.wav')
print(text_lab)
# -
db_path = "/home/ruibin/speechbrain/data/keda/wav"
# search for spkr in db_path
spkr_list = [os.path.join(db_path, spkr) for spkr in os.listdir(db_path)]
# choose a random wav from a random spkr
spkr_path = random.choice(spkr_list)
wav_path = os.path.join(spkr_path, random.choice(os.listdir(spkr_path)))
out_prob, score, index, text_lab = classifier.classify_file(wav_path)
print(os.path.split(wav_path)[-1], text_lab)
# # Test my finetune lid model
# +
import torchaudio
from speechbrain.pretrained import EncoderClassifier
import os
save_path = "/home/ruibin/speechbrain/data/keda_lid_results/ECAPA-TDNN/exp3/save/CKPT+2022-01-07+12-25-09+00"
hparams_path = "/home/ruibin/speechbrain/tools/lang_reco/lid_infer_hparams.yaml"
classifier = EncoderClassifier.from_hparams(source=save_path, hparams_file=hparams_path, savedir=save_path)
# +
from speechbrain.utils.data_utils import get_all_files
db_path = "/home/ruibin/speechbrain/data/keda/lid_wav"
# search for all wav in db_path that have 'test' in its path
wav_list = get_all_files(db_path, match_and=[".wav", "test"])
# +
wav_path = random.choice(wav_list)
out_prob, score, index, text_lab = classifier.classify_file(wav_path)
print(wav_path, text_lab)
# remove all wav link in /home/ruibin/speechbrain/tools/lang_reco
os.system("rm /home/ruibin/speechbrain/tools/lang_reco/*.wav")
# -
# # Test server
import os, requests, json
lid_server = "http://localhost:12346/lid"
# url = "/home/ruibin/speechbrain/data/keda/lid_wav/巴西葡语/test/13589/9b11336e-ec4e-48bf-bace-dff152b03678.wav"
url = "/mnt/nas/stardust-data/Clients/中科大/Projects/Temp/2021_10_23/10月26日交付/新数据/Portuguese/Portuguese_KJSM_XC_XLJ_2021022_1"
r = requests.post(lid_server,
data={'url': url})
result = r.json()
| tools/lang_reco/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/warwickdatascience/beginners-python/blob/master/session_eight/session_eight_exercises.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# <center>Spotted a mistake? Report it <a href="https://github.com/warwickdatascience/beginners-python/issues/new">here</a></center>
# # Beginner's Python—Session Eight Homework Exercises
# _As we are moving deeper into the world of Python, the tasks we can complete with our skills can become more and more complicated. For that reason, there are fewer exercises to complete this week but of a slightly harder standard. Don't be put off; these are meant to challenge you._
# ## Reading From a Text File
# Download the text file linked [here](https://gist.githubusercontent.com/kalinchernev/486393efcca01623b18d/raw/daa24c9fea66afb7d68f8d69f0c4b8eeb9406e83/countries), containing a list of countries, and upload this to Colab
# Import the file as a list
# Create an empty dictionary `first_letter_counts` to keep track of how many times a country starts with each letter. Loop through the countries, extracting their first letter with `country_name[0]`, to populate this dictionary
# Read the file `heights.csv` from this session's [resource folder](https://github.com/warwickdatasciencesociety/beginners-python/blob/master/session-eight/resources) into a list of lines. Loop through each line and run `name, height = line.split(', ')` to extract each value. Use these to create a dictionary mapping people to their height (note: you'll have to convert the height to an integer or float first)
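# One possible sketch of this exercise, using an in-memory string in place of the real `heights.csv` (names and values here are made up):

```python
import io

# Stand-in for open('heights.csv'); the real file lives in the session resources
fake_file = io.StringIO("Alice, 165\nBob, 180\n")

heights = {}
for line in fake_file:
    # strip the trailing newline, then split on the comma-space separator
    name, height = line.strip().split(', ')
    heights[name] = int(height)
```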
# ## Reading Text Files Sequentially
# Read the first 100 names from [this](https://www.usna.edu/Users/cs/roche/courses/s15si335/proj1/files.php%3Ff=names.txt&downloadcode=yes) list of names. Append these to a list called `names`
# ## Writing to Text Files
# Take the names list above and loop through it, writing the reverse of each name to a new file `reversed_names.txt`. Remember, you can reverse a string using `my_string[::-1]`
# Append the reverse of your name to this list
# Create a list of square numbers called `squares`, stored as strings. Write this to `squares.txt` without using a loop by using `'\n'.join(squares)`
# ## Raising Errors
# Have a scan through [this list](https://www.tutorialsteacher.com/python/error-types-in-python) of the standard error types in Python. Remember, it is also possible to define your own errors, though this is out of scope for this course
# Create a function called `contains_z()`. This should accept an argument, check that it's a string (else raise a `TypeError`) and then return whether the input contained the letter 'z'
# Create a function `divide(a, b)` which divides two floats `a` by `b`, with `b` defaulting to one. First check that these are indeed both floats, else raise a `TypeError`. Then, check that `b` is non-zero else raise a `ValueError` with message "cannot divide by zero". If both these conditions are met, return `a / b`
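# One possible solution sketch for the `divide` exercise:

```python
def divide(a, b=1.0):
    # b defaults to one, per the exercise statement
    if not isinstance(a, float) or not isinstance(b, float):
        raise TypeError("a and b must both be floats")
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b
```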
# ## Built-in Modules
# Use the `time` function from the `time` module to time how many seconds it takes to run the following function on your computer (hint: before running the function, store the starting time in a variable `start` and compare this to the time at the end)
def long_running_func():
    n = 2
    for __ in range(10 ** 6):
        n = n ** 1000 % (10 * 9)
# Search for the documentation on `random.choice`. Use this to select a random name from the list defined above
# Import `random` from `random` as `rnd`. Use this to create a function `biased_coin_flip(p)` which simulates a coin having probability $0 < p < 1$ of coming up heads. First check that `p` is a float and between zero and one. Then compare `rnd()` to `p`. If `rnd()` is the smaller of the two, return "Heads", else return "Tails"
# Run the above functions a few times to verify that it is biased
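# One possible sketch of the biased coin (seeded here so that the bias is reproducible):

```python
from random import random as rnd, seed

def biased_coin_flip(p):
    if not isinstance(p, float) or not 0 < p < 1:
        raise ValueError("p must be a float strictly between 0 and 1")
    # rnd() is uniform on [0, 1), so it falls below p with probability p
    return "Heads" if rnd() < p else "Tails"

seed(0)  # fix the seed so repeated runs give the same flips
flips = [biased_coin_flip(0.9) for _ in range(1000)]
```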
# Pat yourself on the back. You've made it to the end!
| session-eight/session_eight_exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #Introduction to the Research Environment
#
# The research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.
# ##Code Cells vs. Text Cells
#
# As you can see, each cell can be either code or text. To select between them, choose from the 'Cell Type' dropdown menu on the top left.
# ##Executing a Command
#
# A code cell will be evaluated when you press play, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.
# %matplotlib inline
2 + 2
# Sometimes there is no result to be printed, as is the case with assignment.
X = 2
# Remember that only the result from the last line is printed.
2 + 2
3 + 3
# However, you can print whichever lines you want using the `print` statement.
print(2 + 2)
3 + 3
# ##Knowing When a Cell is Running
#
# While a cell is running, a `[*]` will display on the left. When a cell has yet to be executed, `[ ]` will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook `[5]`. Try on this cell and note it happening.
# Take some time to run something
c = 0
for i in range(100):
    c = c + i
c
# ##Importing Libraries
#
# The vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. I recommend copying this import statement to every new notebook.
#
# Notice that you can rename libraries to whatever you want after importing. The `as` statement allows this. Here we use `np` and `pd` as aliases for `numpy` and `pandas`. This is a very common aliasing and will be found in most code snippets around the web. The point behind this is to allow you to type fewer characters when you are frequently accessing these libraries.
# +
import numpy as np
import pandas as pd
# This is a plotting library for pretty pictures.
import matplotlib.pyplot as plt
# -
# ##Tab Autocomplete
#
# Pressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab very frequently, it will seldom fill in anything you don't want, as if there is ambiguity a list will be shown. This is a great way to see what functions are available in a library.
#
# Try placing your cursor after the `.` and pressing tab.
np.random.
# ##Getting Documentation Help
#
# Placing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.
# +
# np.random.normal?
# -
# ##Sampling
#
# We'll sample some random data using a function from `numpy`.
# Sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution.
X = np.random.normal(0, 1, 100)
# ##Plotting
#
# We can use the plotting library we imported as follows.
plt.plot(X)
# ###Squelching Line Output
#
# You might have noticed the annoying line of the form `[<matplotlib.lines.Line2D at 0x7f72fdbc1710>]` before the plots. This is because the `.plot` function actually produces output. Sometimes we wish not to display output, we can accomplish this with the semi-colon as follows.
plt.plot(X);
# ###Adding Axis Labels
#
# No self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that.
# +
X = np.random.normal(0, 1, 100)
X2 = np.random.normal(0, 1, 100)
plt.plot(X);
plt.plot(X2);
plt.xlabel('Time') # The data we generated is unitless, but don't forget units in general.
plt.ylabel('Returns')
plt.legend(['X', 'X2']);
# -
# ##Generating Statistics
#
# Let's use `numpy` to take some simple statistics.
np.mean(X)
np.std(X)
# ##Getting Real Pricing Data
#
# Randomly sampled data can be great for testing ideas, but let's get some real data. We can use `get_pricing` to do that. You can use the `?` syntax as discussed above to get more information on `get_pricing`'s arguments.
data = get_pricing('MSFT', start_date='2012-1-1', end_date='2015-6-1')
# Our data is now a dataframe. You can see the datetime index and the colums with different pricing data.
data
# This is a pandas dataframe, so we can index in to just get price like this. For more info on pandas, please [click here](http://pandas.pydata.org/pandas-docs/stable/10min.html).
X = data['price']
# Because there is now also date information in our data, we provide two series to `.plot`. `X.index` gives us the datetime index, and `X.values` gives us the pricing values. These are used as the X and Y coordinates to make a graph.
plt.plot(X.index, X.values)
plt.ylabel('Price')
plt.legend(['MSFT']);
# We can get statistics again on real data.
np.mean(X)
np.std(X)
# ## Getting Returns from Prices
#
# We can use the `pct_change` function to get returns. Notice how we drop the first element after doing this, as it will be `NaN` (nothing -> something results in a NaN percent change).
R = X.pct_change()[1:]
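# As a toy illustration of the first-`NaN` behavior (hypothetical prices, not market data):

```python
import pandas as pd

prices = pd.Series([100.0, 105.0, 84.0])
returns = prices.pct_change()
# returns: [NaN, 0.05, -0.2] -- the first change is undefined (nothing -> 100)
returns = returns[1:]
```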
# We can plot the returns distribution as a histogram.
plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['MSFT Returns']);
# Get statistics again.
np.mean(R)
np.std(R)
# Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different.
plt.hist(np.random.normal(np.mean(R), np.std(R), 10000), bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['Normally Distributed Returns']);
# ## Generating a Moving Average
#
# `pandas` has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic.
# Take the average of the last 60 days at each timepoint.
MAVG = X.rolling(window=60).mean()  # pd.rolling_mean was removed in later pandas versions
plt.plot(X.index, X.values)
plt.plot(MAVG.index, MAVG.values)
plt.ylabel('Price')
plt.legend(['MSFT', '60-day MAVG']);
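# The leading-`NaN` window is easy to see on a toy series (this sketch uses the `Series.rolling` API, which replaced the older `pd.rolling_mean`):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
mavg = s.rolling(window=3).mean()
# the first two entries are NaN: fewer than 3 observations are available there
# mavg: [NaN, NaN, 2.0, 3.0, 4.0]
```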
# This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.
| Quantopian Notebooks/Cloned+from+%22Introduction+to+Research%22.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/raahatg21/IMDB-Dataset-Sentiment-Analysis-with-Keras/blob/master/IMDB_RNN_1.ipynb)
# + [markdown] id="6-KqeRU3JR26" colab_type="text"
# # IMDB Dataset: Sentiment Analysis
# + [markdown] id="EWj7CzoqJWyq" colab_type="text"
# **Using LSTM Layers and Word Embeddings. 89.56% Validation Accuracy. 87.9% Testing Accuracy.**
# + id="KTL-dNycJJvr" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# + id="JmeaLOY4Jfr0" colab_type="code" colab={}
from keras.datasets import imdb
from keras import models
from keras import layers
from keras.preprocessing.sequence import pad_sequences
# + id="Jf5XtyZyKM1K" colab_type="code" colab={}
max_features = 10000 # Only include top 10,000 words in the vocabulary
maxlen = 500 # Cut off each review after 500 words
batch_size = 32
# + id="Trigxal7KiJM" colab_type="code" colab={}
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words = max_features)
# + id="7DfJswz2Krhz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bc30ee72-dfe4-4663-9029-68d9729eed2b"
X_train.shape, X_test.shape
# + id="v0D-L9EgKweh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1582433b-0b48-4da8-eab0-3494f349f805"
y_train[1], y_train[2], y_test[4]
# + id="4mhQgzzLK1Wz" colab_type="code" colab={}
# Padding the data so that each sequence is of exactly 500 words
X_train = pad_sequences(X_train, maxlen = maxlen)
X_test = pad_sequences(X_test, maxlen = maxlen)
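# Conceptually, `pad_sequences` truncates long reviews and left-pads short ones. A plain-Python sketch (assuming the Keras defaults `padding='pre'` and `truncating='pre'`):

```python
def pad_left(seq, maxlen, value=0):
    """Toy equivalent of keras pad_sequences for a single sequence:
    keep the last maxlen items, then pad on the left with `value`."""
    seq = list(seq)[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

short = pad_left([5, 6, 7], 5)        # [0, 0, 5, 6, 7]
long_ = pad_left(list(range(10)), 5)  # [5, 6, 7, 8, 9]
```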
# + id="4l8fxw7wLFQa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d4090482-c6ba-4085-9332-e7974ccf06e3"
X_train.shape, X_test.shape
# + id="CXjY2rLDLKfY" colab_type="code" colab={}
# Building the Model
model = models.Sequential()
model.add(layers.Embedding(max_features, 32, input_length = maxlen))
model.add(layers.Dropout(0.2))
model.add(layers.CuDNNLSTM(32)) # return_sequences = False
model.add(layers.Dense(1, activation = 'sigmoid'))
# + [markdown] id="k5JX8a7rLtI5" colab_type="text"
# Since `CuDNNLSTM` doesn't support the `dropout` and `recurrent_dropout` arguments, we couldn't write the following:
#
# ```
# model.add(layers.CuDNNLSTM(32, dropout = 0.1, recurrent_dropout = 0.5))
# ```
# Hence, our model may be more prone to overfitting on small datasets:
# https://github.com/keras-team/keras/issues/8935
#
# + id="OWbAd9LKLmsj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 278} outputId="1767397d-ba2b-4f67-f41d-444d3e22c1c3"
model.summary()
# + id="TX5WXkfQMQJ9" colab_type="code" colab={}
# Compiling the Model
model.compile(loss = 'binary_crossentropy', optimizer = 'rmsprop', metrics = ['acc'])
# + id="LxUCCNLpMYV1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 555} outputId="e4e61715-d9a8-4ace-9656-3b684f556a3c"
# Training
history = model.fit(X_train, y_train, batch_size = batch_size, epochs = 15, validation_split = 0.2)
# + id="KRnbubPqMkFA" colab_type="code" colab={}
loss = history.history['loss']
val_loss = history.history['val_loss']
acc = history.history['acc']
val_acc = history.history['val_acc']
# + id="gLyrVaZHMyUL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="1b595036-5238-4954-cf01-201de260b680"
# Plotting Training and Validation Loss
epochs = range(1, 16)
plt.plot(epochs, loss, 'yo', label = 'Training Loss')
plt.plot(epochs, val_loss, 'y', label = 'Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + id="HiZwatx-M231" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="21a4a807-5573-4e36-f42a-f9852505ddad"
# Plotting Training and Validation Accuracy
plt.plot(epochs, acc, 'co', label = 'Training Accuracy')
plt.plot(epochs, val_acc, 'c', label = 'Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# + id="ovnIT6M9M5B2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="89248704-fb92-4c47-82cf-53a96fd2b6f6"
# Testing
model.evaluate(X_test, y_test)
| IMDB_RNN_8790.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Topics Modeling using Mallet (through gensim wrapper)
# ## Initialization
# ### Preliminaries & Configurations
import os
import sys
import string
import numpy as np
import datetime
import pandas as pd
import json
import re
import nltk
import gensim
import seaborn as sns
import matplotlib.pyplot as plt
# #!pip install pyLDAvis
# #!pip install panel
# #!pip install pycld2
import pyLDAvis
import pyLDAvis.gensim
import panel as pn
import pycld2 as cld2
# # !pip install openpyxl
pn.extension() # This can cause Save to error "Requested Entity too large"; clear this cell's output after running
None
MALLET_ROOT = '/home/jovyan'
mallet_home = os.path.join(MALLET_ROOT, 'mark/Systems/mallet-2.0.8')
mallet_path = os.path.join(mallet_home, 'bin', 'mallet')
mallet_stoplist_path = os.path.join(mallet_home, 'stoplists', 'en.txt')
ROOT = '..'
# Configurations
datafile_date = '2020-04-10-v7'
basedir = ROOT + f'/data/interim/{datafile_date}/'
# parser = 'moana'
parser = 'scispacy'
parser_model = 'spacy-en_core_sci_lg'
# Inputs
datafile = f'{basedir}{datafile_date}-covid19-combined-abstracts-tokens-{parser_model}.jsonl'
text_column_name = 'abstract_clean'
tokens_column_name = f'abstract_tokens_{parser}'
ent_column_name = f'abstract_ent_{parser}'
json_args = {'orient': 'records', 'lines': True}
# Other configurations
MODIFIED_LDAVIS_URL = 'https://cdn.jsdelivr.net/gh/roamanalytics/roamresearch@master/BlogPosts/CORD19_topics/ldavis.v1.0.0-roam.js'
random_seed = 42
model_build_workers = 4
# Outputs
outdir = ROOT + f'/results/{datafile_date}/'
model_out_dir = ROOT + f'/models/topics-abstracts-{datafile_date}-{parser}/'
model_path = model_out_dir + 'mallet_models/'
gs_model_path = model_path + 'gs_models/'
gs_model_path_prefix = gs_model_path + f'{datafile_date}-covid19-combined-abstracts-'
out_json_args = {'date_format': 'iso', **json_args}
web_out_dir = outdir + f'topics-abstracts-{datafile_date}-{parser}-html/'
if not os.path.exists(datafile):
print(datafile + ' does not exist')
sys.exit()
out_path_mode = 0o777
os.makedirs(model_out_dir, mode = out_path_mode, exist_ok = True)
os.makedirs(model_path, mode = out_path_mode, exist_ok = True)
os.makedirs(gs_model_path, mode = out_path_mode, exist_ok = True)
os.makedirs(outdir, mode = out_path_mode, exist_ok = True)
os.makedirs(web_out_dir, mode = out_path_mode, exist_ok = True)
# +
import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.WARNING)
logging.getLogger("gensim").setLevel(logging.WARNING)
# -
with open(mallet_stoplist_path, 'r') as fp:
stopwords = set(fp.read().split())
len(stopwords)
stopwords.update([
'doi', 'preprint', 'copyright', 'peer', 'reviewed', 'org', 'https', 'et', 'al', 'author', 'figure',
'rights', 'reserved', 'permission', 'used', 'using', 'biorxiv', 'fig', 'fig.', 'al.',
'di', 'la', 'il', 'del', 'le', 'della', 'dei', 'delle', 'una', 'da', 'dell', 'non', 'si'
]) # from https://www.kaggle.com/danielwolffram/topic-modeling-finding-related-articles
len(stopwords)
# ### Read in text and create corpus
original_df = pd.read_json(datafile, **json_args)
documents = original_df[text_column_name]
orig_tokens = original_df[tokens_column_name]
if 'keyterms' in original_df.columns:
# keyterms = original_df['keyterms'].apply(lambda x: [k.lower() for k in x])
keyterms = original_df['keyterms'].apply(lambda lst: ['_'.join(k.lower().split()) for k in lst])
else:
keyterms = None
if ent_column_name in original_df.columns:
ents = original_df[ent_column_name].apply(lambda lst: ['_'.join(k.lower().split()) for k in lst if len(k.split()) > 1])
else:
ents = None
len(documents)
# +
punctuation = string.punctuation + "”“–" # remove both slanted double-quotes
# leave '#$%*+-/<=>'
nonnumeric_punctuation = r'!"&()\,.:;?@[]^_`{|}~' + "'" + "'""”“–’" + ' '
def normalize_token(token):
if token in nonnumeric_punctuation:
return None
if token in stopwords:
return None
if token == token.upper():
return token
return token.lower()
def normalize_token_list(tokens):
result = []
for tok in tokens:
ntok = normalize_token(tok)
if ntok:
result.append(ntok)
return result
# -
nonnumeric_punctuation
texts = orig_tokens.apply(normalize_token_list)
dictionary = gensim.corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
sorted(dictionary.values())[:5]
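# `doc2bow` maps a token list to sparse `(token_id, count)` pairs. A toy equivalent without gensim (hypothetical three-word vocabulary):

```python
from collections import Counter

vocab = {'covid': 0, 'virus': 1, 'cell': 2}   # made-up id mapping
doc = ['virus', 'covid', 'virus']
bow = sorted((vocab[t], c) for t, c in Counter(doc).items() if t in vocab)
# bow == [(0, 1), (1, 2)]
```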
# ## Topic model collections -- vary corpus and n
# ### Prepare corpus collections (various options)
corpora = {}
# corpora['text'] = corpus
# ##### Filter by language
def predict_language(text):
try:
isReliable, _, details = cld2.detect(text, isPlainText=True)
except cld2.error:
return ('ERROR', 0, '')
if isReliable:
lang_prob = details[0][2]
if lang_prob > 70:
return (details[0][1], lang_prob, text)
elif lang_prob == 0:
return ('', 0, '')
# abstract likely in two languages
_, _, details, vectors = cld2.detect(text, isPlainText=True,
returnVectors=True, hintLanguage='en')
en_text = ''
for vec in vectors:
if vec[3] == 'en':
en_text += text[vec[0] : vec[0]+vec[1]]
return ('en-extract', lang_prob, en_text)
else:
return ('', 0, '')
predicted_lang = pd.DataFrame.from_records(documents.apply(predict_language), columns=('lang', 'lang_prob', 'text'), index=documents.index)
predicted_lang_en_mask = predicted_lang['lang'].isin(['en', 'en-extract'])
(~ predicted_lang_en_mask).sum()
texts_en = texts.where(predicted_lang_en_mask, None)
texts_en = texts_en.apply(lambda x: x if x is not None else [])
# ##### Filter scispacy ents
from collections import Counter
if ents is not None:
ents_counter = Counter()
for x in ents.items():
for w in x[1]:
ents_counter[w] += 1
ents_common = [k for k, c in ents_counter.items() if c >= 5]
len(ents_common)
# ##### Extended token sets
dictionary = gensim.corpora.Dictionary(texts)
if ents is not None:
dictionary.add_documents([ents_common])
# Several combinations attempted, but 'text-ents' was most useful
if ents is not None:
# corpora['text-ents'] = (texts + ents).apply(dictionary.doc2bow)
corpora['text-ents-en'] = (texts_en + ents).apply(dictionary.doc2bow)
corpora.keys()
# ### HTML Templates
# +
html_template = '''
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>{0}</title>
{1}
</head>
<body>
<h2>{0}</h2>
{2}
</body>
</html>
'''
html_style = '''
<style>
table {
font-family: "Trebuchet MS", Arial, Helvetica, sans-serif;
border-collapse: collapse;
width: 100%;
}
td, th {
border: 1px solid #ddd;
padding: 8px;
}
tr:nth-child(even){background-color: #f2f2f2;}
tr:hover {background-color: #ddd;}
th {
padding-top: 12px;
padding-bottom: 12px;
text-align: left;
background-color: #0099FF;
color: white;
}
</style>
'''
html_style_cols = '''
<style>
table {
font-family: "Trebuchet MS", Arial, Helvetica, sans-serif;
border-collapse: collapse;
width: 100%;
}
td, th {
border: 1px solid #ddd;
padding: 8px;
}
td:nth-child(even){background-color: #f2f2f2;}
td:hover {background-color: #ddd;}
th {
padding-top: 12px;
padding-bottom: 12px;
text-align: left;
background-color: #0099FF;
color: white;
}
</style>
'''
# -
# ### Build models
num_topics = [80] # number of topics
cmallet = {}
for c in corpora.keys():
cmallet[c] = {}
for i in num_topics:
print('Building model for %s (%d topics)' % (c, i))
prefix = os.path.join(model_path, c, str(i), '')
os.makedirs(prefix, mode = out_path_mode, exist_ok = True)
cmallet[c][i] = gensim.models.wrappers.ldamallet.LdaMallet(mallet_path, corpora[c], id2word=dictionary, optimize_interval=10,
prefix=prefix, workers=model_build_workers,
num_topics=i, iterations=2500, random_seed=random_seed)
# #### Save cmallet
for c in cmallet.keys():
for i in cmallet[c].keys():
cmallet[c][i].save(f'{gs_model_path_prefix}gensim-mallet-model_{c}_{i}.pkl4',
separately=[], sep_limit=134217728, pickle_protocol=4)
print(f'{gs_model_path_prefix}gensim-mallet-model_{c}_{i}.pkl4')
# ### Plot
vis_data = {}
gensim_lda_model = {}
for c in cmallet.keys():
vis_data[c] = {}
gensim_lda_model[c] = {}
for i in cmallet[c].keys():
gensim_lda_model[c][i] = gensim.models.wrappers.ldamallet.malletmodel2ldamodel(cmallet[c][i])
vis_data[c][i] = pyLDAvis.gensim.prepare(gensim_lda_model[c][i], corpora[c],
dictionary=cmallet[c][i].id2word, mds='tsne')
pyLDAvis.save_json(vis_data[c][i], outdir + f'pyldavis_{c}_{i}.json')
print(outdir + f'pyldavis_{c}_{i}.json')
ofdir = web_out_dir + f'{c}-{i}/'
os.makedirs(ofdir, mode = out_path_mode, exist_ok = True)
pyLDAvis.save_html(vis_data[c][i], ofdir + f'pyldavis_{c}_{i}.html',
ldavis_url=MODIFIED_LDAVIS_URL)
print(web_out_dir + f'{c}-{i}/pyldavis_{c}_{i}.html')
# #### Save Gensim Mallet Models
for c in gensim_lda_model.keys():
for i in gensim_lda_model[c].keys():
gensim_lda_model[c][i].save(f'{gs_model_path_prefix}gensim-lda-model_{c}_{i}.pkl4',
separately=[], sep_limit=134217728, pickle_protocol=4)
print(f'{gs_model_path_prefix}gensim-lda-model_{c}_{i}.pkl4')
# #### Save _Relevant_ terms for topics (from pyLDAviz)
num_terms = 50
def sorted_terms(data, topic=1, rlambda=1, num_terms=30):
"""Returns a dataframe using lambda to calculate term relevance of a given topic."""
tdf = pd.DataFrame(data.topic_info[data.topic_info.Category == 'Topic' + str(topic)])
if rlambda < 0 or rlambda > 1:
rlambda = 1
stdf = tdf.assign(relevance=rlambda * tdf['logprob'] + (1 - rlambda) * tdf['loglift'])
rdf = stdf[['Term', 'relevance']]
if num_terms:
return rdf.sort_values('relevance', ascending=False).head(num_terms).set_index(['Term'])
else:
return rdf.sort_values('relevance', ascending=False).set_index(['Term'])
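# The lambda trade-off can be checked on a toy `topic_info`-like frame (made-up numbers): lambda = 1 ranks purely by within-topic log-probability, lambda = 0 purely by log-lift.

```python
import pandas as pd

toy = pd.DataFrame({
    'Term': ['virus', 'cell', 'the'],
    'logprob': [-2.0, -3.0, -1.0],
    'loglift': [1.5, 2.0, -0.5],
})
rlambda = 0.5
toy['relevance'] = rlambda * toy['logprob'] + (1 - rlambda) * toy['loglift']
ranked = toy.sort_values('relevance', ascending=False)['Term'].tolist()
# relevance: virus -0.25, cell -0.5, the -0.75 -> ranked ['virus', 'cell', 'the']
```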
topic_lists = {}
for corp, cdict in vis_data.items():
for numtops in cdict.keys():
model_topic_lists_dict = {}
for topnum in range(numtops):
s = sorted_terms(vis_data[corp][numtops], topnum + 1, rlambda=.5, num_terms=num_terms)
terms = s.index
model_topic_lists_dict['Topic ' + str(topnum + 1)] = np.pad(terms, (0, num_terms - len(terms)),
'constant', constant_values='')
topic_lists[corp + '-' + str(numtops)] = pd.DataFrame(model_topic_lists_dict)
topic_lists.keys()
# +
# # !pip install openpyxl
# -
# Save relevant topics - write to xlsx (one corp-numtopics per sheet)
with pd.ExcelWriter(outdir + f'topics-relevant-words-abstracts-{datafile_date}-{num_terms}terms.xlsx') as writer:
for sheetname, dataframe in topic_lists.items():
dataframe.to_excel(writer, sheet_name=sheetname)
print(outdir + f'topics-relevant-words-abstracts-{datafile_date}-{num_terms}terms.xlsx')
# #### Save Relevant Topics as html
# Save relevant topics - write to html
out_topics_html_dir = web_out_dir
for corp_numtopics, dataframe in topic_lists.items():
os.makedirs(out_topics_html_dir + corp_numtopics, mode = out_path_mode, exist_ok = True)
ofname = out_topics_html_dir + corp_numtopics + '/' + 'relevant_terms.html'
with open(ofname, 'w') as ofp:
column_tags = [f'<a href="Topic_{i+1:02d}.html" target="_blank">{name}</a>'
for i, name in enumerate(dataframe.columns)]
temp_df = dataframe.copy()
temp_df.columns = column_tags
temp_df = temp_df.applymap(lambda x: ' '.join(x.split('_')))
temp_df = temp_df.set_index(np.arange(1, len(temp_df) + 1))
html_table = temp_df.to_html(escape=False)
html_str = html_template.format('Most Relevant Terms per Topic', html_style_cols, html_table)
ofp.write(html_str)
print(ofname)
# +
# topic_lists['text-ents-80']
# -
# ### Create dataframes of topic model collections
ctopicwords_df = {}
for c in cmallet.keys():
ctopicwords_df[c] = {}
for i in cmallet[c].keys():
ctopicwords_df[c][i] = pd.read_table(cmallet[c][i].ftopickeys(), header=None, names=['id', 'weight', 'wordlist'])
REMOVED = []
def normalize_topic_words(words):
results = []
for w in words:
if w in nonnumeric_punctuation:
pass
elif w[-1] == 's' and w[:-1] in words:
# remove plural
REMOVED.append(w)
elif w != w.lower() and w.lower() in words:
# remove capitalized
REMOVED.append(w)
else:
results.append(w)
return results
# Clean words
for c in ctopicwords_df.keys():
for i in ctopicwords_df[c].keys():
ctopicwords_df[c][i]['wordlist'] = ctopicwords_df[c][i]['wordlist'].apply(lambda x: ' '.join(normalize_topic_words(x.split())))
# +
# set(REMOVED)
# -
for c in ctopicwords_df.keys():
for i in ctopicwords_df[c].keys():
ctopicwords_df[c][i].drop(['id'], axis=1, inplace=True)
ctopicwords_df[c][i]['topwords'] = ctopicwords_df[c][i].wordlist.apply(lambda x: ' '.join(x.split()[:3]))
ctopicwords_df[c][i]['topten'] = ctopicwords_df[c][i].wordlist.apply(lambda x: ' '.join(x.split()[:10]))
if True: # use pyLDAvis order
rank_order_new_old = vis_data[c][i].to_dict()['topic.order']
rank_order_old_new = [None] * len(rank_order_new_old)
for new, old in enumerate(rank_order_new_old):
rank_order_old_new[old - 1] = new
ctopicwords_df[c][i]['rank'] = np.array(rank_order_old_new) + 1
else:
ctopicwords_df[c][i]['rank'] = ctopicwords_df[c][i].weight.rank(ascending=False)
ctopicwords_df[c][i]['topicnum'] = ctopicwords_df[c][i].apply(lambda row: ('t%02d' % row['rank']), axis=1)
ctopicwords_df[c][i]['label'] = ctopicwords_df[c][i].apply(lambda row: row['topicnum'] + ' ' + row['topwords'], axis=1)
# doctopics
cdoctopics_df = {}
for c in cmallet.keys():
cdoctopics_df[c] = {}
for n in cmallet[c].keys():
cdoctopics_df[c][n] = pd.read_table(cmallet[c][n].fdoctopics(), header=None, names=['id']+[i for i in range(n)])
cdoctopics_df[c][n].drop(['id'], axis=1, inplace=True)
cdoctopics_df[c][n].head()
# Reorder topics
for c in cdoctopics_df.keys():
for n in cdoctopics_df[c].keys():
# (include top 3 topics in name) cdoctopics_df[c][n] = cdoctopics_df[c][n].T.join(ctopicwords_df[c][n][['rank', 'label']]).set_index('label').sort_values('rank').drop(['rank'], axis=1).T
cdoctopics_df[c][n] = cdoctopics_df[c][n].T.join(ctopicwords_df[c][n][['rank', 'topicnum']]).set_index('topicnum').sort_values('rank').drop(['rank'], axis=1).T
cdoctopics_df[c][n].T.index.rename('topic', inplace=True)
# cdoctopics_df[c][n].head()
# ### Save documents
# Save topicwords
for c in ctopicwords_df.keys():
for i in ctopicwords_df[c].keys():
ctopicwords_df[c][i].sort_values('rank').to_csv(outdir + 'topickeys_sorted_%s_%d.txt' % (c, i), index_label='original_order')
print(outdir + 'topickeys_sorted_%s_%d.txt' % (c, i))
# ctopicwords_df[c][i].sort_values('rank').to_excel('out/topickeys_sorted_%s_%d.xlsx' % (c, i), index_label='original_order')
# Save doctopics
for c in cdoctopics_df.keys():
for n in cdoctopics_df[c].keys():
cdoctopics_df[c][n].to_csv(outdir + 'doctopic_%s_%d.csv' % (c, n), index_label='original_order')
print(outdir + 'doctopic_%s_%d.csv' % (c, n))
sims_names = ['scispacy', 'specter']
sims_columns = [f'sims_{x}_cord_uid' for x in sims_names]
assert all(x in original_df.columns for x in sims_columns)
assert 'cord_uid' in original_df.columns
def helper_get_sims_html_ids(sim_uids, cord_uid_topic_num, cord_uid_cite_ad):
result = []
for uid in sim_uids:
topic_num = cord_uid_topic_num.get(uid)
cite_ad = cord_uid_cite_ad.get(uid)
if cite_ad and topic_num:
result.append(f'<a href="Topic_{topic_num}.html#{uid}">{cite_ad}</a>')
return ', '.join(result)
original_df['abstract_mentions_covid'].sum()
# Prepare to save docs by topics
predominant_doc_dfd = {}
predominant_doc_df = original_df[['cite_ad', 'title', 'authors', 'publish_year', 'publish_time',
'dataset', 'abstract_mentions_covid',
'pmcid', 'pubmed_id', 'doi', 'cord_uid', 'sha', 'abstract_clean']
+ sims_columns
].copy()
sims_mapping_cord_uid_sd = {}
predominant_doc_df['publish_time'] = predominant_doc_df['publish_time'].dt.strftime('%Y-%m-%d')
for c in cdoctopics_df.keys():
predominant_doc_dfd[c] = {}
sims_mapping_cord_uid_sd[c] = {}
for n in cdoctopics_df[c].keys():
predominant_doc_dfd[c][n] = {}
sims_mapping_cord_uid_sd[c][n] = {}
predominant_doc_df['predominant_topic'] = cdoctopics_df[c][n].idxmax(axis=1)
predominant_doc_df['predominant_topic_num'] = predominant_doc_df['predominant_topic'].str.split().apply(lambda x: x[0][1:])
predominant_doc_df['major_topics'] = cdoctopics_df[c][n].apply(lambda r: {f't{i + 1:02d}': val for i, val in enumerate(r) if val >= 0.3}, axis=1)
for sim_col in sims_columns:
sims_mapping_cord_uid_sd[c][n][sim_col] = {}
sims_mapping_cord_uid_sd[c][n][sim_col]['topic_num'] = predominant_doc_df[['cord_uid', 'predominant_topic_num']].set_index('cord_uid')['predominant_topic_num']
sims_mapping_cord_uid_sd[c][n][sim_col]['cite_ad'] = predominant_doc_df[['cord_uid', 'cite_ad']].set_index('cord_uid')['cite_ad']
for i, topic_name in enumerate(cdoctopics_df[c][n].columns):
temp_df = predominant_doc_df[(predominant_doc_df['major_topics'].apply(lambda x: topic_name in x))].copy()
temp_df['topic_weight'] = temp_df.major_topics.apply(lambda x: x.get(topic_name))
temp_df = temp_df.sort_values(['topic_weight'], axis=0, ascending=False)
predominant_doc_dfd[c][n][i] = temp_df
# Save docs by topics - write to json and tsv
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
outfile_central_docs_base = outdir + f'topics-central-docs-abstracts-{datafile_date}-{c}-{n}'
temp_dfs = []
for i, dataframe in predominant_doc_dfd[c][n].items():
temp_df = dataframe[['title', 'authors', 'publish_year', 'publish_time', 'cord_uid', 'dataset', 'sha', 'abstract_clean']].reset_index()
temp_df['Topic'] = i + 1
temp_dfs.append(temp_df)
result_df = pd.concat(temp_dfs)
print(outfile_central_docs_base + '.{jsonl, txt}')
result_df.to_json(outfile_central_docs_base + '.jsonl', **out_json_args)
result_df.to_csv(outfile_central_docs_base + '.txt', sep='\t')
# Save docs by topics - write to excel
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
print(outdir + f'topics-central-docs-abstracts-{datafile_date}-{c}-{n}.xlsx')
with pd.ExcelWriter(outdir + f'topics-central-docs-abstracts-{datafile_date}-{c}-{n}.xlsx') as writer:
for i in predominant_doc_dfd[c][n].keys():
sheetname = f'Topic {i+1}'
predominant_doc_dfd[c][n][i].drop(columns=['abstract_clean', 'cite_ad', 'major_topics',
'predominant_topic', 'predominant_topic_num']
).to_excel(writer, sheet_name=sheetname)
# prep similarity columns for html
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
for sim_name, sims_col in zip(sims_names, sims_columns):
cord_uid_topic_num = sims_mapping_cord_uid_sd[c][n][sim_col]['topic_num'].to_dict()
cord_uid_cite_ad = sims_mapping_cord_uid_sd[c][n][sim_col]['cite_ad'].to_dict()
for i in predominant_doc_dfd[c][n].keys():
predominant_doc_dfd[c][n][i][f'Similarity {sim_name}'] = (predominant_doc_dfd[c][n][i][sims_col]
.apply(lambda x: helper_get_sims_html_ids(x, cord_uid_topic_num, cord_uid_cite_ad)))
# Modify dataframe for html
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
for i in predominant_doc_dfd[c][n].keys():
predominant_doc_dfd[c][n][i]['pmcid'] = predominant_doc_dfd[c][n][i]['pmcid'].apply(lambda xid: f'<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/{xid}" target="_blank">{xid}</a>' if not pd.isnull(xid) else '')
predominant_doc_dfd[c][n][i]['pubmed_id'] = predominant_doc_dfd[c][n][i]['pubmed_id'].apply(lambda xid: f'<a href="https://www.ncbi.nlm.nih.gov/pubmed/{xid}" target="_blank">{xid}</a>' if not pd.isnull(xid) else '')
predominant_doc_dfd[c][n][i]['doi'] = predominant_doc_dfd[c][n][i]['doi'].apply(lambda xid: f'<a href="https://doi.org/{xid}" target="_blank">{xid}</a>' if not pd.isnull(xid) else '')
predominant_doc_dfd[c][n][i]['abstract_mentions_covid'] = predominant_doc_dfd[c][n][i]['abstract_mentions_covid'].apply(lambda x: 'Y' if x else 'N')
predominant_doc_dfd[c][n][i].columns = [' '.join(c.split('_')) for c in predominant_doc_dfd[c][n][i].columns]
from pandas.io.formats import format as fmt
from pandas.io.formats.html import HTMLFormatter
from typing import Any, Optional
class MyHTMLFormatter(HTMLFormatter):
"Add html id to th for rows"
def __init__(self, html_id_col_name, *args, **kwargs):
super(MyHTMLFormatter, self).__init__(*args, **kwargs)
self.html_id_col_name = html_id_col_name
def write_th(
self, s: Any, header: bool = False, indent: int = 0, tags: Optional[str] = None
) -> None:
if not header and self.html_id_col_name and self.html_id_col_name in self.frame.columns:
try:
key = int(s.strip())
except ValueError:
key = None
if key and key in self.frame.index:
html_id = self.frame.loc[key, self.html_id_col_name]
if html_id:
if tags:
    tags += f' id="{html_id}"'
else:
    tags = f'id="{html_id}"'
super(MyHTMLFormatter, self).write_th(s, header, indent, tags)
# Save doc by topics - write to html
# out_topics_html_dir = outdir + f'topics-central-docs-abstracts-{datafile_date}-html/'
out_topics_html_dir = web_out_dir
os.makedirs(out_topics_html_dir, mode = out_path_mode, exist_ok = True)
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
ofdir = out_topics_html_dir + f'{c}-{n}/'
os.makedirs(ofdir, mode = out_path_mode, exist_ok = True)
print(ofdir)
for i in predominant_doc_dfd[c][n].keys():
ofname = ofdir + f'Topic_{i+1:02d}.html'
with open(ofname, 'w') as ofp:
html_df = (predominant_doc_dfd[c][n][i]
.drop(columns=['sha', 'major topics', 'abstract clean',
'predominant topic', 'predominant topic num']
+ [' '.join(c.split('_')) for c in sims_columns])
.copy()
.set_index(np.arange(1, len(predominant_doc_dfd[c][n][i])+1)))
# html_table = html_df.to_html(escape=False)
df_formatter = fmt.DataFrameFormatter(escape=False, frame=html_df, index=True, bold_rows=True)
html_formatter = MyHTMLFormatter('cord uid', formatter=df_formatter)
# html_formatter = HTMLFormatter(formatter=df_formatter)
html_table = html_formatter.get_result()
html_str = html_template.format(f'Topic {i+1:02d}', html_style, html_table)
ofp.write(html_str)
| BlogPosts/CORD19_topics/cord19-2020-04-10-v7/notebooks/2020-04-10-covid19-topics-gensim-mallet-scispacy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''ssc-paper'': conda)'
# name: python3
# ---
# # Introduction
#
# In this notebook we present a set of basic tests of the null model
# implementations provided by the `pathcensus` package. All the null models
# are also covered by an automated suite of unit tests, but we additionally
# provide the examples below, as the notebook format is arguably much easier
# to follow. We use the `igraph` package to generate graphs.
#
# We defined all models following the formulas and terminology introduced in:
#
# > [1] <NAME>., <NAME>., & <NAME>. (2015).
# > Unbiased sampling of network ensembles.
# > New Journal of Physics, 17(2), 023052. https://doi.org/10.1088/1367-2630/17/2/023052
#
# and:
#
# > [2] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2021). Fast and scalable likelihood maximization for Exponential Random Graph Models with local constraints. Scientific Reports, 11(1), 15227. https://doi.org/10.1038/s41598-021-93830-4
#
# +
import random
import numpy as np
import igraph as ig
from pathcensus.nullmodels import UBCM, UECM
from pathcensus.utils import rowsums, relclose
def add_random_weights(graph):
graph = graph.copy()
graph.es["weight"] = np.random.randint(1, 11, (graph.ecount(),))
return graph
def make_er_graph(n, dbar):
p = dbar / (n-1)
return ig.Graph.Erdos_Renyi(n, p=p, directed=False)
def make_rgg(n, dbar):
radius = np.sqrt(dbar/(np.pi*(n-1)))
return ig.Graph.GRG(n, radius=radius, torus=True)
# Global parameters
# -----------------
N_NODES = 100 # number of nodes in random graphs
KBAR = 10 # expected average degree in random graphs
RTOL = 1e-1 # relative tolerance when comparing simulated and expected values
N_SAMPLES = 1000 # number of samples used for stochastic testing of expectations
# -
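# For reference, `RTOL = 1e-1` means simulated and expected values must agree to within roughly 10%; an `np.allclose`-style check illustrates the tolerance (this is an assumption about how `relclose` compares values, used here only for illustration):

```python
import numpy as np

RTOL = 1e-1
# np.allclose tests |a - b| <= atol + rtol * |b| elementwise
close = np.allclose([10.9], [10.0], rtol=RTOL)         # within 10% of 10.0
not_close = np.allclose([11.5], [10.0], rtol=RTOL)     # 15% off -> not close
```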
# ## Undirected Binary Configuration Model (UBCM)
#
# This is a soft (canonical) configuration model for undirected, unweighted
# networks. It is defined in Sec. 3.1 and Eq. (8) in [1].
#
# For this model we will test whether node degrees are indeed reproduced
# in expectation, which is exactly what the model should do. We will test
# this on two small random graphs with very different structure:
#
# 1. Erdős–Rényi random graph
# 2. Random geometric graph
#
# Both graphs will have $100$ nodes and average degrees approximately equal to $10$.
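# Before running the package tests, the sampling logic can be sketched with plain numpy and hypothetical multipliers $x_i$ (not fitted to any real graph): under the UBCM each edge $(i, j)$ is an independent Bernoulli variable with $p_{ij} = x_i x_j / (1 + x_i x_j)$, so degrees averaged over sampled adjacency matrices should match the analytical expectations.

```python
import numpy as np

rng = np.random.default_rng(42)

x = np.array([0.3, 0.8, 1.2, 0.5])    # hypothetical UBCM multipliers
XX = np.outer(x, x)
P = XX / (1.0 + XX)                   # connection probabilities p_ij
np.fill_diagonal(P, 0.0)

expected_degrees = P.sum(axis=1)

# Monte Carlo estimate: sample undirected adjacency matrices, average degrees
n_samples = 20000
total = np.zeros_like(expected_degrees)
for _ in range(n_samples):
    A = np.triu((rng.random(P.shape) < P).astype(float), 1)
    A = A + A.T                       # symmetrize: undirected, no self-loops
    total += A.sum(axis=1)
simulated = total / n_samples         # close to expected_degrees
```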
# ### ER random graph
# +
random.seed(303)
graph = make_er_graph(N_NODES, KBAR)
degseq = np.array(graph.degree())
ubcm = UBCM(graph)
ubcm.fit()
# -
## TEST ANALYTICAL EXPECTED DEGREES
relclose(ubcm.ED, degseq, rtol=RTOL)
# +
## TEST EXPECTATION THROUGH SAMPLING
expected = np.zeros_like(degseq, dtype=float)
for randomized in ubcm.sample(N_SAMPLES):
# Sample graph realizations are adjacency matrices
expected += rowsums(randomized)
expected = expected / N_SAMPLES
relclose(expected, degseq, rtol=RTOL)
# -
# ### Random geometric graph
# +
random.seed(304)
graph = make_rgg(N_NODES, KBAR)
degseq = np.array(graph.degree())
ubcm = UBCM(graph)
ubcm.fit()
# -
## TEST ANALYTICAL EXPECTED DEGREES
relclose(ubcm.ED, degseq, rtol=RTOL)
# +
## TEST EXPECTATION THROUGH SAMPLING
expected = np.zeros_like(degseq, dtype=float)
for randomized in ubcm.sample(N_SAMPLES):
# Sample graph realizations are adjacency matrices
expected += rowsums(randomized)
expected = expected / N_SAMPLES
relclose(expected, degseq, rtol=RTOL)
# -
# ## Undirected Enhanced Configuration Model
#
# This null model constrains both expected degree sequence and strength
# sequence. We test it again against ER and RGG networks, but this time
# we also add random edge weights between $1$ and $10$.
#
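# To make the two constraints concrete, here is a toy weighted network: the degree
# sequence is the row sums of the binarized adjacency matrix, while the strength
# sequence is the row sums of the weights (the matrix below is arbitrary, for
# illustration only):

```python
import numpy as np

# Toy symmetric weighted adjacency matrix with zero diagonal
W = np.array([
    [0, 3, 0, 1],
    [3, 0, 2, 0],
    [0, 2, 0, 5],
    [1, 0, 5, 0],
])

degrees = (W > 0).sum(axis=1)    # binary constraint
strengths = W.sum(axis=1)        # weighted constraint
print(degrees)    # [2 2 2 2]
print(strengths)  # [4 5 7 6]
```

# The UECM reproduces both sequences in expectation at once.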
# ### ER random graph
# +
random.seed(305)
graph = make_er_graph(N_NODES, KBAR)
graph = add_random_weights(graph)
D = np.array(graph.degree())
S = np.array(graph.strength(weights="weight"))
uecm = UECM(graph)
uecm.fit()
# -
## TEST ANALYTICAL EXPECTED DEGREES
relclose(uecm.ED, D, rtol=RTOL)
# +
## TEST EXPECTATION THROUGH SAMPLING
expected = np.zeros_like(D, dtype=float)
for randomized in uecm.sample(N_SAMPLES):
# Sample realizations are sparse weighted adjacency matrices;
# binarize the weights so that row sums give degrees, not strengths
randomized.data[:] = 1
expected += rowsums(randomized)
expected = expected / N_SAMPLES
relclose(expected, D, rtol=RTOL)
# -
## TEST ANALYTICAL EXPECTED STRENGTHS
relclose(uecm.ES, S, rtol=RTOL)
# +
## TEST EXPECTATION THROUGH SAMPLING
expected = np.zeros_like(D, dtype=float)
for randomized in uecm.sample(N_SAMPLES):
# Sample graph realizations are adjacency matrices
expected += rowsums(randomized)
expected = expected / N_SAMPLES
relclose(expected, S, rtol=RTOL)
# -
# ### Random geometric graph
# +
random.seed(306)
graph = make_rgg(N_NODES, KBAR)
graph = add_random_weights(graph)
D = np.array(graph.degree())
S = np.array(graph.strength(weights="weight"))
uecm = UECM(graph)
uecm.fit()
# -
## TEST ANALYTICAL EXPECTED DEGREES
relclose(uecm.ED, D, rtol=RTOL)
# +
## TEST EXPECTATION THROUGH SAMPLING
expected = np.zeros_like(D, dtype=float)
for randomized in uecm.sample(N_SAMPLES):
# Sample realizations are sparse weighted adjacency matrices;
# binarize the weights so that row sums give degrees, not strengths
randomized.data[:] = 1
expected += rowsums(randomized)
expected = expected / N_SAMPLES
relclose(expected, D, rtol=RTOL)
# -
## TEST ANALYTICAL EXPECTED STRENGTHS
relclose(uecm.ES, S, rtol=RTOL)
# +
## TEST EXPECTATION THROUGH SAMPLING
expected = np.zeros_like(D, dtype=float)
for randomized in uecm.sample(N_SAMPLES):
# Sample graph realizations are adjacency matrices
expected += rowsums(randomized)
expected = expected / N_SAMPLES
relclose(expected, S, rtol=RTOL)
| examples/2-null-model-tests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pytorch WeatherBench
# Based on: https://github.com/pangeo-data/WeatherBench/blob/master/src/train_nn.py
#
# +
import sys
sys.path.append('/'.join(sys.path[0].split('/')[:-1]))
import os
import xarray as xr
import numpy as np
import time
import matplotlib.pyplot as plt
import torch
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
from modules.data import WeatherBenchDataset2d
from modules.utils import load_test_data
from modules.models import CNN2dPeriodic
from modules.test import create_predictions_2D, compute_weighted_rmse
# -
def train_model(model, device, train_generator, epochs, lr, validation_data, patience):
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=lr, eps=1e-7, weight_decay=0, amsgrad=False)
min_val_loss = 1e15
wait = 0
stopped_epoch = 0
stop_training = False
train_losses = []
val_losses = []
for epoch in range(epochs):
time1 = time.time()
val_loss = 0
train_loss = 0
model.train()
for batch_idx, (batch, labels) in enumerate(train_generator):
# Transfer to GPU
batch, labels = batch.to(device), labels.to(device)
batch_size = batch.shape[0]
# Model
output = model(batch)
loss = criterion(output, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss = train_loss + loss.item() * batch_size
train_loss = train_loss / (len(train_generator.dataset))
train_losses.append(train_loss)
model.eval()
with torch.set_grad_enabled(False):
for batch, labels in validation_data:
# Transfer to GPU
batch, labels = batch.to(device), labels.to(device)
batch_size = batch.shape[0]
output = model(batch)
val_loss = val_loss + criterion(output, labels).item() * batch_size
val_loss = val_loss / (len(validation_data.dataset))
val_losses.append(val_loss)
time2 = time.time()
# Print stuff
print('Epoch: {e:3d}/{n_e:3d} - loss: {l:.3f} - val_loss: {v_l:.5f} - time: {t:2f}'
.format(e=epoch+1, n_e=epochs, l=train_loss, v_l=val_loss, t=time2-time1))
if (val_loss - min_val_loss) < 0:
min_val_loss = val_loss
wait = 0
else:
if wait >= patience:
stopped_epoch = epoch + 1
stop_training = True
wait += 1
if stop_training:
print('Epoch {e:3d}: early stopping'.format(e=stopped_epoch))
return train_losses, val_losses
return train_losses, val_losses
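# The wait/patience bookkeeping in `train_model` can be distilled into a standalone
# sketch (an illustration of the same rule, not code from the original script):
# training stops once the validation loss has failed to improve for more than
# `patience` consecutive epochs.

```python
def should_stop(val_losses, patience):
    """Return True once the validation loss has failed to improve
    for more than `patience` consecutive epochs (the same rule as
    the wait/patience counters above)."""
    best = float("inf")
    wait = 0
    for loss in val_losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            if wait >= patience:
                return True
            wait += 1
    return False

print(should_stop([1.0, 0.9, 0.95, 0.96, 0.97, 0.98], patience=3))  # True
print(should_stop([1.0, 0.9, 0.8], patience=3))                     # False
```

# Keeping the best loss and a counter, rather than a sliding window, means a single
# improvement anywhere resets the clock.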
# +
datadir = "../../data/5.625deg/"
lr=1e-4
activation='elu'
dr=0
batch_size=128
patience=3
train_years=('1979', '2015')
valid_years=('2016', '2016')
test_years=('2017', '2018')
gpu=1
iterative=False
vars = ['z', 't']
kernel_size = 5
# -
# ## 3 day prediction
# +
training_weatherbench = np.load('../data/weatherbench_training.npy')
train_loss_w = training_weatherbench[0]
val_loss_w = training_weatherbench[1]
model_save_fn = "../../data/predictions/models/torch_fccnn_3d.h5"
pred_save_fn = "../../data/predictions/torch_fccnn_3d.nc"
lead_time = 72
# Test data
valid = load_test_data(f'{datadir}', lead_time)
# +
# 1. Open dataset and create data generators
z = xr.open_mfdataset(f'{datadir}geopotential_500/*.nc', combine='by_coords')
t = xr.open_mfdataset(f'{datadir}temperature_850/*.nc', combine='by_coords')
ds = xr.merge([z, t], compat='override') # Override level. discarded later anyway.
ds_train = ds.sel(time=slice(*train_years))
ds_valid = ds.sel(time=slice(*valid_years))
ds_test = ds.sel(time=slice(*test_years))
dic = {var: None for var in vars}
dataset_train = WeatherBenchDataset2d(ds_train, dic, lead_time)
dataset_valid = WeatherBenchDataset2d(ds_valid, dic, lead_time, mean=dataset_train.mean, std=dataset_train.std)
dataset_test = WeatherBenchDataset2d(ds_test, dic, lead_time, mean=dataset_train.mean, std=dataset_train.std)
dg_train_torch = DataLoader(dataset_train, batch_size=batch_size, shuffle=True, num_workers=1)
dg_valid_torch = DataLoader(dataset_valid, batch_size=batch_size, shuffle=False, num_workers=1)
dg_test_torch = DataLoader(dataset_test, batch_size=batch_size, shuffle=False, num_workers=1)
# Build model and put on GPU
model_torch = CNN2dPeriodic(in_channels=2, out_channels=2, kernel_size=kernel_size)
device = torch.device("cuda:{}".format(0))
model_torch = model_torch.to(device)
# Train model
train_loss, val_loss = train_model(model_torch, device, dg_train_torch, epochs=100, lr=lr,
validation_data=dg_valid_torch, patience=patience)
print(f'Saving model weights: {model_save_fn}')
torch.save(model_torch.state_dict(), model_save_fn)
# +
f, axs = plt.subplots(1, 2, figsize=(15,5), sharey=True)
axs[0].plot(train_loss, label='Training loss')
axs[0].plot(val_loss, label='Validation loss')
axs[0].set_xlabel('Epochs')
axs[0].set_ylabel('MSE Loss')
axs[0].legend()
axs[0].set_title("Pytorch WeatherBench training and validation losses")
axs[1].plot(train_loss_w, label='Training loss')
axs[1].plot(val_loss_w, label='Validation loss')
axs[1].set_xlabel('Epochs')
axs[1].legend()
axs[1].set_title("Tensorflow WeatherBench training and validation losses")
plt.show()
# +
# Create predictions
pred_torch = create_predictions_2D(model_torch, dg_test_torch, mean=dataset_train.mean, std=dataset_train.std)
# Print score in real units
print(compute_weighted_rmse(pred_torch, valid).load())
| notebooks/pytorch_weatherbench.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %matplotlib inline
from PIL import Image
from pylab import *
from numpy import *
import os, sys
sys.path.append('..')
import imtools
gray()
im = array(Image.open('../images/empire.jpg').convert('L'))
imshow(im)
# -
im2, cdf = imtools.histeq(im)
imshow(Image.fromarray(im2))
# We can see that histogram equalization has enhanced the contrast of the image.
| Chapter01/1.3.4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
import sklearn.metrics as sm
import statsmodels.formula.api as smf
diabetes=pd.read_csv(r"C:\Users\Ehetesham\OneDrive\Desktop\Final Year Project Diabetes\diabetes_prediction.csv")
diabetes.head()
print("dimension of diabetes data: {}".format(diabetes.shape))
diabetes.isnull().any()
print(diabetes.groupby('Outcome').size())
sns.countplot(diabetes['Outcome'],label="Count")
a = diabetes["Pregnancies"]
b = diabetes["Outcome"]
plt.scatter(a,b,color='red')
plt.title('Pregnancies vs Outcome')
plt.xlabel('Pregnancies')
plt.ylabel('Outcome')
plt.show()
a = diabetes["Glucose"]
b = diabetes["Outcome"]
plt.scatter(a,b,color='red')
plt.title('Glucose vs Outcome')
plt.xlabel('Glucose')
plt.ylabel('Outcome')
plt.show()
# # KNN Algorithm
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(diabetes.loc[:, diabetes.columns != 'Outcome'], diabetes['Outcome'], stratify=diabetes['Outcome'], random_state=66)
from sklearn.neighbors import KNeighborsClassifier
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for n_neighbors in neighbors_settings:
# build the model
knn = KNeighborsClassifier(n_neighbors=n_neighbors)
knn.fit(X_train, y_train)
# record training set accuracy
training_accuracy.append(knn.score(X_train, y_train))
# record test set accuracy
test_accuracy.append(knn.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
plt.savefig('knn_compare_model')
knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X_train, y_train)
print('Accuracy of K-NN classifier on training set: {:.2f}'.format(knn.score(X_train, y_train)))
print('Accuracy of K-NN classifier on test set: {:.2f}'.format(knn.score(X_test, y_test)))
# # Logistic Regression
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression().fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg.score(X_test, y_test)))
logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
print("Training set accuracy: {:.3f}".format(logreg001.score(X_train, y_train)))
print("Test set accuracy: {:.3f}".format(logreg001.score(X_test, y_test)))
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print("Training set accuracy: {:.3f}".format(logreg100.score(X_train, y_train)))
print("Test set accuracy: {:.3f}".format(logreg100.score(X_test, y_test)))
diabetes_features = [x for i,x in enumerate(diabetes.columns) if i!=8]
plt.figure(figsize=(8,6))
plt.plot(logreg.coef_.T, 'o', label="C=1")
plt.plot(logreg100.coef_.T, '^', label="C=100")
plt.plot(logreg001.coef_.T, 'v', label="C=0.01")
plt.xticks(range(len(diabetes_features)), diabetes_features, rotation=90)
plt.hlines(0, 0, diabetes.shape[1])
plt.ylim(-5, 5)
plt.xlabel("Feature")
plt.ylabel("Coefficient magnitude")
plt.legend()
plt.savefig('log_coef')
# # Decision Tree
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
print("Feature importances:\n{}".format(tree.feature_importances_))
def plot_feature_importances_diabetes(model):
plt.figure(figsize=(8,6))
n_features = 8
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), diabetes_features)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
plot_feature_importances_diabetes(tree)
plt.savefig('feature_importance')
# # Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(rf.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(rf.score(X_test, y_test)))
rf1 = RandomForestClassifier(max_depth=3, n_estimators=100, random_state=0)
rf1.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(rf1.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(rf1.score(X_test, y_test)))
plot_feature_importances_diabetes(rf)
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
import sklearn.metrics as metrics
fpr, tpr, threshold = metrics.roc_curve(y_test, y_pred)
roc_auc = metrics.auc(fpr, tpr)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# # Gradient Boosting
from sklearn.ensemble import GradientBoostingClassifier
gb = GradientBoostingClassifier(random_state=0)
gb.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(gb.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(gb.score(X_test, y_test)))
gb1 = GradientBoostingClassifier(random_state=0, max_depth=1)
gb1.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(gb1.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(gb1.score(X_test, y_test)))
gb2 = GradientBoostingClassifier(random_state=0, learning_rate=0.01)
gb2.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(gb2.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(gb2.score(X_test, y_test)))
plot_feature_importances_diabetes(gb1)
# # Support Vector Machine
from sklearn.svm import SVC
svc = SVC()
svc.fit(X_train, y_train)
print("Accuracy on training set: {:.2f}".format(svc.score(X_train, y_train)))
print("Accuracy on test set: {:.2f}".format(svc.score(X_test, y_test)))
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # transform only: reuse the scaler fitted on the training set
svc = SVC()
svc.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.2f}".format(svc.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.2f}".format(svc.score(X_test_scaled, y_test)))
svc = SVC(C=1000)
svc.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(
svc.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(svc.score(X_test_scaled, y_test)))
# # Multi Layer Percpetron
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(random_state=42)
mlp.fit(X_train, y_train)
print("Accuracy on training set: {:.2f}".format(mlp.score(X_train, y_train)))
print("Accuracy on test set: {:.2f}".format(mlp.score(X_test, y_test)))
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # transform only: reuse the scaler fitted on the training set
mlp = MLPClassifier(random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(
mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
mlp = MLPClassifier(max_iter=1000, random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(
mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
mlp = MLPClassifier(max_iter=1000, alpha=1, random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(
mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
plt.figure(figsize=(20, 5))
plt.imshow(mlp.coefs_[0], interpolation='none', cmap='viridis')
plt.yticks(range(8), diabetes_features)
plt.xlabel("Columns in weight matrix")
plt.ylabel("Input feature")
plt.colorbar()
| Prediction of Diabetes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SQL Queries with Minio (& PowerBI)
#
# Minio implements the [S3 SELECT API](https://docs.min.io/docs/minio-select-api-quickstart-guide.html). It is not effective for creating joins or other relational database tricks, but it's phenomenal at extracting exactly the data that you need, so that your queries are blazingly fast.
#
#
# For reference on how to use this SQL flavour, look at
#
# [The AWS reference](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-glacier-select-sql-reference-select.html)
#
#
# *Note: Amazon S3 Select does not support whole-object compression for Parquet objects.*
# [Source](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.select_object_content)
#
# **NOTE: The examples here use JSON, but CSV is better suited to large datasets, performing 10x faster in my experiment.**
# ## Connect to storage
# +
import daaas_storage_boto3 as storage
s3 = storage.get_minimal_client()
BUCKET = "shared"
# -
# # Fast SQL Extractions and pandas (can be used with PowerBI)
#
# Minio implements the S3 Select API, which reads a minimal amount of data off of disk. This makes the queries very fast, even on large tables. Also, you can read the data straight out of a file, without creating or managing a complex database.
#
# **PowerBI**: You can use these snippets to load pandas dataframes into PowerBI. Check out [the PowerBI tutorial](https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts). **note:** this only works with pandas, not arrow. So use `storage.pandas_from_json`. **Do not use** `storage.arrow_from_json`.
# ## Query your data with SQL (.csv.gz)
# +
# %%time
r = s3.select_object_content(
Bucket=BUCKET,
Key='/blair-drummond/sql-example/TotalPopulation.csv.gz',
ExpressionType='SQL',
# Note, there's no ';' at the end.
Expression="""
SELECT PopTotal,PopDensity FROM s3object s
WHERE s.Location like '%Canada%'
""",
InputSerialization={
'CSV': {
# Use this if your CSV file has a header. Else set to "NONE".
"FileHeaderInfo": "USE",
'RecordDelimiter': '\n',
'FieldDelimiter': ',',
},
# Remove this if the file isn't compressed.
'CompressionType': 'GZIP',
},
OutputSerialization={'JSON': {}},
#OutputSerialization={'CSV': {'RecordDelimiter': '\n', 'FieldDelimiter': ','}},
)
df = storage.pandas_from_json(r)
#df = storage.pandas_from_csv(r)
df.head()
# -
# ## Query your data with SQL (.parquet)
#
# **NOTE: If you're running this on PowerBI, you'll need either pyarrow or fastparquet installed.**
#
# **Note:** You should not compress your parquet files[^1]!!! They can be larger compressed, and the S3 Select API does not support querying them.
#
# [^1]: Unless you use SNAPPY. But BZIP2 and GZIP are not supported.
# +
# %%time
r = s3.select_object_content(
Bucket=BUCKET,
Key='/blair-drummond/sql-example/TotalPopulation.parquet',
ExpressionType='SQL',
Expression="SELECT * FROM s3object s WHERE s.Location like '%Canada%'",
InputSerialization={
'Parquet': {},
'CompressionType': 'NONE',
},
OutputSerialization={'JSON': {}},
)
df = storage.pandas_from_json(r)
df.head()
# -
# ## Query your data with SQL (.csv)
#
# Note, you'll probably get **significant** storage savings if you compress your csv files. (Read: 10gb -> 500mb, for example). So if it's under your control, it's recommended to use gzip. The S3 Select API that we're using also has some support for BZIP2. *(You can also use SNAPPY on `.parquet` files)*
#
# [S3 Select Compression Support](https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-s3-announces-new-features-for-s3-select/)
# +
# %%time
r = s3.select_object_content(
Bucket=BUCKET,
Key='/blair-drummond/sql-example/TotalPopulation.csv',
ExpressionType='SQL',
# Note, there's no ';' at the end.
Expression="""
SELECT PopTotal,PopDensity FROM s3object s
WHERE s.Location like '%Canada%'
""",
InputSerialization={
'CSV': {
# Use this if your CSV file has a header. Else set to "NONE".
"FileHeaderInfo": "USE",
},
# Remove this if the file isn't compressed.
# 'CompressionType': 'GZIP',
},
# JSON is easier to work with than csv, unless you
# have a massive amount of data.
OutputSerialization={'JSON': {}},
)
df = storage.pandas_from_json(r)
# -
# ## NOTE: Json v.s. CSV
#
# JSON transmits more data than CSV, so **if performance is key, use csv**.
#
# **The disadvantage of CSV, is that the S3 API for CSV doesn't return you column names.**
#
# However, you can run a small JSON query, then manually stitch together the column names.
#
# Compare the times below.
# +
# %%time
r = s3.select_object_content(
Bucket=BUCKET,
Key='/blair-drummond/sql-example/TotalPopulation.csv.gz',
ExpressionType='SQL',
# Note, there's no ';' at the end.
Expression="""
SELECT PopTotal,PopDensity FROM s3object s
""",
InputSerialization={
'CSV': {
# Use this if your CSV file has a header. Else set to "NONE".
"FileHeaderInfo": "USE",
'RecordDelimiter': '\n',
'FieldDelimiter': ',',
},
# Remove this if the file isn't compressed.
'CompressionType': 'GZIP',
},
OutputSerialization={'JSON': {}},
#OutputSerialization={'CSV': {'RecordDelimiter': '\n', 'FieldDelimiter': ','}},
)
df = storage.pandas_from_json(r)
#df = storage.pandas_from_csv(r)
df.head()
# +
# %%time
r = s3.select_object_content(
Bucket=BUCKET,
Key='/blair-drummond/sql-example/TotalPopulation.csv.gz',
ExpressionType='SQL',
# Note, there's no ';' at the end.
Expression="""
SELECT PopTotal,PopDensity FROM s3object s
""",
InputSerialization={
'CSV': {
# Use this if your CSV file has a header. Else set to "NONE".
"FileHeaderInfo": "USE",
'RecordDelimiter': '\n',
'FieldDelimiter': ',',
},
# Remove this if the file isn't compressed.
'CompressionType': 'GZIP',
},
#OutputSerialization={'JSON': {}},
OutputSerialization={'CSV': {'RecordDelimiter': '\n', 'FieldDelimiter': ','}},
)
#df = storage.pandas_from_json(r)
df = storage.pandas_from_csv(r)
df.head()
# -
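# As noted above, the CSV output path drops the column names. One way to recover them,
# sketched here with hypothetical stand-in dataframes (in practice `named` would come
# from a tiny `LIMIT 1` query with JSON output, and `unnamed` from the full query with
# CSV output):

```python
import pandas as pd

# Stand-ins for query results (illustration only):
# `named` preserves column names (JSON output), `unnamed` does not (CSV output)
named = pd.DataFrame({"PopTotal": [1.0], "PopDensity": [2.0]})
unnamed = pd.DataFrame([[10.0, 20.0], [30.0, 40.0]])

unnamed.columns = named.columns   # stitch the names onto the CSV result
print(list(unnamed.columns))      # ['PopTotal', 'PopDensity']
```

# The cheap JSON query pays the column-name cost once, while the bulk of the data
# moves over the faster CSV path.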
# # Beyond Pandas: Arrow v.s. Pandas, CSV v.s. Parquet
#
# *Apache Arrow* is a newer tool designed for efficient data storage and retrieval. It's how Pandas opens `.parquet` files. We're going to do some benchmarking here, and we'll look at an experiment with the following variables.
#
# 1. File Format
# - `.csv`
# - `.csv.gz`
# - `.parquet`
#
# 2. Query type
# - *Row Extraction*
# - *Column Extraction*
#
# 3. Python Tool
# - `pyarrow`
# - `pandas`
#
#
# We're going to test every combination of these, to see how they work with each other.
# +
################################################
### Arrow versus Pandas ###
################################################
import time
def timing(f):
""" Discard the output of the function, but get the time. """
def wrap(*args):
time1 = time.time()
f(*args)
time2 = time.time()
# milliseconds
ms = (time2-time1)*1000.0
return ms
return wrap
funcs = {
'arrow' : timing(storage.arrow_from_json),
'pandas' : timing(storage.pandas_from_json)
}
# +
################################################
### Row versus Column Operations ###
################################################
## Query is adjusted so that roughly
## the same amount of data is scanned.
sql = {
# nrow = 4420, ncol = 2; nrow*ncol = 8840
'column' : """
SELECT PopTotal,PopDensity FROM s3object s
LIMIT 4420
""",
# nrow = 884, ncol = 10; nrow*ncol = 8840
'row' : """
SELECT * FROM s3object s
WHERE s.Location like '%Canada%'
"""
}
# +
################################################
### File Format ###
################################################
## Note that csv.gz is smaller than parquet!
def exp_csv(sql_query):
return s3.select_object_content(
Bucket=BUCKET,
# File size = 21 mb
Key='/blair-drummond/sql-example/TotalPopulation.csv',
ExpressionType='SQL',
Expression=sql_query,
InputSerialization={'CSV': {"FileHeaderInfo": "USE"}},
OutputSerialization={'JSON': {}}
)
def exp_csv_gz(sql_query):
return s3.select_object_content(
Bucket=BUCKET,
# File size = 5.6 mb
Key='/blair-drummond/sql-example/TotalPopulation.csv.gz',
ExpressionType='SQL',
Expression=sql_query,
InputSerialization={
'CSV': {"FileHeaderInfo": "USE"},
'CompressionType': 'GZIP',
},
OutputSerialization={'JSON': {}}
)
def exp_parquet(sql_query):
return s3.select_object_content(
Bucket=BUCKET,
# File size = 6.8 mb
Key='/blair-drummond/sql-example/TotalPopulation.parquet',
ExpressionType='SQL',
Expression=sql_query,
InputSerialization={'Parquet': {}},
OutputSerialization={'JSON': {}}
)
formats = {
'csv' : exp_csv,
'csv.gz' : exp_csv_gz,
'parquet' : exp_parquet
}
# -
# ## Run the experiment!
import pandas as pd
# +
### By Column
col_exp = lambda backend,file: funcs[backend](formats[file](sql['column']))
cols = pd.DataFrame({
'csv' : [ col_exp('pandas', 'csv'), col_exp('arrow', 'csv') ],
'csv.gz' : [ col_exp('pandas', 'csv.gz'), col_exp('arrow', 'csv.gz') ],
'parquet' : [ col_exp('pandas', 'parquet'), col_exp('arrow', 'parquet') ]
}, index=['pandas', 'arrow'])
cols
# +
### By Row
row_exp = lambda backend,file: funcs[backend](formats[file](sql['row']))
rows = pd.DataFrame({
'csv' : [ row_exp('pandas', 'csv'), row_exp('arrow', 'csv') ],
'csv.gz' : [ row_exp('pandas', 'csv.gz'), row_exp('arrow', 'csv.gz') ],
'parquet' : [ row_exp('pandas', 'parquet'), row_exp('arrow', 'parquet') ]
}, index=['pandas', 'arrow'])
rows
# -
# **NOTE: I think Parquet will probably perform much better as the file size increases. Our files here are pretty small.**
# ## Conclusion: arrow > pandas.
#
# This experiment was done with a very small dataset, but there are two observations:
#
# 1. **Arrow is faster than pandas in every case**.
# 2. **Scanning columns is WAY faster than scanning rows**.
#
# Also, note that while `csv.gz` is slightly slower than `csv`, the `csv.gz` files are `1/4` the size in storage. For large files, this will translate to faster transfer speeds.
| querySQL/MinioSQL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Keras GPU
#
# ## Introduction
#
# This recipe shows how to run Keras using Batch AI. Keras supports the tensorflow, cntk and theano backends; currently only the tensorflow and cntk backends support running on GPU. Batch AI will automatically set up the backend when the toolkit is specified.
#
# ## Details
# - Keras can run with CNTK or Tensorflow backend.
# - Standard keras sample script [mnist_cnn.py](https://raw.githubusercontent.com/fchollet/keras/master/examples/mnist_cnn.py) is used;
# - The script downloads the standard MNIST Database on its own;
# - Standard output of the job will be stored on Azure File Share.
# ## Instructions
#
# ### Install Dependencies and Create Configuration file.
# Follow [instructions](/recipes) to install all dependencies and create configuration file.
# ### Read Configuration and Create Batch AI client
# + nbpresent={"id": "bfa11f00-8866-4051-bbfe-a9646e004910"}
from __future__ import print_function
from datetime import datetime
import sys
from azure.storage.file import FileService
import azure.mgmt.batchai.models as models
# The BatchAI/utilities folder contains helper functions used by different notebooks
sys.path.append('../../../')
import utilities as utils
cfg = utils.config.Configuration('../../configuration.json')
client = utils.config.create_batchai_client(cfg)
# -
# Create the Resource Group and Batch AI workspace if they do not exist:
utils.config.create_resource_group(cfg)
_ = client.workspaces.create(cfg.resource_group, cfg.workspace, cfg.location).result()
# ## 1. Prepare Training Dataset and Script in Azure Storage
# ### Create File Share
#
# For this example we will create a new File Share with name `batchaisample` under your storage account.
#
# **Note** You don't need to create new file share for every cluster. We are doing this in this sample to simplify resource management for you.
azure_file_share_name = 'batchaisample'
service = FileService(cfg.storage_account_name, cfg.storage_account_key)
service.create_share(azure_file_share_name, fail_on_exist=False)
print('Done')
# ### Deploy Sample Script and Configure the Input Directories
#
# - Download the original sample script:
sample_script_url = 'https://raw.githubusercontent.com/fchollet/keras/master/examples/mnist_cnn.py'
utils.dataset.download_file(sample_script_url, 'mnist_cnn.py')
# - For each job we will create a folder containing a copy of [mnist_cnn.py](https://raw.githubusercontent.com/fchollet/keras/master/examples/mnist_cnn.py). This allows each job to have its own copy of the sample script (in case you would like to change it).
keras_sample_dir = "KerasSamples"
service = FileService(cfg.storage_account_name, cfg.storage_account_key)
service.create_directory(
azure_file_share_name, keras_sample_dir, fail_on_exist=False)
service.create_file_from_path(
azure_file_share_name, keras_sample_dir, 'mnist_cnn.py', 'mnist_cnn.py')
print('Done')
# ### Configure Compute Cluster
#
# - For this example we will use a GPU cluster of `STANDARD_NC6` nodes. Number of nodes in the cluster is configured with `nodes_count` variable;
# - We will mount file share at folder with name `afs`. Full path of this folder on a computer node will be `$AZ_BATCHAI_MOUNT_ROOT/afs`;
# - We will call the cluster `nc6`;
#
#
# So, the cluster will have the following parameters:
# +
nodes_count = 1
cluster_name = 'nc6'
parameters = models.ClusterCreateParameters(
location=cfg.location,
vm_size='STANDARD_NC6',
scale_settings=models.ScaleSettings(
manual=models.ManualScaleSettings(target_node_count=nodes_count)
),
user_account_settings=models.UserAccountSettings(
admin_user_name=cfg.admin,
admin_user_password=cfg.admin_password or None,
admin_user_ssh_public_key=cfg.admin_ssh_key or None,
)
)
# -
# ### Create Compute Cluster
_ = client.clusters.create(cfg.resource_group, cfg.workspace, cluster_name, parameters).result()
# ### Monitor Cluster Creation
#
# The `utilities` module contains a helper function allowing to wait for the cluster to become available - all nodes are allocated and finished preparation.
cluster = client.clusters.get(cfg.resource_group, cfg.workspace, cluster_name)
utils.cluster.print_cluster_status(cluster)
# ### Configure Job
#
# - Will use configured previously input and output directories;
# - Will run standard `mnist_cnn.py` from SCRIPT input directory using custom framework;
# - Will output standard output and error streams to file share.
# Choose which Keras backend will be used: `'cntk'` or `'tensorflow'`
backend = 'cntk'
# If the `'cntk'` backend is used:
# - The job will use the `microsoft/cntk:2.5.1-gpu-python2.7-cuda9.0-cudnn7.0` container.
# - The Keras framework is preinstalled in the container.
# - The job needs `cntk_settings` to be configured.
if backend == 'cntk':
parameters = models.JobCreateParameters(
location=cfg.location,
cluster=models.ResourceId(id=cluster.id),
node_count=1,
container_settings=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='microsoft/cntk:2.5.1-gpu-python2.7-cuda9.0-cudnn7.0')),
mount_volumes=models.MountVolumes(
azure_file_shares=[
models.AzureFileShareReference(
account_name=cfg.storage_account_name,
credentials=models.AzureStorageCredentialsInfo(
account_key=cfg.storage_account_key),
azure_file_url='https://{0}.file.core.windows.net/{1}'.format(
cfg.storage_account_name, azure_file_share_name),
relative_mount_path='afs')
]
),
std_out_err_path_prefix='$AZ_BATCHAI_JOB_MOUNT_ROOT/{0}'.format('afs'),
        cntk_settings=models.CNTKsettings(
python_script_file_path='$AZ_BATCHAI_JOB_MOUNT_ROOT/afs/{0}/mnist_cnn.py'.format(keras_sample_dir)))
# If the `'tensorflow'` backend is used:
# - The job will use the `tensorflow/tensorflow:1.8.0-gpu` container.
# - The Keras framework will be installed by the job preparation command line.
# - The job needs `tensor_flow_settings` to be configured.
if backend == 'tensorflow':
parameters = models.JobCreateParameters(
location=cfg.location,
cluster=models.ResourceId(id=cluster.id),
node_count=1,
job_preparation=models.JobPreparation(command_line='pip install keras'),
container_settings=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='tensorflow/tensorflow:1.8.0-gpu')),
mount_volumes=models.MountVolumes(
azure_file_shares=[
models.AzureFileShareReference(
account_name=cfg.storage_account_name,
credentials=models.AzureStorageCredentialsInfo(
account_key=cfg.storage_account_key),
azure_file_url='https://{0}.file.core.windows.net/{1}'.format(
cfg.storage_account_name, azure_file_share_name),
relative_mount_path='afs')
]
),
std_out_err_path_prefix='$AZ_BATCHAI_JOB_MOUNT_ROOT/{0}'.format('afs'),
tensor_flow_settings=models.TensorFlowSettings(
python_script_file_path='$AZ_BATCHAI_JOB_MOUNT_ROOT/afs/{0}/mnist_cnn.py'.format(keras_sample_dir)))
# ### Create a training Job and wait for Job completion
#
experiment_name = 'keras_experiment'
experiment = client.experiments.create(cfg.resource_group, cfg.workspace, experiment_name).result()
job_name = datetime.utcnow().strftime('keras_{}_%m_%d_%Y_%H%M%S'.format(backend))
job = client.jobs.create(cfg.resource_group, cfg.workspace, experiment_name, job_name, parameters).result()
print('Created Job {0} in Experiment {1}'.format(job.name, experiment.name))
# ### Wait for Job to Finish
# The job will start running once the cluster has enough idle nodes. The following code waits for the job to start, printing the cluster state; while the job runs, it prints the current content of the stdout file.
#
# **Note** Execution may take several minutes to complete.
# +
if backend == 'tensorflow':
read_file = 'stdout-wk-0.txt'
elif backend == 'cntk':
read_file = 'stdout.txt'
utils.job.wait_for_job_completion(client, cfg.resource_group, cfg.workspace,
experiment_name, job_name, cluster_name, 'stdouterr', read_file)
# -
# ### List log files for the Job
files = client.jobs.list_output_files(cfg.resource_group, cfg.workspace, experiment_name, job_name,
models.JobsListOutputFilesOptions(outputdirectoryid='stdouterr'))
for f in list(files):
print(f.name, f.download_url or 'directory')
# ## 4. Clean Up (Optional)
# ### Delete the Job
_ = client.jobs.delete(cfg.resource_group, cfg.workspace, experiment_name, job_name)
# ### Delete the Cluster
# When you are finished with the sample and don't want to submit any more jobs you can delete the cluster using the following code.
_ = client.clusters.delete(cfg.resource_group, cfg.workspace, cluster_name)
# ### Delete File Share
# When you are finished with the sample and don't want to submit any more jobs you can delete the file share completely with all files using the following code.
service = FileService(cfg.storage_account_name, cfg.storage_account_key)
service.delete_share(azure_file_share_name)
| Part 2 - Python Notebook/recipes/Keras/Keras-GPU/.ipynb_checkpoints/Keras-GPU-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Raykar(RGZ): $\vec \alpha$ and $\vec \beta$
#
# This notebook approximates the values of $\vec \alpha$ and $\vec \beta$ for crowd labellers on the Radio Galaxy Zoo galaxy classification task, and compares these to the values of $\vec \alpha$ and $\vec \beta$ estimated by the Raykar et al. algorithm.
# ## Approximate $\vec \alpha$ and $\vec \beta$
#
# Here, I approximate $\vec \alpha$ and $\vec \beta$ by comparing annotator accuracy to the Norris et al. label set. $\vec \alpha$ is the sensitivity, and $\vec \beta$ is the specificity.
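# As a self-contained toy illustration of these definitions (the labels below are made up, not RGZ data), the same confusion-matrix formulas used in this notebook give:

```python
import sklearn.metrics

# Toy annotator labels vs. a ground-truth label set.
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

cm = sklearn.metrics.confusion_matrix(y_true, y_pred)
# Rows of cm are true classes, columns are predicted classes.
alpha = cm[1, 1] / cm.sum(axis=1)[1]  # sensitivity: TP / (TP + FN)
beta = cm[0, 0] / cm.sum(axis=1)[0]   # specificity: TN / (TN + FP)
print(alpha, beta)
```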
# +
from pprint import pprint
import crowdastro.crowd.util
from crowdastro.crowd.raykar import RaykarClassifier
import crowdastro.experiment.experiment_rgz_raykar as rgzr
from crowdastro.experiment.results import Results
import crowdastro.plot
import h5py
import matplotlib.pyplot as plt
import numpy
import sklearn.metrics
# %matplotlib inline
CROWDASTRO_PATH = '../data/crowdastro.h5' # Generated by the crowdastro pipeline.
RESULTS_PATH = '../data/results_rgz_raykar.h5' # Generated by crowdastro.experiment.experiment_rgz_raykar.
# -
with h5py.File(CROWDASTRO_PATH, 'r') as crowdastro_h5:
norris_labels = crowdastro_h5['/wise/cdfs/norris_labels'].value
crowd_labels = numpy.ma.MaskedArray(
crowdastro_h5['/wise/cdfs/rgz_raw_labels'],
mask=crowdastro_h5['/wise/cdfs/rgz_raw_labels_mask'])
top_10 = rgzr.top_n_accurate_targets(crowdastro_h5, n_annotators=10)
# +
approx_alphas = []
approx_betas = []
for t in range(top_10.shape[0]):
cm = sklearn.metrics.confusion_matrix(norris_labels[~top_10[t].mask],
top_10[t][~top_10[t].mask])
alpha = cm[1, 1] / cm.sum(axis=1)[1]
beta = cm[0, 0] / cm.sum(axis=1)[0]
approx_alphas.append(alpha)
approx_betas.append(beta)
print('approximate alpha:')
pprint(approx_alphas)
print('approximate beta:')
pprint(approx_betas)
crowdastro.plot.vertical_scatter(['$\\alpha$', '$\\beta$'], [approx_alphas, approx_betas], line=True)
plt.show()
# -
# It seems that higher values of $\alpha$ are correlated with lower values of $\beta$, and vice versa. This makes some intuitive sense: an annotator who labels more objects as positive gains sensitivity at the cost of specificity.
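# One way such an anti-correlation could be quantified is with a correlation coefficient. A sketch with made-up numbers (not the actual annotator estimates):

```python
import numpy as np

# Made-up alpha/beta pairs exhibiting the trade-off described above.
alphas = [0.9, 0.8, 0.6, 0.4, 0.3]
betas = [0.2, 0.3, 0.5, 0.7, 0.8]

# Pearson correlation between the two lists; values near -1
# indicate a strong anti-correlation.
r = np.corrcoef(alphas, betas)[0, 1]
print(round(r, 3))
```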
# ## Raykar-estimated $\vec \alpha$ and $\vec \beta$
#
# Here, I retrieve the $\vec \alpha$ and $\vec \beta$ estimated by the Raykar et al. algorithm and compare to the approximated values found previously. I will average the values approximated across all splits trialled.
results = Results.from_path(RESULTS_PATH)
# +
raykar_alphas = []
raykar_betas = []
raykar_classifiers = []
for split in range(results.n_splits):
rc = results.get_model('Raykar(Top-10-accurate)', split)
rc = RaykarClassifier.unserialise(rc)
raykar_alphas.append(rc.a_)
raykar_betas.append(rc.b_)
raykar_classifiers.append(rc)
raykar_alphas = numpy.mean(raykar_alphas, axis=0)
raykar_betas = numpy.mean(raykar_betas, axis=0)
print('raykar alpha:')
pprint(list(raykar_alphas))
print('raykar beta:')
pprint(list(raykar_betas))
crowdastro.plot.vertical_scatter(['$\\alpha$', '$\\beta$'], [raykar_alphas, raykar_betas], line=True)
plt.ylim(0, 0.005)
plt.show()
# -
# These numbers are all *really* small. This may be because the Raykar algorithm doesn't account for the fact that the labels are partially observed. If this is the case, then the Raykar algorithm is estimating the values as
#
# $$
# \alpha = \frac{\text{true positives}}{\text{true positives} + \text{false negatives} + \text{unobserved}}
# $$
#
#
# $$
# \beta = \frac{\text{true negatives}}{\text{true negatives} + \text{false positives} + \text{unobserved}}
# $$
#
# where we want
#
# $$
# \alpha = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}
# $$
#
#
# $$
# \beta = \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}}.
# $$
| notebooks/thesis_rgz_raykar_ab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Task 1: Try the algorithm on Dataset3 - LabelEncoding of features and a 95%-5% train-test split
# Libraries
import pandas as pd
from sklearn import preprocessing
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.model_selection import train_test_split
data=pd.read_csv("Dataset3.csv")
data
data.describe()
data.head()
# Outlook:
# R 1
# O 0
# S 2
# Temp:
# H 1
# M 2
# C 0
# Humidity:
# Normal 2
# High 0
# ex 1
# Wind:
# F 0
# T 1
#
#
#
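# The mapping above follows from how `LabelEncoder` works: integer codes are assigned in sorted (alphabetical) order of the labels. A small sketch verifying this for the Outlook column:

```python
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(['R', 'O', 'S'])  # Outlook values from the table above
# Classes are stored sorted, so O -> 0, R -> 1, S -> 2.
mapping = dict(zip(le.classes_, le.transform(le.classes_)))
print(mapping)
```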
# +
#creating labelEncoder
le = preprocessing.LabelEncoder()
# Converting string labels into numbers.
data["Outlook"]=le.fit_transform(data["Outlook"])
data["Temp"]=le.fit_transform(data["Temp"])
data["Humidity"]=le.fit_transform(data["Humidity"])
data["Wind"]=le.fit_transform(data["Wind"])
data["Class"]=le.fit_transform(data["Class"])
data.head()
# -
X=data.drop(["Class"], axis = 1)
Y=data["Class"]
# Train-test division 95-5
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.05, random_state = 146)
import numpy as np
model = GaussianNB()
#Create a Classifier
#model=MultinomialNB()
# Train the model using the training sets
model.fit(X_train,Y_train)
# +
#Predict the response for test dataset
target_pred = model.predict(X_test)
# +
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(Y_test, target_pred))
# -
#Import confusion_matrix from scikit-learn metrics module for confusion_matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(Y_test, target_pred)
# +
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
precision = precision_score(Y_test, target_pred, average='binary')
recall = recall_score(Y_test, target_pred, average='binary')
print('precision: {}'.format(precision))
print('recall: {}'.format(recall))
# -
Que = {'Outlook': [1, 2],'Temp':[2,0],'Humidity':[2,0],'Wind':[0,1]}
df = pd.DataFrame(Que, columns=['Outlook', 'Temp', 'Humidity', 'Wind'])
df
ans = model.predict(df)
ans
# Answer is:
# Q1: No
# Q2: Yes
| lab3/ex1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:replay_trajectory_paper] *
# language: python
# name: conda-env-replay_trajectory_paper-py
# ---
# +
from scipy.io import loadmat
import matplotlib.pyplot as plt
import numpy as np
clips = loadmat('clips.mat')['clips']
# -
for i in range(7):
    plt.plot(clips[i])
# +
from loren_frank_data_processing.multiunit import get_multiunit_dataframe2
from src.parameters import ANIMALS
from loren_frank_data_processing import make_tetrode_dataframe
tetrode_info = make_tetrode_dataframe(ANIMALS)
# -
epoch_key = ('remy', 35, 2)
tetrode_info.xs(epoch_key, drop_level=False)
| notebooks/2021_01_18_Clips.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Step1: Create the Python Script
#
# In the cell below, you will need to complete the Python script and run the cell to generate the file using the magic `%%writefile` command. Your main task is to complete the following methods for the `PersonDetect` class:
# * `load_model`
# * `predict`
# * `draw_outputs`
# * `preprocess_outputs`
# * `preprocess_input`
#
# For your reference, here are all the arguments used for the argument parser in the command line:
# * `--model`: The file path of the pre-trained IR model, which has been pre-processed using the model optimizer. There is automated support built into this argument for both FP32 and FP16 models targeting different hardware.
# * `--device`: The type of hardware you want to load the model on (CPU, GPU, MYRIAD, HETERO:FPGA,CPU)
# * `--video`: The file path of the input video.
# * `--output_path`: The location where the output stats and video file with inference needs to be stored (results/[device]).
# * `--max_people`: The max number of people in queue before directing a person to another queue.
# * `--threshold`: The probability threshold value for the person detection. Optional arg; default value is 0.60.
# +
# %%writefile person_detect.py
import numpy as np
import time
from openvino.inference_engine import IENetwork, IECore
import os
import cv2
import argparse
import sys
class Queue:
'''
Class for dealing with queues
'''
def __init__(self):
self.queues=[]
def add_queue(self, points):
self.queues.append(points)
def get_queues(self, image):
for q in self.queues:
x_min, y_min, x_max, y_max=q
frame=image[y_min:y_max, x_min:x_max]
yield frame
def check_co_ords(self, co_ords):
d={k+1:0 for k in range(len(self.queues))}
for co_ord in co_ords:
for i, q in enumerate(self.queues):
if co_ord[0]>q[0] and co_ord[2]<q[2]:
d[i+1]+=1
return d
class PersonDetect:
'''
Class for the Person Detection Model.
'''
def __init__(self, model_name, device, threshold=0.60):
self.model_weights=model_name+'.bin'
self.model_structure=model_name+'.xml'
self.device=device
self.threshold=threshold
try:
self.model=IENetwork(self.model_structure, self.model_weights)
except Exception as e:
            raise ValueError("Could not initialise the network. Have you entered the correct model path?")
self.input_name=next(iter(self.model.inputs))
self.input_shape=self.model.inputs[self.input_name].shape
self.output_name=next(iter(self.model.outputs))
self.out_size=self.model.outputs[self.output_name].shape
self.core = IECore()
def output_shape(self, w, h):
self.w = w
self.h = h
def load_model(self):
self.net = self.core.load_network(network=self.model, device_name=self.device, num_requests=1)
def predict(self, image):
frame = self.preprocess_input(image)
output = self.net.infer({self.input_name: frame})
co_ordinates = self.preprocess_outputs(output[self.output_name])
self.draw_outputs(co_ordinates, image)
return co_ordinates, image
def draw_outputs(self, co_ords, image):
for co_ord in co_ords:
cv2.rectangle(image, (co_ord[0], co_ord[1]), (co_ord[2], co_ord[3]), (0, 255, 0), 1)
def preprocess_outputs(self, output):
co_ordinates = []
for box in output[0][0]:
conf_value = box[2]
if conf_value >= self.threshold:
xmin = int(box[3] * self.w)
ymin = int(box[4] * self.h)
xmax = int(box[5] * self.w)
ymax = int(box[6] * self.h)
co_ordinates.append((xmin, ymin, xmax, ymax))
return co_ordinates
def preprocess_input(self, image):
frame = cv2.resize(image, (self.input_shape[3], self.input_shape[2]))
frame = frame.transpose((2,0,1))
frame = frame.reshape(1, *frame.shape)
return frame
def main(args):
model=args.model
device=args.device
video_file=args.video
max_people=args.max_people
threshold=args.threshold
output_path=args.output_path
start_model_load_time=time.time()
pd= PersonDetect(model, device, threshold)
pd.load_model()
total_model_load_time = time.time() - start_model_load_time
queue=Queue()
try:
queue_param=np.load(args.queue_param)
for q in queue_param:
queue.add_queue(q)
    except Exception as e:
        print("Error loading queue param file: ", e)
try:
cap=cv2.VideoCapture(video_file)
except FileNotFoundError:
print("Cannot locate video file: "+ video_file)
except Exception as e:
print("Something else went wrong with the video file: ", e)
initial_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
initial_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
video_len = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
out_video = cv2.VideoWriter(os.path.join(output_path, 'output_video.mp4'), cv2.VideoWriter_fourcc(*'avc1'), fps, (initial_w, initial_h), True)
counter=0
start_inference_time=time.time()
try:
pd.output_shape(initial_w, initial_h)
while cap.isOpened():
ret, frame=cap.read()
if not ret:
break
counter+=1
co_ords, image= pd.predict(frame)
num_people= queue.check_co_ords(co_ords)
print(f"Total People in frame = {len(co_ords)}")
print(f"Number of people in queue = {num_people}")
out_text=""
y_pixel=25
for k, v in num_people.items():
out_text += f"No. of People in Queue {k} is {v} "
if v >= int(max_people):
out_text += f" Queue full; Please move to next Queue "
cv2.putText(image, out_text, (15, y_pixel), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
out_text=""
y_pixel+=40
out_video.write(image)
total_time=time.time()-start_inference_time
total_inference_time=round(total_time, 1)
fps=counter/total_inference_time
with open(os.path.join(output_path, 'stats.txt'), 'w') as f:
f.write(str(total_inference_time)+'\n')
f.write(str(fps)+'\n')
f.write(str(total_model_load_time)+'\n')
cap.release()
cv2.destroyAllWindows()
except Exception as e:
print("Could not run Inference: ", e)
if __name__=='__main__':
parser=argparse.ArgumentParser()
parser.add_argument('--model', required=True)
parser.add_argument('--device', default='CPU')
parser.add_argument('--video', default=None)
parser.add_argument('--queue_param', default=None)
parser.add_argument('--output_path', default='/results')
parser.add_argument('--max_people', default=2)
parser.add_argument('--threshold', default=0.60)
args=parser.parse_args()
main(args)
# -
# # Next Step
#
# Now that you've run the above cell and created your Python script, you will create your job submission shell script in the next workspace.
#
# **Note**: As a reminder, if you need to make any changes to the Python script, you can come back to this workspace to edit and run the above cell to overwrite the file with your changes.
| Create_Python_Script.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stochastic Volatility model
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
# -
# Asset prices have time-varying volatility (variance of day over day `returns`). In some periods, returns are highly variable, while in others very stable. Stochastic volatility models model this with a latent volatility variable, modeled as a stochastic process. The following model is similar to the one described in the No-U-Turn Sampler paper, Hoffman (2011) p21.
#
# $$ \sigma \sim Exponential(50) $$
#
# $$ \nu \sim Exponential(.1) $$
#
# $$ s_i \sim Normal(s_{i-1}, \sigma^{-2}) $$
#
# $$ log(r_i) \sim t(\nu, 0, exp(-2 s_i)) $$
#
# Here, $r$ is the daily return series and $s$ is the latent log volatility process.
# ## Build Model
# First we load some daily returns of the S&P 500.
n = 400
returns = pd.read_hdf('../data/assets.h5', key='sp500/prices').loc['2000':, 'close'].pct_change().dropna()
returns[:5]
# As you can see, the volatility seems to change over time quite a bit but cluster around certain time-periods. Around time-points 2500-3000 you can see the 2009 financial crash.
returns.plot(figsize=(15,4))
# Specifying the model in `PyMC3` mirrors its statistical specification.
with pm.Model() as model:
step_size = pm.Exponential('sigma', 50.)
s = GaussianRandomWalk('s', sd=step_size, shape=len(returns))
nu = pm.Exponential('nu', .1)
r = pm.StudentT('r', nu=nu, lam=pm.math.exp(-2*s),
observed=returns)
# ## Fit Model
# For this model, the full maximum a posteriori (MAP) point is degenerate and has infinite density. NUTS, however, gives the correct posterior.
with model:
trace = pm.sample(tune=2000, nuts_kwargs=dict(target_accept=.9))
pm.traceplot(trace, varnames=['sigma', 'nu']);
# +
fig, ax = plt.subplots()
plt.plot(trace['s'].T, 'b', alpha=.03);
ax.set(title=str(s), xlabel='time', ylabel='log volatility');
# -
# Looking at the returns over time and overlaying the estimated standard deviation we can see how the model tracks the volatility over time.
pm.trace_to_dataframe(trace).info()
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(returns.values)
ax.plot(np.exp(trace['s']).T, 'r', alpha=.03);
ax.set(xlabel='time', ylabel='returns')
ax.legend(['S&P500', 'stoch vol']);
| Chapter09/06_stochastic_volatility.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TensorFlow Fundamentals
#
# Learning Objectives:
#
# * Gain experience with low-level Tensorflow operations
# * Learn to use GradientTape to calculate partial derivatives and perform gradient descent
# * Learn about the tf.data.Dataset class, including batching
#
# ## Calculating Gradients
#
# In a previous exercise, we practiced calculating partial derivatives on the following example:
#
# $$ f(x,y) = \sqrt{x^2 + y^2}$$
#
# $$\frac{\partial f}{\partial x} = \frac{x}{\sqrt{x^2 + y^2}}$$
#
# $$\frac{\partial f}{\partial y} = \frac{y}{\sqrt{x^2 + y^2}}$$
#
# ### Question
# Take a second to calculate the following by hand:
#
# * $\displaystyle f(3, 4) = ??$
#
#
# * $ \displaystyle \frac{\partial f(3, 4)}{\partial x} = ??$
#
#
# * $ \displaystyle \frac{\partial f(3, 4)}{\partial y} = ??$
#
#
# ### Answers:
# *
# *
# *
# At its core, TensorFlow is a library for representing mathematical operations as graphical structures and automating the process of computing partial derivatives. We can use TensorFlow to write numpy-style mathematical operations:
#
# +
import tensorflow as tf
import numpy as np
def f(x, y):
return tf.sqrt(x**2 + y**2)
x = tf.Variable(3, dtype=tf.float32)
y = tf.Variable(4, dtype=tf.float32)
print(f(x, y))
# -
# More interestingly, we can use a `GradientTape` to record mathematical operations for automatic differentiation:
# +
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
tape.watch(y)
fxy = f(x, y)
df_dx = tape.gradient(fxy, x)
df_dy = tape.gradient(fxy, y)
print("f(3, 4) = {:.5}".format(fxy))
print("df(3, 4)/dx = {:.5}".format(df_dx))
print("df(3, 4)/dy = {:.5}".format(df_dy))
# -
# Once we have the partial derivatives, we can minimize our function using gradient descent. Use the cell below to find the x and y that minimize $ f(x,y) = \sqrt{x^2 + y^2}$. Adjust the learning rate and the number of iterations (*not* the starting x and y values) until the code converges to something close to the minimum value for the function.
# +
learning_rate = .01
iterations = 10
x = tf.Variable(3, dtype=tf.float32)
y = tf.Variable(4, dtype=tf.float32)
for iteration in range(iterations):
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
tape.watch(y)
fxy = f(x, y)
print('current "loss": {}'.format(fxy))
df_dx = tape.gradient(fxy, x)
df_dy = tape.gradient(fxy, y)
x = x - learning_rate * df_dx
y = y - learning_rate * df_dy
print("\nx: {}".format(x.numpy()))
print("y: {}".format(y.numpy()))
# -
# ### Questions:
#
# * What learning rate and iteration count did you settle on?
# * Where does this function have its minimum? (Note that this is a case where we don't *need* to use gradient descent to find the solution. You should be able to determine the minimum value without executing the code above.)
# ### Answers:
# *
# *
# ## DataSets
#
# In machine learning it is often the case that training data is too large to fit in memory on a single machine. We may also want to perform some pre-processing on the data as it is loaded. The `tf.data.Dataset` class provides a standard interface for feeding data to a machine learning model. `Dataset` objects act as Python generators.
#
# We can create a Dataset from a numpy array using the `from_tensor_slices` method:
#
# +
#Generate 6 random two-dimensional elements as column vectors:
features = np.round(np.random.random((6, 2, 1)), 2)
print("Numpy array of data:\n")
print(features)
# Build a dataset:
dataset = tf.data.Dataset.from_tensor_slices(features)
# iterate over the elements in the dataset:
print("\nIterate over the corresponding Dataset:\n")
for element in dataset:
print(element)
# -
# ## Batches
#
# It is usually more efficient to process data in *batches* than individually. Here is an example of TensorFlow code that multiplies each element in our data set by an appropriately sized weight vector and sums the result. In this example each element is processed individually.
# +
total = tf.Variable(np.zeros((1,1)))
weights = tf.Variable(np.random.random((2,1)))
for element in dataset:
total = total + tf.matmul(tf.transpose(weights), element)
print("Total so far: {}".format(total))
print("\nFinal Total: {}".format(total))
# -
# Instead of processing one data element per iteration, we can batch the dataset and process multiple elements per iteration. Many TensorFlow operators, including `tf.matmul`, are "batch-aware" and will recognize that the first dimension corresponds to the batch. Let's look at a batched version of our dataset:
dataset_batched = dataset.batch(3)
for batch in dataset_batched:
print("Shape: {}\n".format(batch.shape))
print("Elements:\n {}\n".format(batch))
# +
total = tf.Variable(np.zeros((1, 1)))
for batch in dataset_batched:
batch_of_products = tf.matmul(tf.transpose(weights), batch)
total = total + tf.reduce_sum(batch_of_products)
print("Total so far: {}".format(total))
print("\nFinal Total: {}".format(total))
| tf_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# ### Pyplot Available Styles
plt.style.available
# ### Plotting Example
def creatPlot():
randomNumbers = np.random.randn(5000, 6)
(figures, axes) = plt.subplots(figsize = (15, 10))
(n, bins, patches) = axes.hist(
randomNumbers,
12,
density = 1,
histtype = 'bar',
        label = [
            'Color 1',
            'Color 2',
            'Color 3',
            'Color 4',
            'Color 5',
            'Color 6'
        ])
axes.set_title('Histogram for Normal Distribution', fontsize = 24)
axes.set_xlabel('Data', fontsize = 16)
axes.set_ylabel('Frequency', fontsize = 16)
axes.legend()
plt.show()
creatPlot()
# ### Loading Styles from Files
plt.style.use('aux/mpl-styles/dark.mplstyle')
creatPlot()
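# Built-in styles can be applied the same way, without an external `.mplstyle` file. A sketch using a style bundled with matplotlib:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; safe without a display
import matplotlib.pyplot as plt

# 'ggplot' ships with matplotlib itself, so no file path is needed.
plt.style.use('ggplot')
print('ggplot' in plt.style.available)
```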
| modules/02-data-organization-and-visualization/14-matplotlib-normal-distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
import pandas as pd
import sys
# %matplotlib inline
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
# # Sort
# ### How do I sort a column ascending?
# dataframe
df = pd.DataFrame(data={'col1':[0,10,2,30,4]})
df
# Sort by col1 ascending
df.sort_values(by='col1')
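# `sort_values` also accepts `ascending=False` for a descending sort. A quick sketch on the same toy dataframe:

```python
import pandas as pd

df = pd.DataFrame(data={'col1': [0, 10, 2, 30, 4]})
# Largest values first.
print(df.sort_values(by='col1', ascending=False))
```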
# ### How do I sort by the index?
# +
# Unordered index
d = {'col2':[22,10,113]}
i = [pd.Timestamp('20130102'),
pd.Timestamp('2013-01-01'),
pd.Timestamp('1/3/2013')]
df = pd.DataFrame(data=d, index = i)
df
# -
# Index sorted ascending
df.sort_index()
# <p class="text-muted">This tutorial was created by <a href="http://www.hedaro.com" target="_blank"><strong>HEDARO</strong></a></p>
| lectures/01_intro/code/learn-pandas/lessons/Cookbook - Sort.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from wflow import wflow_bmi
import os
import time
import datetime
import pandas as pd
import xarray as xr
import hydrostats.data as hd
import hydrostats.visual as hv
import matplotlib.pyplot as plt
import HydroErr as he
import numpy as np
# -
#Set working directory
os.chdir(r"C:\Users\jerom\Desktop\wflow\examples\wflow_rhine_sbm")
#Call BMI and initialize configuration file
wflow = wflow_bmi.wflowbmi_csdms()
wflow.initialize("wflow_sbm.ini")
wflow.get_input_var_names()
wflow.get_attribute_names()
#List of output variables
wflow.get_output_var_names()
#Set output variable name(s)
variable = "SurfaceRunoff"
#Get lon/lat for xarray
lon = wflow.get_grid_x(variable)
lat = wflow.get_grid_y(variable)
#Create 1D Lon/Lat lists for xarray
lon2 = lon[0,:]
lat2 = lat[:,0]
# +
#Get time step, start, end and calculate the amount of model time steps
tstep = wflow.get_time_step()
tstart = wflow.get_start_time()
tend = wflow.get_end_time()
tstep_nmbr = int((tend - tstart) / tstep) + 1
print tstep_nmbr
print time.ctime(int(tstart)) + " start"
print time.ctime(int(tend)) + " end"
# -
#Set correct date format that would work for all models
date_start = datetime.datetime.strptime(time.ctime(int(tstart)), "%a %b %d %H:%M:%S %Y").strftime('%Y-%m-%d')
#Create xarray dataset with coordinates and time series
ds = xr.Dataset(coords={'lon': lon2,
'lat': lat2,
'time': pd.date_range(date_start, periods = tstep_nmbr)})
ds
#Create empty array for faster filling
data_variable = np.zeros(shape=(tstep_nmbr, len(lat2), len(lon2)))
#Model run with retrieval of variables for each time step
for idx, i in enumerate(range(tstep_nmbr)):
wflow.update_until(tstart + i *tstep)
value = wflow.get_value(variable)
data_variable[idx,:,:] = value
#Append variable data to xarray dataset
ds[variable] = (('time', 'lat', 'lon'), data_variable)
ds
#Plot variable map of time step 50
array = ds[variable].isel(time= 50)
array.plot()
#Get timeseries data of a variable from the xarray Dataset for a given cell
timeseries = ds[variable].sel(lat= 51.8325, lon=6.0955, method= 'nearest')
#Timeseries plot of selected variable
timeseries.plot()
# +
#Create dataframe with time, simulated and observed values
#Note: order of columns needs to be time, sim, obs for the hydrostats package
timeseries = ds[variable].sel(lat= 51.8325, lon=6.0955, method= 'nearest')
sim = timeseries.to_dataframe()
obs = pd.read_csv(r"C:\Users\jerom\Desktop\wflow\examples\grdc.csv", sep=';')
obs['date'] = pd.to_datetime(obs['date'])
obs['date'] = obs['date'].dt.strftime('%Y-%m-%d')
pd.to_datetime(obs['date'])
obs.set_index('date', inplace=True)
val = sim.merge(obs, left_index=True, right_index=True, how='inner')
val.columns = ['sim', 'lat', 'lon', 'obs']
val = val.drop(columns=['lat', 'lon'])
# +
#Plot hydrograph for simulated and observed values and calculate statistics
hv.plot(val,
title='Hydrograph of Lobith',
linestyles=['r-', 'k-'],
legend=('Simulated', 'Observed'),
labels=['Datetime', 'Streamflow (cms)'],
metrics=['ME', 'NSE', 'SA'],
grid=True)
plt.show()
# +
#Separate statistics using hydrostats
sim = val['sim']
obs = val['obs']
nse = he.nse(sim, obs)
pearson_r = he.pearson_r(sim, obs)
print nse
print pearson_r
# -
| wflow_example/wflow_sbm_rhine_hydrograph.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from rnncomp.dataman import *
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# -
sig_list, desc = mk_cls_dataset(t_len=1, dims=1, n_classes=3, freq=10, class_type="disc_spec", save_res=True)
print(sig_list.shape)
dat = sig_list
disc_dat = []
for i in range(dat.shape[0]):
    disc_dat.append(dat[i, :, 0].T)
for d in disc_dat:
    plt.plot(d)
sig_list, desc = mk_cls_dataset(t_len=1, dims=1, n_classes=3, freq=10, class_type="cont_spec", save_res=True)
print(sig_list.shape)
dat = sig_list
cont_dat = []
for i in range(dat.shape[0]):
    cont_dat.append(dat[i, :, 0].T)
for d in cont_dat:
    plt.plot(d)
sig_list, desc = mk_cls_dataset(t_len=1, dims=1, n_classes=3, freq=10, class_type="orth_spec", save_res=True)
print(sig_list.shape)
dat = sig_list
spec_dat = []
for i in range(dat.shape[0]):
    spec_dat.append(dat[i, :, 0].T)
for d in spec_dat:
    plt.plot(d)
| dataset-gen-check.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.2
# language: julia
# name: julia-1.4
# ---
#
# <a id='harrison-kreps'></a>
# <div id="qe-notebook-header" style="text-align:right;">
# <a href="https://quantecon.org/" title="quantecon.org">
# <img style="width:250px;display:inline;" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
# </a>
# </div>
# # Asset Pricing III: Incomplete Markets
#
#
# <a id='index-0'></a>
# ## Contents
#
# - [Asset Pricing III: Incomplete Markets](#Asset-Pricing-III:--Incomplete-Markets)
# - [Overview](#Overview)
# - [Structure of the Model](#Structure-of-the-Model)
# - [Solving the Model](#Solving-the-Model)
# - [Exercises](#Exercises)
# - [Solutions](#Solutions)
# ## Overview
#
# This lecture describes a version of a model of Harrison and Kreps [[HK78]](../zreferences.html#harrkreps1978).
#
# The model determines the price of a dividend-yielding asset that is traded by two types of self-interested investors.
#
# The model features
#
# - heterogeneous beliefs
# - incomplete markets
# - short sales constraints, and possibly $ \ldots $
# - (leverage) limits on an investor’s ability to borrow in order to finance purchases of a risky asset
# ### References
#
# Prior to reading the following you might like to review our lectures on
#
# - [Markov chains](../tools_and_techniques/finite_markov.html)
# - [Asset pricing with finite state space](markov_asset.html)
# ### Bubbles
#
# Economists differ in how they define a *bubble*.
#
# The Harrison-Kreps model illustrates the following notion of a bubble that attracts many economists:
#
# > *A component of an asset price can be interpreted as a bubble when all investors agree that the current price of the asset exceeds what they believe the asset’s underlying dividend stream justifies.*
# ### Setup
# + hide-output=true
using InstantiateFromURL
# optionally add arguments to force installation: instantiate = true, precompile = true
github_project("QuantEcon/quantecon-notebooks-julia", version = "0.8.0")
# + hide-output=false
using LinearAlgebra, Statistics
# -
# ## Structure of the Model
#
# The model simplifies by ignoring alterations in the distribution of wealth
# among investors having different beliefs about the fundamentals that determine
# asset payouts.
#
# There is a fixed number $ A $ of shares of an asset.
#
# Each share entitles its owner to a stream of dividends $ \{d_t\} $ governed by a Markov chain defined on the state space $ S = \{0, 1\} $.
#
# The dividend obeys
#
# $$
# d_t =
# \begin{cases}
# 0 & \text{ if } s_t = 0 \\
# 1 & \text{ if } s_t = 1
# \end{cases}
# $$
#
# The owner of a share at the beginning of time $ t $ is entitled to the dividend paid at time $ t $.
#
# The owner of the share at the beginning of time $ t $ is also entitled to sell the share to another investor during time $ t $.
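# The dividend process is just a two-state Markov chain. As an illustration (in Python/NumPy, using the type-$ a $ transition matrix $ P_a $ introduced in the next section), a quick simulation recovers the long-run frequency of the high-dividend state:

```python
import numpy as np

rng = np.random.default_rng(0)
P_a = np.array([[1/2, 1/2], [2/3, 1/3]])   # type-a transition matrix (defined below)

T = 20_000
s = 0
dividends = np.empty(T)
for t in range(T):
    dividends[t] = s                # d_t = s_t: the dividend is 1 in state 1, 0 in state 0
    s = rng.choice(2, p=P_a[s])     # draw next state from row s of P_a

mean_dividend = dividends.mean()   # long-run fraction of time in the high state
```

# The sample mean is close to $ \pi_A(1) = 3/7 \approx .43 $, the stationary probability of state $ 1 $ under $ P_a $.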
#
# Two types $ h=a, b $ of investors differ only in their beliefs about a Markov transition matrix $ P $ with typical element
#
# $$
# P(i,j) = \mathbb P\{s_{t+1} = j \mid s_t = i\}
# $$
#
# Investors of type $ a $ believe the transition matrix
#
# $$
# P_a =
# \begin{bmatrix}
# \frac{1}{2} & \frac{1}{2} \\
# \frac{2}{3} & \frac{1}{3}
# \end{bmatrix}
# $$
#
# Investors of type $ b $ think the transition matrix is
#
# $$
# P_b =
# \begin{bmatrix}
# \frac{2}{3} & \frac{1}{3} \\
# \frac{1}{4} & \frac{3}{4}
# \end{bmatrix}
# $$
#
# The stationary (i.e., invariant) distributions of these two matrices can be calculated as follows:
# + hide-output=false
using QuantEcon
qa = [1/2 1/2; 2/3 1/3]
qb = [2/3 1/3; 1/4 3/4]
mcA = MarkovChain(qa)
mcB = MarkovChain(qb)
stA = stationary_distributions(mcA)
# + hide-output=false
stB = stationary_distributions(mcB)
# -
# The stationary distribution of $ P_a $ is approximately $ \pi_A = \begin{bmatrix} .57 & .43 \end{bmatrix} $.
#
# The stationary distribution of $ P_b $ is approximately $ \pi_B = \begin{bmatrix} .43 & .57 \end{bmatrix} $.
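# As a quick cross-check of these numbers (sketched in Python/NumPy, whereas the lecture's code is Julia), the stationary distribution solves $ \pi P = \pi $ together with the normalization $ \sum_i \pi_i = 1 $:

```python
import numpy as np

P_a = np.array([[1/2, 1/2], [2/3, 1/3]])
P_b = np.array([[2/3, 1/3], [1/4, 3/4]])

def stationary(P):
    # Solve pi P = pi together with sum(pi) = 1 as an overdetermined linear system
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi_a = stationary(P_a)   # approximately [.57, .43]
pi_b = stationary(P_b)   # approximately [.43, .57]
```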
# ### Ownership Rights
#
# An owner of the asset at the end of time $ t $ is entitled to the dividend at time $ t+1 $ and also has the right to sell the asset at time $ t+1 $.
#
# Both types of investors are risk-neutral and both have the same fixed discount factor $ \beta \in (0,1) $.
#
# In our numerical example, we’ll set $ \beta = .75 $, just as Harrison and Kreps did.
#
# We’ll eventually study the consequences of two different assumptions about the number of shares $ A $ relative to the resources that our two types of investors can invest in the stock.
#
# 1. Both types of investors have enough resources (either wealth or the capacity to borrow) so that they can purchase the entire available stock of the asset <sup><a href=#f1 id=f1-link>[1]</a></sup>.
# 1. No single type of investor has sufficient resources to purchase the entire stock.
#
#
# Case 1 is the case studied in Harrison and Kreps.
#
# In case 2, both types of investor always hold at least some of the asset.
# ### Short Sales Prohibited
#
# No short sales are allowed.
#
# This matters because it limits pessimists from expressing their opinions
#
# - They can express their views by selling their shares.
# - They cannot express their pessimism more loudly by artificially “manufacturing shares” – that is, they cannot borrow shares from more optimistic investors and sell them immediately.
# ### Optimism and Pessimism
#
# The above specifications of the perceived transition matrices $ P_a $ and $ P_b $, taken directly from Harrison and Kreps, build in stochastically alternating temporary optimism and pessimism.
#
# Remember that state $ 1 $ is the high dividend state.
#
# - In state $ 0 $, a type $ a $ agent is more optimistic about next period’s dividend than a type $ b $ agent.
# - In state $ 1 $, a type $ b $ agent is more optimistic about next period’s dividend.
#
#
# However, the stationary distributions $ \pi_A = \begin{bmatrix} .57 & .43 \end{bmatrix} $ and $ \pi_B = \begin{bmatrix} .43 & .57 \end{bmatrix} $ tell us that a type $ b $ person is more optimistic about the dividend process in the long run than is a type $ a $ person.
#
# Transition matrices for the temporarily optimistic and pessimistic investors are constructed as follows.
#
# Temporarily optimistic investors (i.e., the investor with the most optimistic
# beliefs in each state) believe the transition matrix
#
# $$
# P_o =
# \begin{bmatrix}
# \frac{1}{2} & \frac{1}{2} \\
# \frac{1}{4} & \frac{3}{4}
# \end{bmatrix}
# $$
#
# Temporarily pessimistic investors (i.e., the investor with the most pessimistic
# beliefs in each state) believe the transition matrix
#
# $$
# P_p =
# \begin{bmatrix}
# \frac{2}{3} & \frac{1}{3} \\
# \frac{2}{3} & \frac{1}{3}
# \end{bmatrix}
# $$
#
# We’ll return to these matrices and their significance in the exercise.
# ### Information
#
# Investors know a price function mapping the state $ s_t $ at $ t $ into the equilibrium price $ p(s_t) $ that prevails in that state.
#
# This price function is endogenous and to be determined below.
#
# When investors choose whether to purchase or sell the asset at $ t $, they also know $ s_t $.
# ## Solving the Model
#
# Now let’s turn to solving the model.
#
# This amounts to determining equilibrium prices under the different possible specifications of beliefs and constraints listed above.
#
# In particular, we compare equilibrium price functions under the following alternative
# assumptions about beliefs:
#
# 1. There is only one type of agent, either $ a $ or $ b $.
# 1. There are two types of agent differentiated only by their beliefs. Each type of agent has sufficient resources to purchase all of the asset (Harrison and Kreps’s setting).
# 1. There are two types of agent with different beliefs, but because of limited wealth and/or limited leverage, both types of investors hold the asset each period.
# ### Summary Table
#
# The following table gives a summary of the findings obtained in the remainder of the lecture
# (you will be asked to recreate the table in an exercise).
#
# It records implications of Harrison and Kreps’s specifications of $ P_a, P_b, \beta $.
#
# |$ s_t $|0|1|
# |:---------------------:|:----:|:----:|
# |$ p_a $|1.33|1.22|
# |$ p_b $|1.45|1.91|
# |$ p_o $|1.85|2.08|
# |$ p_p $|1|1|
# |$ \hat{p}_a $|1.85|1.69|
# |$ \hat{p}_b $|1.69|2.08|
# Here
#
# - $ p_a $ is the equilibrium price function under homogeneous beliefs $ P_a $
# - $ p_b $ is the equilibrium price function under homogeneous beliefs $ P_b $
# - $ p_o $ is the equilibrium price function under heterogeneous beliefs with optimistic marginal investors
# - $ p_p $ is the equilibrium price function under heterogeneous beliefs with pessimistic marginal investors
# - $ \hat{p}_a $ is the amount type $ a $ investors are willing to pay for the asset
# - $ \hat{p}_b $ is the amount type $ b $ investors are willing to pay for the asset
#
#
# We’ll explain these values and how they are calculated one row at a time.
# ### Single Belief Prices
#
# We’ll start by pricing the asset under homogeneous beliefs.
#
# (This is the case treated in [the lecture](markov_asset.html) on asset pricing with finite Markov states)
#
# Suppose that there is only one type of investor, either of type $ a $ or $ b $, and that this investor always “prices the asset”.
#
# Let $ p_h = \begin{bmatrix} p_h(0) \cr p_h(1) \end{bmatrix} $ be the equilibrium price vector when all investors are of type $ h $.
#
# The price today equals the expected discounted value of tomorrow’s dividend and tomorrow’s price of the asset:
#
# $$
# p_h(s) = \beta \left( P_h(s,0) (0 + p_h(0)) + P_h(s,1) ( 1 + p_h(1)) \right), \quad s = 0, 1
# $$
#
# These equations imply that the equilibrium price vector is
#
#
# <a id='equation-harrkrep1'></a>
# $$
# \begin{bmatrix} p_h(0) \cr p_h(1) \end{bmatrix}
# = \beta [I - \beta P_h]^{-1} P_h \begin{bmatrix} 0 \cr 1 \end{bmatrix} \tag{1}
# $$
#
# The first two rows of the table report $ p_a(s) $ and $ p_b(s) $.
#
# Here’s a function that can be used to compute these values
# + hide-output=false
using LinearAlgebra
function price_single_beliefs(transition, dividend_payoff; β=.75)
    # First compute inverse piece
    imbq_inv = inv(I - β * transition)
    # Next compute prices
    prices = β * ((imbq_inv * transition) * dividend_payoff)
    return prices
end
# -
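# Equation [(1)](#equation-harrkrep1) can also be sanity-checked numerically; here is the same calculation sketched in Python/NumPy, reproducing the first two rows of the table:

```python
import numpy as np

beta = 0.75
d = np.array([0.0, 1.0])                      # dividend in states 0 and 1
P_a = np.array([[1/2, 1/2], [2/3, 1/3]])
P_b = np.array([[2/3, 1/3], [1/4, 3/4]])

def price_single_beliefs(P, d, beta=0.75):
    # Equation (1): p = beta * (I - beta P)^{-1} P d
    return beta * np.linalg.solve(np.eye(len(d)) - beta * P, P @ d)

p_a = price_single_beliefs(P_a, d)   # approximately [1.33, 1.22]
p_b = price_single_beliefs(P_b, d)   # approximately [1.45, 1.91]
```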
# #### Single belief prices as benchmarks
#
# These equilibrium prices under homogeneous beliefs are important benchmarks for the subsequent analysis.
#
# - $ p_h(s) $ tells what investor $ h $ thinks is the “fundamental value” of the asset.
# - Here “fundamental value” means the expected discounted present value of future dividends.
#
#
# We will compare these fundamental values of the asset with equilibrium values when traders have different beliefs.
# ### Pricing under Heterogeneous Beliefs
#
# There are several cases to consider.
#
# The first is when both types of agent have sufficient wealth to purchase all of the asset themselves.
#
# In this case the marginal investor who prices the asset is the more optimistic type, so that the equilibrium price $ \bar p $ satisfies Harrison and Kreps’s key equation:
#
#
# <a id='equation-hakr2'></a>
# $$
# \bar p(s) =
# \beta
# \max
# \left\{
# P_a(s,0) \bar p(0) + P_a(s,1) ( 1 + \bar p(1))
# ,\;
# P_b(s,0) \bar p(0) + P_b(s,1) ( 1 + \bar p(1))
# \right\} \tag{2}
# $$
#
# for $ s=0,1 $.
#
# The marginal investor who prices the asset in state $ s $ is of type $ a $ if
#
# $$
# P_a(s,0) \bar p(0) + P_a(s,1) ( 1 + \bar p(1)) >
# P_b(s,0) \bar p(0) + P_b(s,1) ( 1 + \bar p(1))
# $$
#
# The marginal investor is of type $ b $ if
#
# $$
# P_a(s,0) \bar p(0) + P_a(s,1) ( 1 + \bar p(1)) <
# P_b(s,0) \bar p(0) + P_b(s,1) ( 1 + \bar p(1))
# $$
#
# **Thus the marginal investor is the (temporarily) optimistic type**.
#
# Equation [(2)](#equation-hakr2) is a functional equation that, like a Bellman equation, can be solved by
#
# - starting with a guess for the price vector $ \bar p $ and
# - iterating to convergence on the operator that maps a guess $ \bar p^j $ into an updated guess
# $ \bar p^{j+1} $ defined by the right side of [(2)](#equation-hakr2), namely
#
#
#
# <a id='equation-harrkrep3'></a>
# $$
# \bar p^{j+1}(s)
# = \beta \max
# \left\{
# P_a(s,0) \bar p^j(0) + P_a(s,1) ( 1 + \bar p^j(1))
# ,\;
# P_b(s,0) \bar p^j(0) + P_b(s,1) ( 1 + \bar p^j(1))
# \right\} \tag{3}
# $$
#
# for $ s=0,1 $.
#
# The third row of the table reports equilibrium prices that solve the functional equation when $ \beta = .75 $.
#
# Here the type that is optimistic about $ s_{t+1} $ prices the asset in state $ s_t $.
#
# It is instructive to compare these prices with the equilibrium prices for the homogeneous belief economies that solve under beliefs $ P_a $ and $ P_b $.
#
# Equilibrium prices $ \bar p $ in the heterogeneous beliefs economy exceed what any prospective investor regards as the fundamental value of the asset in each possible state.
#
# Nevertheless, the economy recurrently visits a state that makes each investor want to
# purchase the asset for more than he believes its future dividends are
# worth.
#
# The reason is that he expects to have the option to sell the asset later to another investor who will value the asset more highly than he will.
#
# - Investors of type $ a $ are willing to pay the following price for the asset
#
#
# $$
# \hat p_a(s) =
# \begin{cases}
# \bar p(0) & \text{ if } s_t = 0 \\
# \beta(P_a(1,0) \bar p(0) + P_a(1,1) ( 1 + \bar p(1))) & \text{ if } s_t = 1
# \end{cases}
# $$
#
# - Investors of type $ b $ are willing to pay the following price for the asset
#
#
# $$
# \hat p_b(s) =
# \begin{cases}
# \beta(P_b(0,0) \bar p(0) + P_b (0,1) ( 1 + \bar p(1))) & \text{ if } s_t = 0 \\
# \bar p(1) & \text{ if } s_t =1
# \end{cases}
# $$
#
# Evidently, $ \hat p_a(1) < \bar p(1) $ and $ \hat p_b(0) < \bar p(0) $.
#
# Investors of type $ a $ want to sell the asset in state $ 1 $ while investors of type $ b $ want to sell it in state $ 0 $.
#
# - The asset changes hands whenever the state changes from $ 0 $ to $ 1 $ or from $ 1 $ to $ 0 $.
# - The valuations $ \hat p_a(s) $ and $ \hat p_b(s) $ are displayed in the fourth and fifth rows of the table.
# - Even the pessimistic investors who don’t buy the asset think that it is worth more than they think future dividends are worth.
#
#
# Here’s code to solve for $ \bar p $, $ \hat p_a $ and $ \hat p_b $ using the iterative method described above
# + hide-output=false
function price_optimistic_beliefs(transitions, dividend_payoff;
                                  β=.75, max_iter=50000, tol=1e-12)
    # We will guess an initial price vector of [0, 0]
    p_new = [0, 0]
    p_old = [10.0, 10.0]
    # We know this is a contraction mapping, so we can iterate to convergence
    for i ∈ 1:max_iter
        p_old = p_new
        temp = [maximum((q * p_old) + (q * dividend_payoff))
                for q in transitions]
        p_new = β * temp
        # If we succeed in converging, break out of the for loop
        if maximum(abs, p_new - p_old) < tol
            break
        end
    end
    # Reservation values of the two types at the equilibrium price
    temp = [minimum((q * p_old) + (q * dividend_payoff)) for q in transitions]
    ptwiddle = β * temp
    phat_a = [p_new[1], ptwiddle[2]]
    phat_b = [ptwiddle[1], p_new[2]]
    return p_new, phat_a, phat_b
end
# -
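# The same fixed point of [(3)](#equation-harrkrep3) can be checked quickly outside Julia; this Python/NumPy sketch mirrors the iteration and reproduces the $ p_o $, $ \hat p_a $ and $ \hat p_b $ rows of the table:

```python
import numpy as np

beta = 0.75
d = np.array([0.0, 1.0])
transitions = [np.array([[1/2, 1/2], [2/3, 1/3]]),    # P_a
               np.array([[2/3, 1/3], [1/4, 3/4]])]    # P_b

p = np.zeros(2)
for _ in range(50000):
    # Right-hand side of (3): each type's valuation, take the max state by state
    vals = np.stack([Q @ (d + p) for Q in transitions])
    p_next = beta * vals.max(axis=0)
    if np.max(np.abs(p_next - p)) < 1e-12:
        p = p_next
        break
    p = p_next

# Reservation prices of each type at the equilibrium price
vals = beta * np.stack([Q @ (d + p) for Q in transitions])
phat_a = np.array([p[0], vals[0, 1]])    # approximately [1.85, 1.69]
phat_b = np.array([vals[1, 0], p[1]])    # approximately [1.69, 2.08]
```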
# ### Insufficient Funds
#
# Outcomes differ when the more optimistic type of investor has insufficient wealth — or insufficient ability to borrow enough — to hold the entire stock of the asset.
#
# In this case, the asset price must adjust to attract pessimistic investors.
#
# Instead of equation [(2)](#equation-hakr2), the equilibrium price satisfies
#
#
# <a id='equation-harrkrep4'></a>
# $$
# \check p(s)
# = \beta \min
# \left\{
# P_a(s,0) \check p(0) + P_a(s,1) ( 1 + \check p(1)) ,\;
# P_b(s,0) \check p(0) + P_b(s,1) ( 1 + \check p(1))
# \right\} \tag{4}
# $$
#
# and the marginal investor who prices the asset is always the one that values it *less* highly than does the other type.
#
# Now the marginal investor is always the (temporarily) pessimistic type.
#
# Notice from the fourth row of the table that the pessimistic price $ \check p $ is lower than the homogeneous belief prices $ p_a $ and $ p_b $ in both states.
#
# When pessimistic investors price the asset according to [(4)](#equation-harrkrep4), optimistic investors think that the asset is underpriced.
#
# If they could, optimistic investors would willingly borrow at the one-period gross interest rate $ \beta^{-1} $ to purchase more of the asset.
#
# Implicit constraints on leverage prohibit them from doing so.
#
# When optimistic investors price the asset as in equation [(2)](#equation-hakr2), pessimistic investors think that the asset is overpriced and would like to sell the asset short.
#
# Constraints on short sales prevent that.
#
# Here’s code to solve for $ \check p $ using iteration
# + hide-output=false
function price_pessimistic_beliefs(transitions, dividend_payoff;
                                   β=.75, max_iter=50000, tol=1e-12)
    # We will guess an initial price vector of [0, 0]
    p_new = [0, 0]
    p_old = [10.0, 10.0]
    # We know this is a contraction mapping, so we can iterate to convergence
    for i ∈ 1:max_iter
        p_old = p_new
        temp = [minimum((q * p_old) + (q * dividend_payoff)) for q in transitions]
        p_new = β * temp
        # If we succeed in converging, break out of the for loop
        if maximum(abs, p_new - p_old) < tol
            break
        end
    end
    return p_new
end
# -
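# Replacing the max in the optimistic iteration with a min gives the pessimistic fixed point of [(4)](#equation-harrkrep4); this Python/NumPy sketch confirms the $ p_p $ row of the table:

```python
import numpy as np

beta = 0.75
d = np.array([0.0, 1.0])
transitions = [np.array([[1/2, 1/2], [2/3, 1/3]]),    # P_a
               np.array([[2/3, 1/3], [1/4, 3/4]])]    # P_b

p = np.zeros(2)
for _ in range(50000):
    # Right-hand side of (4): take the *min* of the two valuations in each state
    vals = np.stack([Q @ (d + p) for Q in transitions])
    p_next = beta * vals.min(axis=0)
    if np.max(np.abs(p_next - p)) < 1e-12:
        p = p_next
        break
    p = p_next
# p is approximately [1.0, 1.0], matching the p_p row of the summary table
```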
# ### Further Interpretation
#
# [[Sch14]](../zreferences.html#scheinkman2014) interprets the Harrison-Kreps model as a model of a bubble — a situation in which an asset price exceeds what every investor thinks is merited by the asset’s underlying dividend stream.
#
# Scheinkman stresses these features of the Harrison-Kreps model:
#
# - High trading volume occurs when the Harrison-Kreps pricing formula [(2)](#equation-hakr2) prevails, compared with the homogeneous beliefs settings that lead to formula [(1)](#equation-harrkrep1).
#
#
# Type $ a $ investors sell the entire stock of the asset to type $ b $ investors every time the state switches from $ s_t =0 $ to $ s_t =1 $.
#
# Type $ b $ investors sell the asset to type $ a $ investors every time the state switches from $ s_t = 1 $ to $ s_t =0 $.
#
# Scheinkman takes this as a strength of the model because he observes high volume during *famous bubbles*.
#
# - If the *supply* of the asset is increased sufficiently either physically (more “houses” are built) or artificially (ways are invented to short sell “houses”), bubbles end when the supply has grown enough to outstrip optimistic investors’ resources for purchasing the asset.
# - If optimistic investors finance purchases by borrowing, tightening leverage constraints can extinguish a bubble.
#
#
# Scheinkman extracts insights about effects of financial regulations on bubbles.
#
# He emphasizes how limiting short sales and limiting leverage have opposite effects.
# ## Exercises
# ### Exercise 1
#
# Recreate the summary table using the functions we have built above.
#
# |$ s_t $|0|1|
# |:---------------------:|:----:|:----:|
# |$ p_a $|1.33|1.22|
# |$ p_b $|1.45|1.91|
# |$ p_o $|1.85|2.08|
# |$ p_p $|1|1|
# |$ \hat{p}_a $|1.85|1.69|
# |$ \hat{p}_b $|1.69|2.08|
# You will first need to define the transition matrices and dividend payoff vector.
# ## Solutions
# ### Exercise 1
#
# First we will obtain equilibrium price vectors with homogeneous beliefs, including when all
# investors are optimistic or pessimistic
# + hide-output=false
qa = [1/2 1/2; 2/3 1/3] # Type a transition matrix
qb = [2/3 1/3; 1/4 3/4] # Type b transition matrix
qopt = [1/2 1/2; 1/4 3/4] # Optimistic investor transition matrix
qpess = [2/3 1/3; 2/3 1/3] # Pessimistic investor transition matrix
dividendreturn = [0; 1]
transitions = [qa, qb, qopt, qpess]
labels = ["p_a", "p_b", "p_optimistic", "p_pessimistic"]
for (transition, label) in zip(transitions, labels)
println(label)
println(repeat("=", 20))
s0, s1 = round.(price_single_beliefs(transition, dividendreturn), digits=2)
println("State 0: $s0")
println("State 1: $s1")
println(repeat("-", 20))
end
# -
# We will use the price_optimistic_beliefs function to find the price under
# heterogeneous beliefs.
# + hide-output=false
opt_beliefs = price_optimistic_beliefs([qa, qb], dividendreturn)
labels = ["p_optimistic", "p_hat_a", "p_hat_b"]
for (p, label) ∈ zip(opt_beliefs, labels)
println(label)
println(repeat("=", 20))
s0, s1 = round.(p, digits = 2)
println("State 0: $s0")
println("State 1: $s1")
println(repeat("-", 20))
end
# -
# Notice that the equilibrium price with heterogeneous beliefs is equal to the price under single beliefs
# with optimistic investors - this is due to the marginal investor being the temporarily optimistic type.
# **Footnotes**
#
# <p><a id=f1 href=#f1-link><strong>[1]</strong></a> By assuming that both types of agent always have “deep enough pockets” to purchase all of the asset, the model takes wealth dynamics off the table. The Harrison-Kreps model generates high trading volume when the state changes either from 0 to 1 or from 1 to 0.
| multi_agent_models/harrison_kreps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# 
# + [markdown] tags=["remove_cell"]
# ---
# -
# # GridAPPS-D Python Library
# + [markdown] tags=["remove_cell"]
# This tutorial provides a first look at the GridAPPS-D Python Library
#
# __Learning Objectives:__
#
# At the end of the tutorial, the user should be able to
#
# * Explain how API calls can be wrapped in a generic programming language
# * Import required Python libraries and modules
# * Establish a connection to the GridAPPS-D platform
# * List the types of GridAPPSD-Python libraries and associated methods
# + [markdown] tags=["remove_cell"]
# ## Getting Started
#
# Before running any of the sample routines in this tutorial, it is first necessary to start the GridAPPS-D Platform and establish a connection to this notebook so that we can start passing calls to the API.
# + [markdown] tags=["remove_cell"]
# _Open the Ubuntu terminal and start the GridAPPS-D Platform if it is not running already:_
#
# `cd gridappsd-docker`
#
# ~/gridappsd-docker$ `./run.sh -t develop`
#
# _Once containers are running,_
#
# gridappsd@[container]:/gridappsd$ `./run-gridappsd.sh`
# + [markdown] tags=["remove_cell"]
# ---
# ## Table of Contents
#
# * [1. A First Course in GridAPPSD-Python](#1.-A-First-Course-in-GridAPPSD-Python)
#
#
# * [2. Building Blocks of an Application](#2.-Building-Blocks-of-an-Application)
# * [2.1. Import Required Python Libraries](#2.1.-Import-Required-Python-Libraries)
# * [2.2. Import Required GridAPPS-D Libraries](#2.2.-Import-Required-GridAPPS-D-Libraries)
# * [2.3. Establish a Connection to the GridAPPS-D Platform](#2.3.-Establish-a-Connection-to-the-GridAPPS-D-Platform)
# * [2.4. Pass a Simple API Call](#2.4.-Pass-a-Simple-API-Call)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 1. A First Course in GridAPPSD-Python
# -
# ## Intro to GridAPPSD-Python
#
# GridAPPSD-Python is a Python library that can wrap API calls and pass them to the various GridAPPS-D APIs through the GOSS Message Bus.
#
# The library has numerous shortcuts to help you develop applications faster and interface them with other applications, services, and GridAPPS-D compatible software packages.
#
# The GridAPPSD-Python library requires a python version >= 3.6 and < 4 in order to work properly. (Note: no testing has been done with python 4 to date).
#
# The GridAPPSD-Python library can be installed using `pip install gridappsd-python`.
#
# For more information, see the GridAPPSD-Python [GitHub Repo](https://github.com/GRIDAPPSD/gridappsd-python/tree/poetry) and [PyPi site](https://pypi.org/project/gridappsd-python/).
# + [markdown] tags=["remove_cell"]
# 
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 2. Connecting to the GridAPPS-D Platform
# -
# ## Connecting to GridAPPS-D Platform
#
# Before starting any development in the GridAPPS-D environment, it is necessary to establish a connection to the GridAPPS-D Platform.
#
# ### Specifying Environment Variables (Preferred)
#
# The preferred method for establishing a connection with the GridAPPS-D Platform is to define a set of environment variables that specify the connection address, port, username, and password.
#
#
# __Specifying the Environment Variables in Python Script__
#
# This method is recommended for initial application development when running in a development environment, such as PyCharm or the Jupyter Notebook tutorials.
# +
# Establish connection to GridAPPS-D Platform:
from gridappsd import GridAPPSD
import os

# Set username and password
os.environ['GRIDAPPSD_USER'] = 'tutorial_user'
os.environ['GRIDAPPSD_PASSWORD'] = '<PASSWORD>!'
os.environ['GRIDAPPSD_ADDRESS'] = 'localhost'
os.environ['GRIDAPPSD_PORT'] = '61613'
# Connect to GridAPPS-D Platform
gapps = GridAPPSD()
assert gapps.connected
# -
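# When the platform itself is not running, the environment-variable pattern can still be exercised on its own. The helper below is hypothetical (it is not part of gridappsd-python); it simply collects the variables `GridAPPSD()` reads, falling back to the defaults used throughout this tutorial:

```python
import os

def connection_settings():
    """Collect GridAPPS-D connection settings from the environment,
    falling back to the defaults used in this tutorial.
    (Hypothetical helper -- not part of gridappsd-python.)"""
    return {
        'address': os.environ.get('GRIDAPPSD_ADDRESS', 'localhost'),
        'port': int(os.environ.get('GRIDAPPSD_PORT', '61613')),
        'username': os.environ.get('GRIDAPPSD_USER', 'app_user'),
    }

settings = connection_settings()
```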
# __Specifying the Environment Variable in ~/.bashrc Script__
#
# This method is recommended for more complete applications scripts where all the application scripts are called from a single ~/.bashrc script. In that script, the environment variables can be defined and then will be available to all scripts that need to connect the GridAPPS-D Platform.
#
#
# ```
# # export allows all processes started by this shell to have access to the global variable
#
# # address where the gridappsd server is running - default localhost
# export GRIDAPPSD_ADDRESS=localhost
#
# # port to connect to on the gridappsd server (the stomp client port)
# export GRIDAPPSD_PORT=61613
#
# # username to connect to the gridappsd server
# export GRIDAPPSD_USER=app_user
#
# # password to connect to the gridappsd server
# export GRIDAPPSD_PASSWORD=<PASSWORD>
#
# # Note these should be changed on the server in a cyber secure environment!
# ```
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# ### Specifying Connection Parameters Manually
#
# An older method of connecting to the GridAPPS-D Platform is manually specifying the connection parameters. This method is still supported, but may be deprecated in future releases.
#
# This method is less flexible and has built-in portability issues associated with hard-coded platform passwords.
gapps = GridAPPSD("('localhost', 61613)", username='system', password='<PASSWORD>')
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# ## IMPORTANT: GridAPPS-D `utils` Deprecated
# -
# ### GridAPPSD-utils Deprecated
#
# GridAPPS-D Platform releases prior to 2021 used a library called `utils` to establish a connection with the platform. This library has been deprecated and replaced with Java Token Authentication using the environment variable method shown above.
#
# The authentication method below will work with 2019-2020 versions of the GridAPPS-D Platform and GridAPPSD-Python, but not with any newer releases.
#
# ```
# # DEPRECATED authentication method
# from gridappsd import GridAPPSD, utils
# gapps = GridAPPSD(address=utils.get_gridappsd_address(),
# username=utils.get_gridappsd_user(), password=utils.get_gridappsd_pass())
# ```
#
# `utils` -- __DEPRECATED__ A set of utilities to assist with common commands, including
#
#
# * `utils.validate_gridappsd_uri()` -- Checks if GridAPPS-D is hosted on the correct port
#
# * `utils.get_gridappsd_address()` -- Returns the platform address such that response can be passed directly to a socket or the STOMP library
#
# * `utils.get_gridappsd_user()` -- Returns the login username
#
# * `utils.get_gridappsd_pass()` -- Returns the login password
#
# * `utils.get_gridappsd_application_id()` -- Only applicable if the environment variable 'GRIDAPPSD_APPLICATION_ID' has been set
#
# * `utils.get_gridappsd_simulation_id()` -- Retrieves the simulation id from the environment.
#
#
# __It is strongly recommended that applications that previously used this method replace any connection objects with environment variables to ensure compatibility with subsequent releases of the GRIDAPPS-D platform__
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 3. Passing API calls with GridAPPSD-Python
# -
# ## Passing API calls with GridAPPSD-Python
#
# There are three methods used in GridAPPSD-Python Library to pass API calls to the GridAPPS-D platform:
#
# * `.get_response(self, topic, message, timeout)` -- Pass a database query, response expected before timeout
# * `.subscribe(self, topic, callback)` -- Subscribe to a data stream
# * `.send(self, topic, message)` -- Send a command to a simulation, no response expected
#
# Each are explained in more detail below
# + [markdown] tags=["remove_cell"]
# ## 3.1. `.get_response(topic, message)`
# -
# ### .get_response(topic, message)
#
# This is the most commonly used method for passing API calls to the GridAPPS-D Platform. This method is used when a response is expected back from the GridAPPS-D platform within a particular timeout period. It is used for all database queries using
#
#
# * [PowerGrid Models API](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb) -- queries for model info, object mRIDs, measurement mRIDs
#
# * [Configuration File API](../api_usage/3.4-Using-the-Configuration-File-API.ipynb) -- queries to convert the model into other file format versions
#
# * [Timeseries API](../api_usage/3.7-Using-the-Timeseries-API.ipynb) -- queries for weather data and historical data from prior simulations
#
#
# The syntax used when calling this method is `gapps.get_response(topic, message)` or alternatively, `gapps.get_response(topic, message, timeout = 30)`, where
#
# * `topic` is the GridAPPS-D topic for the particular API (as described in [API Communication Channels](../api_usage/3.1-API-Communication-Channels.ipynb)).
#
# * `message` is the query message specifying what information the API should return
#
# * `timeout = ` is optional and gives the number of seconds given for the API to respond. Model conversion queries using the Configuration File API may take 30 - 60 seconds for very large models. Most other queries do not need a timeout specification.
#
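# As a concrete illustration, the message passed to `.get_response()` is just a Python dictionary. The topic string and field names below follow the GridAPPS-D documentation for a simple PowerGrid Models API query, but verify them against your platform version:

```python
import json

# PowerGrid Models API: ask the platform for the names of the available models
topic = "goss.gridappsd.process.request.data.powergridmodel"
message = {
    "requestType": "QUERY_MODEL_NAMES",
    "resultFormat": "JSON",
}

# With a live platform this would be:
#   response = gapps.get_response(topic, message, timeout=30)
# Here we only show the JSON payload the library serializes and sends.
payload = json.dumps(message)
```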
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# ## 3.2. `.subscribe(topic, callback)`
# -
# ### .subscribe(topic, callback)
#
# This method is used for subscribing to the real-time data stream generated by the GridAPPS-D platform while running a simulation. It is used to subscribe to information published at each time step by the
#
# * [Simulation API](../api_usage/3.6-Controlling-Simulation-API.ipynb) -- simulated SCADA data and measurements created by the simulation
#
# * [Logging API](../api_usage/3.8-Using-the-Logging-API.ipynb) -- log messages published by the Platform, applications, and simulation
#
# The `.subscribe()`method is also used to subscribe to streaming data generated by some of the GridAPPS-D services.
#
#
# The syntax used when calling this method is `gapps.subscribe(topic, callback)`, where
#
# * `topic` is the GridAPPS-D simulation output topic, log output topic, or service output topic for the particular real-time data stream the application needs to subscribe to (as described in [API Communication Channels](../api_usage/3.1-API-Communication-Channels.ipynb)).
#
# * `callback` is the subscription handler that processes each received message. For simulation and log outputs, it is a method or class definition, as described in [Comparison of Subscription Approaches](../api_usage/3.6-Controlling-Simulation-API.ipynb#Comparison-of-Subscription-Approaches).
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# ## 3.3. `.send(topic, message)`
# -
# ### .send(topic, message)
#
# This method is used for sending equipment command and simulation input messages to the GridAPPS-D platform while running a simulation. It is used to send difference messages to the [Simulation API](../api_usage/3.6-Controlling-Simulation-API.ipynb) and for other generic publishing needs, such as sending a command input to a GridAPPS-D Service.
#
# The syntax used when calling this method is `gapps.send(topic, message)`, where
#
# * `topic` is the simulation or service input topic (as described in [API Communication Channels](../api_usage/3.1-API-Communication-Channels.ipynb)).
#
# * `message` is the API call message to be published. The most commonly used simulation input message is a [Difference Message](../api_usage/3.6-Controlling-Simulation-API.ipynb#Format-of-a-Difference-Message) used to control equipment settings.
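#
# The sketch below hand-builds one such simulation input message. The topic pattern and the mRIDs are hypothetical placeholders, and the field names follow the Difference Message format linked above; in practice, `DifferenceBuilder` (described later) assembles this structure for you.

```python
import json

simulation_id = "12345678"                                   # placeholder id
topic = "goss.gridappsd.simulation.input." + simulation_id   # assumed topic pattern

# One difference: open a (hypothetical) switch.
message = {
    "command": "update",
    "input": {
        "simulation_id": simulation_id,
        "message": {
            "timestamp": 1587670650,
            "difference_mrid": "_example-difference-mrid",   # hypothetical mRID
            "reverse_differences": [
                {"object": "_switch-mrid", "attribute": "Switch.open", "value": 0}
            ],
            "forward_differences": [
                {"object": "_switch-mrid", "attribute": "Switch.open", "value": 1}
            ],
        },
    },
}

# In a live session: gapps.send(topic, json.dumps(message))
payload = json.dumps(message)
```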
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# ## 3.4. `.unsubscribe(conn_id)`
# -
# ### .unsubscribe(conn_id)
#
# This method is used to unsubscribe from a simulation or service that was previously subscribed to using the `.subscribe` method.
#
# The syntax of this method is `gapps.unsubscribe(conn_id)`, where `conn_id` is the connection id returned by the earlier call `conn_id = gapps.subscribe(topic, message)`.
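#
# The bookkeeping pattern is: keep the `conn_id` returned by `.subscribe()` and pass it to `.unsubscribe()` when done. The toy class below mimics only that pattern; it is not the real GridAPPSD client.

```python
# Minimal mock that mirrors the subscribe -> conn_id -> unsubscribe cycle.
class MockGapps:
    def __init__(self):
        self._subs = {}
        self._next_id = 0

    def subscribe(self, topic, callback):
        self._next_id += 1
        self._subs[self._next_id] = (topic, callback)
        return self._next_id        # keep this id for later

    def unsubscribe(self, conn_id):
        self._subs.pop(conn_id, None)

mock = MockGapps()
conn_id = mock.subscribe("some/topic", print)
mock.unsubscribe(conn_id)           # the subscription is gone afterwards
```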
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 4. Importing Required Python Libraries
# -
# ## Importing Required Python Libraries
#
# A typical GridAPPS-D application will require several libraries to be imported from GridAPPSD-Python as well as from other Python libraries.
#
# + [markdown] tags=["remove_cell"]
# ## 4.1. Required GridAPPS-D Libraries
# -
# ### Required GridAPPS-D Libraries
#
# The GridAPPSD-Python API contains several libraries, which are used to query for information, subscribe to measurements, and publish commands to the GOSS message bus. These include
#
#
# * `GridAPPSD` -- This is the primary library that contains numerous methods for passing API calls, connecting to the GridAPPS-D platform, and other common tasks
#
# * `topics` -- This library contains methods for constructing the correct API channel strings
#
# * `Simulation` -- This library contains shortcut methods for subscribing and controlling simulations
#
# * `Logger` -- This library contains logging methods. It is recommended to invoke those methods using the `gapps.get_logger` method rather than importing the library
#
# * `GOSS` -- This library contains methods for passing API calls to the GOSS Message Bus. It is imported automatically when importing the `GridAPPSD` library
#
# * `Houses` -- This library populates a feeder with thermal house model loads. It is imported automatically when importing the `GridAPPSD` library
#
# * `utils` -- Deprecated
#
#
# Each of the libraries can be imported using `from gridappsd import library_name`. For example,
from gridappsd import GridAPPSD
from gridappsd import topics as t
# Each of the libraries are discussed in detail in the next section.
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# ## 4.2. Other Required Python Libraries
# -
# ### Other Required Python Libraries
#
# Below is a list of some of the additional libraries that you may need to import.
#
# You may not need all of these additional libraries, depending on the needs of your application.
#
# * `argparse` -- This is the recommended command-line parsing module in Python. ([Online Documentation](https://docs.python.org/3/howto/argparse.html))
#
# * `json` -- Encoder and decoder for JavaScript Object Notation (JSON). ([Online Documentation](https://docs.python.org/3/library/json.html))
#
# * `logging` -- This module defines classes and functions for event logging. ([Online Documentation](https://docs.python.org/3/library/logging.html))
#
# * `sys` -- Python module for system specific parameters. ([Online Documentation](https://docs.python.org/3/library/sys.html))
#
# * `time` -- Time access and conversions. ([Online Documentation](https://docs.python.org/3/library/time.html))
#
# * `pytz` -- Library to enable resolution of cross-platform time zones and ambiguous times. ([Online Documentation](https://pypi.org/project/pytz/))
#
# * `stomp` -- Python client for accessing messaging servers using the Simple Text Oriented Messaging Protocol (STOMP). ([Online Documentation](https://pypi.org/project/stomp.py/))
#
# * `os` -- Miscellaneous operating system interface. Needed to set environment variables for the GridAPPS-D connection if working from a single Python script or notebook. ([Online Documentation](https://docs.python.org/3/library/os.html))
#
import argparse
import json
import logging
import sys
import time
import pytz
import stomp
import os
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 5. GridAPPSD-Python `GridAPPSD` Library
# -
# ## GridAPPSD-Python GridAPPSD Library
#
# This library contains the most commonly used methods needed for building GridAPPS-D applications and services.
#
# All of these methods are for the GridAPPS-D connection object defined using `gapps = GridAPPSD()`.
# ### Get Methods
#
#
# This group of methods is used to get information and statuses about the GridAPPS-D platform and simulations:
#
#
# * `.get_application_status()` -- Returns the current status of an application
#
# * `.get_application_id()` -- Returns the unique ID of an application registered with the Platform
#
# * `.get_houses()` -- Returns houses populated in the feeder
#
# * `.get_logger()` -- Returns a log instance for interacting with logs within the Platform
#
# * `.get_platform_status()` -- Returns the current status of the Platform
#
# * `.get_service_status()` -- Returns the current status of a service
#
# * `.get_simulation_id()` -- Returns the simulation ID for the current GridAPPSD connection
#
# ### Set / Send Methods
#
# This group of methods is used to set the status of applications and services:
#
#
# * `.set_application_status()` -- Set the status of an application
#
# * `.set_service_status()` -- Set the status of a service
#
# * `.set_simulation_id(simulation_id)` -- Set the simulation ID if none is defined
#
# * `.send_simulation_status(status, message, log_level)` -- Sets simulation + service status and writes to GridAPPS-D logs
#
# * `.send_status(status, message, log_level)` -- Sets application status and writes to GridAPPS-D logs
# ### PowerGrid Models API Methods
#
# This group of methods run pre-built PowerGrid Models API queries for simpler query types:
#
# * `.query_data(query, timeout)` -- [Run a generic SPARQL Query](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-using-a-Generic-SPARQL-Query)
#
# * `.query_model(model_id, object_type, object_id)` -- [Query using full CIM100 prefix](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-using-a-SPARQL-filter)
#
# * `.query_model_info()` -- [Query for dictionary of all models](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-for-Dictionary-of-all-Models)
#
# * `.query_model_names(model_id)` -- [Query for mRIDs of all models](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-for-Dictionary-of-all-Models)
#
# * `.query_object(object_id, model_id)` -- [Query for CIM attributes of an object](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-for-CIM-Attributes-of-an-Object)
#
# * `.query_object_dictionary(model_id, object_type, object_id)` -- [Query for object dictionary](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-for-Object-Dictionary)
#
# * `.query_object_types(model_id)` -- [Query for CIM classes in a model](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb#Query-for-Dictionary-of-all-Models)
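#
# For the generic SPARQL query method, the argument is a plain query string. The query below is a sketch using the CIM namespace prefixes that appear in the PowerGrid Models API examples; confirm the prefixes and class names there before use.

```python
# List feeder names and mRIDs (illustrative query string).
query = """
PREFIX r: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX c: <http://iec.ch/TC57/CIM100#>
SELECT ?name ?mrid WHERE {
    ?s r:type c:Feeder.
    ?s c:IdentifiedObject.name ?name.
    ?s c:IdentifiedObject.mRID ?mrid.
}
ORDER BY ?name
"""

# In a live session: response = gapps.query_data(query, timeout=30)
```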
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 6. GridAPPSD-Python Topics Library
# -
# ## GridAPPSD-Python Topics Library
#
# The GridAPPSD-Python topics library is used to obtain the correct [API Communication Channel](../api_usage/3.1-API-Communication-Channels.ipynb), which tells the GridAPPS-D platform to which database, application, or simulation a particular API call should be delivered.
#
# Static GridAPPS-D topics (such as those for the [PowerGrid Models API](../api_usage/3.3-Using-the-PowerGrid-Models-API.ipynb), [Configuration File API](../api_usage/3.4-Using-the-Configuration-File-API.ipynb), and [Timeseries API](../api_usage/3.7-Using-the-Timeseries-API.ipynb)) can be imported using
from gridappsd import topics as t
# Dynamic GridAPPS-D topics (such as those for the [Simulation API](../api_usage/3.6-Controlling-Simulation-API.ipynb) and various GridAPPS-D services) can be imported using
from gridappsd.topics import simulation_output_topic
from gridappsd.topics import simulation_input_topic
from gridappsd.topics import simulation_log_topic
# Each of the specific methods available in the `topics` library are discussed in detail in [API Communication Channels](../api_usage/3.1-API-Communication-Channels.ipynb).
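#
# Conceptually, the dynamic helpers are string builders keyed on the simulation id. The function below mirrors the assumed pattern of `simulation_output_topic(simulation_id)`; in real code, use the library helper rather than hand-building topic strings.

```python
# Hand-rolled equivalent of simulation_output_topic() -- the exact
# string format is an assumption, shown only to illustrate the idea.
def my_simulation_output_topic(simulation_id):
    return "/topic/goss.gridappsd.simulation.output." + simulation_id

topic = my_simulation_output_topic("12345678")
print(topic)
```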
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# -
# ---
# + [markdown] tags=["remove_cell"]
# # 7. GridAPPSD-Python Simulation Library
# -
# ## GridAPPSD-Python Simulation Library
#
# The GridAPPSD-Python simulation library is used for starting, running, and controlling parallel digital twin simulations. For more details on specific usage, see
#
#
# * [Starting a Parallel Simulation](../api_usage/3.5-Creating-Running-Simulation-API.ipynb#Starting-the-Simulation)
#
# * [Pausing, Resuming, and Stopping Parallel Simulations](../api_usage/3.5-Creating-Running-Simulation-API.ipynb#Using-the-gridappsd.simulation-Python-Library)
#
# * [Subscribing to Parallel Simulations](../api_usage/3.6-Controlling-Simulation-API.ipynb#Subscribing-to-Parallel-Simulations)
#
#
#
# The Simulation library can be imported using
from gridappsd.simulation import Simulation
# Available methods in the `Simulation` library are
#
# * `.start_simulation()` -- Start the simulation
#
#
# * `.pause()` -- Pause the simulation
#
#
# * `.resume()` -- Resume the simulation
#
#
# * `.resume_pause_at(pause_time)` -- Resume the simulation, then pause it again after the specified number of seconds
#
#
# * `.stop()` -- Stop the simulation
#
#
# * `.run_loop()` -- Loop the entire simulation until interrupted
#
#
# * `.simulation_id` -- Returns the Simulation ID of the simulation
#
#
# * `.add_ontimestep_callback(myfunction1)` -- Run the desired function on each timestep
#
#
# * `.add_onmesurement_callback(myfunction2)` -- Run the desired function when a measurement is received.
#
#
# * `.add_oncomplete_callback(myfunction3)` -- Run the desired function when simulation is finished
#
#
# * `.add_onstart_callback(myfunction4)` -- Run desired function when simulation is started
#
#
# __Note: method name `.add_onmesurement` is misspelled in the library definition!!__
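#
# A sketch of the callback-registration pattern follows. Only the plain Python callbacks are exercised here; starting an actual simulation requires a running GridAPPS-D platform, and the `run_config` name below is a placeholder.

```python
results = {}

def on_measurement(sim, timestamp, measurements):
    # Called for each measurement message; stash the latest snapshot.
    results[timestamp] = measurements

def on_complete(sim):
    print("simulation finished")

# With a live platform you would wire these up as:
#   simulation = Simulation(gapps, run_config)
#   simulation.add_onmesurement_callback(on_measurement)  # note the misspelling
#   simulation.add_oncomplete_callback(on_complete)
#   simulation.start_simulation()
#   simulation.run_loop()

# Exercise the measurement callback directly:
on_measurement(None, 1587670650, {"meas-1": {"value": 1.0}})
```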
#
# ---
# + [markdown] tags=["remove_cell"]
# # 8. GridAPPSD-Python `DifferenceBuilder`
# -
# ## GridAPPSD-Python DifferenceBuilder
#
# `DifferenceBuilder` is a GridAPPSD-Python library that is used to create and correctly format the difference messages used to create equipment control commands. The usage of `DifferenceBuilder` is given in [Using DifferenceBuilder](../api_usage/3.6-Controlling-Simulation-API.ipynb#Using-GridAPPSD-Python-DifferenceBuilder).
#
# The `DifferenceBuilder` library can be imported using
# + tags=["remove_cell"]
simulation_id = "12345678"
# +
from gridappsd import DifferenceBuilder
my_diff_build = DifferenceBuilder(simulation_id)
# + [markdown] tags=["remove_cell"]
# [Return to Top](#Table-of-Contents)
# + [markdown] tags=["remove_cell"]
# ---
# + [markdown] tags=["remove_cell"]
# [](2.2--Lesson-2.2--GridAPPS-D-Architecture.ipynb)
# + [markdown] tags=["remove_cell"]
# [](2.3Q--Quiz-for-Lesson-2.3.ipynb)
# + [markdown] tags=["remove_cell"]
# [](2.4--Lesson-2.4--GridAPPS-D-Application-Structure.ipynb)
# + [markdown] tags=["remove_cell"]
# [](2.0--Module-2--GridAPPS-D-Overview.ipynb)
# -
# ---
# 
| module-content/overview/.ipynb_checkpoints/2.3-GridAPPS-D-Python-Library-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/muraliparimi/Python/blob/master/The_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="jPyDpDqvZuLo"
# # **Welcome to the Notebook**
# + [markdown] id="F2YNhXuUwGNl"
# ### Let's mount Google Drive
# + id="MmQC7J5cvkzE" colab={"base_uri": "https://localhost:8080/"} outputId="fa353a1d-c188-4027-86b2-0d1efbc167af"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="cs9EInKyWicZ"
# # Task 1 :
# Installing pyspark module
# + id="LJq2nzUKWujx" colab={"base_uri": "https://localhost:8080/"} outputId="71fbc839-dfc8-4cdc-f7ce-2d8c2819afd4"
# !pip install pyspark
# + [markdown] id="dVNAlw2jWiWb"
# Importing the modules
# + id="gS3YIWJiW7I3"
from pyspark.sql import SparkSession
from pyspark.sql.functions import count, desc , col, max
import matplotlib.pyplot as plts
# + [markdown] id="RKuIUYLwSkNP"
# creating spark session
# + id="XXSZBvRgSnCN" cellView="both"
spark = SparkSession.builder.appName('spark_app').getOrCreate()
# + [markdown] id="vv_SKqr8T9mT"
# # Task 2 :
# importing the *Listenings.csv* file:
# + id="gK22lJDRTuKY"
listening_csv_path = '/content/drive/MyDrive/dataset/listenings.csv'
listening_df = spark.read.format('csv').option('inferSchema',True).option('header',True).load(listening_csv_path)
# + [markdown] id="s1i6KVshykdn"
# let's check the data:
# + id="5Ji0zViUyjUE" colab={"base_uri": "https://localhost:8080/"} outputId="0a6533d7-9da0-4519-c944-5cba60fef1d0"
listening_df.show()
# + [markdown] id="HS6wd2d_woNC"
# let's delete useless columns:
# + id="pIMzBAglwtNP" colab={"base_uri": "https://localhost:8080/"} outputId="73291da1-ad4f-40e2-c609-233389a039cf"
listening_df = listening_df.drop('date')
listening_df.show()
# + [markdown] id="MwpJJeWa4qmn"
# drop the null rows:
# + id="Botf6-Vb4uqs" colab={"base_uri": "https://localhost:8080/"} outputId="fb6cfc6f-d5f9-42ab-f581-30a1c0548d41"
listening_df = listening_df.na.drop()
listening_df.show()
# + [markdown] id="tTN6jr3K4xkF"
# let's check the dataset again:
# + id="JDp_rdEY40u3" colab={"base_uri": "https://localhost:8080/"} outputId="feaef3a7-76b2-4966-d7a5-41eb63442b56"
listening_df.show()
# + [markdown] id="Z7nKCYoZltnv"
# let's see the schema:
# + id="qVg1jt1OyWdh" colab={"base_uri": "https://localhost:8080/"} outputId="41d23ce2-3fcb-4d64-b99a-e36734c6bc6f"
listening_df.printSchema()
# + [markdown] id="JMkQsQt2xSb5"
# let's see the shape of our dataframe:
# + id="6POkV3YFmh6b" colab={"base_uri": "https://localhost:8080/"} outputId="0aa138d2-fd2a-4053-fc53-51ab32c718e1"
shape = (listening_df.count(), len(listening_df.columns))
print(shape)
# + [markdown] id="XMD0DhFl2FEJ"
# # Task 3:
#
# **Query #0:**
# select two columns: track and artist
# + id="FZTdA5wn2TZy" colab={"base_uri": "https://localhost:8080/"} outputId="d767984a-9691-4bc6-e0cd-c63553864237"
q0 = listening_df.select('track','artist')
q0.show()
# + [markdown] id="QRcgXOFs2hjw"
# **Query #1**:
#
# Let's find all of the records of those users who have listened to ***Rihanna***
# + id="ICyiTMVnppLw" colab={"base_uri": "https://localhost:8080/"} outputId="6019089f-5fe6-47eb-ecf2-6d49bdd7bf0e"
q1= q0.filter(q0['artist'] == 'Rihanna')
q1.show()
# + [markdown] id="I0IafeyvFU9O"
# **Query #2:**
#
# Let's find top 10 users who are fan of ***Rihanna***
# + id="3-hM9kMm7JmI" colab={"base_uri": "https://localhost:8080/"} outputId="fe353fec-2433-4d64-e3a7-d88390e15d15"
q2=listening_df.filter(listening_df['artist'] == 'Rihanna').groupby('user_id').agg(count('user_id').alias('count')).orderBy(desc('count')).limit(10)
q2.show()
# + [markdown] id="hgAAl6aAcp41"
# **Query #3:**
#
# find top 10 famous tracks
# + id="qlh3IUzfJ3_I" colab={"base_uri": "https://localhost:8080/"} outputId="ffc76374-0110-4500-9ae0-39b89e02f07e"
q3 = listening_df.select('artist','track').groupby('artist','track').agg(count('*').alias('count')).orderBy(desc('count')).limit(10)
q3.show()
# + [markdown] id="HqhPhQvjeXt0"
# **Query #4:**
#
# find top 10 famous tracks of ***Rihanna***
# + id="D_npmdh1ec8y" colab={"base_uri": "https://localhost:8080/"} outputId="4a3f18e0-b2c9-4665-a846-cac1a296ca96"
q4 = listening_df.select('artist','track').filter(listening_df['artist'] == 'Rihanna').groupby('track').agg(count('*').alias('count')).orderBy(desc('count')).limit(10)
q4.show()
# + [markdown] id="E6pgcH0p1ZXo"
# **Query #5:**
#
# find top 10 famous albums
# + id="c5YHm6yKenE7" colab={"base_uri": "https://localhost:8080/"} outputId="2d767b17-9235-415d-fe24-62eb80b1332e"
listening_df.printSchema()
# + [markdown] id="YXev8HQ57bdq"
# # Task 4 :
# importing the ***genre.csv*** file:
# + id="tpXSrYfu14PB"
# + [markdown] id="aCHSo36W9RcP"
# let's check the data
# + id="LJu4Ouz89O6_"
# + [markdown] id="72OpWX7F98qg"
# Let's inner join these two data frames
# + [markdown] id="yM_f5qILBNeI"
# **Query #6**
#
# find top 10 users who are fan of ***pop*** music
# + id="dognQVlxBi2n"
# + [markdown] id="63quzy7t-zb7"
# **Query #7**
#
# find top 10 famous genres
# + id="aDFcoPPk-Rhf"
# + [markdown] id="hrZOAWVgLMZo"
# # Task 5:
# **Query #8**
#
# find out each user favourite genre
# + id="H3AWxlkbLvCg"
# + id="Soy2bMxQN-Ub"
# + [markdown] id="6oIyhOHkCDuv"
# **Query #9**
#
# find out how many pop, rock, metal and hip hop singers we have
#
# and then visualize it using a bar chart
# + id="7_lEjNKVCDJv"
# + [markdown] id="h0h2SSk8InMH"
# Now, let's visualize the results using ***matplotlib***
# + id="J-n8gOC0Imj5"
# + id="CMigHLrEQgKv"
# + id="FYLE4Mbu_Lbu"
# + [markdown] id="1iDrvEwYQ4AE"
# now lets visualize these two lists using a bar chart
# + id="XOOq1U9BQjKI"
| The_Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FloPy
#
# ## Creating a Complex MODFLOW 6 Model with Flopy
#
# The purpose of this notebook is to demonstrate the Flopy capabilities for building a more complex MODFLOW 6 model from scratch. This notebook will demonstrate the capabilities by replicating the advgw_tidal model that is distributed with MODFLOW 6.
# ### Setup the Notebook Environment
# +
import sys
import os
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except ImportError:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
# -
# For this example, we will set up a model workspace.
# Model input files and output files will reside here.
model_name = 'advgw_tidal'
workspace = os.path.join('data', model_name)
if not os.path.exists(workspace):
os.makedirs(workspace)
data_pth = os.path.join('..', 'data', 'mf6', 'create_tests',
'test005_advgw_tidal')
assert os.path.isdir(data_pth)
# +
# create simulation
sim = flopy.mf6.MFSimulation(sim_name=model_name, version='mf6', exe_name='mf6',
sim_ws=workspace)
# create tdis package
tdis_rc = [(1.0, 1, 1.0), (10.0, 120, 1.0),
(10.0, 120, 1.0), (10.0, 120, 1.0)]
tdis = flopy.mf6.ModflowTdis(sim, pname='tdis', time_units='DAYS',
nper=4, perioddata=tdis_rc)
# create gwf model
gwf = flopy.mf6.ModflowGwf(sim, modelname=model_name,
model_nam_file='{}.nam'.format(model_name))
gwf.name_file.save_flows = True
# create iterative model solution and register the gwf model with it
ims = flopy.mf6.ModflowIms(sim, pname='ims', print_option='SUMMARY',
complexity='SIMPLE', outer_hclose=0.0001,
outer_maximum=500, under_relaxation='NONE',
inner_maximum=100, inner_hclose=0.0001,
rcloserecord=0.001, linear_acceleration='CG',
scaling_method='NONE', reordering_method='NONE',
relaxation_factor=0.97)
sim.register_ims_package(ims, [gwf.name])
# +
# discretization package
nlay = 3
nrow = 15
ncol = 10
botlay2 = {'factor':1.0, 'data': [-100 for x in range(150)]}
dis = flopy.mf6.ModflowGwfdis(gwf, pname='dis', nlay=nlay, nrow=nrow, ncol=ncol,
delr=500.0, delc=500.0, top=50.0,
botm=[5.0, -10.0, botlay2],
filename='{}.dis'.format(model_name))
# initial conditions
ic = flopy.mf6.ModflowGwfic(gwf, pname='ic', strt=50.0,
filename='{}.ic'.format(model_name))
# node property flow
npf = flopy.mf6.ModflowGwfnpf(gwf, pname='npf', save_flows=True,
icelltype=[1,0,0],
k=[5.0, 0.1, 4.0],
k33=[0.5, 0.005, 0.1])
# output control
oc = flopy.mf6.ModflowGwfoc(gwf, pname='oc', budget_filerecord='{}.cbb'.format(model_name),
head_filerecord='{}.hds'.format(model_name),
headprintrecord=[('COLUMNS', 10, 'WIDTH', 15,
'DIGITS', 6, 'GENERAL')],
saverecord=[('HEAD', 'ALL'), ('BUDGET', 'ALL')],
printrecord=[('HEAD', 'FIRST'), ('HEAD', 'LAST'),
('BUDGET', 'LAST')])
# +
# storage package
sy = flopy.mf6.ModflowGwfsto.sy.empty(gwf, layered=True)
for layer in range(0,3):
sy[layer]['data'] = 0.2
ss = flopy.mf6.ModflowGwfsto.ss.empty(gwf, layered=True, default_value=0.000001)
sto = flopy.mf6.ModflowGwfsto(gwf, pname='sto', save_flows=True, iconvert=1,
ss=ss, sy=sy, steady_state={0:True},
transient={1:True})
# +
# well package
# test empty with aux vars, bound names, and time series
period_two = flopy.mf6.ModflowGwfwel.stress_period_data.empty(gwf, maxbound=3, aux_vars=['var1', 'var2', 'var3'],
boundnames=True, timeseries=True)
period_two[0][0] = ((0,11,2), -50.0, -1, -2, -3, None)
period_two[0][1] = ((2,4,7), 'well_1_rate', 1, 2, 3, 'well_1')
period_two[0][2] = ((2,3,2), 'well_2_rate', 4, 5, 6, 'well_2')
period_three = flopy.mf6.ModflowGwfwel.stress_period_data.empty(gwf, maxbound=2, aux_vars=['var1', 'var2', 'var3'],
boundnames=True, timeseries=True)
period_three[0][0] = ((2,3,2), 'well_2_rate', 1, 2, 3, 'well_2')
period_three[0][1] = ((2,4,7), 'well_1_rate', 4, 5, 6, 'well_1')
period_four = flopy.mf6.ModflowGwfwel.stress_period_data.empty(gwf, maxbound=5, aux_vars=['var1', 'var2', 'var3'],
boundnames=True, timeseries=True)
period_four[0][0] = ((2,4,7), 'well_1_rate', 1, 2, 3, 'well_1')
period_four[0][1] = ((2,3,2), 'well_2_rate', 4, 5, 6, 'well_2')
period_four[0][2] = ((0,11,2), -10.0, 7, 8, 9, None)
period_four[0][3] = ((0,2,4), -20.0, 17, 18, 19, None)
period_four[0][4] = ((0,13,5), -40.0, 27, 28, 29, None)
stress_period_data = {}
stress_period_data[1] = period_two[0]
stress_period_data[2] = period_three[0]
stress_period_data[3] = period_four[0]
wel = flopy.mf6.ModflowGwfwel(gwf, pname='wel', print_input=True, print_flows=True,
auxiliary=[('var1', 'var2', 'var3')], maxbound=5,
stress_period_data=stress_period_data, boundnames=True,
save_flows=True)
# well ts package
ts_data =[(0.0, 0.0, 0.0, 0.0),
(1.0, -200.0, 0.0, -100.0),
(11.0, -1800.0, -500.0, -200.0),
(21.0, -200.0, -400.0, -300.0),
(31.0, 0.0, -600.0, -400.0)]
wel.ts.initialize(filename='well-rates.ts', timeseries=ts_data,
time_series_namerecord=[('well_1_rate', 'well_2_rate', 'well_3_rate')],
interpolation_methodrecord=[('stepwise', 'stepwise', 'stepwise')])
# -
# Evapotranspiration
evt_period = flopy.mf6.ModflowGwfevt.stress_period_data.empty(gwf, 150, nseg=3)
for col in range(0, 10):
for row in range(0, 15):
evt_period[0][col*15+row] = (((0, row, col), 50.0, 0.0004, 10.0, 0.2, 0.5, 0.3, 0.1, None))
evt = flopy.mf6.ModflowGwfevt(gwf, pname='evt', print_input=True, print_flows=True,
save_flows=True, maxbound=150,
nseg=3, stress_period_data=evt_period)
# General-Head Boundaries
ghb_period = {}
ghb_period_array = []
for layer, cond in zip(range(1, 3), [15.0, 1500.0]):
for row in range(0, 15):
ghb_period_array.append(((layer, row, 9), 'tides', cond, 'Estuary-L2'))
ghb_period[0] = ghb_period_array
ghb = flopy.mf6.ModflowGwfghb(gwf, pname='ghb', print_input=True, print_flows=True,
save_flows=True, boundnames=True,
maxbound=30, stress_period_data=ghb_period)
ts_recarray=[]
fd = open(os.path.join(data_pth, 'tides.txt'), 'r')
for line in fd:
line_list = line.strip().split(',')
ts_recarray.append((float(line_list[0]), float(line_list[1])))
ghb.ts.initialize(filename='tides.ts', timeseries=ts_recarray,
time_series_namerecord='tides',
interpolation_methodrecord='linear')
obs_recarray = {'ghb_obs.csv':[('ghb-2-6-10', 'GHB', (1, 5, 9)),
('ghb-3-6-10', 'GHB', (2, 5, 9))],
'ghb_flows.csv':[('Estuary2', 'GHB', 'Estuary-L2'),
('Estuary3', 'GHB', 'Estuary-L3')]}
ghb.obs.initialize(filename='{}.ghb.obs'.format(model_name), digits=10,
print_input=True, continuous=obs_recarray)
obs_recarray = {'head_obs.csv':[('h1_13_8', 'HEAD', (2, 12, 7))],
'intercell_flow_obs1.csv':[('ICF1_1.0', 'FLOW-JA-FACE', (0, 4, 5), (0, 5, 5))],
'head-hydrographs.csv':[('h3-13-9', 'HEAD', (2, 12, 8)),
('h3-12-8', 'HEAD', (2, 11, 7)),
('h1-4-3', 'HEAD', (0, 3, 2)),
('h1-12-3', 'HEAD', (0, 11, 2)),
('h1-13-9', 'HEAD', (0, 12, 8))]}
obs_package = flopy.mf6.ModflowUtlobs(gwf, pname='head_obs', filename='{}.obs'.format(model_name),
digits=10, print_input=True,
continuous=obs_recarray)
# River
riv_period = {}
riv_period_array = [((0,2,0),'river_stage_1',1001.0,35.9,None),
((0,3,1),'river_stage_1',1002.0,35.8,None),
((0,4,2),'river_stage_1',1003.0,35.7,None),
((0,4,3),'river_stage_1',1004.0,35.6,None),
((0,5,4),'river_stage_1',1005.0,35.5,None),
((0,5,5),'river_stage_1',1006.0,35.4,'riv1_c6'),
((0,5,6),'river_stage_1',1007.0,35.3,'riv1_c7'),
((0,4,7),'river_stage_1',1008.0,35.2,None),
((0,4,8),'river_stage_1',1009.0,35.1,None),
((0,4,9),'river_stage_1',1010.0,35.0,None),
((0,9,0),'river_stage_2',1001.0,36.9,'riv2_upper'),
((0,8,1),'river_stage_2',1002.0,36.8,'riv2_upper'),
((0,7,2),'river_stage_2',1003.0,36.7,'riv2_upper'),
((0,6,3),'river_stage_2',1004.0,36.6,None),
((0,6,4),'river_stage_2',1005.0,36.5,None),
((0,5,5),'river_stage_2',1006.0,36.4,'riv2_c6'),
((0,5,6),'river_stage_2',1007.0,36.3,'riv2_c7'),
((0,6,7),'river_stage_2',1008.0,36.2,None),
                    ((0,6,8),'river_stage_2',1009.0,36.1,None),
                    ((0,6,9),'river_stage_2',1010.0,36.0,None)]
riv_period[0] = riv_period_array
riv = flopy.mf6.ModflowGwfriv(gwf, pname='riv', print_input=True, print_flows=True,
save_flows='{}.cbc'.format(model_name),
boundnames=True, maxbound=20,
stress_period_data=riv_period)
ts_recarray=[(0.0,40.0,41.0),(1.0,41.0,41.5),
(2.0,43.0,42.0),(3.0,45.0,42.8),
(4.0,44.0,43.0),(6.0,43.0,43.1),
(9.0,42.0,42.4),(11.0,41.0,41.5),
(31.0,40.0,41.0)]
riv.ts.initialize(filename='river_stages.ts', timeseries=ts_recarray,
time_series_namerecord=[('river_stage_1', 'river_stage_2')],
interpolation_methodrecord=[('linear', 'stepwise')])
obs_recarray = {'riv_obs.csv':[('rv1-3-1', 'RIV', (0,2,0)), ('rv1-4-2', 'RIV', (0,3,1)),
('rv1-5-3', 'RIV', (0,4,2)), ('rv1-5-4', 'RIV', (0,4,3)),
('rv1-6-5', 'RIV', (0,5,4)), ('rv1-c6', 'RIV', 'riv1_c6'),
('rv1-c7', 'RIV', 'riv1_c7'), ('rv2-upper', 'RIV', 'riv2_upper'),
('rv-2-7-4', 'RIV', (0,6,3)), ('rv2-8-5', 'RIV', (0,6,4)),
('rv-2-9-6', 'RIV', (0,5,5,))],
'riv_flowsA.csv':[('riv1-3-1', 'RIV', (0,2,0)), ('riv1-4-2', 'RIV', (0,3,1)),
('riv1-5-3', 'RIV', (0,4,2))],
'riv_flowsB.csv':[('riv2-10-1', 'RIV', (0,9,0)), ('riv-2-9-2', 'RIV', (0,8,1)),
('riv2-8-3', 'RIV', (0,7,2))]}
riv.obs.initialize(filename='{}.riv.obs'.format(model_name), digits=10,
print_input=True, continuous=obs_recarray)
# First recharge package
rch1_period = {}
rch1_period_array = []
col_range = {0:3,1:4,2:5}
for row in range(0, 15):
if row in col_range:
col_max = col_range[row]
else:
col_max = 6
for col in range(0, col_max):
if (row == 3 and col == 5) or (row == 2 and col == 4) or (row == 1 and col == 3) or (row == 0 and col == 2):
mult = 0.5
else:
mult = 1.0
if row == 0 and col == 0:
bnd = 'rch-1-1'
elif row == 0 and col == 1:
bnd = 'rch-1-2'
elif row == 1 and col == 2:
bnd = 'rch-2-3'
else:
bnd = None
rch1_period_array.append(((0, row, col), 'rch_1', mult, bnd))
rch1_period[0] = rch1_period_array
rch1 = flopy.mf6.ModflowGwfrch(gwf, filename='{}_1.rch'.format(model_name),
pname='rch_1', fixed_cell=True,
auxiliary='MULTIPLIER', auxmultname='MULTIPLIER',
print_input=True, print_flows=True,
save_flows=True, boundnames=True,
maxbound=84, stress_period_data=rch1_period)
ts_data =[(0.0, 0.0015), (1.0, 0.0010),
(11.0, 0.0015),(21.0, 0.0025),
(31.0, 0.0015)]
rch1.ts.initialize(filename='recharge_rates_1.ts', timeseries=ts_data,
time_series_namerecord='rch_1',
interpolation_methodrecord='stepwise')
# Second recharge package
rch2_period = {}
rch2_period_array = [((0,0,2), 'rch_2', 0.5),
((0,0,3), 'rch_2', 1.0),
((0,0,4), 'rch_2', 1.0),
((0,0,5), 'rch_2', 1.0),
((0,0,6), 'rch_2', 1.0),
((0,0,7), 'rch_2', 1.0),
((0,0,8), 'rch_2', 1.0),
((0,0,9), 'rch_2', 0.5),
((0,1,3), 'rch_2', 0.5),
((0,1,4), 'rch_2', 1.0),
((0,1,5), 'rch_2', 1.0),
((0,1,6), 'rch_2', 1.0),
((0,1,7), 'rch_2', 1.0),
((0,1,8), 'rch_2', 0.5),
((0,2,4), 'rch_2', 0.5),
((0,2,5), 'rch_2', 1.0),
((0,2,6), 'rch_2', 1.0),
((0,2,7), 'rch_2', 0.5),
((0,3,5), 'rch_2', 0.5),
((0,3,6), 'rch_2', 0.5)]
rch2_period[0] = rch2_period_array
rch2 = flopy.mf6.ModflowGwfrch(gwf, filename='{}_2.rch'.format(model_name),
pname='rch_2', fixed_cell=True,
auxiliary='MULTIPLIER', auxmultname='MULTIPLIER',
print_input=True, print_flows=True, save_flows=True,
maxbound=20, stress_period_data=rch2_period)
ts_data = [(0.0, 0.0016), (1.0, 0.0018),
(11.0, 0.0019),(21.0, 0.0016),
(31.0, 0.0018)]
rch2.ts.initialize(filename='recharge_rates_2.ts', timeseries=ts_data,
time_series_namerecord='rch_2',
interpolation_methodrecord='linear')
# Third recharge package
rch3_period = {}
rch3_period_array = []
col_range = {0:9,1:8,2:7}
for row in range(0, 15):
if row in col_range:
col_min = col_range[row]
else:
col_min = 6
for col in range(col_min, 10):
if (row == 0 and col == 9) or (row == 1 and col == 8) or (row == 2 and col == 7) or (row == 3 and col == 6):
mult = 0.5
else:
mult = 1.0
rch3_period_array.append(((0, row, col), 'rch_3', mult))
rch3_period[0] = rch3_period_array
rch3 = flopy.mf6.ModflowGwfrch(gwf, filename='{}_3.rch'.format(model_name),
pname='rch_3', fixed_cell=True,
auxiliary='MULTIPLIER', auxmultname='MULTIPLIER',
print_input=True, print_flows=True, save_flows=True,
maxbound=54, stress_period_data=rch3_period)
ts_data=[(0.0, 0.0017),(1.0, 0.0020),(11.0, 0.0017),(21.0, 0.0018),(31.0, 0.0020)]
rch3.ts.initialize(filename='recharge_rates_3.ts', timeseries=ts_data,
time_series_namerecord='rch_3',
interpolation_methodrecord='linear')
# ### Create the MODFLOW 6 Input Files and Run the Model
#
# Once all the flopy objects are created, it is very easy to create all of the input files and run the model.
# +
# change folder to save simulation
#sim.simulation_data.mfpath.set_sim_path(run_folder)
# -
# write simulation to new location
sim.write_simulation()
# Print a list of the files that were created
# in workspace
print(os.listdir(workspace))
# ### Run the Simulation
#
# We can also run the simulation from the notebook, but only if the MODFLOW 6 executable is available. The executable can be made available by putting the executable in a folder that is listed in the system path variable. Another option is to just put a copy of the executable in the simulation folder, though this should generally be avoided. A final option is to provide a full path to the executable when the simulation is constructed. This would be done by specifying exe_name with the full path.
# Run the simulation
success, buff = sim.run_simulation()
print('\nSuccess is: ', success)
# ### Post-Process Head Results
#
# Post-processing MODFLOW 6 results is still a work in progress. There aren't any built-in Flopy plotting functions yet, as there are for other MODFLOW versions, so we need to plot the results using general Flopy capabilities. We can also use some of the Flopy ModelMap capabilities for MODFLOW 6, but to do so we need to manually create a SpatialReference object, which is needed for plotting. Examples of both approaches are shown below.
#
# First, a link to the heads file is created with `HeadFile`. The link can then be accessed with the `get_data` function, by specifying, in this case, the step number and period number for which we want to retrieve data. A three-dimensional array is returned of size `nlay, nrow, ncol`. Matplotlib contouring functions are used to make contours of the layers or a cross-section.
# Read the binary head file and plot the results
# We can use the existing Flopy HeadFile class because
# the format of the headfile for MODFLOW 6 is the same
# as for previous MODFLOW versions
headfile = '{}.hds'.format(model_name)
fname = os.path.join(workspace, headfile)
hds = flopy.utils.binaryfile.HeadFile(fname)
h = hds.get_data()
# +
# We can also use the Flopy PlotMapView capabilities for MODFLOW 6
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
# Next we create an instance of the ModelMap class
modelmap = flopy.plot.PlotMapView(model=gwf, ax=ax)
# Then we can use the plot_grid() method to draw the grid
# The return value for this function is a matplotlib LineCollection object,
# which could be manipulated (or used) later if necessary.
#quadmesh = modelmap.plot_ibound(ibound=ibd)
linecollection = modelmap.plot_grid()
contours = modelmap.contour_array(h[0])
# -
# ### Post-Process Flows
#
# MODFLOW 6 writes a binary grid file, which contains information about the model grid. MODFLOW 6 also writes a binary budget file, which contains flow information. Both of these files can be read using Flopy capabilities. The MfGrdFile class in Flopy can be used to read the binary grid file. The CellBudgetFile class in Flopy can be used to read the binary budget file written by MODFLOW 6.
# +
# read the binary grid file
fname = os.path.join(workspace, '{}.dis.grb'.format(model_name))
bgf = flopy.utils.mfgrdfile.MfGrdFile(fname)
# data read from the binary grid file is stored in a dictionary
bgf._datadict
# -
# Information from the binary grid file is easily retrieved
ia = bgf._datadict['IA'] - 1
ja = bgf._datadict['JA'] - 1
# +
# read the cell budget file
fname = os.path.join(workspace, '{}.cbb'.format(model_name))
cbb = flopy.utils.CellBudgetFile(fname, precision='double')
#cbb.list_records()
flowja = cbb.get_data(text='FLOW-JA-FACE')[0][0, 0, :]
# -
# With the ia and ja arrays and the flow-ja-face values, we can look at
# the flows for any cell and process them in the following manner.
k = 2; i = 7; j = 7
celln = k * nrow * ncol + i * ncol + j  # zero-based node number for layer k, row i, column j
print('Printing flows for cell {}'.format(celln + 1))
for ipos in range(ia[celln] + 1, ia[celln + 1]):
    cellm = ja[ipos]  # ja was already converted to zero-based above
print('Cell {} flow with cell {} is {}'.format(celln + 1, cellm + 1, flowja[ipos]))
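# The ia/ja/flowja bookkeeping above can be illustrated with a tiny synthetic example (toy data, not MODFLOW output): ia[n] points at the diagonal entry for cell n, the positions ia[n]+1 .. ia[n+1]-1 hold its connections, and flowja holds the flow across each connection.

```python
# Toy 3-cell connectivity in the same CSR-style layout used above
# (zero-based, as after subtracting 1 from IA and JA).
ia = [0, 3, 5, 7]            # one entry per cell, plus a terminator
ja = [0, 1, 2, 1, 0, 2, 0]   # the cell itself first, then its neighbors
flowja = [0.0, 2.5, -1.0, 0.0, -2.5, 0.0, 1.0]

def cell_flows(n, ia, ja, flowja):
    """Return (neighbor, flow) pairs for cell n."""
    return [(ja[ipos], flowja[ipos]) for ipos in range(ia[n] + 1, ia[n + 1])]

print(cell_flows(0, ia, ja, flowja))  # [(1, 2.5), (2, -1.0)]
```

# Note that flows come in antisymmetric pairs: the flow from cell 0 to cell 1 (2.5) appears again with the opposite sign in cell 1's row.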
fname = 'head-hydrographs.csv'
fname = os.path.join(workspace, fname)
csv = np.genfromtxt(fname, delimiter=',', dtype=None, names=True)
for name in csv.dtype.names[1:]:
plt.plot(csv['time'], csv[name], label=name)
plt.legend()
| examples/Notebooks/flopy3_mf6_B_complex-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Linked Brushing Demo
#
# [](https://mybinder.org/v2/gh/anitagraser/movingpandas-examples/main?filepath=3-tech-demos/linked-brushing.ipynb)
#
# This notebook demonstrates linked brushing with **Holoviews.selection**.
#
from IPython.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from datetime import datetime
import numpy as np
import pandas as pd
import geopandas as gpd
import holoviews as hv
import hvplot.pandas # noqa
from datashader.utils import lnglat_to_meters
from holoviews.element import tiles
from holoviews.util.transform import dim
from holoviews.selection import link_selections
from holoviews.operation import gridmatrix
from holoviews.operation.element import histogram
from holoviews import opts
hv.__version__
opts.defaults(opts.Overlay(active_tools=['wheel_zoom']))
gdf = gpd.read_file('../data/ais.gpkg', rows=1000)
gdf.head()
gdf.loc[:, 'x'], gdf.loc[:, 'y'] = lnglat_to_meters(gdf.geometry.x, gdf.geometry.y)
df = pd.DataFrame(gdf)
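# Under the hood, lnglat_to_meters applies the spherical Web Mercator (EPSG:3857) projection. A minimal pure-Python sketch of the standard formulas (for illustration only; use the datashader helper in practice):

```python
import math

def lnglat_to_webmercator(lon, lat):
    """Project WGS84 degrees to spherical Web Mercator (EPSG:3857) meters."""
    origin_shift = 20037508.342789244  # pi * 6378137 m (Earth radius)
    x = lon * origin_shift / 180.0
    y = math.log(math.tan((90.0 + lat) * math.pi / 360.0)) * origin_shift / math.pi
    return x, y

print(lnglat_to_webmercator(0.0, 0.0))  # approximately (0.0, 0.0)
```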
# ## Pandas DataFrame.hvplot
hist_plot = df.where((df.SOG>0) & (df.SOG<50)).hvplot.hist("SOG", bins=20, width=400, height=300)
map_plot = df.hvplot.scatter(x='x', y='y', width=400, height=300)
link_selections(tiles.CartoLight() * map_plot + hist_plot)
# ## Geopandas GeoDataFrame.hvplot
#
# To use the GeoDataFrame directly, we need to explicitly set a suitable index for linking, as described in https://github.com/holoviz/geoviews/issues/497
gdf['id'] = np.arange(len(gdf))
gdf_map = gdf.hvplot(geo=True, tiles='CartoLight', width=400, height=300)
gdf_hist = pd.DataFrame(gdf).where((gdf.SOG>0) & (gdf.SOG<50)).hvplot.hist("SOG", bins=20, width=400, height=300)
link_selections(gdf_map + gdf_hist, index_cols=['id'])
# ## Datashade
datashade = df.hvplot.scatter(x='x', y='y', datashade=True, width=400, height=300)
link_selections(datashade + hist_plot)#.cols(1)
link_selections(tiles.CartoLight() * datashade + hist_plot)#.cols(1)
# ## Bar plots (unsupported)
# It would be nice to add a bar plot with counts per ship type, but bar plots are currently not supported; see http://holoviews.org/user_guide/Linked_Brushing.html
bar_plot = df.groupby('ShipType').agg({'SOG':'count'}).rename(columns={'SOG':'count'}).hvplot.barh(width=400, height=400)
bar_plot
| 3-tech-demos/linked-brushing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting image insights with Google's Vision AI
# ## Overview
#
# Given a folder full of images, we want a list of descriptions of each image – and we'll use an online service to get those descriptions.
#
#
# ## The Plan
#
# Our steps will be:
#
# 1. Load some test images
# 1. Send one image to the Google Vision API
# 1. Dissect the response for that image
# 1. Send several images to the Google Vision API
# 1. Generate a list of images and their contents
# ## Credits
#
# The code in this notebook was written by <NAME> at [Quartz](https://qz.com).
# ## Setup
# ### Using an online service
#
# Instead of downloading and using a pretrained model, as described in the notebook `bb-label-images-with-resnet.ipynb`, we can send each image to an online service to detect the contents. There are several available, and in this case we'll use [Google's Vision AI](https://cloud.google.com/vision/) service.
#
# These services generally allow you to try _some_ images for free, and then you must pay for any over that limit. As of this writing, Google Vision allows you to do 1,000 checks in a month for free. [The full price list is here](https://cloud.google.com/vision/pricing).
#
# Because you _could_ go over the limit, Google requires a credit card on file before you may use this service – even if you're not yet over the free limit.
#
# In exchange, you get an API key, which looks like someone smashed away on a keyboard: `LKjldSLKfivvl384Ls0409Sloo...`
#
# You need an API key to use this notebook! To get a key, follow the instructions in Video 5.
# ### For those using Google Colaboratory ...
#
# Be aware that Google Colab instances are ephemeral -- they vanish *Poof* when you close them, or after a period of sitting idle (currently 90 minutes), or if you use one for more than 12 hours.
#
# Also note that although we are using two Google services – Colaboratory and Vision AI – I'm treating them as completely separate. So we actually communicate between the two services the same way we would communicate from any computer to Vision AI.
# ### Everybody do this ...
# Everyone needs to run the next cell, which installs and initializes the Python libraries we'll use in this notebook.
# + jupyter={"outputs_hidden": true}
## *EVERYBODY* SHOULD RUN THIS CELL
# !pip install Pillow --quiet
# !pip install requests --quiet
import os
import json
import requests
import base64
from PIL import Image
from IPython.display import display
from IPython.display import Image as Show
# -
# ### Your API key goes here
#
# As mentioned above, you'll need to get an API key. Video 5 has the details on how to get that key from Google.
#
# Once you have a key, replace `XXXXX` with your key in the next cell and run it.
# + jupyter={"outputs_hidden": true}
# Replace the XXXXX in the next line with your API key (keep the quotes!):
YOUR_API_KEY = 'XXXXX'
# -
# ## The Data
# We're going to download some data for this notebook:
#
# - A folder containing some **pictures**. These are just pictures I took.
# + jupyter={"outputs_hidden": true}
# Run this cell to download the data we'll use for this exercise
# !wget -N https://qz-aistudio-public.s3.amazonaws.com/workshops/labelling_images_data.zip --quiet
# !unzip -q labelling_images_data.zip
print('Done!')
# + jupyter={"outputs_hidden": true}
data_path ='./data/'
# -
# We can look at the data on the computer we're using by using the `ls` command:
# + jupyter={"outputs_hidden": true}
# %ls data/
# + jupyter={"outputs_hidden": true}
# %ls data/images
# + jupyter={"outputs_hidden": true}
Show(data_path + 'images/boat2.jpg', width=600)
# + jupyter={"outputs_hidden": true}
# Here we open our boat file and convert it into image data text (called base-64-encoded text)
with open(data_path + 'images/boat2.jpg', "rb") as my_image:
my_image_data = str(base64.b64encode(my_image.read()).decode("utf-8"))
# + jupyter={"outputs_hidden": true}
# Want to see what an image "looks" like in base-64 text???
my_image_data
# -
# ## Send our image to Google Vision AI
# Next, we need to construct a "payload" to send to Google Vision AI. [The format for building this payload is in the Google documentation](https://cloud.google.com/vision/docs/request) if you'd like to read more. But we'll walk through a simple case now.
#
# For our first payload, we'll send the image (as text) and a request for "LABEL_DETECTION."
# + jupyter={"outputs_hidden": true}
# Establish the payload, which includes our image data as a long string of text
google_vision_payload = {
'requests':[
{
'image':{
'content': my_image_data
},
'features':[
{
'type':'LABEL_DETECTION'
},
]
}
]
}
# + jupyter={"outputs_hidden": true}
# Now we'll build the URL to hit, which includes your Google Vision API key.
google_vision_url = 'https://vision.googleapis.com/v1/images:annotate?key=' + YOUR_API_KEY
# + jupyter={"outputs_hidden": true}
# And then we ship our boat (in the payload) to that URL
web_request = requests.post(google_vision_url, json=google_vision_payload)
# -
# The `web_request` now contains a lot of stuff in it. Let's pull it apart a little.
# + jupyter={"outputs_hidden": true}
# This line lets us know how hitting Google Vision AI worked. Code "200" is what we want
web_request.status_code
# -
# Let's take a look at what we got from Google.
# + jupyter={"outputs_hidden": true}
google_guess = web_request.json() # turns the web_request into JSON
google_guess['responses'] # let's look at the "responses" part
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]['labelAnnotations']
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]['labelAnnotations'][0]
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]['labelAnnotations'][0]['description']
# -
# We can also ask for different features, such as [text detection](https://cloud.google.com/vision/docs/ocr). (Note that there's a different version to use if you're looking for [text in images of documents](https://cloud.google.com/vision/docs/ocr).)
# + jupyter={"outputs_hidden": true}
# Again, we build the payload but use "TEXT_DETECTION" instead ...
google_vision_payload = {
'requests':[
{
'image':{
'content': my_image_data
},
'features':[
{
'type':'TEXT_DETECTION'
}
]
}
]
}
# + jupyter={"outputs_hidden": true}
# And we hit the URL we provided above, but with this new payload
web_request = requests.post(google_vision_url, json=google_vision_payload)
# + jupyter={"outputs_hidden": true}
web_request.status_code
# + jupyter={"outputs_hidden": true}
google_guess = web_request.json()
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]['textAnnotations']
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]['fullTextAnnotation']['text']
# -
# We can also ask for what Google calls "[web detection](https://cloud.google.com/vision/docs/detecting-web)", which pulls information from it's vast knowledge of the internet to try to get more information about the image.
# + jupyter={"outputs_hidden": true}
# Here we'll ask for "WEB_DETECTION" and just one result ('maxResults':1)
google_vision_payload = {
'requests':[
{
'image':{
'content': my_image_data
},
'features':[
{
'type':'WEB_DETECTION',
'maxResults':1
},
]
}
]
}
# + jupyter={"outputs_hidden": true}
# And we hit the URL we provided above, but with this new payload
web_request = requests.post(google_vision_url, json=google_vision_payload)
google_guess = web_request.json()
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]
# + jupyter={"outputs_hidden": true}
google_guess['responses'][0]['webDetection']['webEntities'][0]['description']
# -
# ## Processing a set of images
# Let's get some images! (Pretend there are 8,000 instead of 8.)
# + jupyter={"outputs_hidden": true}
image_files = os.listdir(data_path + 'images')
image_files
# + jupyter={"outputs_hidden": true}
# Loop through the list of files
category_list=[]
for file in image_files:
# open a file and convert it into image data text
with open(data_path + 'images/' + file, "rb") as my_image:
my_image_data = str(base64.b64encode(my_image.read()).decode("utf-8"))
# set up the google vision payload, including the image data text
google_vision_payload = {
'requests':[
{
'image':{
'content': my_image_data
},
'features':[
{
'type':'WEB_DETECTION',
'maxResults':1
},
]
}
]
}
# hit the google vision api
web_request = requests.post(google_vision_url, json=google_vision_payload)
google_guess = web_request.json()
    # from the Google response, pull out the webEntity description
category = google_guess['responses'][0]['webDetection']['webEntities'][0]['description']
# add this category to the category list
category_list.append(category)
# print the file name and the category guess
print(file, category)
# + jupyter={"outputs_hidden": true}
# let's loop through all the images we have and display the
# category Google thinks it falls into
for i, file in enumerate(image_files):
image_filename = data_path + 'images/' + file
print(f'\n{category_list[i]}')
display(Show(filename=image_filename, retina=True))
# -
# ### Extra Challenge
#
# Even more useful would be to save this list as a file! If you're comfortable with Python, you should be able to pull that off pretty easily.
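# As a starting point, here is a minimal sketch using Python's csv module; `image_files` and `category_list` below are small hypothetical stand-ins for the variables built above.

```python
import csv

# Hypothetical stand-ins for the image_files and category_list built above
image_files = ['boat2.jpg', 'dog.jpg']
category_list = ['Boat', 'Dog']

# Write one row per image with its guessed category
with open('image_categories.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['file', 'category'])
    writer.writerows(zip(image_files, category_list))
```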
# ## Google Vision Pricing
#
# You can get categories for 1,000 images for free and 10,000 images for $15.00. [Check out the pricing info](https://cloud.google.com/vision/pricing).
| notebooks/cc-label-images-with-google-vision.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit
# name: python3
# ---
# # String to Integer (atoi)
# Implement the myAtoi(string s) function, which converts a string to a 32-bit signed integer (similar to C/C++'s atoi function).
# The algorithm for myAtoi(string s) is as follows:
#
# Read in and ignore any leading whitespace.
# Check if the next character (if not already at the end of the string) is '-' or '+'. Read this character in if it is either. This determines if the final result is negative or positive respectively. Assume the result is positive if neither is present.
# Read in the next characters until the next non-digit character or the end of the input is reached. The rest of the string is ignored.
# Convert these digits into an integer (i.e. "123" -> 123, "0032" -> 32). If no digits were read, then the integer is 0. Change the sign as necessary (from step 2).
# If the integer is out of the 32-bit signed integer range [-2<sup>31</sup>, 2<sup>31</sup> - 1], then clamp the integer so that it remains in the range. Specifically, integers less than -2<sup>31</sup> should be clamped to -2<sup>31</sup>, and integers greater than 2<sup>31</sup> - 1 should be clamped to 2<sup>31</sup> - 1.
# Return the integer as the final result.
#
# Note:
# - Only the space character ' ' is considered a whitespace character.
# - Do not ignore any characters other than the leading whitespace or the rest of the string after the digits.
#
# ### Example 1:
# Input: s = "42"
# Output: 42
# Explanation: The underlined characters are what is read in, the caret is the current reader position.
# - Step 1: "42" (no characters read because there is no leading whitespace)
# ^
# - Step 2: "42" (no characters read because there is neither a '-' nor '+')
# ^
# - Step 3: "42" ("42" is read in)
# ^
# - The parsed integer is 42.
# - Since 42 is in the range [-2<sup>31</sup>, 2<sup>31</sup> - 1], the final result is 42.
#
# ### Example 2:
# Input: s = " -42"
# Output: -42
# Explanation:
# - Step 1: " -42" (leading whitespace is read and ignored)
# ^
# - Step 2: " -42" ('-' is read, so the result should be negative)
# ^
# - Step 3: " -42" ("42" is read in)
# ^
# - The parsed integer is -42.
# - Since -42 is in the range [-2<sup>31</sup>, 2<sup>31</sup> - 1], the final result is -42.
#
# ### Example 3:
# Input: s = "4193 with words"
# Output: 4193
# Explanation:
# - Step 1: "4193 with words" (no characters read because there is no leading whitespace)
# ^
# - Step 2: "4193 with words" (no characters read because there is neither a '-' nor '+')
# ^
# - Step 3: "4193 with words" ("4193" is read in; reading stops because the next character is a non-digit)
# ^
# - The parsed integer is 4193.
# - Since 4193 is in the range [-2<sup>31</sup>, 2<sup>31</sup> - 1], the final result is 4193.
#
# ### Example 4:
# Input: s = "words and 987"
# Output: 0
# Explanation:
# - Step 1: "words and 987" (no characters read because there is no leading whitespace)
# ^
# - Step 2: "words and 987" (no characters read because there is neither a '-' nor '+')
# ^
# - Step 3: "words and 987" (reading stops immediately because there is a non-digit 'w')
# ^
# - The parsed integer is 0 because no digits were read.
# - Since 0 is in the range [-2<sup>31</sup>, 2<sup>31</sup> - 1], the final result is 0.
#
#
# ### Example 5:
# Input: s = "-91283472332"
# Output: -2147483648
# Explanation:
# - Step 1: "-91283472332" (no characters read because there is no leading whitespace)
# ^
# - Step 2: "-91283472332" ('-' is read, so the result should be negative)
# ^
# - Step 3: "-91283472332" ("91283472332" is read in)
# ^
# - The parsed integer is -91283472332.
# - Since -91283472332 is less than the lower bound of the range [-2<sup>31</sup>, 2<sup>31</sup> - 1], the final result is clamped to -2<sup>31</sup> = -2147483648.
#
# Constraints:
# 0 <= s.length <= 200
# s consists of English letters (lower-case and upper-case), digits (0-9), ' ', '+', '-', and '.'.
#
#
# ## Solution
#
# ### Intuition
# This question is pretty self-descriptive. We have three different states when parsing the input: clear_spaces, parse_sign, and parse_int, with well-defined transitions: clear_spaces -> parse_sign -> parse_int. The parse_int state can be implemented by looping through the string's characters and checking whether each character is a digit; if it is, multiply the previous result by 10 and add the digit. If the character is not a digit, stop and return the result. While updating the result we check the 32-bit boundaries.
# +
def my_atoi(s: str) -> int:
s = s.strip()
if s.startswith('-'):
is_pos, s = False, s[1:]
elif s.startswith('+'):
is_pos, s = True, s[1:]
else:
is_pos = True
result = 0
for c in s:
if c.isdigit():
result = result * 10 + int(c)
if is_pos and result > pow(2, 31) - 1:
return pow(2, 31) - 1
elif not is_pos and -result < -pow(2, 31):
return -pow(2, 31)
else:
break
return result if is_pos else -result
assert my_atoi("42") == 42
assert my_atoi(" -42") == -42
assert my_atoi("4193 with words") == 4193
assert my_atoi("words and 987") == 0
assert my_atoi("-91283472332") == -2147483648
# -
# ### Complexity Analysis
# - Time Complexity: O(n). We iterate over the input string only once
# - Space Complexity: O(1)
#
#
# ### LeetCode Output
# - Success
# - Runtime: 32 ms, faster than 84.72% of Python3 online submissions for String to Integer (atoi).
# - Memory Usage: 14.3 MB, less than 25.80% of Python3 online submissions for String to Integer (atoi).
# - ! We can achieve better memory usage by avoiding string slices.
#
#
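# A sketch of that improvement: track an index into the string instead of creating slices, with the same clamping rules as above.

```python
def my_atoi_index(s: str) -> int:
    """Index-based variant of my_atoi that avoids creating string slices."""
    INT_MAX, INT_MIN = 2 ** 31 - 1, -2 ** 31
    i, n = 0, len(s)
    while i < n and s[i] == ' ':           # 1. skip leading whitespace
        i += 1
    sign = 1
    if i < n and s[i] in '+-':             # 2. optional sign
        sign = -1 if s[i] == '-' else 1
        i += 1
    result = 0
    while i < n and s[i].isdigit():        # 3. accumulate digits, clamping
        result = result * 10 + (ord(s[i]) - ord('0'))
        if sign == 1 and result > INT_MAX:
            return INT_MAX
        if sign == -1 and -result < INT_MIN:
            return INT_MIN
        i += 1
    return sign * result

assert my_atoi_index("4193 with words") == 4193
```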
| python-data-structures/leetocde/string-to-int.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
#
# # Color Quantization using K-Means
#
#
# Performs a pixel-wise Vector Quantization (VQ) of an image of the summer palace
# (China), reducing the number of colors required to show the image from 96,615
# unique colors to 64, while preserving the overall appearance quality.
#
# In this example, pixels are represented in a 3D-space and K-means is used to
# find 64 color clusters. In the image processing literature, the codebook
# obtained from K-means (the cluster centers) is called the color palette. Using
# a single byte, up to 256 colors can be addressed, whereas an RGB encoding
# requires 3 bytes per pixel. The GIF file format, for example, uses such a
# palette.
#
# For comparison, a quantized image using a random codebook (colors picked up
# randomly) is also shown.
#
#
# +
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: BSD 3 clause
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin
from sklearn.datasets import load_sample_image
from sklearn.utils import shuffle
from time import time
n_colors = 64
# Load the Summer Palace photo
china = load_sample_image("china.jpg")
# Convert to floats instead of the default 8-bit integer encoding. Dividing by
# 255 is important so that plt.imshow works well on float data (it needs to
# be in the range [0, 1])
china = np.array(china, dtype=np.float64) / 255
# Load Image and transform to a 2D numpy array.
w, h, d = original_shape = tuple(china.shape)
assert d == 3
image_array = np.reshape(china, (w * h, d))
print("Fitting model on a small sub-sample of the data")
t0 = time()
image_array_sample = shuffle(image_array, random_state=0)[:1000]
kmeans = KMeans(n_clusters=n_colors, random_state=0).fit(image_array_sample)
print("done in %0.3fs." % (time() - t0))
# Get labels for all points
print("Predicting color indices on the full image (k-means)")
t0 = time()
labels = kmeans.predict(image_array)
print("done in %0.3fs." % (time() - t0))
codebook_random = shuffle(image_array, random_state=0)[:n_colors + 1]
print("Predicting color indices on the full image (random)")
t0 = time()
labels_random = pairwise_distances_argmin(codebook_random,
image_array,
axis=0)
print("done in %0.3fs." % (time() - t0))
def recreate_image(codebook, labels, w, h):
"""Recreate the (compressed) image from the code book & labels"""
d = codebook.shape[1]
image = np.zeros((w, h, d))
label_idx = 0
for i in range(w):
for j in range(h):
image[i][j] = codebook[labels[label_idx]]
label_idx += 1
return image
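# The nested loops above can also be written as a single NumPy fancy-indexing step; a sketch (the toy_* names are illustrative stand-ins, not variables from this example):

```python
import numpy as np

def recreate_image_vectorized(codebook, labels, w, h):
    """Equivalent to recreate_image above, but with one fancy-indexing step."""
    return codebook[labels].reshape(w, h, -1)

# Tiny check on toy data: a 2x2 image with a 2-color codebook
toy_codebook = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
toy_labels = np.array([0, 1, 1, 0])
print(recreate_image_vectorized(toy_codebook, toy_labels, 2, 2).shape)  # (2, 2, 3)
```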
# Display all results, alongside original image
plt.figure(1)
plt.clf()
ax = plt.axes([0, 0, 1, 1])
plt.axis('off')
plt.title('Original image (96,615 colors)')
plt.imshow(china)
plt.figure(2)
plt.clf()
ax = plt.axes([0, 0, 1, 1])
plt.axis('off')
plt.title('Quantized image (64 colors, K-Means)')
plt.imshow(recreate_image(kmeans.cluster_centers_, labels, w, h))
plt.figure(3)
plt.clf()
ax = plt.axes([0, 0, 1, 1])
plt.axis('off')
plt.title('Quantized image (64 colors, Random)')
plt.imshow(recreate_image(codebook_random, labels_random, w, h))
plt.show()
# -
| examples/k-means/plot_color_quantization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Now You Code 2: Multiply By
#
# Write a program that asks for a number to multiply by and then lists the multiplication table for that number from 1 to 10. This process repeats until you enter quit, at which point the program exits.
#
# Wow. Seems complicated! We'll use the technique called **problem simplification** to make this problem a little easier to solve.
#
# First we'll write a complete program to solve *part* of the problem, then we will take our new level of understanding to solve the *entire* problem.
#
# ## Start with Sub-Problem 1
#
# Let's write a program that simply inputs a number and then uses a loop to print out the multiplication table up to 10 for that number.
#
# For example, when you input `5`:
#
# ```
# Enter number to multiply by: 5
# 1 x 5 = 5
# 2 x 5 = 10
# 3 x 5 = 15
# 4 x 5 = 20
# 5 x 5 = 25
# 6 x 5 = 30
# 7 x 5 = 35
# 8 x 5 = 40
# 9 x 5 = 45
# 10 x 5 = 50
# ```
#
# ## Step 1: Problem Analysis for Sub-Problem 1
#
# Inputs:
#
# Outputs:
#
# Algorithm (Steps in Program):
#
#
#
# Step 2: write code for sub-problem 1
n=1
enter=int(input("Enter a number to multiply by: "))
while n<11:
print(n, "x",enter,"=", n*enter)
n=n+1
# # Full Problem
#
# Now that you have part of the problem figured out, try to solve the entire problem. The program should keep asking for numbers and then print out multiplication tables until you enter `quit`. Here's an example:
#
# Example Run:
#
# ```
# Enter number to multiply by or type 'quit': 10
# 1 x 10 = 10
# 2 x 10 = 20
# 3 x 10 = 30
# 4 x 10 = 40
# 5 x 10 = 50
# 6 x 10 = 60
# 7 x 10 = 70
# 8 x 10 = 80
# 9 x 10 = 90
# 10 x 10 = 100
# Enter number to multiply by or type 'quit': 5
# 1 x 5 = 5
# 2 x 5 = 10
# 3 x 5 = 15
# 4 x 5 = 20
# 5 x 5 = 25
# 6 x 5 = 30
# 7 x 5 = 35
# 8 x 5 = 40
# 9 x 5 = 45
# 10 x 5 = 50
# Enter number to multiply by or type 'quit': quit
# ```
#
# **NOTE:** you need another loop to complete this program. Take the code you wrote in the first part and repeat it in a second loop until you type quit.
# ## Step 3: Problem Analysis for Full Problem
#
# Inputs:
#
# Outputs:
#
# Algorithm (Steps in Program):
# Step 4: Write code for full problem
while True:
    ask = input("Enter number to multiply by or type 'quit': ")
    if ask == 'quit':
        break
    num = int(ask)
    for n in range(1, 11):
        print(n, "x", ask, "=", n * num)
# ## Step 3: Questions
#
# 1. What is the loop control variable for the first (outer) loop?
#
# Answer:
# The loop control variable for the first (outer) loop is while True.
#
# 2. What is the loop control variable for the second (inner) loop?
#
# Answer:
# The loop control variable for the second (inner) loop is n in the for loop.
#
# 3. Provide at least one way this program can be improved, or make more flexible by introducing more inputs?
#
# Answer:
#
# The program can be improved by adding more if statements so that it can handle more inputs.
#
# ## Step 4: Reflection
#
# Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
#
# To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
#
# Keep your response to between 100 and 250 words.
#
# `--== Write Your Reflection Below Here ==--`
# It took me a long time to understand how to do the second question. I attempted it for multiple days and still couldn't understand it. Eventually, I went to my teacher for help and then understood how to do it. It took many iterations, and the original problem analysis did not work as designed. I need to work on understanding which kind of loop to use when, because there are different ones that should be used in different scenarios.
#
| content/lessons/04/Now-You-Code/NYC2-Multiply-By.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic Programming
# - Please use an English input method when typing code
# ## Writing a Simple Program
# - Area of a circle: area = radius \* radius \* 3.1415
# ### In Python you do not need to declare data types
# ## Reading Input from the Console
# - input always reads a string
# - eval
# - In Jupyter, Shift + Tab pops up the documentation
# ## Variable Naming Rules
# - Names consist of letters, digits, and underscores
# - They cannot begin with a digit \*
# - Identifiers cannot be keywords (this can technically be forced, but it is very poor practice)
# - They can be any length
# - Use camelCase naming
# ## Variables, Assignment Statements, and Assignment Expressions
# - Variable: informally, a quantity whose value can change
# - x = 2 \* x + 1 is an equation in mathematics, but in a programming language it is an assignment expression
# - test = test + 1 \* a variable must already have a value before it appears on the right-hand side of an assignment
# ## Simultaneous Assignment
# var1, var2, var3... = exp1, exp2, exp3...
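# A small sketch of simultaneous assignment; the right-hand side is evaluated first, which is why a swap needs no temporary variable:

```python
# Simultaneous assignment: the right-hand side is evaluated first,
# so two variables can be swapped without a temporary.
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1
```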
# ## Defining Constants
# - Constant: an identifier for a fixed value, suited to values used many times, such as PI
# - Note: in other, lower-level languages a defined constant cannot be changed, but in Python everything is an object, so even "constants" can be reassigned
# ## Numeric Data Types and Operators
# - Python has two numeric types (int and float), supporting addition, subtraction, multiplication, division, modulus, and exponentiation
# <img src = "../Photo/01.jpg"></img>
# ## The Operators /, //, **
# ## The Operator %
# ## EP:
# - What is 25/4? How would you rewrite it to get an integer result?
# - Read a number and determine whether it is odd or even
# - Advanced: read a number of seconds and write a program that converts it to minutes and seconds; for example, 500 seconds equals 8 minutes 20 seconds
# - Advanced: if today is Saturday, what day of the week will it be 10 days from now? Hint: day 0 of each week is Sunday
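# The EP exercises above can be sketched with the integer-division and modulus operators:

```python
# 25/4 is 6.25; integer division gives a whole number
print(25 // 4)                      # 6

# Odd or even
n = 7
print('odd' if n % 2 else 'even')   # odd

# 500 seconds -> minutes and seconds
seconds = 500
minutes, remainder = seconds // 60, seconds % 60
print(minutes, 'minutes', remainder, 'seconds')   # 8 minutes 20 seconds

# If today is Saturday (day 6, with Sunday as day 0), 10 days later is:
print((6 + 10) % 7)                 # 2, i.e. Tuesday
```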
# ## Scientific Notation
# - 1.234e+2
# - 1.234e-2
# ## Evaluating Expressions and Operator Precedence
# <img src = "../Photo/02.png"></img>
# <img src = "../Photo/03.png"></img>
# ## Augmented Assignment Operators
# <img src = "../Photo/04.png"></img>
# ## Type Conversion
# - float -> int
# - round rounds to the nearest value
# ## EP:
# - If the annual business tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (Round the result to 2 decimal places)
# - You must use scientific notation
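# A minimal sketch of the tax EP, assuming the 0.06% rate is applied directly to the income:

```python
income = 197.55e+2         # 19755.0, written in scientific notation
tax = income * 0.06 / 100  # 0.06% tax rate
print(round(tax, 2))       # 11.85
```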
# # Project
# - Write a loan calculator program in Python: the input is the monthly payment (monthlyPayment) and the output is the total repayment (totalpayment)
# 
# # Homework
# - 1
# <img src="../Photo/06.png"></img>
c = float(input("Enter a temperature in Celsius: "))
f = ( 9 / 5) * c + 32
print("%.0f c is %.1f f"%(c,f))
# - 2
# <img src="../Photo/07.png"></img>
r, h = eval(input("Enter the radius and height: "))
area = r*r*3.14
volume = area*h
print("The area is %.4f"%area)
print("The volume is %.4f"%volume)
# - 3
# <img src="../Photo/08.png"></img>
feet = float(input("Enter a value in feet: "))
meters = feet*0.305
print(" %.1f feet is %.4f meters"%(feet,meters))
# - 4
# <img src="../Photo/10.png"></img>
M, i, f = eval(input("Enter the amount of water and its initial and final temperatures: "))
Q = M * (f - i) * 4184
print("The energy needed is {}".format(Q))
# - 5
# <img src="../Photo/11.png"></img>
c, l = eval(input("Enter the balance and the annual interest rate: "))
interest = c * (l / 1200)
print("The interest is %.5f"%interest)
# - 6
# <img src="../Photo/12.png"></img>
v0, v1, t = eval(input("Enter v0, v1, and t: "))
a = (v1 - v0) / t
print("The a is %.4f"%a)
# - 7 进阶
# <img src="../Photo/13.png"></img>
yuan = float(input("Enter the monthly deposit amount: "))
account=0
for i in range(6):
account=(yuan + account) * (1 + 0.00417)
print("The account is %.2f"%account)
# - 8 进阶
# <img src="../Photo/14.png"></img>
n = int(input("Enter a number between 0 and 1000: "))
g = n % 10
s = n // 10 % 10
b = n // 100
digit_sum = g + s + b  # avoid shadowing the built-in sum()
print("The sum of the digits is %d" % digit_sum)
| 7.16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: kagglevil
# language: python
# name: kagglevil
# ---
# ---
# title: "Unique Function"
# author: "Vaishnavi"
# date: 2020-08-09
# description: "-"
# type: technical_note
# draft: false
#
#
# ---
import pandas as pd
pd.Series([2, 4, 3, 3], name='P').unique()
| docs/python/basics/Unique_function.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Review exercise on last semester's material
# +
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet, LogisticRegression
from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans
from sklearn.metrics import mean_squared_error, f1_score, silhouette_score
# -
# Generate a unique seed
my_code = "Маматбеков"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
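# The seed derivation above can be sanity-checked on a small example (the name below is hypothetical; the byte order and modulus mirror the code):

```python
# Interpret the UTF-8 bytes of a string as a little-endian integer,
# then reduce it modulo 2**32 so it fits a 32-bit seed
name = "Ivanov"  # hypothetical name, for illustration only
raw = int.from_bytes(name.encode(), "little")
seed = raw % (2 ** 32)
print(seed)
```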
# Data downloaded from: https://www.kaggle.com/dwdkills/russian-demography
# Read the data from the file
example_data = pd.read_csv("datasets/russian_demography.csv")
# +
# "year" - year (1990-2017)
# "region" - region name
# "npg" - natural population growth per 1000 people
# "birth_rate" - number of births per 1000 people
# "death_rate" - number of deaths per 1000 people
# "gdw" - demographic load coefficient per 100 people (the ratio of the non-working-age population to the working-age population)
# "urbanization" - percentage of urban population
example_data.head()
# -
# Since the set of regions changes from year to year, the data contains rows with missing values. Drop them
example_data.dropna(inplace=True)
# Determine the size of the validation and test sets
val_test_size = round(0.2*len(example_data))
print(val_test_size)
# Create the training, validation and test sets
random_state = my_seed
train_val, test = train_test_split(example_data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))
# +
# Rescale the values in the numeric columns to the interval [0,1].
# Fit the scaler on the training set only.
columns_to_scale = ['year', 'npg', 'birth_rate', 'death_rate', 'gdw', 'urbanization']
ct = ColumnTransformer(transformers=[('numerical', MinMaxScaler(), columns_to_scale)], remainder='passthrough')
ct.fit(train)
# -
# Transform the values and convert the result back to a DataFrame
sc_train = pd.DataFrame(ct.transform(train))
sc_test = pd.DataFrame(ct.transform(test))
sc_val = pd.DataFrame(ct.transform(val))
# Set the column names
column_names = columns_to_scale + ['region']
sc_train.columns = column_names
sc_test.columns = column_names
sc_val.columns = column_names
sc_train
# +
# Recall the regression algorithms: linear regression and the k nearest neighbors method
r_models = []
# Linear regression
# To use regularization, replace LinearRegression with Lasso, Ridge or ElasticNet
# The alpha parameter is the regularization coefficient for Lasso and Ridge, 1 by default
# For ElasticNet, if the regularization has the form a*L1 + b*L2, then
# the alpha parameter equals a + b, 1 by default
# the l1_ratio parameter equals a / (a + b), 0.5 by default
r_models.append(LinearRegression())
r_models.append(Lasso(alpha=1.0))
r_models.append(Ridge(alpha=1.0))
r_models.append(ElasticNet(alpha=1.0, l1_ratio=0.5))
# K nearest neighbors
# The n_neighbors parameter is the number of neighbors, 5 by default
r_models.append(KNeighborsRegressor(n_neighbors=5))
r_models.append(KNeighborsRegressor(n_neighbors=10))
r_models.append(KNeighborsRegressor(n_neighbors=15))
# +
# Select the predictors and the dependent variable
x_labels = column_names[0:-2]
y_labels = ['urbanization']
x_train = sc_train[x_labels]
x_test = sc_test[x_labels]
x_val = sc_val[x_labels]
y_train = sc_train[y_labels]
y_test = sc_test[y_labels]
y_val = sc_val[y_labels]
# -
# Train the models
for model in r_models:
model.fit(x_train, y_train)
# Evaluate model quality on the validation set.
mses = []
for model in r_models:
val_pred = model.predict(x_val)
mse = mean_squared_error(y_val, val_pred)
mses.append(mse)
print(mse)
# Choose the best model (a lower MSE is better)
i_min = mses.index(min(mses))
best_r_model = r_models[i_min]
best_r_model
# Compute the best model's error on the test set.
test_pred = best_r_model.predict(x_test)
mse = mean_squared_error(y_test, test_pred)
print(mse)
# +
# Recall the classification algorithms:
# logistic regression, the naive Bayes classifier and (again) k nearest neighbors
c_models = []
# Logistic regression
# The penalty parameter is the regularization type: 'l1', 'l2', 'elasticnet', 'none'; 'l2' by default
# Not every solver supports every regularization type (the solver parameter)
# For elasticnet regularization the l1_ratio parameter must be specified (0 - l2, 1 - l1)
c_models.append(LogisticRegression(penalty='none', solver='saga'))
c_models.append(LogisticRegression(penalty='l1', solver='saga'))
c_models.append(LogisticRegression(penalty='l2', solver='saga'))
c_models.append(LogisticRegression(penalty='elasticnet', l1_ratio=0.5, solver='saga'))
c_models.append(LogisticRegression())
# Naive Bayes classifier
# The alpha parameter is the smoothing parameter, 1 by default (Laplace smoothing)
c_models.append(MultinomialNB(alpha=0.0))
c_models.append(MultinomialNB(alpha=0.5))
c_models.append(MultinomialNB(alpha=1.0))
# K nearest neighbors
# The n_neighbors parameter is the number of neighbors, 5 by default
c_models.append(KNeighborsClassifier(n_neighbors=5))
c_models.append(KNeighborsClassifier(n_neighbors=10))
c_models.append(KNeighborsClassifier(n_neighbors=15))
# +
# Select the predictors and the class labels
x_labels = column_names[0:-1]
y_labels = ['region']
x_train = sc_train[x_labels]
x_test = sc_test[x_labels]
x_val = sc_val[x_labels]
y_train = np.ravel(sc_train[y_labels])
y_test = np.ravel(sc_test[y_labels])
y_val = np.ravel(sc_val[y_labels])
# -
# Train the models
for model in c_models:
model.fit(x_train, y_train)
# Evaluate model quality on the validation set.
f1s = []
for model in c_models:
val_pred = model.predict(x_val)
f1 = f1_score(y_val, val_pred, average='weighted')
f1s.append(f1)
print(f1)
# Choose the best model (a higher F1 score is better)
i_best = f1s.index(max(f1s))
best_c_model = c_models[i_best]
best_c_model
# Compute the best model's error on the test set.
test_pred = best_c_model.predict(x_test)
f1 = f1_score(y_test, test_pred, average='weighted')
print(f1)
# Recall the clustering algorithm - the k-means method
# The n_clusters parameter is the number of clusters, 8 by default
k_models = []
k_models.append(KMeans(n_clusters=5))
k_models.append(KMeans(n_clusters=8))
k_models.append(KMeans(n_clusters=20))
k_models.append(KMeans(n_clusters=50))
# Select the features to use
x_labels = column_names[0:-1]
x = pd.concat([sc_train[x_labels], sc_val[x_labels], sc_test[x_labels]])
x
# Run the clustering
for model in k_models:
model.fit(x)
# Evaluate the quality of the result
sils = []
for model in k_models:
cluster_labels = model.predict(x)
s = silhouette_score(x, cluster_labels)
sils.append(s)
print(s)
# +
# Choose the best model (a higher silhouette score is better)
i_best = sils.index(max(sils))
best_k_model = k_models[i_best]
print(best_k_model)
print(sils[i_best])
# -
# Task 1 - analysis of models for the regression problem
# Full list of models
r_models = [
LinearRegression(),
Lasso(alpha=1.0),
Lasso(alpha=0.5),
Ridge(alpha=1.0),
Ridge(alpha=0.5),
ElasticNet(alpha=1.0, l1_ratio=0.5),
ElasticNet(alpha=1.0, l1_ratio=0.25),
ElasticNet(alpha=1.0, l1_ratio=0.75),
ElasticNet(alpha=0.5, l1_ratio=0.5),
ElasticNet(alpha=0.5, l1_ratio=0.25),
ElasticNet(alpha=0.5, l1_ratio=0.75),
KNeighborsRegressor(n_neighbors=5),
KNeighborsRegressor(n_neighbors=10),
KNeighborsRegressor(n_neighbors=15),
KNeighborsRegressor(n_neighbors=20),
KNeighborsRegressor(n_neighbors=25)
]
# Select the models for the task
n = 4
random.seed(my_seed)
my_models1 = random.sample(r_models, n)
print(my_models1)
# Load the data for the regression task
data = pd.read_csv("datasets/weather.csv")
data
# Replace the decimal commas with dots so the columns can be parsed as numbers
# (this also fixes the original typo that wrote 'wl_change' into a new 'wl_changer' column)
for col in ['water_level', 'precipitation', 'temperature', 'humidity',
            'visibility', 'wind', 'weather', 'pressure', 'fire',
            'wl_change', 'temp_change', 'pressure_change']:
    data[col] = data[col].str.replace(',', '.')
data
# +
# The dependent variable is the same for everyone. The predictors are chosen at random.
columns = list(data.columns)
n_x = 5
y_label = ['water_level']
x_labels = random.sample(columns[1:], n_x)
print(x_labels)
# -
# Rescale the values of all required features to the interval [0,1].
# Solve the resulting regression task with the selected models and compare their performance.
all_lb = x_labels + y_label
# +
data=data[list(all_lb)]
data
# -
# Determine the size of the validation and test sets
val_test_size = round(0.2*len(data))
print(val_test_size)
# Create the training, validation and test sets
random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))
# +
# Rescale the values in the numeric columns to the interval [0,1].
# Fit the scaler on the training set only.
columns_to_scale = all_lb
ct = ColumnTransformer(transformers=[('numerical', MinMaxScaler(), columns_to_scale)], remainder='passthrough')
ct.fit(train)
# -
# Transform the values and convert the result back to a DataFrame
sc_train = pd.DataFrame(ct.transform(train))
sc_test = pd.DataFrame(ct.transform(test))
sc_val = pd.DataFrame(ct.transform(val))
# Set the column names
column_names = all_lb
sc_train.columns = column_names
sc_test.columns = column_names
sc_val.columns = column_names
sc_train
# +
x_train = sc_train[x_labels]
x_test = sc_test[x_labels]
x_val = sc_val[x_labels]
y_train = sc_train[y_label]
y_test = sc_test[y_label]
y_val = sc_val[y_label]
# -
# Train the models
for model in my_models1:
model.fit(x_train, y_train)
# Evaluate model quality on the validation set.
mses = []
for model in my_models1:
val_pred = model.predict(x_val)
mse = mean_squared_error(y_val, val_pred)
mses.append(mse)
print(mse)
# Choose the best model and state which one solves the task best (a lower MSE is better)
i_min = mses.index(min(mses))
best_my_model = my_models1[i_min]
print('Model', best_my_model, 'solves the task better than the others.')
# Compute the best model's error on the test set.
test_pred = best_my_model.predict(x_test)
mse = mean_squared_error(y_test, test_pred)
print(mse)
# Task 2 - analysis of models for the classification problem
# Full list of models
c_models = [
LogisticRegression(penalty='none', solver='saga'),
LogisticRegression(penalty='l1', solver='saga'),
LogisticRegression(penalty='l2', solver='saga'),
LogisticRegression(penalty='elasticnet', l1_ratio=0.25, solver='saga'),
LogisticRegression(penalty='elasticnet', l1_ratio=0.5, solver='saga'),
LogisticRegression(penalty='elasticnet', l1_ratio=0.75, solver='saga'),
LogisticRegression(),
MultinomialNB(alpha=0.0),
MultinomialNB(alpha=0.25),
MultinomialNB(alpha=0.5),
MultinomialNB(alpha=0.75),
MultinomialNB(alpha=1.0),
KNeighborsClassifier(n_neighbors=5),
KNeighborsClassifier(n_neighbors=10),
KNeighborsClassifier(n_neighbors=15),
KNeighborsClassifier(n_neighbors=20),
KNeighborsClassifier(n_neighbors=25)
]
# Select the models for the task
n = 5
my_models2 = random.sample(c_models, n)
print(my_models2)
# Load the data for the classification task
data = pd.read_csv("datasets/zoo2.csv")
data
# +
#data.drop(['animal_name'], axis='columns', inplace=True)
# +
#data.head()
# -
# Encode the unique animal names as integer codes; assigning the full
# pd.get_dummies frame to a single column would fail
data['animal_name'] = pd.factorize(data['animal_name'])[0]
data.head()
# +
# The class label is the same for everyone. The features are chosen at random.
columns = list(data.columns)
n_x = 8
y_label = ['class_type']
x_labels = random.sample(columns[:-1], n_x)
all_lb1 = x_labels + y_label
print(x_labels)
# +
# Rescale the values of all required features to the interval [0,1].
# Solve the resulting classification task with the selected models and compare their performance.
# State which model solves the task best.
# -
data=data[list(all_lb1)]
data.head()
# Determine the size of the validation and test sets
val_test_size = round(0.2*len(data))
print(val_test_size)
# Create the training, validation and test sets
random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))
# +
# Rescale the values in the numeric columns to the interval [0,1].
# Fit the scaler on the training set only.
columns_to_scale = x_labels#all_lb1
ct = ColumnTransformer(transformers=[('numerical', MinMaxScaler(), columns_to_scale)], remainder='passthrough')
ct.fit(train)
# -
# Transform the values and convert the result back to a DataFrame
sc_train = pd.DataFrame(ct.transform(train))
sc_test = pd.DataFrame(ct.transform(test))
sc_val = pd.DataFrame(ct.transform(val))
# Set the column names
column_names = all_lb1
sc_train.columns = column_names
sc_test.columns = column_names
sc_val.columns = column_names
sc_train
# +
x_train = sc_train[x_labels]
x_test = sc_test[x_labels]
x_val = sc_val[x_labels]
y_train = np.ravel(sc_train[y_label])
y_test = np.ravel(sc_test[y_label])
y_val = np.ravel(sc_val[y_label])
# -
# Train the models
for model2 in my_models2:
model2.fit(x_train, y_train)
# Evaluate model quality on the validation set.
f1s = []
for model in my_models2:
val_pred = model.predict(x_val)
f1 = f1_score(y_val, val_pred, average='weighted')
f1s.append(f1)
print(f1)
# Choose the best model and state which one solves the task best (a higher F1 score is better)
i_best = f1s.index(max(f1s))
best_my_models2 = my_models2[i_best]
print('Model', best_my_models2, 'solves the task better than the others.')
# Compute the best model's error on the test set.
test_pred = best_my_models2.predict(x_test)
f1 = f1_score(y_test, test_pred, average='weighted')
print(f1)
| 2021 Весенний семестр/Практическое задание 1/Маматбеков – задание 8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple RBMK Model
# This model is intended to highlight some basic reactor physics properties of an RBMK reactor. The simplification lies in the geometric representation of the fuel and control elements, each of which is explained in the blocks below.
# ## Materials
import openmc
import numpy as np
# ### Fuel Channel Materials
#
# Fuel channels will be modeled as a homogeneous mixture of water, slightly enriched uranium fuel, and zircaloy cladding. In the block below I will work out the volume fractions of each material.
# +
# computations to estimate the atom density of various
# nuclides comprising the fuel and water that I am smearing
# together for the fuel channels.
# obviously, this is a "guesstimate" and gross simplification
# but (aspirationally) can be made more accurate for more robust
# calculations
r_channel = 8.0/2.; # cm
L_channel = 700; # cm
V_channel = np.pi*(r_channel**2)*L_channel
n_bundles = 2;
n_pins_per_bundle = 18;
L_bundle = L_channel/n_bundles; # okay; this is a fudge
r_fuel = 1.146/2.; # cm, just the radius of the fuel pellet
r_clad_o = 1.36/2.; # cm, clad outer radius
r_clad_i = r_clad_o - 0.09; # cm, clad inner radius (0.9mm clad thickness)
fuel_vol = np.pi*(r_fuel**2)*L_bundle*n_bundles*n_pins_per_bundle; #cm^3
fuel_vol_frac = fuel_vol/V_channel;
print(f'fuel volume fraction: %5.4f'%fuel_vol_frac);
clad_vol = np.pi*(r_clad_o**2 - r_clad_i**2)*L_bundle*n_bundles*n_pins_per_bundle;
clad_vol_frac = clad_vol/V_channel;
print(f'clad volume fraction: %5.4f'%clad_vol_frac);
coolant_vol_frac = 1 - clad_vol_frac - fuel_vol_frac
print(f'coolant volume fraction: %5.4f'%coolant_vol_frac);
# -
# With this volume fraction information I can create the fuel channel mixture.
# +
uo2 = openmc.Material(name='uo2');
uo2.add_element('U',1,enrichment=1.1);
uo2.add_element('O',2);
uo2.set_density('g/cm3',10.5);
water = openmc.Material(name='water');
water.add_element('H',2);
water.add_element('O',1);
water.set_density('g/cm3',0.7);
clad = openmc.Material(name='clad');
clad.add_element('Zr',1);
clad.set_density('g/cm3',6.51);
fuel_chan_mat = openmc.Material.mix_materials([uo2,water,clad],
[fuel_vol_frac,
coolant_vol_frac,
clad_vol_frac],'vo');
fuel_chan_mat.add_s_alpha_beta('c_H_in_H2O');
# -
# The graphite and B4C control rods will be modeled simply.
b4c = openmc.Material(name='b4c');
b4c.add_element('B',4.);
b4c.add_element('C',1.);
b4c.set_density('g/cm3',2.52);
mod = openmc.Material(name='mod');
mod.add_element('C',1.);
mod.set_density('g/cm3',1.7);
mod.add_s_alpha_beta('c_Graphite');
materials = openmc.Materials()
materials += [fuel_chan_mat,b4c,mod,water];
materials.export_to_xml();
# ## Geometry
# In the next chunks of code I will create universes for fuel channels and control rod channels.
# +
top = openmc.ZPlane(z0 = L_channel,boundary_type='vacuum');
bottom = openmc.ZPlane(z0=0.,boundary_type='vacuum');
chan_wall = openmc.ZCylinder(r=r_channel);
# fueled universe
fuel_chan_cell = openmc.Cell();
fuel_chan_cell.fill = fuel_chan_mat;
fuel_chan_cell.region = -chan_wall & +bottom & -top;
mod_cell = openmc.Cell();
mod_cell.fill = mod;
mod_cell.region = +chan_wall & +bottom & -top;
fu = openmc.Universe();
fu.add_cells([fuel_chan_cell,mod_cell]); #infinite universe with fuel channel in the middle
# control rod universe
pct_insert_cr = 0.99;
cr_follower_len = 500; # cm, length of graphite follower
cr_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel); #bottom surface of control rod
crf_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel - cr_follower_len); #bottom surface of follower rod
cr_cell = openmc.Cell();
cr_cell.fill = b4c;
cr_cell.region = -chan_wall & -top & +cr_surf;
gf_cell = openmc.Cell();
gf_cell.fill = mod;
gf_cell.region = -chan_wall & +crf_surf & -cr_surf;
cr_water_cell = openmc.Cell();
cr_water_cell.fill = water;
cr_water_cell.region = -chan_wall & +bottom & -crf_surf;
mod_in_cr_cell = openmc.Cell();
mod_in_cr_cell.fill = mod;
mod_in_cr_cell.region = +chan_wall & +bottom & -top;
cu = openmc.Universe();
cu.add_cells([cr_cell,gf_cell,cr_water_cell,mod_in_cr_cell]);
mod_cell = openmc.Cell();
mod_cell.fill = mod;
all_mod = openmc.Universe();
all_mod.add_cell(mod_cell);
n_cells = 3;
pitch = 25
left = openmc.YPlane(y0=-n_cells*pitch/2.,boundary_type='reflective');
right = openmc.YPlane(y0=n_cells*pitch/2.,boundary_type='reflective');
front = openmc.XPlane(x0=n_cells*pitch/2.,boundary_type='reflective');
back = openmc.XPlane(x0=-n_cells*pitch/2.,boundary_type='reflective');
lattice = openmc.RectLattice();
lattice.dimension = [3,3];
lattice.lower_left = [-n_cells*pitch/2.,-n_cells*pitch/2.];
lattice.pitch = [pitch,pitch];
lattice.universes = [
[fu,fu,fu],
[fu,cu,fu],
[fu,fu,fu]
]
lattice.outer = all_mod;
core_cell = openmc.Cell();
core_cell.fill = lattice;
core_cell.region = +left & -right & -front & +back & -top & +bottom;
root = openmc.Universe();
root.add_cells([core_cell]);
geometry = openmc.Geometry();
geometry.root_universe = root;
geometry.export_to_xml();
# -
colors = {};
colors[b4c]='yellow';
colors[fuel_chan_mat]='olive';
colors[mod]='grey';
colors[water]='blue';
# +
p = openmc.Plot();
p.width = [n_cells*pitch,n_cells*pitch];
p.origin = [0.,0.,L_channel-1.];
p.pixels = [400,400];
p.color_by = 'material';
p.colors = colors;
openmc.plot_inline(p);
# -
p2 = openmc.Plot();
p2.width = [n_cells*pitch,2*n_cells*pitch];
p2.origin = [0.,0.,L_channel/2.];
p2.pixels = [400,400];
p2.color_by = 'material';
p2.colors = colors;
p2.basis='yz';
openmc.plot_inline(p2);
# Okay, the geometry looks good.
# +
bounds = [-n_cells*pitch/2.,n_cells*pitch/2.,
-n_cells*pitch/2.,n_cells*pitch/2.,
0., L_channel];
uniform_dist = openmc.stats.Box(bounds[:3],bounds[3:],
only_fissionable=True);
settings = openmc.Settings();
settings.batches = 250;
settings.inactive = 100;
settings.particles = 10000;
settings.source = openmc.source.Source(space=uniform_dist);
settings.export_to_xml();
# -
openmc.run()
# Create a function that will position the control rods at a specific location and calculate $k_{\text{eff}}$
# +
def shim_rods(rod_insert_pct):
# control rod universe
pct_insert_cr = rod_insert_pct;
cr_follower_len = 500; # cm, length of graphite follower
cr_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel); #bottom surface of control rod
crf_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel - cr_follower_len); #bottom surface of follower rod
cr_cell = openmc.Cell();
cr_cell.fill = b4c;
cr_cell.region = -chan_wall & -top & +cr_surf;
gf_cell = openmc.Cell();
gf_cell.fill = mod;
gf_cell.region = -chan_wall & +crf_surf & -cr_surf;
cr_water_cell = openmc.Cell();
cr_water_cell.fill = water;
cr_water_cell.region = -chan_wall & +bottom & -crf_surf;
mod_in_cr_cell = openmc.Cell();
mod_in_cr_cell.fill = mod;
mod_in_cr_cell.region = +chan_wall & +bottom & -top;
cu = openmc.Universe();
cu.add_cells([cr_cell,gf_cell,cr_water_cell,mod_in_cr_cell]);
mod_cell = openmc.Cell();
mod_cell.fill = mod;
all_mod = openmc.Universe();
all_mod.add_cell(mod_cell);
n_cells = 3;
pitch = 25
left = openmc.YPlane(y0=-n_cells*pitch/2.,boundary_type='reflective');
right = openmc.YPlane(y0=n_cells*pitch/2.,boundary_type='reflective');
front = openmc.XPlane(x0=n_cells*pitch/2.,boundary_type='reflective');
back = openmc.XPlane(x0=-n_cells*pitch/2.,boundary_type='reflective');
lattice = openmc.RectLattice();
lattice.dimension = [3,3];
lattice.lower_left = [-n_cells*pitch/2.,-n_cells*pitch/2.];
lattice.pitch = [pitch,pitch];
lattice.universes = [
[fu,fu,fu],
[fu,cu,fu],
[fu,fu,fu]
]
lattice.outer = all_mod;
core_cell = openmc.Cell();
core_cell.fill = lattice;
core_cell.region = +left & -right & -front & +back & -top & +bottom;
root = openmc.Universe();
root.add_cells([core_cell]);
geometry = openmc.Geometry();
geometry.root_universe = root;
geometry.export_to_xml();
# -
shim_rods(0.0);
pct_insert = np.linspace(0.,1.,num=11)
print(pct_insert);
type(pct_insert[0])
k_array = []
pct_insert = np.linspace(0,1.,num=11)
for i in range(11):
print(f'Shimming rods to: %4.2f percent' % (pct_insert[i]*100.));
shim_rods(pct_insert[i]);
openmc.run();
sp = openmc.StatePoint('statepoint.250.h5');
k_array.append(sp.k_combined);
sp.__exit__();
# package the results in an array that can be plotted.
k_out = np.ndarray((2,11),dtype=np.float64);
for i in range(11):
k = k_array[i];
k_out[0,i]=k.nominal_value;
k_out[1,i]=k.std_dev;
from matplotlib import pyplot
pyplot.plot(pct_insert,k_out[0,:]);
pyplot.grid();
pyplot.xlabel('Percent Control Rod Inserted',fontsize=12,fontweight='bold');
pyplot.ylabel('Nominal $k_{eff}$',fontsize=12,fontweight='bold');
# Let's repeat this analysis, but this time without the graphite follower
def shim_rods_no_follower(rod_insert_pct):
# control rod universe
pct_insert_cr = rod_insert_pct;
cr_follower_len = 50; # cm, length of graphite follower
cr_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel); #bottom surface of control rod
crf_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel - cr_follower_len); #bottom surface of follower rod
cr_cell = openmc.Cell();
cr_cell.fill = b4c;
cr_cell.region = -chan_wall & -top & +cr_surf;
gf_cell = openmc.Cell();
gf_cell.fill = water;
gf_cell.region = -chan_wall & +crf_surf & -cr_surf;
cr_water_cell = openmc.Cell();
cr_water_cell.fill = water;
cr_water_cell.region = -chan_wall & +bottom & -crf_surf;
mod_in_cr_cell = openmc.Cell();
mod_in_cr_cell.fill = mod;
mod_in_cr_cell.region = +chan_wall & +bottom & -top;
cu = openmc.Universe();
cu.add_cells([cr_cell,gf_cell,cr_water_cell,mod_in_cr_cell]);
mod_cell = openmc.Cell();
mod_cell.fill = mod;
all_mod = openmc.Universe();
all_mod.add_cell(mod_cell);
n_cells = 3;
pitch = 25
left = openmc.YPlane(y0=-n_cells*pitch/2.,boundary_type='reflective');
right = openmc.YPlane(y0=n_cells*pitch/2.,boundary_type='reflective');
front = openmc.XPlane(x0=n_cells*pitch/2.,boundary_type='reflective');
back = openmc.XPlane(x0=-n_cells*pitch/2.,boundary_type='reflective');
lattice = openmc.RectLattice();
lattice.dimension = [3,3];
lattice.lower_left = [-n_cells*pitch/2.,-n_cells*pitch/2.];
lattice.pitch = [pitch,pitch];
lattice.universes = [
[fu,fu,fu],
[fu,cu,fu],
[fu,fu,fu]
]
lattice.outer = all_mod;
core_cell = openmc.Cell();
core_cell.fill = lattice;
core_cell.region = +left & -right & -front & +back & -top & +bottom;
root = openmc.Universe();
root.add_cells([core_cell]);
geometry = openmc.Geometry();
geometry.root_universe = root;
geometry.export_to_xml();
k_array_nf = []
pct_insert = np.linspace(0,1.,num=11)
for i in range(11):
print(f'Shimming rods to: %4.2f percent' % (pct_insert[i]*100.));
shim_rods_no_follower(pct_insert[i]);
openmc.run();
sp = openmc.StatePoint('statepoint.250.h5');
k_array_nf.append(sp.k_combined);
sp.__exit__();
k_out_nf = np.ndarray((2,11),dtype=np.float64);
for i in range(11):
k = k_array_nf[i];
k_out_nf[0,i]=k.nominal_value;
k_out_nf[1,i]=k.std_dev;
pyplot.plot(pct_insert,k_out_nf[0,:],label='no follower');
pyplot.plot(pct_insert,k_out[0,:],label='with follower');
pyplot.grid();
pyplot.xlabel('Percent Control Rod Inserted',fontsize=12,fontweight='bold');
pyplot.ylabel('Nominal $k_{eff}$',fontsize=12,fontweight='bold');
pyplot.legend();
# Now let's re-do the test but now have the rods fully withdrawn from the start (including the graphite follower)
k_array_3 = []
pct_insert_2 = np.linspace(-0.15,1.,num=13)
for i in range(13):
print(f'Shimming rods to: %4.2f percent' % (pct_insert_2[i]*100.));
shim_rods(pct_insert_2[i]);
openmc.run();
sp = openmc.StatePoint('statepoint.250.h5');
k_array_3.append(sp.k_combined);
sp.__exit__();
k_out_plus = np.ndarray((2,13),dtype=np.float64);
for i in range(13):
k = k_array_3[i];
k_out_plus[0,i]=k.nominal_value;
k_out_plus[1,i]=k.std_dev;
pyplot.plot(pct_insert,k_out_nf[0,:],label='no follower');
pyplot.plot(pct_insert,k_out[0,:],label='with follower');
pyplot.plot(pct_insert_2,k_out_plus[0,:],label='+ withdraw');
pyplot.grid();
pyplot.xlabel('Percent Control Rod Inserted',fontsize=12,fontweight='bold');
pyplot.ylabel('Nominal $k_{eff}$',fontsize=12,fontweight='bold');
pyplot.legend();
# The above results should be re-done with a longer control rod follower.
# +
def shim_rods_lf(rod_insert_pct):
# control rod universe
pct_insert_cr = rod_insert_pct;
cr_follower_len = 700; # cm, length of graphite follower
cr_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel); #bottom surface of control rod
crf_surf = openmc.ZPlane(z0=(1.-pct_insert_cr)*L_channel - cr_follower_len); #bottom surface of follower rod
cr_cell = openmc.Cell();
cr_cell.fill = b4c;
cr_cell.region = -chan_wall & -top & +cr_surf;
gf_cell = openmc.Cell();
gf_cell.fill = mod;
gf_cell.region = -chan_wall & +crf_surf & -cr_surf;
cr_water_cell = openmc.Cell();
cr_water_cell.fill = water;
cr_water_cell.region = -chan_wall & +bottom & -crf_surf;
mod_in_cr_cell = openmc.Cell();
mod_in_cr_cell.fill = mod;
mod_in_cr_cell.region = +chan_wall & +bottom & -top;
cu = openmc.Universe();
cu.add_cells([cr_cell,gf_cell,cr_water_cell,mod_in_cr_cell]);
mod_cell = openmc.Cell();
mod_cell.fill = mod;
all_mod = openmc.Universe();
all_mod.add_cell(mod_cell);
n_cells = 3;
pitch = 25
left = openmc.YPlane(y0=-n_cells*pitch/2.,boundary_type='reflective');
right = openmc.YPlane(y0=n_cells*pitch/2.,boundary_type='reflective');
front = openmc.XPlane(x0=n_cells*pitch/2.,boundary_type='reflective');
back = openmc.XPlane(x0=-n_cells*pitch/2.,boundary_type='reflective');
lattice = openmc.RectLattice();
lattice.dimension = [3,3];
lattice.lower_left = [-n_cells*pitch/2.,-n_cells*pitch/2.];
lattice.pitch = [pitch,pitch];
lattice.universes = [
[fu,fu,fu],
[fu,cu,fu],
[fu,fu,fu]
]
lattice.outer = all_mod;
core_cell = openmc.Cell();
core_cell.fill = lattice;
core_cell.region = +left & -right & -front & +back & -top & +bottom;
root = openmc.Universe();
root.add_cells([core_cell]);
geometry = openmc.Geometry();
geometry.root_universe = root;
geometry.export_to_xml();
# -
k_array_4 = []
pct_insert_2 = np.linspace(-0.15,1.,num=13)
for i in range(13):
print(f'Shimming rods to: %4.2f percent' % (pct_insert_2[i]*100.));
shim_rods_lf(pct_insert_2[i]);
openmc.run();
sp = openmc.StatePoint('statepoint.250.h5');
k_array_4.append(sp.k_combined);
sp.__exit__();
k_out_plus_lf = np.ndarray((2,13),dtype=np.float64);
for i in range(13):
k = k_array_4[i];
k_out_plus_lf[0,i]=k.nominal_value;
k_out_plus_lf[1,i]=k.std_dev;
pyplot.plot(pct_insert_2,k_out_plus_lf[0,:]);
pyplot.grid();
pyplot.xlabel('Percent Control Rod Inserted',fontsize=12,fontweight='bold');
pyplot.ylabel('Nominal $k_{eff}$',fontsize=12,fontweight='bold');
# This still doesn't show a reactivity increase as graphite is inserted. I wonder if it becomes prominent if you model (in some way) the density gradient of the water in the core. (?)
| lab7/RBMK_info/simple_RBMK.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Supporting notebooks for IA Paper 4 (mathematics) teaching
#
# This collection of Jupyter/Python notebooks is being produced as an experiment in supporting the teaching of mathematical methods to Part IA students at the Department of Engineering at University of Cambridge in the second half of Michaelmas Term. These notebooks are produced by <NAME> (<http://www.eng.cam.ac.uk/~gnw20/>).
#
#
# ## What these notebooks are
#
# The intention of these notebooks is to support the learning of and understanding of the mathematics that is taught in the course.
#
#
# ## What these notebooks are not
#
# These notebooks are not complete learning material. They supplement the lecture handouts on the Moodle site <https://www.vle.cam.ac.uk/course/view.php?id=69781>.
#
# The notebooks are not intended to teach students how to use Python. It is hoped that students will modify and experiment with the notebooks.
#
#
# ## This is an experiment
#
# Feedback is welcome. Send feedback to <NAME> at <<EMAIL>>, or report error and typos on Github at <https://github.com/garth-wells/IA-maths-Jupyter/issues>.
#
#
# ## Computer algebra systems (CAS)
#
# Some of these notebooks use a computer algebra systems. Computer algebra systems can perform symbolic mathematical operations and manipulations. Well known proprietary computer algebra systems are Maple and Mathematica (some Mathematica code can be executed in Wolfram Alpha). Open computer algebra systems include Maxima (<http://maxima.sourceforge.net/>) and SymPy (<http://sympy.org/>). Sage (<http://www.sagemath.org/>) is an interesting synthesis of a number of open packages.
#
# [SymPy](http://www.sympy.org/) is the CAS used in these notebooks.
#
#
# ## List of lectures
#
# Each notebook corresponds to one lecture in the course.
#
# - [Lecture 1 (first-order ordinary differential equations)](Lecture01.ipynb)
# - [Lecture 2 (second-order ordinary differential equations)](Lecture02.ipynb)
# - [Lecture 3 (non-homogeneous second-order ordinary differential equations)](Lecture03.ipynb)
# - [Lecture 4 (non-homogeneous second-order ordinary differential equations (extension), balance laws)](Lecture04.ipynb)
# - [Lecture 5 (difference equations)](Lecture05.ipynb)
# - [Lecture 6 (partial derivatives and gradients)](Lecture06.ipynb)
# - [Lecture 7 (matrices and vectors)](Lecture07.ipynb)
# - [Lecture 8 (orthogonal matrices)](Lecture08.ipynb)
# - [Lecture 9 (coordinate transformations)](Lecture09.ipynb)
# - [Lecture 10 (eigenvalue problems)](Lecture10.ipynb)
# - [Lecture 11 (matrix diagonalisation)](Lecture11.ipynb)
# - [Lecture 12 (eigenproblem applications)](Lecture12.ipynb)
| Overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center>**BUILDING A MACHINE LEARNING MODEL TO PREDICT CUSTOMER CHURN IN A COMPANY**
# Customer churn occurs when a customer decides to stop using a company's services, content or products. There are many examples and instances of churn:
#
# * Cancellation of a contracted or uncontracted service;
# * Purchase from a competitor's shop;
# * Unsubscribing from a newsletter;
# * Closing a bank account;
# * Etc.
#
# In today's business environment, with many competitors, the cost of acquiring new customers is very high, so retaining existing customers matters more to companies. The company therefore needs to understand its customers' behaviour in order to retain them. One way to do this is to create **a Machine Learning model that can predict which customers are likely to unsubscribe**. This allows the company to target and retain the specific customers who are at higher risk of churn.
#
# In this project, we will explore a dataset from a telecommunications company and create a model to predict which customers are at higher risk of churn. We will use different machine learning algorithms to compare their performance and then choose the best model.
#
# 1. What are the types of each of the variables, and which variables have missing values?
# 2. Is customer churn influenced by gender (the gender variable)?
# 3. In your opinion, which variable has the greatest impact on a customer's susceptibility to churn?
# 4. Which regression model is a logical fit for the problem?
# 5. Can we choose a model and test its performance?
#
# ## Importation of libraries
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed, interact_manual
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
# -
# ## Importation of dataset
# +
df = pd.read_csv('Data_telecom.csv')
df.head()
# -
# ### Description of the dataset
#
# ##### The target variable is the Churn variable which takes two values: Yes (the customer has unsubscribed) and No (the customer has not unsubscribed).
# +
# Description of the data set
df.info()
# -
# Note that the variable TotalCharges should be numeric, but it contains blank strings (spaces), so pandas reads the column as an object (string) type.
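# A hedged alternative sketch (the series below is made up for illustration): `pd.to_numeric` with `errors='coerce'` performs the same conversion in one step, turning blank strings into `NaN`:

```python
import pandas as pd

# Toy column mimicking TotalCharges: numeric strings plus a stray blank
raw = pd.Series(['29.85', '1889.5', ' ', '108.15'])

# errors='coerce' converts anything unparseable (here the blank) to NaN
charges = pd.to_numeric(raw, errors='coerce')
print(charges.dtype)              # float64
print(int(charges.isna().sum()))  # 1
```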
# +
# Columns that contain missing values
df.columns[df.isnull().sum() > 0]
# +
# We show the percentage of missing values in the columns
df.isnull().mean(axis=0)
# -
# During pre-processing we will delete any rows containing missing values.
# Let's check for repeated customers: each row should correspond to a single customer, since our objective is to model each customer's potential churn.
# We will count the unique values in each column. This gives a general overview, and in particular the customerID column should return the number of rows in the dataset, confirming that the customers are indeed all different.
# +
# Number of unique values per column
df.nunique()
# -
# ### Exploratory data analysis
# +
# Convert the type of the values of the variable 'TotalCharges' to float (decimal)
df['TotalCharges'] = df['TotalCharges'].replace(' ', np.nan).astype(float)
# Quantitative variables
Numerical_columns=['MonthlyCharges', 'TotalCharges', 'tenure']
Numerical_columns
# Note that the SeniorCitizen variable is categorical: it encodes Yes/No as 1/0.
# +
# Categorical variables
categorical_columns = df.nunique()[df.nunique() < 5].keys().to_list()
categorical_columns
# -
# As there are only a few missing values, we will delete the affected rows.
# We will plot a histogram for each of the variables in the dataset.
# ##### Case of Quantitative Variables
def hist_plot(b):
sns.distplot(df[b], kde=False)
plt.title('Histogram of '+str(b))
return plt.show()
# +
# Dynamic display from the numerical columns we have transformed
interact(hist_plot, b = Numerical_columns);
# -
# Looking at the histograms, we see that TotalCharges is strongly asymmetrical, which can greatly influence the forecast. We can remedy this by transforming the data in this column; first, let's find the skewness coefficient of this variable.
# +
# Skewness coefficient of the variable 'TotalCharges'.
df['TotalCharges'].skew()
# -
# The skewness coefficient is close to 1, which indicates that the data in this column are strongly right-skewed (far from normal).
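# As a quick illustration on synthetic data (a sketch, not this dataset), a square-root transform pulls in the long right tail and lowers the skewness coefficient:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Strongly right-skewed synthetic sample
x = pd.Series(rng.lognormal(mean=6.0, sigma=0.8, size=5000))

# The square root reduces (but does not eliminate) the positive skew
print(x.skew() > np.sqrt(x).skew())  # True
```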
# +
#Let's move on to a brief summary of our quantitative variables
#Statistical summary
df.describe()
# -
# As you have noticed, the three quantitative variables (***tenure***, ***MonthlyCharges***, and ***TotalCharges***) have different scales. ***tenure*** varies between 1 and 72, ***MonthlyCharges*** between 18.25 and 118.75, and ***TotalCharges*** between 18.8 and 8684.8. Machine Learning models generally work best with standardised or normalised variables.
#
# Case of Qualitative variables
# +
# Creating a bar graph construction function and interactively
def bar_plot(c):
df[c].value_counts(normalize = True).plot(kind = 'bar')
plt.ylabel('proportion')
    plt.title('Distribution of ' + str(c))
return plt.show()
# +
#Interact
interact(bar_plot, c = categorical_columns);
# -
# We will now do the pre-processing of the data (Normalisation and/or standardisation)
# ## Pre-processing of data
# #### Handling missing values
#Determine the number of missing values in the columns if any.
data2=df.copy()
data2.isna().sum()
# +
#remove all rows with missing values
data2.dropna(inplace=True)
data2.isna().sum()
# -
data2.info()
# We go from 7043 rows to 6984 rows after deleting rows with missing values; the number of columns has not changed.
# To prepare the data for modelling, we will encode the categorical variables.
data2.head()
# #### Encodings of categorical variables
# +
# Encoding of binary variables
data2['gender'] = data2['gender'].apply(lambda row: 1 if row == 'Female' else 0)
binary_columns = data2.drop('gender', axis=1).nunique()[data2.drop('gender', axis=1).nunique() < 3].keys().to_list()
binary_columns
data2['gender']
# -
for column in binary_columns:
data2[column] = data2[column].apply(lambda row: 1 if row == 'Yes' else 0)
# +
# Encoding of the remaining categorical variables
remaining_cat_vars = data2[categorical_columns].nunique()[data2[categorical_columns].nunique() > 2].keys().to_list()
remaining_cat_vars
# -
remaining_cat_vars_dummies = pd.get_dummies(data2[remaining_cat_vars], columns=remaining_cat_vars, drop_first=True)
# #### Creation of the new dataframe
# +
# New dataframe
data = pd.concat([data2['gender'], data2[binary_columns], remaining_cat_vars_dummies, data2[Numerical_columns]], axis=1)
# +
# Let's display the reprocessed dataframe
data.head()
# +
# Revisiting data
data.shape
# -
# #### Treating the skewness of the 'TotalCharges' variable
# We have seen that the variable TotalCharges is very asymmetric, so let's transform it with a square-root function.
# +
# Let's transform the variable 'TotalCharges' with the square-root function
data['TotalCharges'] = np.sqrt(data['TotalCharges'])
# Histogram of the variable
sns.distplot(data['TotalCharges'], kde=False)
# +
# We find again the skewness coefficient of the variable 'TotalCharges' after transformation
data['TotalCharges'].skew()
# -
# ### Segmentation of the dataset
# +
# Segmentation into explanatory variables and variables to be explained
X = data.drop('Churn', axis = 1)
y = data['Churn']
# +
# breakdown into training and test data
seed=1111
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state = seed, stratify = y)
# -
# Let's make sure that the proportions of people who have or have not terminated their contracts are the same in the different data sets
# +
# Frequency of classes in y
pd.DataFrame(y).value_counts(normalize = True)
# +
# Frequency of classes in y_train
pd.DataFrame(y_train).value_counts(normalize = True)
# +
# Frequency of classes in y_test
pd.DataFrame(y_test).value_counts(normalize = True)
# -
# #### Class imbalance in the category of the target variable ('churn')
# Note that there is a very large difference between the number of observations in each category of the target variable to be predicted ('Churn'), which may lead to errors in future modelling.
#
# Our case here shows that just over 73% of people have not cancelled their subscription (modality 0) compared to just over 26% who have cancelled their subscription (modality 1). There is therefore a large class imbalance. We can use resampling to create more balance between the categories of the target variable.
# We will use down-sampling, i.e. reducing the majority class (modality 0) to the same number of observations as the minority class (modality 1)
# +
# Solving the Class Imbalance Problem: Majority Class Subsampling Method
X2 = X_train
X2['Churn'] = y_train.values
minority = X2[X2.Churn == 1]
majority = X2[X2.Churn == 0]
majority_downsampled = resample(majority, replace=False, n_samples = len(minority), random_state = seed)
downsampled = pd.concat([minority, majority_downsampled], axis=0)
downsampled
# +
# Checking the sub-sampling carried out
downsampled['Churn'].value_counts(normalize=True)
# -
# #### Review of training data
# +
# We extract the training data after reprocessing by sub-sampling
X_train_down = downsampled.drop('Churn', axis=1)
y_train_down = downsampled['Churn']
# +
# Definition of training data
# Possible choices : (X_train, y_train) et (X_train_down, y_train_down)
#dataframe of explanatory variables
train_features = X_train_down
#dataframe of the variable to be explained
train_labels = y_train_down
# -
# We have chosen the data from the sub-sampling as a basis.
# Note that it is recommended to try all the cases and choose the best model afterwards.
#
# Next we rescale all the variables to between 0 and 1, which allows better training and reduces the bias that differing scales can introduce into our model.
#
# #### Normalisation or Standardisation of data for explanatory variables
# +
# We will normalise instead of standardising the variables
# The majority of our variables are contained between 0 and 1
#Normalization via the MinMaxScaler() method
#Standardisation via the StandardScaler() method
# +
#Normalization of explanatory variable data
scaler = MinMaxScaler()
mod_scaler = scaler.fit(train_features)
train_features = mod_scaler.transform(train_features)
X_test = mod_scaler.transform(X_test)
# Retransformation of the data into Dataframe because the data is of type numpy
train_features = pd.DataFrame(train_features, columns = X.columns)
X_test = pd.DataFrame(X_test, columns = X.columns)
# +
# All our explanatory variable values are now between 0 and 1.
train_features.describe()
# -
# ## Modelling via Logistic regression
# We use logistic regression because the variable to be explained is binary: it obeys the logic ***True or False (0 or 1)***.
# ### Choice of the Metric to evaluate the performance of our model
# There are the ***accuracy***, ***precision*** and ***recall*** metrics; see the metric choices here:
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
#
# Improving *precision* decreases *recall* and vice versa.
# Fortunately, there is a metric that combines both precision and recall: the F1 score.
#
# * **F1 score**: the harmonic mean of ***precision*** and ***recall***, calculated by the formula: $$F_1 = 2 \cdot \frac{precision \cdot recall}{precision + recall}$$
#
# For a perfect model the F1 score is equal to 1, and the worst performance is a model with an F1 score equal to 0.
#
# We choose the F1 score to evaluate the performance of each model that will be built.
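# As a small sanity check of the formula (toy labels, purely illustrative), the harmonic-mean computation matches scikit-learn's `f1_score`:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy predictions: tp=1, fp=1, fn=1, so precision = recall = 0.5
y_true = [1, 1, 0, 0]
y_pred = [1, 0, 1, 0]

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
f1_manual = 2 * p * r / (p + r)

print(f1_manual)                 # 0.5
print(f1_score(y_true, y_pred))  # 0.5
```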
# ### Selection of the best variables to predict the result
#
# There are several ways to select the best predictors.
# Note that models based on decision trees have an attribute that gives the importance of each predictor variable, which lets us select the best predictors of the outcome.
#
# We will create a random forest model without looking for the best hyperparameters. From this model we will determine the most important variables that will be used to train the machine learning algorithm.
# +
# Selection of the best predictors
# We will train the data without searching for the best hyperparameters
rf = RandomForestClassifier()
rf.fit(train_features, train_labels)
print(classification_report(y_test, rf.predict(X_test)))
# +
# Visualisation of important explanatory variables
plt.figure(figsize=(14, 6))
# feature importances are stored in the 'feature_importances_' attribute (the degree of importance of each predictor variable)
# we use the column names as the index and sort from the largest value to the smallest with sort_values(ascending=False)
vars_imp = pd.Series(rf.feature_importances_, index = train_features.columns).sort_values(ascending=False)
# Bar chart: x holds the variable names (the Series row index), y holds the importance scores
sns.barplot(x = vars_imp.index, y=vars_imp)
plt.xticks(rotation=90)
plt.xlabel("Variables")
plt.ylabel("Importance score of the variable")
plt.title("Importance of predictor or explanatory variables")
plt.show()
# +
#Display of explanatory values in descending order
vars_imp
# -
# We will select the variables that best predict the model by defining a threshold.
#
# We note from the importance values above that the majority of variables score below 0.01, so we set the threshold at 0.01, i.e. we keep variables contributing at least 1% of the random forest's total importance.
# +
# Selections of variables for modelling
seuil= 0.01
#We select and transform into a list
vars_selected= vars_imp[vars_imp>seuil].index.to_list()
train_features=train_features[vars_selected]
X_test=X_test[vars_selected]
# -
# Visualisation of the number of variables that will definitively be used for modelling. Note that we had 31 variables, of which 30 were explanatory, after transforming the categorical variables into quantitative ones.
#
#
# +
# Number of final predictors
len(train_features.columns)
len(vars_selected)
# -
# We are down to 22 explanatory variables
# ### Modelling proper
# ***Our objective is to develop a model to predict in advance which customers will unsubscribe.***
# This will allow the company to focus retention efforts on these customers, which is cheaper than acquiring new ones.
# ##### Training of the model
# +
# Dictionary of hyperparameters
#Tuning the hyperparameters lets the algorithm learn instead of memorise; here we tune C.
#In scikit-learn, C is the inverse of the regularisation strength:
#the smaller 'C' is, the stronger the regularisation and the greater the risk of under-fitting;
#the larger 'C' is, the weaker the regularisation and the greater the risk of over-fitting.
param_grid = {'C':[0.001, 0.01, 1, 10, 40, 90, 150, 400]}
# GridSearchCV object
# 'f1' is the evaluation metric (we chose it at the beginning of the modelling)
# the default scoring is accuracy; we chose the f1-score
# 'cv=5' cross-validation: the training dataset is split into 5 folds (4 parts for training and 1 part for validation, rotating)
# cross-validation is an indispensable element allowing the model to generalise
modele_logreg_class = GridSearchCV(estimator=LogisticRegression(random_state=seed, max_iter=1000),
param_grid=param_grid,
scoring='f1',
cv=5)
# Training of the algorithm
logreg_model = modele_logreg_class.fit(train_features, train_labels)
# Best score and best hyperparameter
#round allows rounding to 3 digits after the decimal point
print(round(logreg_model.best_score_, 3))
print(logreg_model.best_estimator_)
# -
# ***The best score is equal to 0.77 for C = 10***.
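# To see the effect of C concretely (a hedged sketch on synthetic data, not our churn dataset), we can compare coefficient magnitudes for a small and a large C — smaller C means stronger L2 regularisation and smaller weights:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_demo, y_demo = make_classification(n_samples=500, n_features=10, random_state=0)

strong_reg = LogisticRegression(C=0.001, max_iter=1000).fit(X_demo, y_demo)
weak_reg = LogisticRegression(C=100.0, max_iter=1000).fit(X_demo, y_demo)

# Stronger regularisation (small C) shrinks the weights towards zero
print(np.linalg.norm(strong_reg.coef_) < np.linalg.norm(weak_reg.coef_))  # True
```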
# #### Test of the model
# +
#Model performance testing functions
def model_test(model, features, labels):
pred=model.predict(features)
print(classification_report(labels, pred))
# +
#logistic model test
model_test(logreg_model, X_test,y_test)
# -
# We thus have a model tested on a dataset it has never seen, and it generalises with an accuracy of 0.74; the f1-score is 0.80 for modality (0) and lower for modality (1).
#
# Our logistic regression model thus gives ***good overall accuracy*** as well as good ***precision*** and ***recall***.
#
# However, we can test other models outside logistic regression in order to choose the best model.
#
| Project_1_Udacity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Contravariant & Covariant indices in Tensors (Symbolic)
from einsteinpy.symbolic import SchwarzschildMetric, MetricTensor, ChristoffelSymbols, RiemannCurvatureTensor
from einsteinpy.symbolic.predefined import Schwarzschild
import sympy
sympy.init_printing()
# ### Analysing the schwarzschild metric along with performing various operations
sch = Schwarzschild()
sch.tensor()
sch_inv = sch.inv()
sch_inv.tensor()
sch.order
sch.config
# ### Obtaining Christoffel Symbols from Metric Tensor
chr = ChristoffelSymbols.from_metric(sch_inv) # can be initialized from sch also
chr.tensor()
chr.config
# ### Changing the first index to covariant
new_chr = chr.change_config('lll') # changing the configuration to (covariant, covariant, covariant)
new_chr.tensor()
new_chr.config
# ### Any arbitrary index configuration would also work!
new_chr2 = new_chr.change_config('lul')
new_chr2.tensor()
# ### Obtaining Riemann Tensor from Christoffel Symbols and manipulating its indices
rm = RiemannCurvatureTensor.from_christoffels(new_chr2)
rm[0,0,:,:]
rm.config
rm2 = rm.change_config("uuuu")
rm2[0,0,:,:]
rm3 = rm2.change_config("lulu")
rm3[0,0,:,:]
rm4 = rm3.change_config("ulll")
rm4[0,0,:,:]
# #### It is seen that `rm` and `rm4` are the same, as they have the same configuration
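# The index gymnastics above boils down to contracting with the metric. As a minimal numerical sketch (assuming a flat Minkowski metric rather than the Schwarzschild metric used above), lowering a vector index is just a matrix-vector product with $g_{ab}$:

```python
import numpy as np

# Minkowski metric with signature (-, +, +, +)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

v_upper = np.array([1.0, 2.0, 3.0, 4.0])  # contravariant components v^a
v_lower = eta @ v_upper                   # covariant components v_a = g_ab v^b

print(v_lower)  # [-1.  2.  3.  4.]
```

# Since the Minkowski metric is its own inverse, contracting with `eta` again raises the index back.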
| docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Titanic Logistic Regression
# Load the [Titanic dataset from Kaggle](https://www.kaggle.com/c/titanic)
# +
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
sc= SparkContext()
sqlContext = SQLContext(sc)
titanic = pd.read_csv('titanic_train.csv')
titanic.head()
# -
sns.heatmap(titanic.isnull(),yticklabels=False,cbar=False,cmap='viridis')
sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Sex',data=titanic,palette='RdBu_r')
sns.countplot(x='Survived', hue='Pclass', data=titanic, palette='rainbow')
# ## Handle incomplete columns
# Fill the Age feature with a typical value per passenger class and delete the Cabin feature
titanic.drop('Cabin',axis=1,inplace=True)
plt.figure(figsize=(12, 7))
sns.boxplot(x='Pclass',y='Age',data=titanic,palette='winter')
# +
def impute_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else:
return 24
else:
return Age
titanic['Age'] = titanic[['Age','Pclass']].apply(impute_age,axis=1)
# -
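# An alternative to the hard-coded per-class values above (a sketch with made-up ages, not the notebook's approach) is to fill each missing age with its class median via `groupby(...).transform`:

```python
import numpy as np
import pandas as pd

demo = pd.DataFrame({'Pclass': [1, 1, 2, 2, 3, 3],
                     'Age':    [40, np.nan, 30, 28, np.nan, 24]})

# Fill each NaN with the median Age of that passenger class
demo['Age'] = demo['Age'].fillna(demo.groupby('Pclass')['Age'].transform('median'))
print(demo['Age'].tolist())  # [40.0, 40.0, 30.0, 28.0, 24.0, 24.0]
```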
sns.heatmap(titanic.isnull(),yticklabels=False,cbar=False,cmap='viridis')
# +
titanic.dropna(inplace=True)
titanic.info()
# -
sex = pd.get_dummies(titanic['Sex'], drop_first=True)
embark = pd.get_dummies(titanic['Embarked'], drop_first=True)
titanic.drop(['Sex','Embarked','Name','Ticket'],axis=1,inplace=True)
titanic = pd.concat([titanic,sex,embark],axis=1)
titanic.info()
titanic.head()
# ## Parse the dataset to Spark and split it
data = sqlContext.createDataFrame(titanic)
data.columns
# +
import functools
oldColumns = data.schema.names
newColumns = ['PassengerId',
'label',
'Pclass',
'Age',
'SibSp',
'Parch',
'Fare',
'male',
'Q',
'S']
df = functools.reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]),
range(len(oldColumns)), data)
df.printSchema()
df.show(3)
# +
from pyspark.ml.feature import VectorAssembler
vectorAssembler = VectorAssembler(inputCols = ['PassengerId',
'Pclass',
'Age',
'SibSp',
'Parch',
'Fare',
'male',
'Q',
'S'], outputCol = 'features')
vData = vectorAssembler.transform(df)
vData = vData.select(['features', 'label'])
vData.show(3)
vData.select(['label']).distinct().show()
vData.printSchema()
# +
train, test = vData.randomSplit([0.7, 0.3])
print("Training Dataset Count: " + str(train.count()))
print("Test Dataset Count: " + str(test.count()))
train.printSchema()
# -
# ## Create the Logit Regression model
# +
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(featuresCol = 'features', labelCol = 'label', maxIter=50)
lrModel = lr.fit(train)
# -
import matplotlib.pyplot as plt
import numpy as np
beta = np.sort(lrModel.coefficients)
plt.plot(beta)
plt.ylabel('Beta Coefficients')
plt.show()
trainingSummary = lrModel.summary
roc = trainingSummary.roc.toPandas()
plt.plot(roc['FPR'],roc['TPR'])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve')
plt.show()
print('Training set areaUnderROC: ' + str(trainingSummary.areaUnderROC))
pr = trainingSummary.pr.toPandas()
plt.plot(pr['recall'],pr['precision'])
plt.ylabel('Precision')
plt.xlabel('Recall')
plt.show()
# ## Make predictions on the test set and evaluate the results
predictions = lrModel.transform(test)
predictions.select('label', 'rawPrediction',
'prediction', 'probability').show(10)
# +
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator()
print('Test Area Under ROC', evaluator.evaluate(predictions))
# -
fMeasure = trainingSummary.fMeasureByThreshold
maxFMeasure = fMeasure.groupBy().max('F-Measure').select('max(F-Measure)').head()
bestThreshold = fMeasure.where(fMeasure['F-Measure'] ==
maxFMeasure['max(F-Measure)']).select('threshold').head()['threshold']
# Overall statistics
print("Summary Stats")
print('Accuracy = %s' % str(trainingSummary.accuracy))
print("Precision = %s" % trainingSummary.precisionByLabel)
print("F1 Score = %s" % trainingSummary.fMeasureByLabel())
print("False Positive = %s" % trainingSummary.falsePositiveRateByLabel)
print("Recall = %s" % trainingSummary.recallByLabel)
# +
from pyspark.mllib.evaluation import MulticlassMetrics
pred = predictions.select('prediction', 'label')
metricsp = MulticlassMetrics(pred.rdd)
# metricsp.recall(1)
tp = pred[(pred.label == 1) & (pred.prediction == 1)].count()
tn = pred[(pred.label == 0) & (pred.prediction == 0)].count()
fp = pred[(pred.label == 0) & (pred.prediction == 1)].count()
fn = pred[(pred.label == 1) & (pred.prediction == 0)].count()
print("True Positives: %f" % tp)
print("True Negatives: %f" % tn)
print("False Positives: %f" % fp)
print("False Negatives: %f" % fn)
print("Total: %d" % pred.count())
r = float(tp)/(tp + fn)
print("recall %f" % r)
p = float(tp) / (tp + fp)
print("precision %f" % p)
# -
| notebooks/Logistic Regression/Titanic-Logit-Spark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
file1_1 = 'dataset/0012_3_1.bmp'
file1_2 = 'dataset/0012_3_2.bmp'
file2_1 = 'dataset/012_8_1.bmp'
file2_2 = 'dataset/012_8_2.bmp'
img1_1 = cv2.imread(file1_1, cv2.IMREAD_GRAYSCALE )# IMREAD_GRAYSCALE, IMREAD_COLOR
img1_2 = cv2.imread(file1_2, cv2.IMREAD_GRAYSCALE )# IMREAD_GRAYSCALE, IMREAD_COLOR
img2_1 = cv2.imread(file2_1, cv2.IMREAD_GRAYSCALE )# IMREAD_GRAYSCALE, IMREAD_COLOR
img2_2 = cv2.imread(file2_2, cv2.IMREAD_GRAYSCALE )# IMREAD_GRAYSCALE, IMREAD_COLOR
# -
# # 1. image pre-process
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
# GaussianBlur
blur1_1 = cv2.GaussianBlur(img1_1,(5,5),0)
blur1_2 = cv2.GaussianBlur(img1_2,(5,5),0)
blur2_1 = cv2.GaussianBlur(img2_1,(5,5),0)
blur2_2 = cv2.GaussianBlur(img2_2,(5,5),0)
# plot
plt.subplot(121),plt.imshow(img1_1, cmap = 'gray', interpolation = 'bicubic'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(blur1_1, cmap = 'gray', interpolation = 'bicubic'),plt.title('Gaussian')
plt.xticks([]), plt.yticks([])
plt.show()
# +
# Otsu's thresholding
ret1_1,th1_1 = cv2.threshold(blur1_1,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret1_2,th1_2 = cv2.threshold(blur1_2,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret2_1,th2_1 = cv2.threshold(blur2_1,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret2_2,th2_2 = cv2.threshold(blur2_2,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# plot
plt.subplot(121),plt.imshow(blur1_1, cmap = 'gray', interpolation = 'bicubic'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(th1_1, cmap = 'gray', interpolation = 'bicubic'),plt.title('OTSU')
plt.xticks([]), plt.yticks([])
plt.show()
# -
# # 2. thinning
# ! python biometrics/thining.py dataset/0012_3_1.bmp
| image_process/opencv/fingerprint_identification/fingerprint_identification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
import pandas as pd
# -
# ## Education-level statistics of substitute-service draftees by training cohort
df = pd.read_csv("https://quality.data.gov.tw/dq_download_csv.php?nid=73249&md5_url=0edbbf4b7e03f2761085bfdc1113d38e")
df.head()
for i in range(21):
k = 2 * i + 1
df.drop(k, inplace=True)
df.drop('其他', axis = 1, inplace=True)
df.head()
df.index = range(1, 22)
df.head()
# +
for i in df[['基礎訓練人數', '博 士', '碩 士', '學 士', '大 專', '高中職', '國 中', '國 小']]:
df[i] = df[i].astype(int)
df['高中職以下'] = df['高中職'] + df['國 中'] + df['國 小']
df.drop('高中職', axis = 1, inplace=True)
df.drop('國 中', axis = 1, inplace=True)
df.drop('國 小', axis = 1, inplace=True)
# -
df.head()
# ### Average number of basic-training trainees
int(df['基礎訓練人數'].mean())
df.基礎訓練人數.plot(title = 'Trainees', yticks = range(0, 3000, 200), use_index = True)
# ### Overall ratios of PhD / Master's / Bachelor's / junior college / high school and below
print('PhD ratio: {:.2%}'.format(int((df['博 士'].sum())) / int(df['基礎訓練人數'].sum())))
print("Master's ratio: {:.2%}".format(int((df['碩 士'].sum())) / int(df['基礎訓練人數'].sum())))
print("Bachelor's ratio: {:.2%}".format(int((df['學 士'].sum())) / int(df['基礎訓練人數'].sum())))
print('Junior college ratio: {:.2%}'.format(int((df['大 專'].sum())) / int(df['基礎訓練人數'].sum())))
print('High school and below ratio: {:.2%}'.format(int(df['高中職以下'].sum())/ int(df['基礎訓練人數'].sum())))
df['博 士'].plot(color = 'red',legend = 'true')
df['碩 士'].plot(color = 'white',legend = 'true')
df['學 士'].plot(color = 'black',legend = 'true')
df['大 專'].plot(color = 'yellow',legend = 'true')
df['高中職以下'].plot(color ='m',legend = 'true')
plt.axis('off')
| NCCU Applications of mathematics softwares/lesson_8_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # AlexNet in TFLearn
# #### for Oxford's 17 Category Flower Dataset Classification
# #### Based on https://github.com/tflearn/tflearn/blob/master/examples/images/alexnet.py
from __future__ import division, print_function, absolute_import
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
# #### Import Data
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))
# #### Build 'AlexNet'
# +
network = input_data(shape=[None, 227, 227, 3])
network = conv_2d(network, 96, 11, strides=4, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 256, 5, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 256, 3, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = fully_connected(network, 4096, activation='tanh')
network = dropout(network, 0.5)
network = fully_connected(network, 4096, activation='tanh')
network = dropout(network, 0.5)
network = fully_connected(network, 17, activation='softmax')
network = regression(network, optimizer='momentum',
loss='categorical_crossentropy',
learning_rate=0.001)
# -
# #### Training
model = tflearn.DNN(network, checkpoint_path='model_alexnet',
max_checkpoints=1, tensorboard_verbose=2)
# n_epoch=1000 is recommended:
model.fit(X, Y, n_epoch=10, validation_set=0.1, shuffle=True,
show_metric=True, batch_size=64, snapshot_step=200,
snapshot_epoch=False, run_id='alexnet_oxflowers17')
| demos-for-talks/AlexNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images")
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# -
# This fetches the MNIST dataset, which is a set of 70000 small images of hand-written digits
# In case the MNIST site is down, download the MNIST.mat file and put it in the scikit-learn-data folder
# Note: fetch_mldata was removed in newer scikit-learn versions; use fetch_openml('mnist_784') there
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist
X, y = mnist["data"], mnist["target"]
X.shape , y.shape
# +
# The 784 above refers to the number of pixels present in each image.
# Each image in the dataset is a 28x28 image, giving it 784 pixels.
# Each value in the feature columns for each row is the intensity of that pixel in the image
# Pixel intensity values vary from 0 (white) to 255 (black).
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# This variable is used later in the Stochastic Gradient Descent classifier
some_digit = X[36000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = matplotlib.cm.binary,interpolation="nearest")
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
# -
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[36000]
# The dataset is already split into training and test sets
# First 60000 rows of training data and remaining test data
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
# +
# This shuffles the training set
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
# -
# This creates target vectors for the digit 5
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
# +
# Use the Stochastic Gradient Descent classifier.
# This classifier works in the following way:
# For each instance, it computes a decision score based on a decision function
# If the above score is higher than a threshold, the instance is assigned to the positive class
# Otherwise, the instance is assigned to the negative class
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(max_iter=5, random_state=42)
sgd_clf.fit(X_train, y_train_5)
# -
sgd_clf.predict([some_digit])
# Accuracy is generally not a good performance measure for classifiers, especially on skewed datasets.
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
# +
# cross_val_predict() performs K-fold cross-validation, but instead of returning the evaluation scores,
# it returns the predictions made on each test fold
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
# +
# The above allows the use of the confusion matrix, since we have a clean prediction for each instance in the training set
# Confusion Matrix :
# First row = negative class (not 5s),
# First column = correct non 5 = true negative (TN), Second column = false positive (FP) (wrongly classed as 5)
# Second row = positive class (5s),
# First column = wrongly classified as not 5 = false negative (FN), Second column = true positive (TP)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
# -
# Demonstrating that a perfect classifier has non-zero values only on the main diagonal
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
# +
# Precision = TP/(TP+FP)
# Recall = TP/(TP+FN)
# When the detector claims an image is a 5, it is only correct 76% of the time
# The detector is only able to detect 79% of the 5s present in the training set
from sklearn.metrics import precision_score, recall_score
print(precision_score(y_train_5, y_train_pred))
recall_score(y_train_5, y_train_pred)
# -
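# As a sketch (using hypothetical confusion-matrix counts, not the notebook's actual ones),
# both scores can be recomputed by hand from the confusion matrix:

```python
import numpy as np

# hypothetical counts: rows = actual (non-5, 5), cols = predicted (non-5, 5)
cm = np.array([[53057, 1522],
               [ 1325, 4096]])
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)  # of all images flagged as 5s, how many really are 5s
recall    = tp / (tp + fn)  # of all actual 5s, how many were flagged
```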
# Precision and recall are often combined into the F-score
# The F-score is the harmonic mean of the precision and recall scores
# The harmonic mean gives more weight to lower values,
# so a classifier only gets a high F-score if both precision and recall are high.
# F-scores favour classifiers that have both high precision and high recall.
# Note the trade-off: increasing precision reduces recall, and vice versa
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
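# A quick sketch with made-up precision/recall values shows how the harmonic mean
# penalizes the lower score compared to the arithmetic mean:

```python
precision, recall = 0.75, 0.80  # hypothetical values
f1 = 2 * precision * recall / (precision + recall)
# equivalent harmonic-mean form
f1_hm = 2 / (1 / precision + 1 / recall)
# the arithmetic mean would hide a low component score
arith = (precision + recall) / 2
```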
# If the value of the decision threshold mentioned earlier changes,
# it can change the precision and recall values
y_scores = sgd_clf.decision_function([some_digit])
y_scores
# The SGD classifier by default uses a threshold value of zero
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
# Here we change the threshold value for the classifier and recalculate the predictions
# So, changing the threshold can affect precision and recall values.
threshold = 200000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
# +
# Deciding which value of threshold to use.
# For this we need the scores of all instances of the training set
# This is obtained using the cross_val_predict() function, but specifying that we need decision scores instead of predictions.
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
print(y_scores.shape)
# Using these scores, compute precision and recall for all possible threshold values.
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
# +
# Plot precision and recall as functions of the threshold values.
# precisions[:-1] returns all values from the precisions list except the last value.
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.xlim([-700000, 700000])
save_fig("precision_recall_vs_threshold_plot")
plt.show()
# -
y_train_pred_90 = (y_scores > 70000)
print(precision_score(y_train_5, y_train_pred_90))
print(recall_score(y_train_5, y_train_pred_90))
# +
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
save_fig("precision_vs_recall_plot")
plt.show()
# +
# ROC (Receiver Operating Characteristic) curve - It is very similar to the precision-recall curve
# Difference is that it plots the True Positive Rate (recall) against false positive rate
# False Positive Rate : ratio of negative instances that are incorrectly classified as positive.
# To plot the ROC curve, first calculate the TPR and FPR for various threshold values
# This is done using roc_curve() function
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
# +
# Plotting TPR against FPR shows that the greater the TPR (recall) value, the higher the FPR
# The dotted line on the plot represents a purely random classifier.
# Curves for good classifiers stay as far as possible from the dotted line, towards the top-left corner.
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
save_fig("roc_curve_plot")
plt.show()
# +
# roc_auc_score calculates the area under the curve for the ROC curve.
# A good classifier will have this value close to 1.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
# -
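# As a sketch, the AUC is just the area under the (FPR, TPR) curve, which can be
# approximated with the trapezoidal rule on a few hypothetical ROC points:

```python
import numpy as np

# hypothetical ROC points, sorted by increasing fpr
fpr_pts = np.array([0.0, 0.1, 0.3, 1.0])
tpr_pts = np.array([0.0, 0.6, 0.9, 1.0])
# trapezoidal rule: sum of segment widths times average segment heights
auc = np.sum(np.diff(fpr_pts) * (tpr_pts[1:] + tpr_pts[:-1]) / 2)
```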
# Random Forest Classifier does not have a method to return actual decision scores.
# Instead the predict_proba method returns the probabilities of positive and negative classes
# The second column of the ndarray returned by cross_val_predict corresponds to positive probabilities
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=10, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
# The probability value for the positive class serves as a proxy for the decision scores in y_probas_forest
y_scores_forest = y_probas_forest[:, 1]
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest=cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
print(precision_score(y_train_5, y_train_pred_forest))
print(recall_score(y_train_5, y_train_pred_forest))
# Under the hood, the SGD classifier trains a binary classifier for each class,
# gets their decision scores,
# and selects the class with the highest score.
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
# decision function scores for the various binary classifiers
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
# class with the highest decision function score
np.argmax(some_digit_scores)
# When a classifier is trained, it stores the list of target classes in the classes_ attribute
sgd_clf.classes_
# In the case of the RFC, there is no need for OvA or OvO strategies.
# This is because the RFC can directly classify instances into multiple classes.
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
# Probabilities of the test digit belonging to each class
forest_clf.predict_proba([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
# Using the StandardScaler improves the accuracy to over 90% from 85%
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
# The confusion matrix assumes that a proper model has been found for the data
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
def plot_confusion_matrix(matrix):
"""If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
# Plotting errors
row_sums = conf_mx.sum(axis=1, keepdims=True) # number of images in each actual class
norm_conf_mx = conf_mx / row_sums
# Filling the main diagonal with zeros only leaves the erroneous classifications which we want to plot.
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
# +
# The left column are all digits classified as 3, and the right column are digits classified as 5.
# The bottom left corner contains digits incorrectly classified as 3
# The top right corner contains digits incorrectly classified as 5
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
# +
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd] #concatenates y_train_large and y_train_odd along the columns
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
# -
knn_clf.predict([some_digit])
# This code computes the F1 score based on all labels
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
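# A sketch with hypothetical per-label counts: macro averaging computes the F1 score
# for each label separately and then takes their unweighted mean:

```python
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts for the two labels (large, odd)
f1_large = f1(tp=40, fp=10, fn=10)  # precision = recall = 0.8
f1_odd   = f1(tp=30, fp=30, fn=10)  # precision = 0.5, recall = 0.75
macro_f1 = (f1_large + f1_odd) / 2
```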
# Adding noise to images
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
| Chapter-3/classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Differentiable Neural Computer
#
# ## The Problem - how do we create more general purpose learning machines?
#
# Neural networks excel at pattern recognition and quick, reactive decision-making, but we are only just
# beginning to build neural networks that can think slowly: that is, deliberate or reason using knowledge.
# For example, how could a neural network store memories for facts like the connections in a transport network
# and then logically reason about its pieces of knowledge to answer questions?
#
# 
#
# A DNC consists of a neural network that can read from and write to an external memory matrix,
# analogous to the random-access memory in a conventional computer.
#
# Like a conventional computer, it can use its memory to represent and manipulate complex data structures,
# but, like a neural network, it can learn to do so from data.
#
# DNCs have the capacity to solve complex, structured tasks that are
# inaccessible to neural networks without external read–write memory.
#
# 
#
# [](http://www.youtube.com/watch?v=B9U8sI7TcMYE)
#
#
#
# Modern computers separate computation and memory. Computation is performed by a processor,
# which can use an addressable memory to bring operands in and out of play.
#
# In contrast to computers, the computational and memory resources of artificial neural networks
# are mixed together in the network weights and neuron activity. This is a major liability:
# as the memory demands of a task increase, these networks cannot allocate new storage
# dynamically, nor easily learn algorithms that act independently of the values realized
# by the task variables.
#
# The whole system is differentiable, and can therefore be trained
# end-to-end with gradient descent, allowing the network to learn
# how to operate and organize the memory in a goal-directed manner.
#
# If the memory can be thought of as the DNC’s RAM, then the network, referred to as the ‘controller’,
# is a differentiable CPU whose operations are learned with gradient descent.
#
#
#
# How is it different from its predecessor, the Neural Turing Machine?
#
# Basically, it has more memory access methods than the NTM
#
# DNC extends the NTM addressing the following limitations:
#
# (1) Ensuring that blocks of allocated memory do not overlap and interfere.
#
# (2) Freeing memory locations that have already been written to.
#
# (3) Handling of non-contiguous memory through temporal links.
#
#
# Note that the system requires hand-crafted input to accomplish its learning and inference. This is not an NLP system where unstructured text is applied at the input.
#
# Three forms of attention for the heads:
# - content lookup
# - temporal links: transitions between consecutively written locations are recorded in an N × N temporal link matrix L.
# This gives a DNC the native ability to recover sequences in the order in which it wrote them, even
# when consecutive writes did not occur in adjacent time-steps
# - The third form of attention allocates memory for writing.
#
# Content lookup enables the formation of associative data structures;
# temporal links enable sequential retrieval of input sequences;
# and allocation provides the write head with unused locations.
#
# DNC memory modification is fast and can be one-shot, resembling the associative
# long-term potentiation of hippocampal CA3 and CA1 synapses
#
# Human ‘free recall’ experiments demonstrate the increased probability of
# item recall in the same order as first presented (temporal links)
#
# DeepMind hopes that DNCs provide both a new tool for computer science and a new metaphor for cognitive science
# and neuroscience: here is a learning machine that, without prior programming, can organise information
# into connected facts and use those facts to solve problems.
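# As a toy NumPy sketch (an illustration, not DeepMind's implementation): content lookup
# is cosine similarity between a key and every memory row, sharpened by a strength beta
# and normalized with a softmax to give attention weights over memory slots.

```python
import numpy as np

def content_lookup(memory, key, beta):
    # cosine similarity between the key and each memory row
    mem_norm = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    key_norm = key / np.linalg.norm(key)
    sim = mem_norm @ key_norm
    # softmax over slots; larger beta sharpens the focus, smaller beta flattens it
    e = np.exp(beta * sim - np.max(beta * sim))
    return e / e.sum()

memory = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.7, 0.7]])
weights = content_lookup(memory, key=np.array([1.0, 0.1]), beta=5.0)
# weights sum to 1 and concentrate on the row most similar to the key
```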
import numpy as np
import tensorflow as tf
import os
# +
class DNC:
def __init__(self, input_size, output_size, seq_len, num_words = 256, word_size = 64, num_heads = 4):
'''
Initialize the DNC:
In this tutorial we are basically using the DNC to understand the mapping between the input
and output data.
input data: [[0,0], [0,1], [1,0], [1,1]]
output data: [[1,0], [0,0], [0,0], [0,1]]
'''
# define input and output sizes
self.input_size = input_size
self.output_size = output_size
# define read and write vectors
self.num_words = num_words # N
self.word_size = word_size # W
# define number of read and write heads
self.num_heads = num_heads # R
# size of output vector from controller
# the magic numbers are just a type of hyper-parameters
# set them according to your own use-case, they come from the way we divide our
# interface vector into read, write, gate and read mode variables
self.interface_size = num_heads*word_size + 3*word_size + 5*num_heads + 3
# define input size
# this comes from flattening the input and concatenating it with the
# previously read vectors from the memory
self.nn_input_size = num_heads*word_size + input_size
# define output size
self.nn_output_size = output_size + self.interface_size
# initialize both outputs with gaussian (truncated normal) noise
self.nn_out = tf.truncated_normal([1, self.output_size], stddev = 0.1)
self.interface_vec = tf.truncated_normal([1, self.interface_size], stddev = 0.1)
# define memory matrix
self.mem_mat = tf.zeros([num_words, word_size]) # N*W
# define usage vector
# it tells which parts of the memory have been used so far
self.usage_vec = tf.fill([num_words, 1], 1e-6) # W*1
# define temporal link matrix
# it tells in which order the locations were written
self.link_mat = tf.zeros([num_words, num_words]) # N*N
# define precedence weight
# it tells the degree to which the last location was written to
self.precedence_weight = tf.zeros([num_words, 1]) # N*1
# define read and write weight variables
self.read_weights = tf.fill([num_words, num_heads], 1e-6) # N*R
self.write_weights = tf.fill([num_words, 1], 1e-6) # N*1
self.read_vec = tf.fill([num_heads, word_size], 1e-6) # N*W
#######################
## Network Variables ##
#######################
# parameters
hidden_layer_size = 32
# define placeholders
self.i_data = tf.placeholder(tf.float32, [seq_len*2, self.input_size], name = 'input_placeholder')
self.o_data = tf.placeholder(tf.float32, [seq_len*2, self.output_size], name = 'output_placeholder')
# define feedforward network weights
self.W1 = tf.Variable(tf.truncated_normal([self.nn_input_size, hidden_layer_size], stddev = 0.1),
name = 'layer1_weights', dtype = tf.float32)
self.b1 = tf.Variable(tf.truncated_normal([hidden_layer_size], stddev = 0.1),
name = 'layer1_bias', dtype = tf.float32)
self.W2 = tf.Variable(tf.truncated_normal([hidden_layer_size, self.nn_output_size], stddev = 0.1),
name = 'layer2_weights', dtype = tf.float32)
self.b2 = tf.Variable(tf.truncated_normal([self.nn_output_size], stddev = 0.1),
name = 'layer2_bias', dtype = tf.float32)
# define DNC output weights
# self.nn_out_weights to convert the output of neural network into proper output
self.nn_out_weights = tf.Variable(
tf.truncated_normal([self.nn_output_size, self.output_size],
stddev = 0.1),
name = 'nn_output_weights', dtype = tf.float32)
# self.interface_weights to convert the output of neural network to proper interface vector
self.interface_weights = tf.Variable(
tf.truncated_normal([self.nn_output_size, self.interface_size],
stddev = 0.1),
name = 'interface_weights', dtype = tf.float32)
#
self.read_vec_out_weights = tf.Variable(
tf.truncated_normal([self.num_heads*self.word_size, self.output_size],
stddev = 0.1),
name = 'read_vector_output_weights', dtype = tf.float32)
##########################
## Attention Mechanisms ##
##########################
'''
In DNC we have three different attention mechanisms:
1. Content Lookup (Content-Addressing in paper):
{From NTM paper} For content-addressing, each head (whether employed for reading or
writing) first produces a key-vector k, that is then compared to each vector in memory by
a similarity measure. The content-based system produces a normalized weighting based
on similarity [and a positive key-strength (beta), which can amplify or attenuate the
precision of the focus.]
2. Allocation weighting:
{From DNC paper} To allow controller to free and allocate memory as needed, we developed
a differentiable analogue to the 'free-list' memory scheme, whereby a list of available
memory locations is maintained by adding to and removing from a linked list.
{From tutorial} The ‘usage’ of each location is represented as a number between 0 and 1,
and a weighting that picks out unused locations is delivered to the write head. This is
independent of the size and contents of the memory, meaning that DNCs can be trained to
solve a task using one size of memory and later upgraded to a larger memory without
retraining
3. Temporal Linking:
{From DNC paper} The memory [as defined so far] stores no information about the
order in which memory locations are written to. However, there are many situations where
retaining this information is useful: for example, when a sequence of instructions must be
recorded and retrieved in order. We therefore use a temporal link matrix to keep track
of consecutively modified memory locations.
'''
# define content lookup
def content_lookup(self, key, strength):
# strength is 1*1 or 1*R (renamed from str to avoid shadowing the builtin)
# cosine similarity: l2-normalize the memory rows and the key, then take dot products
norm_mem = tf.nn.l2_normalize(self.mem_mat, 1) # N*W
norm_key = tf.nn.l2_normalize(key, 0) # 1*W for write, R*W for read
sim = tf.matmul(norm_mem, norm_key, transpose_b = True) # N*1 for write, N*R for read
return tf.nn.softmax(sim*strength, 0) # N*1 or N*R
# define allocation weighting
def allocation_weighting(self):
# tf.nn.top_k() returns
# 1.The k largest elements along each last dimensional slice and
# 2.The indices of values within the last dimension of input
sorted_usage_vec, free_list = tf.nn.top_k(-1*self.usage_vec, k = self.num_words)
sorted_usage_vec *= -1
# tf.cumprod() calculates cumulative product
# tf.cumprod([a, b, c]) --> [a, a*b, a*b*c]
# tf.cumprod([a, b, c], exclusive=True) --> [1, a, a * b]
cumprod = tf.cumprod(sorted_usage_vec, axis = 0, exclusive = True)
unorder = (1-sorted_usage_vec)*cumprod
# allocation weight
alloc_weights = tf.zeros([self.num_words])
I = tf.constant(np.identity(self.num_words, dtype = np.float32))
# for each usage vector
for pos, idx in enumerate(tf.unstack(free_list[0])):
m = tf.squeeze(tf.slice(I, [idx, 0], [1, -1]))
alloc_weights += m*unorder[0, pos]
# allocation weighting for each row in memory
return tf.reshape(alloc_weights, [self.num_words, 1])
###################
## Step Function ##
###################
# define the step function
'''
This is the function that we call while running our session. At each iteration the
controller receives two inputs that are concatenated: the input vector and the read vector
from the previous time step. It also gives two outputs: the output vector and the interface
vector that defines its interaction with the memory at the current time step.
'''
def step_m(self, input_seq):
# reshape the input
input_vec_nn = tf.concat([input_seq, tf.reshape(self.read_vec, [1, self.num_heads*self.word_size])], 1)
# forward propagation
l1_out = tf.matmul(input_vec_nn, self.W1) + self.b1
l1_act = tf.nn.tanh(l1_out)
l2_out = tf.matmul(l1_act, self.W2) + self.b2 # feed the activated layer-1 output forward
l2_act = tf.nn.tanh(l2_out)
# output vector, the output of the DNC
self.nn_out = tf.matmul(l2_act, self.nn_out_weights)
# interface vector, how to interact with the memory
self.interface_vec = tf.matmul(l2_act, self.interface_weights)
# define partition vector
'''
We need to get lot of information from the interface vector, which will help us get various
vectors such as read vectors, write vectors, degree to which locations will be freed
'''
p_array = [0]*(self.num_heads * self.word_size) # read keys
p_array += [1]*(self.num_heads) # read strengths
p_array += [2]*(self.word_size) # write key
p_array += [3] # write strength
p_array += [4]*(self.word_size) # erase vector
p_array += [5]*(self.word_size) # write vector
p_array += [6]*(self.num_heads) # free gates
p_array += [7] # allocation gate
p_array += [8] # write gate
p_array += [9]*(self.num_heads*3) # read modes
partition = tf.constant([p_array])
# convert interface vector to set of read write vectors
(read_keys, read_str, write_key, write_str, erase_vec,
write_vec, free_gates, alloc_gate, write_gate, read_modes) = \
tf.dynamic_partition(self.interface_vec, partition, 10)
# read vectors
read_keys = tf.reshape(read_keys, [self.num_heads, self.word_size]) # R*W
read_str = 1 + tf.nn.softplus(tf.expand_dims(read_str, 0)) # 1*R
# write vectors
write_key = tf.expand_dims(write_key, 0) # 1*W
write_str = 1 + tf.nn.softplus(tf.expand_dims(write_str, 0)) # 1*1
erase_vec = tf.nn.sigmoid(tf.expand_dims(erase_vec, 0)) # 1*W
write_vec = tf.expand_dims(write_vec, 0) # 1*w
# gates
# free gates, the degree to which the locations at read head will be freed
free_gates = tf.nn.sigmoid(tf.expand_dims(free_gates, 0)) # 1*R
# the fraction of writing that is being allocated in a new location
alloc_gate = tf.nn.sigmoid(alloc_gate) # 1
# the amount of information to be written to memory
write_gate = tf.nn.sigmoid(write_gate) # 1
# read modes
# we do a softmax distribution between 3 read modes (backward, forward, lookup)
# The read heads can use gates called read modes to switch between content lookup
# using a read key and reading out locations either forwards or backwards
# in the order they were written.
read_modes = tf.nn.softmax(tf.reshape(read_modes, [3, self.num_heads])) # 3*R
## WRITING
# the memory retention vector tells by how much each location will not be freed
# by the free gates, helps in determining usage vector
retention_vec = tf.reduce_prod(1 - free_gates*self.read_weights, reduction_indices = 1)
self.usage_vec = (self.usage_vec + self.write_weights - \
self.usage_vec*self.write_weights) * retention_vec
# allocation weighting is used to provide new locations for writing
alloc_weights = self.allocation_weighting()
write_lookup_weights = self.content_lookup(write_key, write_str)
# define write weights
self.write_weights = write_gate*(alloc_gate*alloc_weights + \
(1-alloc_gate)*write_lookup_weights)
# write -> erase -> write to memory
self.mem_mat = self.mem_mat*(1 - tf.matmul(self.write_weights, erase_vec)) + \
tf.matmul(self.write_weights, write_vec)
# temporal link matrix
nnweight_vec = tf.matmul(self.write_weights, tf.ones([1, self.num_words])) # N*N
self.link_mat = self.link_mat*(1 - nnweight_vec - tf.transpose(nnweight_vec)) + \
tf.matmul(self.write_weights, self.precedence_weight, transpose_b = True)
self.link_mat *= tf.ones([self.num_words, self.num_words]) - \
tf.constant(np.identity(self.num_words, dtype = np.float32))
# update precedence weight
self.precedence_weight = (1 - tf.reduce_sum(self.write_weights, reduction_indices = 0)) * \
self.precedence_weight + self.write_weights
# 3 read modes
forw_w = read_modes[2] * tf.matmul(self.link_mat, self.read_weights)
look_w = read_modes[1] * self.content_lookup(read_keys, read_str)
back_w = read_modes[0] * tf.matmul(self.link_mat, self.read_weights, transpose_a = True)
# initialize read weights
self.read_weights = forw_w + look_w + back_w
# read vector
self.read_vec = tf.transpose(tf.matmul(self.mem_mat, self.read_weights, transpose_a = True))
# get final read output
read_vec_mut = tf.matmul(tf.reshape(self.read_vec, [1, self.num_heads*self.word_size]),
self.read_vec_out_weights)
# return the final output
return self.nn_out + read_vec_mut
# output the list of numbers (one hot encoded) by running step function
def run(self):
big_out = []
for t, seq in enumerate(tf.unstack(self.i_data, axis = 0)):
seq = tf.expand_dims(seq, 0)
y = self.step_m(seq)
big_out.append(y)
return tf.stack(big_out, axis = 0)
# -
# randomly generate input and output sequences
num_seq = 10
seq_len = 6
seq_width = 4
num_epochs = 1000
con = np.random.randint(0, seq_width,size=seq_len)
seq = np.zeros((seq_len, seq_width))
seq[np.arange(seq_len), con] = 1
end = np.asarray([[-1]*seq_width])
zer = np.zeros((seq_len, seq_width))
# final i/o data
final_i_data = np.concatenate((seq, zer), axis = 0)
final_o_data = np.concatenate((zer, seq), axis = 0)
# +
# define compute graph
graph = tf.Graph()
# running the graph
with graph.as_default():
with tf.Session() as sess:
# define the DNC
dnc = DNC(input_size = seq_width,
output_size = seq_width,
seq_len = seq_len,
num_words = 10,
word_size = 4,
num_heads = 1)
#calculate the predicted output
output = tf.squeeze(dnc.run())
#compare prediction to reality, get loss via sigmoid cross entropy
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=dnc.o_data))
#use regularizers for each layer of the controller
regularizers = (tf.nn.l2_loss(dnc.W1) + tf.nn.l2_loss(dnc.W2) +
tf.nn.l2_loss(dnc.b1) + tf.nn.l2_loss(dnc.b2))
#to help the loss convergence faster
loss += 5e-4 * regularizers
#optimize the entire thing (memory + controller) using gradient descent. dope
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
#initialize input output pairs
sess.run(tf.global_variables_initializer())
#for each iteration
for i in range(0, num_epochs+1):
#feed in each input output pair
feed_dict = {dnc.i_data: final_i_data, dnc.o_data: final_o_data}
#make predictions
l, _, predictions = sess.run([loss, optimizer, output], feed_dict=feed_dict)
if i%100==0:
print(i,l)
# print predictions
print(np.argmax(final_i_data, 1))
print(np.argmax(final_o_data, 1))
print(np.argmax(predictions, 1))
# -
| Examples/DNC_Simple_Siraj.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pathlib import Path
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
# +
# Convert categorical data to numeric and separate target feature for training data
YesNo = {'Y':1,'N':0}
train_df2 = train_df.replace({'hardship_flag':YesNo, 'debt_settlement_flag':YesNo})
HomeOwner = {'ANY':0,'RENT':1,'MORTGAGE':2,'OWN':3}
train_df3 = train_df2.replace({'home_ownership':HomeOwner})
verification_dict = {'Not Verified':0,'Source Verified':1,'Verified':1}
train_df4 = train_df3.replace({'verification_status':verification_dict})
LoanStatus = {'low_risk':1,'high_risk':0}
train_df5 = train_df4.replace({'loan_status':LoanStatus})
InitialStatus = {'w':0,'f':1}
train_df6 = train_df5.replace({'initial_list_status':InitialStatus})
ApplicationType = {'Individual':1,'Joint App':0}
train_df7 = train_df6.replace({'application_type':ApplicationType})
train_df8 = train_df7.drop(['index','pymnt_plan'],axis='columns')
file_path = Path('Resources/cleaned_2019_credit_data.csv')
train_df8.to_csv(file_path, index=False)
train_df9 = train_df8.drop(['Unnamed: 0'],axis='columns')
train_df9.head()
X_train = train_df9.drop('loan_status', axis=1)
y_train = train_df9['loan_status'].values
print(X_train.select_dtypes(include=[object]))
# +
# Convert categorical data to numeric and separate target feature for testing data
test_df2 = test_df.replace({'hardship_flag':YesNo, 'debt_settlement_flag':YesNo})
test_df3 = test_df2.replace({'home_ownership':HomeOwner})
test_df4 = test_df3.replace({'verification_status':verification_dict})
test_df5 = test_df4.replace({'loan_status':LoanStatus})
test_df6 = test_df5.replace({'initial_list_status':InitialStatus})
test_df7 = test_df6.replace({'application_type':ApplicationType})
test_df8 = test_df7.drop(['index','pymnt_plan'],axis='columns')
file_path = Path('Resources/cleaned_2020_credit_data.csv')
test_df8.to_csv(file_path, index=False)
test_df9 = test_df8.drop(['Unnamed: 0'],axis='columns')
test_df9.head()
# +
# add missing dummy variables to testing set
test_df10 = pd.get_dummies(test_df9)
X_test = test_df10.drop('loan_status', axis=1)
y_test = test_df10['loan_status'].values
# -
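# The comment above says "add missing dummy variables", but `pd.get_dummies` alone
# will not create columns for categories absent from the test set. A minimal,
# hypothetical sketch (toy frames, not the loan data) of aligning test dummies
# to the training columns with `reindex`:

```python
import pandas as pd

# Toy frames: the test data lacks category 'b'
train_d = pd.get_dummies(pd.DataFrame({'c': ['a', 'b']}))
test_d = pd.get_dummies(pd.DataFrame({'c': ['a', 'a']}))

# Reindex to the training columns; absent levels are filled with zeros
test_d = test_d.reindex(columns=train_d.columns, fill_value=0)
print(list(test_d.columns))  # → ['c_a', 'c_b']
```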
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='lbfgs',max_iter=200)
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# +
# Train a Random Forest Classifier model and print the model score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f'Training Score: {clf.score(X_train, y_train)}')
print(f'Testing Score: {clf.score(X_test, y_test)}')
# -
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # reuse the training fit; do not re-fit the scaler on test data
# Train the Logistic Regression model on the scaled data and print the model score
classifier.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf_scaled = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f'Training Score: {clf_scaled.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf_scaled.score(X_test_scaled, y_test)}')
| Credit Risk Evaluator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### This EDA is an attempt to gain inspiration from the available data and, hopefully, to reveal relationships among odors, and between odors and "comfort" (explained below).
import re
import pandas as pd
import matplotlib.pyplot as plt
from numpy import cumsum, argwhere, meshgrid
from mpl_toolkits import mplot3d
from sklearn import preprocessing
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
df = pd.read_csv('odor.csv')
df.head()
# The OSHA PEL (the `OSHAPEL` column) of a chemical is, in short, a legal upper limit in the U.S. for an employee's exposure to it. We assume that the agent we will use in simulations has a sensitive sensory system, and thus has a degree of preference proportional to this attribute.
df.loc[df['Character'].isna(), 'Character'] = '' # remove nan in Odor Character
odors_OSHA = df.loc[:, ['Character', 'OSHAPEL']].copy()
odors_OSHA.dropna(inplace=True) # consider those with OSHA limit
odors_OSHA.reset_index(inplace=True) # reindex the dataframe
# make it a complete disjunctive table
def make_CDT(df):
index=range(df.shape[0])
odor_characters = set()
for i, row in df.iterrows():
found = re.split(r' +', row['Character']) # preprocessing already removed all beginning and ending spaces
if len(found) == 0:
continue
found = set([a for a in found if a not in odor_characters])
odor_characters |= found
df = df.join(pd.DataFrame(0, index=index, columns=found))
df.loc[i, found] = 1
df.drop('Character', axis=1, inplace=True) # drop the redundant column Odor Character
# each column is divided by the percentage of presence of the corresponding label
for i in range(2, df.shape[1]):
presence = df.iloc[:, i].sum() / df.shape[0]
df.iloc[:, i] = df.iloc[:, i] / presence - 1
return df
odors_OSHA = make_CDT(odors_OSHA)
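# As a toy illustration of the transform above (hypothetical labels, not the
# odor data): one-hot encode the space-separated characters into a disjunctive
# table, then rescale each indicator column by its presence frequency, as
# make_CDT does.

```python
import pandas as pd

toy = pd.DataFrame({'Character': ['sweet', 'sweet pungent', 'pungent']})

# One-hot encode the space-separated labels into a disjunctive table
cdt = toy['Character'].str.get_dummies(sep=' ')

# Divide by the per-column presence frequency and subtract 1
presence = cdt.sum() / len(cdt)
cdt_scaled = cdt / presence - 1
print(cdt_scaled.round(2))
```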
# Standardization
OSHA = odors_OSHA[['OSHAPEL']].values
standard = preprocessing.StandardScaler()
OSHA_normalized = standard.fit_transform(OSHA)
odors_OSHA.loc[:, 'OSHAPEL'] = OSHA_normalized
# Multiple Correspondence Analysis
pca = PCA()
pca.fit(odors_OSHA.iloc[:, 2:])
plt.figure()
cumulative_var = cumsum(pca.explained_variance_ratio_)
plt.plot(cumulative_var)
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Variance (fraction)')
plt.show()
n_components = argwhere(cumulative_var >= 0.3)[0, 0]
pca = PCA(n_components)
odors_pca = pca.fit_transform(odors_OSHA.iloc[:, 2:])
n_components
fig, ax = plt.subplots(n_components, 1)
indices_ori = list(odors_OSHA.iloc[:, 0])
for i in range(n_components):
ax[i].scatter(df.iloc[indices_ori, -1], odors_pca[:, i], s=3)
ax[i].set_ylabel('PC{}'.format(i+1))
ax[i].set_xlim(0, 150)
plt.xlabel('OSHAPEL')
fig.set_size_inches((15, 25))
plt.show()
odors_dbscan = DBSCAN().fit_predict(odors_pca)
def within_cluster_relation(k, df, odors_pca, odors_dbscan, odors, n_components):
plt.figure(figsize=(15, 5))
indices = argwhere(odors_dbscan == k).T[0]
indices_ori = list(odors.iloc[indices, 0])
for i in range(n_components):
plt.scatter(df.iloc[indices_ori, -1], odors_pca[indices, i], s=5, label='PC{}'.format(i+1))
plt.xlabel('OSHAPEL')
plt.ylabel('PC')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
odors_dbscan
within_cluster_relation(-1, df, odors_pca, odors_dbscan, odors_OSHA, n_components)
# Most components are uniformly distributed in their projections onto OSHAPEL.
within_cluster_relation(0, df, odors_pca, odors_dbscan, odors_OSHA, n_components)
within_cluster_relation(1, df, odors_pca, odors_dbscan, odors_OSHA, n_components)
# Does this mean PC2, PC3, and PC9 are related?
plt.figure(figsize=(15, 5))
plt.scatter(odors_pca[:, 0], odors_pca[:, 1], s=5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.figure(figsize=(15, 5))
plt.scatter(odors_pca[:, 0], odors_pca[:, 1], s=5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.ylim(-15, 5)
plt.xlim(-1, 1)
odors = df.loc[:, ['Character']].copy()
odors = make_CDT(odors)
# Multiple Correspondence Analysis
pca = PCA()
pca.fit(odors)
plt.figure()
cumulative_var = cumsum(pca.explained_variance_ratio_)
plt.plot(cumulative_var)
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Variance (fraction)')
plt.show()
n_components = argwhere(cumulative_var >= 0.3)[0, 0]
pca = PCA(n_components)
odors_pca = pca.fit_transform(odors)
n_components
cov = pca.get_covariance()
plt.imshow(cov[:31, :31], cmap='bwr')
plt.colorbar()
odors_dbscan = DBSCAN().fit_predict(odors_pca)
plt.figure(figsize=(15, 5))
plt.scatter(odors_pca[:, 0], odors_pca[:, 1], s=5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.figure(figsize=(15, 5))
plt.scatter(odors_pca[:, 0], odors_pca[:, 1], s=5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.xlim(-6, 2)
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(odors_pca[:, 0], odors_pca[:, 1], odors_pca[:, 2])
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.set_zlabel('PC3')
plt.show()
# ##### In general, the EDA fails to reveal clear relationships among odors, possibly because the method chosen was not the most suitable, the desired data were not gathered in sufficient quantity, and I am not experienced enough in EDA. This experience nevertheless helps me decide on a proper setup of stimuli.
| experiment/odor_EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Upload to Dataverse
#
# PyEnzyme offers upload to any Dataverse installation that supports the official [EnzymeML metadatablock](https://doi.org/10.18419/darus-2105), utilizing the Dataverse API library [PyDaRUS](https://github.com/JR-1991/pyDaRUS) to map all relevant fields and perform the upload. The following steps will be done in this example:
#
# - Convert an EnzymeML spreadsheet to an `EnzymeMLDocument`
# - Upload the dataset to Dataverse
import pyenzyme as pe
# Load the EnzymeMLDocument
enzmldoc = pe.EnzymeMLDocument.fromTemplate("EnzymeML_Template_Example.xlsm")
# Upload it to Dataverse (Dataset is private)
enzmldoc.uploadToDataverse(dataverse_name="playground")
# For reasons of data quality, the resulting dataset can't be viewed on the web. To visit examples that have utilized this method, see the [EnzymeML at Work](https://darus.uni-stuttgart.de/dataverse/enzymeml_at_work) collection.
# ------
| docs/_getstarted/04_UploadToDataverse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simulating Power Spectra
#
# In this notebook we will explore how to simulate the data that we will use to investigate how different spectral parameters can influence band ratios.
#
# Simulated power spectra will be created with varying aperiodic and periodic parameters, and are created using the [FOOOF](https://github.com/fooof-tools/fooof) tool.
#
# In the first set of simulations, each set of simulated spectra will vary across a single parameter while the remaining parameters remain constant. In a secondary set of simulated power spectra, we will simulate pairs of parameters changing together.
#
# For this part of the project, this notebook demonstrates the simulations with some examples, but does not create the actual set of simulations used in the project. The full set of simulations for the project is created by the standalone scripts, available in the `scripts` folder.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from fooof.sim import *
from fooof.plts import plot_spectra
# -
# Import custom project code
import sys
sys.path.append('../bratios')
from settings import *
from paths import DATA_PATHS as dp
# +
# Settings
FREQ_RANGE = [1, 40]
LO_BAND = [4, 8]
HI_BAND = [13, 30]
# Define default parameters
EXP_DEF = [0, 1]
CF_LO_DEF = np.mean(LO_BAND)
CF_HI_DEF = np.mean(HI_BAND)
PW_DEF = 0.4
BW_DEF = 1
# Set a range of values for the band power to take
PW_START = 0
PW_END = 1
PW_INC = .1
# Set a range of values for the aperiodic exponent to take
EXP_START = .25
EXP_END = 3
EXP_INC = .25
# -
# ## Simulate power spectra with one parameter varying
#
# First we will make several power spectra with varying band power.
#
# To do so, we will continue to use the example of the theta beta ratio, and vary the power of the higher (beta) band.
# +
# The Stepper object iterates through a range of values
pw_step = Stepper(PW_START, PW_END, PW_INC)
num_spectra = len(pw_step)
# `param_iter` creates a generator that can be used to step across ranges of parameters
pw_iter = param_iter([[CF_LO_DEF, PW_DEF, BW_DEF], [CF_HI_DEF, pw_step, BW_DEF]])
# +
# Simulate power spectra
pw_fs, pw_ps, pw_syns = gen_group_power_spectra(num_spectra, FREQ_RANGE, EXP_DEF, pw_iter)
# Collect together simulated data
pw_data = [pw_fs, pw_ps, pw_syns]
# Save out data, to access from other notebooks
np.save(dp.make_file_path(dp.demo, 'PW_DEMO', 'npy'), pw_data)
# -
# Plot our series of generated power spectra, with varying high-band power
plot_spectra(pw_fs, pw_ps, log_powers=True)
# Above, we can see each of the spectra we generated plotted, with the same properties for all parameters, except for beta power.
#
# The same approach can be used to simulate data that vary only in one parameter, for each isolated spectral feature.
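# The stepping idea can also be sketched without FOOOF. Below is a pure-numpy
# stand-in (not the FOOOF simulation code; the spectral model here is an
# illustrative assumption): a fixed 1/f aperiodic component plus a Gaussian
# beta peak whose power steps through a range.

```python
import numpy as np

freqs = np.linspace(1, 40, 200)

def sim_spectrum(exponent, cf, pw, bw):
    """Toy 1/f spectrum with one Gaussian peak, applied in log-power space."""
    log_aperiodic = -exponent * np.log10(freqs)
    log_peak = pw * np.exp(-(freqs - cf) ** 2 / (2 * bw ** 2))
    return 10 ** (log_aperiodic + log_peak)

# Step beta power from 0 to 1 while everything else stays fixed
spectra = [sim_spectrum(1.0, 21.5, pw, 1.0) for pw in np.arange(0, 1.01, 0.1)]
```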
# ## Simulate power spectra with two parameters varying
#
# In this section we will explore generating data in which two parameters vary simultaneously.
#
# Specifically, we will simulate the case in which the aperiodic exponent varies while power for a higher band oscillation also varies.
#
# The total number of trials will be: `(n_pw_changes) * (n_exp_changes)`.
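# As a quick arithmetic check of that count under the settings above (assuming
# each stepper includes both endpoints of its range):

```python
import numpy as np

n_pw_changes = len(np.arange(0, 1 + 1e-9, 0.1))       # PW_START..PW_END by PW_INC
n_exp_changes = len(np.arange(0.25, 3 + 1e-9, 0.25))  # EXP_START..EXP_END by EXP_INC
print(n_pw_changes, n_exp_changes, n_pw_changes * n_exp_changes)  # → 11 12 132
```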
# +
data = []
exp_step = Stepper(EXP_START, EXP_END, EXP_INC)
for exp in exp_step:
# High band sweeps through the power range
pw_step = Stepper(PW_START, PW_END, PW_INC)
pw_iter = param_iter([[CF_LO_DEF, PW_DEF, BW_DEF],
[CF_HI_DEF, pw_step, BW_DEF]])
# Generates data
pw_apc_fs, pw_apc_ps, pw_apc_syns = gen_group_power_spectra(
len(pw_step), FREQ_RANGE, [0, exp], pw_iter)
# Collect together all simulated data
data.append(np.array([exp, pw_apc_fs, pw_apc_ps], dtype=object))
# Save out data, to access from other notebooks
np.save(dp.make_file_path(dp.demo, 'EXP_PW_DEMO', 'npy'), data)
# -
# Extract some example power spectra, sub-sampling ones that vary in both exp & power
# Note: this is just a shortcut to step across the diagonal of the matrix of simulated spectra
plot_psds = [data[ii][2][ii, :] for ii in range(min(len(exp_step), len(pw_step)))]
# Plot a selection of power spectra in the paired parameter simulations
plot_spectra(pw_apc_fs, plot_psds, log_powers=True)
# In the plot above, we can see a selection of the data we just simulated, selecting a group of power spectra that vary across both exponent and beta power.
#
# In the next notebook we will calculate band ratios and see how changing these parameters affects ratio measures.
# ### Simulating the full set of data
#
# Here we just simulated example data, to show how the simulations work.
#
# The full set of simulations for this project are re-created with scripts, available in the `scripts` folder.
#
# To simulate the full set of single-parameter simulations for this project, run this script:
#
# `python gen_single_param_sims.py`
#
# To simulate the full set of interacting-parameter simulations for this project, run this script:
#
# `python gen_interacting_param_sims.py`
#
# These scripts will automatically save all the regenerated data into the `data` folder.
# Check all the available data files for the single parameter simulations
dp.list_files('sims_single')
# Check all the available data files for the interacting parameter simulations
dp.list_files('sims_interacting')
| notebooks/3-Sims-Generation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text output from callbacks
import ipywidgets
# ## Plain `print`
#
# (doesn't work)
# +
def callback1(w):
print('callback1')
button1 = ipywidgets.Button(description='Run')
button1.on_click(callback1)
button1
# -
# ## Output widget
# +
def callback2(w):
with output2:
print('callback2')
output2 = ipywidgets.Output()
button2 = ipywidgets.Button(description='Run')
button2.on_click(callback2)
ipywidgets.VBox(children=[button2, output2])
# -
# ## HTML widget
# +
def callback3(w):
html3.value = 'callback3'
html3 = ipywidgets.HTML()
button3 = ipywidgets.Button(description='Run')
button3.on_click(callback3)
ipywidgets.VBox(children=[button3, html3])
# -
| voila/callback_text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: traja
# language: python
# name: traja
# ---
# # Analyzing Spatial Trajectories with Traja
# Full documentation is available at [traja.readthedocs.io](http://traja.readthedocs.io).
# !pip install traja
import traja
# +
# Create sample random walk
df = traja.generate()
# Visualize x and y values with built-in pandas methods
df.plot()
# -
# ## Plot Trajectory
# ### Plot trajectory with traja accessor method (`.traja.plot()`)
fig = df.traja.plot()
# ### Visualize distribution of angles and turn-angles
df.traja.calc_angle().hist() # w.r.t x-axis
df.traja.calc_turn_angle().hist() # deviation from straight ahead
# ### Visualize flow between grid units
for kind in ['stream', 'quiver', 'contourf']:
fig = df.traja.plot_flow(kind=kind)
# ### Visualize distribution of turn angles over time
df.traja.calc_turn_angle().plot()
for log in [True, False]:
for bins in [8, 32]:
print(f"Bins: {bins}")
df.traja.trip_grid(bins=bins, log=log)
# ### Plot polar bar chart showing turn preference
traja.plotting.polar_bar(df)
# Show non-overlapping histogram
traja.plotting.polar_bar(df, overlap=False)
# ## Resample Trajectory
# Resample to arbitrary step length (here, 20 meters)
fig = df.traja.rediscretize(R=20).traja.plot()
# Resample to arbitrary time (here, 1 second)
fig = df.traja.resample_time(step_time='1s').traja.plot()
| demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %pylab inline
# %config InlineBackend.figure_format = 'retina'
import daft
import scipy.stats as ss
import seaborn as sns
sns.set_style('ticks')
sns.set_context('notebook')
sns.set_palette('colorblind')
# -
# Let's set up a simple hierarchical problem and then show how the various approaches to reweighting work for it.
#
# Our simple problem is a set of (noisy) measurements of a quantity, $x$, whose population is Normal, with unknown mean:
# $$
# x \sim N\left( \mu, 1 \right),
# $$
# and we observe $x_\mathrm{obs}$ unbiasedly with uncertainty $1$:
# $$
# x_\mathrm{obs} \sim N \left( x , 1 \right).
# $$
#
# We have a set of observations,
# $$
# D \equiv \left\{ x_\mathrm{obs}^{(i)} \mid i = 1, \ldots, N \right\}
# $$
# and we want to infer the values of $X = \left\{ x^{(i)} \mid i = 1, \ldots, N \right\}$ and $\mu$.
#
# The full posterior is
# $$
# \pi \left( X, \mu \mid D \right) \propto \left[ \prod_{i = 1}^{N} N\left( x_\mathrm{obs} \mid x, 1\right) N\left( x \mid \mu, 1 \right) \right] p\left( \mu \right).
# $$
# From now on we will assume a flat prior on $\mu$ so that $p\left( \mu \right)$ is a constant.
#
# If we just want to infer $\mu$, we can integrate over $X$ to derive
# $$
# p\left( \mu \mid D \right) = \left[ \prod_{i=1}^{N} N\left( x_\mathrm{obs} \mid \mu, \sqrt{2} \right) \right] p\left( \mu \right) \propto N\left( \mu \mid \left\langle x_\mathrm{obs} \right\rangle, \sqrt{\frac{2}{N}} \right)
# $$
# This exploits the fact that the evidence for an observation of $x_\mathrm{obs}$ at fixed $\mu$ is
# $$
# p\left( d^{(i)} \mid \mu \right) \equiv \int \mathrm{d} x \, N\left( x_\mathrm{obs}^{(i)} \mid x, 1 \right) N\left( x \mid \mu, 1 \right) = N\left( x_\mathrm{obs}^{(i)} \mid \mu, \sqrt{2} \right).
# $$
#
# The marginal distribution of $x^{(i)}$ is
# $$
# p\left( x^{(i)} \mid D \right) = \int \mathrm{d} \mu \, \mathrm{d} \left( X \backslash \left\{ x^{(i)} \right\} \right) \, p\left( X, \mu \mid D \right) \propto N\left( x_\mathrm{obs}^{(i)} \mid x^{(i)}, 1 \right) N\left(x^{(i)} \mid \left\langle x_\mathrm{obs}^{\backslash (i)} \right\rangle, \sqrt{1 + \frac{2}{N-1}} \right)
# $$
#
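# As a numerical sanity check on the evidence formula above (a quick scipy
# sketch, not part of the original derivation): integrating the product of the
# two unit-width normals over $x$ should give a normal with width $\sqrt{2}$.

```python
import numpy as np
from scipy import integrate
import scipy.stats as ss

mu, x_obs_val = 0.7, 1.3  # arbitrary test values

# Left side: integral over x of N(x_obs | x, 1) * N(x | mu, 1)
lhs, _ = integrate.quad(
    lambda x: ss.norm(loc=x, scale=1).pdf(x_obs_val) * ss.norm(loc=mu, scale=1).pdf(x),
    -np.inf, np.inf)

# Right side: N(x_obs | mu, sqrt(2))
rhs = ss.norm(loc=mu, scale=np.sqrt(2)).pdf(x_obs_val)

print(np.isclose(lhs, rhs))  # → True
```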
# A graphical description of our model is:
# +
column_width = 433.62 / 72 # inches
with rc_context(rc={'figure.figsize': (column_width,column_width),
'text.usetex': True}):
pgm = daft.PGM()
pgm.add_node('lambda', r'$\lambda$', 0.5, 2.5)
pgm.add_node('theta', r'$\theta_i$', 0.5, 1.5)
pgm.add_node('d', r'$d_i$', 0.5, 0.5, observed=True)
pgm.add_plate([0,-0.25, 1.25, 2.25], label=r'$i = 1, \ldots, N$')
pgm.add_edge('lambda', 'theta')
pgm.add_edge('theta', 'd')
pgm.render()
pgm.savefig('../note/pgm.pdf')
# -
# ## The Dataset
np.random.randint(1<<32)
# +
mu_true = 0
N = 32
Nsamp = 1024
rstate = np.random.get_state()
try:
np.random.seed(1443652990)
x_true = np.random.normal(loc=mu_true, scale=1, size=N)
x_obs = np.random.normal(loc=x_true, scale=1, size=N)
x_likelihood = np.random.normal(loc=x_obs, scale=1, size=(Nsamp, N))
finally:
np.random.set_state(rstate)
# +
sns.distplot(x_true, label=r'$x$')
sns.distplot(x_obs, label=r'$x_\mathrm{obs}$')
legend(loc='best')
xlabel(r'$x$')
ylabel(r'$p\left( x \right)$')
# -
# ## "Will's" Reweighting
# In Will's suggested re-weighting scheme, we first draw samples of $\mu$ from the marginal posterior for $\mu$. In more complicated problems we need to do this by MCMC sampling, but here we can draw directly:
# +
mu_samples = np.random.normal(loc=mean(x_obs), scale=sqrt(2/N), size=Nsamp)
sns.distplot(mu_samples)
axvline(0)
xlabel(r'$\mu$')
ylabel(r'$p\left( \mu \right)$')
# -
# Now, for each sample in $\mu$, we draw a sample for each of the $x$s from the conditional distribution
# $$
# x^{(i)} \sim N\left( x_\mathrm{obs}^{(i)} \mid x^{(i)}, 1 \right) N\left( x^{(i)} \mid \mu, 1 \right),
# $$
# which is equivalent to re-weighting the samples in the likelihood by the population distribution at that fixed value of $\mu$, and drawing a random one.
x_samples_will = []
for mu in mu_samples:
wts = ss.norm(loc=mu, scale=1).pdf(x_likelihood)
wts /= np.sum(wts, axis=0)
x = []
for j in range(wts.shape[1]):
x.append(np.random.choice(x_likelihood[:,j], p=wts[:,j]))
x_samples_will.append(x)
x_samples_will = array(x_samples_will)
# Here is the distribution of $x^{(0)}$, and compare to the theoretical distribution:
# +
def x0_theoretical(xobs):
mu_xminusi = mean(xobs[1:])
xs = linspace(-4, 4, 2048)
ps = ss.norm(loc=xs, scale=1).pdf(xobs[0])*ss.norm(loc=mu_xminusi, scale=sqrt(1 + 2/(N-1))).pdf(xs)
ps /= trapz(ps, xs)
return xs, ps
sns.distplot(x_samples_will[:,0])
axvline(x_obs[0])
xs, ps = x0_theoretical(x_obs)
plot(xs, ps, color='k', label='Theoretical')
xlabel(r'$x^{(0)}$')
ylabel(r'$p\left( x^{(0)} \right)$')
# -
# ## Tom's Method
# In Tom's method, in contrast, to draw samples for $x^{(i)}$, we compute a modified PPD:
# $$
# \tilde{p}\left( x^{(i)} \mid D \right) \equiv \int \mathrm{d} \mu \, p\left( x^{(i)} \mid \mu \right) \frac{p\left( \mu \mid D \right)}{p\left( d^{(i)} \mid \mu \right)}
# $$
# and use it to re-weight samples from the likelihood function.
# +
mu = mu_samples[:,newaxis]
modified_ppd_wts = ss.norm(loc=mean(x_obs), scale=sqrt(2/N)).pdf(mu)*ss.norm(loc=mu, scale=1).pdf(x_likelihood)/ss.norm(loc=mu, scale=sqrt(2)).pdf(x_obs)  # divide by each observation's own evidence p(d^(i) | mu)
modified_ppd_wts /= np.sum(modified_ppd_wts, axis=0)
x_samples_tom = []
for j in range(x_likelihood.shape[1]):
x_samples_tom.append(np.random.choice(x_likelihood[:,j], p=modified_ppd_wts[:,j], size=Nsamp, replace=True))
x_samples_tom = array(x_samples_tom).T
# +
sns.distplot(x_samples_tom[:,0])
axvline(x_obs[0])
xs, ps = x0_theoretical(x_obs)
plot(xs, ps, color='k', label='Theoretical')
xlabel(r'$x^{(0)}$')
ylabel(r'$p\left( x^{(0)} \right)$')
# -
# ## Let's Check if They Come from the Same Distribution
ss.ks_2samp(x_samples_will[:,0], x_samples_tom[:,0])
# Looks pretty good.
| notebooks/ResampleExample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
# -
# this placeholder will contain our input digits, as flat vectors
img = tf.placeholder(tf.float32, shape=(None, 784))
# +
from keras.layers import Dense
# Keras layers can be called on TensorFlow tensors:
x = Dense(128, activation='relu')(img) # fully-connected layer with 128 units and ReLU activation
x = Dense(128, activation='relu')(x)
preds = Dense(10, activation='softmax')(x) # output layer with 10 units and a softmax activation
# +
labels = tf.placeholder(tf.float32, shape=(None, 10))
from keras.objectives import categorical_crossentropy
loss = tf.reduce_mean(categorical_crossentropy(labels, preds))
# +
from tensorflow.examples.tutorials.mnist import input_data
mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
with sess.as_default():
for i in range(100):
batch = mnist_data.train.next_batch(50)
train_step.run(feed_dict={img: batch[0],
labels: batch[1]})
# -
| deeplearning2/nbs/Keras-Tensorflow-Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here are several helpful packages to load
import numpy as np
np.random.seed(42)
import pandas as pd
import string
import re
import gensim
from collections import Counter
import pickle
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score
from sklearn import metrics
from keras.models import Model
from keras.layers import Input, Dense, Dropout, Conv1D, Embedding, SpatialDropout1D, concatenate
from keras.layers import GRU, LSTM,Bidirectional, GlobalAveragePooling1D, GlobalMaxPooling1D
from keras.layers import CuDNNLSTM, CuDNNGRU
from keras.preprocessing import text, sequence
from keras.callbacks import Callback
from keras import optimizers
from keras.layers import Lambda
from keras.callbacks import *
import warnings
warnings.filterwarnings('ignore')
from nltk.corpus import stopwords
import os
os.environ['OMP_NUM_THREADS'] = '4'
import gc
from keras import backend as K
from sklearn.model_selection import KFold
from unidecode import unidecode
import time
eng_stopwords = set(stopwords.words("english"))
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
# 1. preprocessing
train = pd.read_csv("../input/train.csv")
test = pd.read_csv("../input/test.csv")
print("Train shape : ",train.shape)
print("Test shape : ",test.shape)
# + _uuid="5b42925bff7c0275450d2cb4e2fcbfeef00f5252"
# 1-a. Count non ascii characters
special_character = re.compile(r'[A-Za-z0-9\.\-\?\!\,\#\@\% \'\/\"]',re.IGNORECASE)
train['spl_chars'] = train['question_text'].apply(lambda x: len(special_character.sub('', str(x))))
test['spl_chars'] = test['question_text'].apply(lambda x: len(special_character.sub('', str(x))))
# + _uuid="1b2aa6f947fc1850b8644e461090a83e59337c6c"
train['num_exclamation_marks'] = train['question_text'].apply(lambda comment: comment.count('!'))
#train['num_question_marks'] = train['question_text'].apply(lambda comment: comment.count('?'))
train['num_punctuation'] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in '.,;:'))
train['num_symbols'] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in '*&$%'))
test['num_exclamation_marks'] = test['question_text'].apply(lambda comment: comment.count('!'))
#test['num_question_marks'] = test['question_text'].apply(lambda comment: comment.count('?'))
test['num_punctuation'] = test['question_text'].apply(lambda comment: sum(comment.count(w) for w in '.,;:'))
test['num_symbols'] = test['question_text'].apply(lambda comment: sum(comment.count(w) for w in '*&$%'))
#train['num_smilies'] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in (':-)', ':)', ';-)', ';)')))
#train['num_sad'] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in (':-<', ':()', ';-()', ';(')))
#test['num_smilies'] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in (':-)', ':)', ';-)', ';)')))
#test['num_sad'] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in (':-<', ':()', ';-()', ';(')))
# + _uuid="b0e81d9671b97501c884a4d260bbb02008cb5587"
# List Of Bad Words by Google-Profanity Words
bad_words = ['cockknocker', 'n1gger', 'ing', 'fukker', 'nympho', 'fcuking', 'gook', 'freex', 'arschloch', 'fistfucked', 'chinc', 'raunch', 'fellatio', 'splooge', 'nutsack', 'lmfao', 'wigger', 'bastard', 'asses', 'fistfuckings', 'blue', 'waffle', 'beeyotch', 'pissin', 'dominatrix', 'fisting', 'vullva', 'paki', 'cyberfucker', 'chuj', 'penuus', 'masturbate', 'b00b*', 'fuks', 'sucked', 'fuckingshitmotherfucker', 'feces', 'panty', 'coital', 'wh00r.', 'whore', 'condom', 'hells', 'foreskin', 'wanker', 'hoer', 'sh1tz', 'shittings', 'wtf', 'recktum', 'dick*', 'pr0n', 'pasty', 'spik', 'phukked', 'assfuck', 'xxx', 'nigger*', 'ugly', 's_h_i_t', 'mamhoon', 'pornos', 'masterbates', 'mothafucks', 'Mother', 'Fukkah', 'chink', 'pussy', 'palace', 'azazel', 'fistfucking', 'ass-fucker', 'shag', 'chincs', 'duche', 'orgies', 'vag1na', 'molest', 'bollock', 'a-hole', 'seduce', 'Cock*', 'dog-fucker', 'shitz', 'Mother', 'Fucker', 'penial', 'biatch', 'junky', 'orifice', '5hit', 'kunilingus', 'cuntbag', 'hump', 'butt', 'fuck', 'titwank', 'schaffer', 'cracker', 'f.u.c.k', 'breasts', 'd1ld0', 'polac', 'boobs', 'ritard', 'fuckup', 'rape', 'hard', 'on', 'skanks', 'coksucka', 'cl1t', 'herpy', 's.o.b.', 'Motha', 'Fucker', 'penus', 'Fukker', 'p.u.s.s.y.', 'faggitt', 'b!tch', 'doosh', 'titty', 'pr1k', 'r-tard', 'gigolo', 'perse', 'lezzies', 'bollock*', 'pedophiliac', 'Ass', 'Monkey', 'mothafucker', 'amcik', 'b*tch', 'beaner', 'masterbat*', 'fucka', 'phuk', 'menses', 'pedophile', 'climax', 'cocksucking', 'fingerfucked', 'asswhole', 'basterdz', 'cahone', 'ahole', 'dickflipper', 'diligaf', 'Lesbian', 'sperm', 'pisser', 'dykes', 'Skanky', 'puuker', 'gtfo', 'orgasim', 'd0ng', 'testicle*', 'pen1s', 'piss-off', '@$$', 'fuck', 'trophy', 'arse*', 'fag', 'organ', 'potty', 'queerz', 'fannybandit', 'muthafuckaz', 'booger', 'pussypounder', 'titt', 'fuckoff', 'bootee', 'schlong', 'spunk', 'rumprammer', 'weed', 'bi7ch', 'pusse', 'blow', 'job', 'kusi*', 'assbanged', 'dumbass', 'kunts', 'chraa', 'cock', 'sucker', 
'l3i+ch', 'cabron', 'arrse', 'cnut', 'how', 'to', 'murdep', 'fcuk', 'phuked', 'gang-bang', 'kuksuger', 'mothafuckers', 'ghey', 'clit', 'licker', 'feg', 'ma5terbate', 'd0uche', 'pcp', 'ejaculate', 'nigur', 'clits', 'd0uch3', 'b00bs', 'fucked', 'assbang', 'mutha', 'goddamned', 'cazzo', 'lmao', 'godamn', 'kill', 'coon', 'penis-breath', 'kyke', 'heshe', 'homo', 'tawdry', 'pissing', 'cumshot', 'motherfucker', 'menstruation', 'n1gr', 'rectus', 'oral', 'twats', 'scrot', 'God', 'damn', 'jerk', 'nigga', 'motherfuckin', 'kawk', 'homey', 'hooters', 'rump', 'dickheads', 'scrud', 'fist', 'fuck', 'carpet', 'muncher', 'cipa', 'cocaine', 'fanyy', 'frigga', 'massa', '5h1t', 'brassiere', 'inbred', 'spooge', 'shitface', 'tush', 'Fuken', 'boiolas', 'fuckass', 'wop*', 'cuntlick', 'fucker', 'bodily', 'bullshits', 'hom0', 'sumofabiatch', 'jackass', 'dilld0', 'puuke', 'cums', 'pakie', 'cock-sucker', 'pubic', 'pron', 'puta', 'penas', 'weiner', 'vaj1na', 'mthrfucker', 'souse', 'loin', 'clitoris', 'f.ck', 'dickface', 'rectal', 'whored', 'bookie', 'chota', 'bags', 'sh!t', 'pornography', 'spick', 'seamen', 'Phukker', 'beef', 'curtain', 'eat', 'hair', 'pie', 'mother', 'fucker', 'faigt', 'yeasty', 'Clit', 'kraut', 'CockSucker', 'Ekrem*', 'screwing', 'scrote', 'fubar', 'knob', 'end', 'sleazy', 'dickwhipper', 'ass', 'fuck', 'fellate', 'lesbos', 'nobjokey', 'dogging', 'fuck', 'hole', 'hymen', 'damn', 'dego', 'sphencter', 'queef*', 'gaylord', 'va1jina', 'a55', 'fuck', 'douchebag', 'blowjob', 'mibun', 'fucking', 'dago', 'heroin', 'tw4t', 'raper', 'muff', 'fitt*', 'wetback*', 'mo-fo', 'fuk*', 'klootzak', 'sux', 'damnit', 'pimmel', 'assh0lez', 'cntz', 'fux', 'gonads', 'bullshit', 'nigg3r', 'fack', 'weewee', 'shi+', 'shithead', 'pecker', 'Shytty', 'wh0re', 'a2m', 'kkk', 'penetration', 'kike', 'naked', 'kooch', 'ejaculation', 'bang', 'hoare', 'jap', 'foad', 'queef', 'buttwipe', 'Shity', 'dildo', 'dickripper', 'crackwhore', 'beaver', 'kum', 'sh!+', 'qweers', 'cocksuka', 'sexy', 'masterbating', 'peeenus', 
'gays', 'cocksucks', 'b17ch', 'nad', 'j3rk0ff', 'fannyflaps', 'God-damned', 'masterbate', 'erotic', 'sadism', 'turd', 'flipping', 'the', 'bird', 'schizo', 'whiz', 'fagg1t', 'cop', 'some', 'wood', 'banger', 'Shyty', 'f', 'you', 'scag', 'soused', 'scank', 'clitorus', 'kumming', 'quim', 'penis', 'bestial', 'bimbo', 'gfy', 'spiks', 'shitings', 'phuking', 'paddy', 'mulkku', 'anal', 'leakage', 'bestiality', 'smegma', 'bull', 'shit', 'pillu*', 'schmuck', 'cuntsicle', 'fistfucker', 'shitdick', 'dirsa', 'm0f0']
print(">> Words in bad_words list:", len(bad_words))
# + _uuid="6bf31f248051c70cb837a71bc7392853d5834c66"
# Note: str.count matches substrings, so short entries like 'how' or 'to'
# will also match inside other words and inflate the count.
train["badwordcount"] = train['question_text'].apply(lambda comment: sum(comment.count(w) for w in bad_words))
test["badwordcount"] = test['question_text'].apply(lambda comment: sum(comment.count(w) for w in bad_words))
# + _uuid="7ee6cecdafe05905fb1a87b0d0f0d7a0a3c3b2bf"
import string
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.tokenize import TweetTokenizer
def tag_part_of_speech(text):
text_splited = text.split(' ')
text_splited = [''.join(c for c in s if c not in string.punctuation) for s in text_splited]
text_splited = [s for s in text_splited if s]
pos_list = pos_tag(text_splited)
noun_count = len([w for w in pos_list if w[1] in ('NN','NNP','NNPS','NNS')])
adjective_count = len([w for w in pos_list if w[1] in ('JJ','JJR','JJS')])
verb_count = len([w for w in pos_list if w[1] in ('VB','VBD','VBG','VBN','VBP','VBZ')])
return[noun_count, adjective_count, verb_count]
for df in ([train, test]):
df['nouns'], df['adjectives'], df['verbs'] = zip(*df['question_text'].apply(
lambda comment: tag_part_of_speech(comment)))
# + _uuid="e173c94e68a61528291cc9473cba37f30847bef8"
train.loc[train.target==1]['nouns'].mean()
# + _uuid="ab6d45bc22305cfe00ec7bdceb8a9764f67bd346"
# 2. remove numbers
def clean_numbers(x):
x = re.sub('[0-9]{5,}', '#####', x)
x = re.sub('[0-9]{4}', '####', x)
x = re.sub('[0-9]{3}', '###', x)
x = re.sub('[0-9]{2}', '##', x)
return x
train['clean_text'] = train['question_text'].apply(lambda x: clean_numbers(str(x)))
test['clean_text'] = test['question_text'].apply(lambda x: clean_numbers(str(x)))
# + _uuid="91b0a8237489bbc87efb394ce8a83dc9988831fd"
#train['clean_text']
# + _uuid="848325747b9d511eb48b382a2c565324908e5a83"
#3. remove non-ascii
special_character_removal = re.compile(r'[^A-Za-z\.\-\?\!\,\#\@\% ]',re.IGNORECASE)
def clean_text(x):
x_ascii = unidecode(x)
x_clean = special_character_removal.sub('',x_ascii)
return x_clean
train['clean_text'] = train['clean_text'].apply(lambda x: clean_text(str(x)))
test['clean_text'] = test['clean_text'].apply(lambda x: clean_text(str(x)))
# + _uuid="5c21bc47b5bd5476b7632c43f6dd4546798508b4"
X_train = train['clean_text'].fillna("something").values
y_train = train.target.values
X_test = test['clean_text'].fillna("something").values
# + _uuid="478cbe489de98a6366b82e65ebbbf83264014365"
#X_train
# + _uuid="0a1ddc438df214446749d198ae34fc910ce9e7d9"
def add_features(df):
    df['comment_text'] = df['clean_text'].fillna('something').apply(lambda x: str(x))
    df['total_length'] = df['comment_text'].apply(len)
    df['capitals'] = df['comment_text'].apply(lambda comment: sum(1 for c in comment if c.isupper()))
    df['caps_vs_length'] = df['capitals'] / df['total_length']
    df['num_words'] = df.comment_text.str.count(r'\S+')
    df['num_unique_words'] = df['comment_text'].apply(lambda comment: len(set(w for w in comment.split())))
    df['words_vs_unique'] = df['num_unique_words'] / df['num_words']
    # 'spl_chars' was referenced below but never created; count punctuation
    # characters so the ratio feature is defined.
    df['spl_chars'] = df['comment_text'].apply(lambda comment: sum(1 for c in comment if c in string.punctuation))
    df['spl_chars_vs_len'] = df['spl_chars'] / df['total_length']
    # Guard against infinities from zero-length texts; use df (not the global
    # train) so the function also works correctly on test.
    df.loc[np.isinf(df.caps_vs_length), 'caps_vs_length'] = 0
    df.loc[np.isinf(df.words_vs_unique), 'words_vs_unique'] = 0
    df.loc[np.isinf(df.spl_chars_vs_len), 'spl_chars_vs_len'] = 0
    return df
train = add_features(train)
test = add_features(test)
# + _uuid="30b2b9829e2c92852aa32f4f00b6f72b351e6440"
# + _uuid="d8e0ed4c1f9e2e3ed785155b1c4fd7609e66eed8"
features = train[['caps_vs_length', 'words_vs_unique', 'spl_chars_vs_len']].fillna(0)
test_features = test[['caps_vs_length', 'words_vs_unique', 'spl_chars_vs_len']].fillna(0)
# + _uuid="de760831762e47616e9f29d89096d1149506b0f9"
#test[test.num_words>=50].count()
# + _uuid="df06403bb00b1af4fe06bca7dc490e8890988d80"
ss = StandardScaler()
ss.fit(np.vstack((features, test_features)))
features = ss.transform(features)
test_features = ss.transform(test_features)
# + _uuid="da25bfaddf3ace9b4319fa5c226813bd429aa4f2"
max_features = 180000
maxlen = 50
tokenizer = text.Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(X_train) + list(X_test))
X_train_sequence = tokenizer.texts_to_sequences(X_train)
X_test_sequence = tokenizer.texts_to_sequences(X_test)
x_train = sequence.pad_sequences(X_train_sequence, maxlen=maxlen)
x_test = sequence.pad_sequences(X_test_sequence, maxlen=maxlen)
print(len(tokenizer.word_index))
# + _uuid="fcb923a725451e2e03a6ae5b04d5be496f3c78a0"
# Load the FastText Web Crawl vectors
EMBEDDING_FILE_FASTTEXT='../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec'
EMBEDDING_FILE_TWITTER='../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
def get_coefs(word, *arr): return word, np.asarray(arr, dtype='float32')
# note the swap: `embeddings_index_tw` actually loads the FastText file and
# `embeddings_index_ft` the GloVe file, as GloVe has better support for this text
embeddings_index_tw = dict(get_coefs(*o.rstrip().rsplit(' ')) for o in open(EMBEDDING_FILE_FASTTEXT,encoding='utf-8'))
embeddings_index_ft = dict(get_coefs(*o.strip().split(' ')) for o in open(EMBEDDING_FILE_TWITTER,encoding='utf-8'))
spell_model = gensim.models.KeyedVectors.load_word2vec_format(EMBEDDING_FILE_FASTTEXT)
# + _uuid="5e8f9f2bb8a57148e11ed3661d0a834d70a83556"
# This code is based on: Spellchecker using Word2vec by CPMP
# https://www.kaggle.com/cpmpml/spell-checker-using-word2vec
words = spell_model.index2word
w_rank = {}
for i,word in enumerate(words):
w_rank[word] = i
WORDS = w_rank
# Use fast text as vocabulary
def words(text): return re.findall(r'\w+', text.lower())
def P(word):
"Probability of `word`."
# use inverse of rank as proxy
# returns 0 if the word isn't in the dictionary
return - WORDS.get(word, 0)
def correction(word):
"Most probable spelling correction for word."
return max(candidates(word), key=P)
def candidates(word):
"Generate possible spelling corrections for word."
return (known([word]) or [word])# or known(edits1(word)) or known(edits2(word))
def known(words):
"The subset of `words` that appear in the dictionary of WORDS."
return set(w for w in words if w in WORDS)
def edits1(word):
"All edits that are one edit away from `word`."
letters = 'abcdefghijklmnopqrstuvwxyz'
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [L + R[1:] for L, R in splits if R]
transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
inserts = [L + c + R for L, R in splits for c in letters]
return set(deletes + transposes + replaces + inserts)
def edits2(word):
"All edits that are two edits away from `word`."
return (e2 for e1 in edits1(word) for e2 in edits1(e1))
def singlify(word):
return "".join([letter for i,letter in enumerate(word) if i == 0 or letter != word[i-1]])
# + _uuid="1a5245ab137d836d57f56eb0f71392b28f049da7"
WORDS
# + _uuid="d9063c74e4b0dcae6bc499b4de88c3dc7ce99e84"
word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.zeros((nb_words,601))
something_tw = embeddings_index_tw.get("something")
something_ft = embeddings_index_ft.get("something")
something = np.zeros((601,))
something[:300,] = something_ft
something[300:600,] = something_tw
something[600,] = 0
# + _uuid="14bf0e68627109d96ded3a11c511d9c829a5c5c2"
def all_caps(word):
return len(word) > 1 and word.isupper()
def embed_word(embedding_matrix,i,word):
embedding_vector_ft = embeddings_index_ft.get(word)
if embedding_vector_ft is not None:
if all_caps(word):
last_value = np.array([1])
else:
last_value = np.array([0])
embedding_matrix[i,:300] = embedding_vector_ft
embedding_matrix[i,600] = last_value
embedding_vector_tw = embeddings_index_tw.get(word)
if embedding_vector_tw is not None:
embedding_matrix[i,300:600] = embedding_vector_tw
# The GloVe vector (held in `embeddings_index_ft`) is used by itself if there is no FastText vector, but not the other way around.
for word, i in word_index.items():
if i >= max_features: continue
if embeddings_index_ft.get(word) is not None:
embed_word(embedding_matrix,i,word)
else:
# change to > 20 for better score.
if len(word) > 26:
embedding_matrix[i] = something
#print(word)
else:
word2 = correction(word)
#print(word2)
if embeddings_index_ft.get(word2) is not None:
embed_word(embedding_matrix,i,word2)
else:
word2 = correction(singlify(word))
if embeddings_index_ft.get(word2) is not None:
embed_word(embedding_matrix,i,word2)
else:
embedding_matrix[i] = something
# + _uuid="746f34d01074e5536b5b85b74084b1a378fffce0"
embedding_matrix.shape
# + _uuid="0d20c805d3c19e56a2418f2e1c90ebcd97999cb5"
del(embeddings_index_tw, embeddings_index_ft); gc.collect()
# + _uuid="caa275f237ec4a5eceb3f6329e0c40ec6bce78a6"
class RocAucEvaluation(Callback):
    def __init__(self, validation_data=(), interval=1):
        super(RocAucEvaluation, self).__init__()  # was super(Callback, ...)
        self.interval = interval
        self.X_val, self.y_val = validation_data
        self.max_score = 0
        self.not_better_count = 0
    def on_epoch_end(self, epoch, logs={}):
        if epoch % self.interval == 0:
            y_pred = self.model.predict(self.X_val, verbose=1)
            score = roc_auc_score(self.y_val, y_pred)
            print("\n ROC-AUC - epoch: %d - score: %.6f \n" % (epoch + 1, score))
            if score > self.max_score:
                print("*** New High Score (previous: %.6f) \n" % self.max_score)
                self.model.save_weights("best_weights.h5")  # was the global `model`
                self.max_score = score
                self.not_better_count = 0
            else:
                self.not_better_count += 1
                if self.not_better_count > 3:
                    print("Epoch %05d: early stopping, high score = %.6f" % (epoch, self.max_score))
                    self.model.stop_training = True
# + _uuid="9ff70836e3f36bcbd8a1fe096ad275c8af1e5ede"
def get_model(features,clipvalue=1.,num_filters=40,dropout=0.5,embed_size=601):
features_input = Input(shape=(features.shape[1],))
inp = Input(shape=(maxlen, ))
# Layer 1: concatenated fasttext and glove twitter embeddings.
x = Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False)(inp)
# Uncomment for best result
# Layer 2: SpatialDropout1D(0.5)
x = SpatialDropout1D(dropout)(x)
# Uncomment for best result
# Layer 3: Bidirectional CuDNNLSTM
x = Bidirectional(CuDNNLSTM(num_filters, return_sequences=True))(x)
# Layer 4: Bidirectional CuDNNGRU
x, x_h, x_c = Bidirectional(CuDNNGRU(num_filters, return_sequences=True, return_state = True))(x)
# Layer 5: A concatenation of the last state, maximum pool, average pool and
# two features: "Unique words rate" and "Rate of all-caps words"
avg_pool = GlobalAveragePooling1D()(x)
max_pool = GlobalMaxPooling1D()(x)
x = concatenate([avg_pool, x_h, max_pool,features_input])
# Layer 6: output dense layer.
outp = Dense(1, activation="sigmoid")(x)
model = Model(inputs=[inp,features_input], outputs=outp)
    adam = optimizers.Adam(clipvalue=clipvalue)
model.compile(loss='binary_crossentropy',
optimizer=adam,
metrics=['accuracy'])
return model
# + _uuid="48c3d67be5fe44871b5b6c926740f1d93761e9db"
model = get_model(features)
batch_size = 512
# Used epochs=100 with early exiting for best score.
epochs = 7
gc.collect()
K.clear_session()
# Change to 5
num_folds = 5 #number of folds
y_test = np.zeros((test.shape[0],1))
# Uncomment for out-of-fold predictions
scores = []
oof_predict = np.zeros((train.shape[0],1))
kf = KFold(n_splits=num_folds, shuffle=True, random_state=239)
# + _uuid="d0fe5c5f79a3857df30bfbfda75ed44fbbddf3e2"
def f1_smart(y_true, y_pred):
    # Evaluate F1 at every possible threshold in one pass: sort predictions,
    # then for each cut point compute F1 = 2*TP / (predicted_pos + actual_pos)
    # from cumulative sums of the sorted labels.
    args = np.argsort(y_pred)
    tp = y_true.sum()
    fs = (tp - np.cumsum(y_true[args[:-1]])) / np.arange(y_true.shape[0] + tp - 1, tp, -1)
    res_idx = np.argmax(fs)
    # Return the best F1 and, as the threshold, the midpoint between the two
    # predictions straddling the optimal cut.
    return 2 * fs[res_idx], (y_pred[args[res_idx]] + y_pred[args[res_idx + 1]]) / 2
# + _uuid="f77fd6268d31e91d85af3b9714398393737de8df"
bestscore = []
for fold, (train_index, test_index) in enumerate(kf.split(x_train)):
    filepath = "weights_best.h5"
    kfold_y_train, kfold_y_test = y_train[train_index], y_train[test_index]
    kfold_X_train = x_train[train_index]
    kfold_X_features = features[train_index]
    kfold_X_valid = x_train[test_index]
    kfold_X_valid_features = features[test_index]
    gc.collect()
    K.clear_session()
    model = get_model(features)
    #ra_val = RocAucEvaluation(validation_data=([kfold_X_valid,kfold_X_valid_features], kfold_y_test), interval = 1)
    checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=2, save_best_only=True, mode='min')
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=1, min_lr=0.0001, verbose=2)
    earlystopping = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=2, verbose=2, mode='auto')
    if fold == 0: print(model.summary())  # `i` was previously undefined here
model.fit([kfold_X_train,kfold_X_features], kfold_y_train, batch_size=batch_size, epochs=epochs, verbose=1,
validation_data=([kfold_X_valid,kfold_X_valid_features], kfold_y_test),
callbacks = [checkpoint, reduce_lr, earlystopping])#ra_val,
gc.collect()
#model.load_weights(bst_model_path)
model.load_weights(filepath)
y_test += model.predict([x_test,test_features], batch_size=1024,verbose=1) / num_folds
gc.collect()
# uncomment for out of fold predictions
oof_predict[test_index] = model.predict([kfold_X_valid, kfold_X_valid_features],batch_size=batch_size, verbose=1)
cv_score = roc_auc_score(kfold_y_test, oof_predict[test_index])
f1, threshold = f1_smart(np.squeeze(kfold_y_test), np.squeeze(oof_predict[test_index]))
print('Optimal F1: {:.4f} at threshold: {:.4f}'.format(f1, threshold))
bestscore.append(threshold)
scores.append(cv_score)
print('score: ',cv_score)
print("Done")
print('Total CV score is {}'.format(np.mean(scores)))
# + _uuid="2361998e3847bc4a874a46cc0b27b510826a7ded"
# + _uuid="0f3013249d7f16ca1fa6dea824a4c399046715a6"
from sklearn.metrics import f1_score
def threshold_search(y_true, y_proba):
best_threshold =0
best_score = 0
for threshold in [i * 0.01 for i in range(100)]:
score = f1_score(y_true=y_true, y_pred=y_proba > threshold)
if score > best_score:
best_threshold = threshold
best_score = score
search_result = {'threshold': best_threshold, 'f1': best_score}
return search_result
search_result = threshold_search(y_train, oof_predict)
print(search_result)
print("Mean of Best Score ::: {}".format(np.mean(bestscore)))
# + _uuid="d9850e3a5ff5045d549fe66cc873b037f5c05d47"
#sum((y_test>.38).reshape(-1)==1)
#sum(y_train)
# + _uuid="72a6df50370a51a36fbcb3203f8cd33eb8d2e0e7"
sub = test[['qid']].copy()  # .copy() avoids SettingWithCopyWarning on the assignment below
y_test = y_test.reshape((-1, 1))
pred_test_y = (y_test>search_result['threshold']).astype(int)#np.mean(bestscore)
sub['prediction'] = pred_test_y
sub.to_csv("submission.csv", index=False)
# + _uuid="a86ceaa0021086bc57f4deff49cbfeeb2828ebdf"
# Source notebook: kaggle-quora-insincere-question/handling-GLOVe-embeddings-final-code-wo-cleanups.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This is the notebook used in the screencast video. Note that the data files are not present here in the Jupyter hub, so you will not be able to run it. But you can always download the notebook and the competition data to your local machine and make it interactive. Competition data can be found here: https://www.kaggle.com/c/springleaf-marketing-response/data
# +
import os
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import seaborn
# +
def autolabel(arrayA):
    ''' Label each colored square with the corresponding data value (drawn in white). '''
arrayA = np.array(arrayA)
for i in range(arrayA.shape[0]):
for j in range(arrayA.shape[1]):
plt.text(j,i, "%.2f"%arrayA[i,j], ha='center', va='bottom',color='w')
def hist_it(feat):
plt.figure(figsize=(16,4))
    feat[Y==0].hist(bins=range(int(feat.min()),int(feat.max()+2)),density=True,alpha=0.8)
    feat[Y==1].hist(bins=range(int(feat.min()),int(feat.max()+2)),density=True,alpha=0.5)
plt.ylim((0,1))
def gt_matrix(feats,sz=16):
a = []
for i,c1 in enumerate(feats):
b = []
for j,c2 in enumerate(feats):
mask = (~train[c1].isnull()) & (~train[c2].isnull())
if i>=j:
b.append((train.loc[mask,c1].values>=train.loc[mask,c2].values).mean())
else:
b.append((train.loc[mask,c1].values>train.loc[mask,c2].values).mean())
a.append(b)
plt.figure(figsize = (sz,sz))
plt.imshow(a, interpolation = 'None')
_ = plt.xticks(range(len(feats)),feats,rotation = 90)
_ = plt.yticks(range(len(feats)),feats,rotation = 0)
autolabel(a)
# -
def hist_it1(feat):
plt.figure(figsize=(16,4))
    feat[Y==0].hist(bins=100,range=(feat.min(),feat.max()),density=True,alpha=0.5)
    feat[Y==1].hist(bins=100,range=(feat.min(),feat.max()),density=True,alpha=0.5)
plt.ylim((0,1))
# # Read the data
train = pd.read_csv('train.csv.zip')
Y = train.target
test = pd.read_csv('test.csv.zip')
test_ID = test.ID
# # Data overview
# Probably the first thing you check is the shapes of the train and test matrices and look inside them.
print('Train shape', train.shape)
print('Test shape', test.shape)
train.head()
test.head()
# There are almost 2000 anonymized variables! It's clear that some of them are categorical and some look numeric. Some numeric features are integer typed, so they are probably event counters or dates. Others are of float type, but from the first few rows they look integer-typed too, since the fractional part is zero; pandas treats them as `float` because there are NaN values in those features.
#
# At first glance we see that train has one extra column, `target`, which we should not forget to drop before fitting a classifier. We also see that the `ID` column is shared between train and test, which can sometimes be used successfully to improve the score.
# It is also useful to know if there are any NaNs in the data. You should pay attention to columns with NaNs; the number of NaNs in each row can serve as a nice feature later.
# Number of NaNs for each object
train.isnull().sum(axis=1).head(15)
# Number of NaNs for each column
train.isnull().sum(axis=0).head(15)
# Just by reviewing the head of the lists we immediately see the patterns: exactly 56 NaNs for a set of variables, and exactly 24 NaNs for a group of objects.
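# The per-row NaN counts above can be used directly as a feature. A minimal sketch on a toy frame (the column name `num_nans` is our own choice):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for `train`; the real data has ~2000 columns.
demo = pd.DataFrame({'a': [1, np.nan, 3], 'b': [np.nan, np.nan, 6]})

# Number of missing values in each row, usable directly as a model feature.
demo['num_nans'] = demo.isnull().sum(axis=1)
print(demo['num_nans'].tolist())  # [1, 2, 0]
```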
# # Dataset cleaning
# ### Remove constant features
# All 1932 columns are anonymized, which forces us to deduce the meaning of the features ourselves. We will now try to clean the dataset.
#
# It is usually convenient to concatenate train and test into one dataframe and do all feature engineering using it.
traintest = pd.concat([train, test], axis = 0)
# First we should look for constant features; such features do not provide any information and only make our dataset larger.
# `dropna = False` makes nunique treat NaNs as a distinct value
feats_counts = train.nunique(dropna = False)
feats_counts.sort_values()[:10]
# We found 5 constant features. Let's remove them.
# +
constant_features = feats_counts.loc[feats_counts==1].index.tolist()
print (constant_features)
traintest.drop(constant_features,axis = 1,inplace=True)
# -
# ### Remove duplicated features
# Fill NaNs with something we can find later if needed.
traintest.fillna('NaN', inplace=True)
# Now let's encode each feature, as we discussed.
# +
train_enc = pd.DataFrame(index = train.index)
for col in tqdm_notebook(traintest.columns):
train_enc[col] = train[col].factorize()[0]
# -
# We could also do something like this:
# +
# train_enc[col] = train[col].map(train[col].value_counts())
# -
# The resulting data frame is very large, so we cannot just transpose it and use `.duplicated`. That is why we will use a simple loop.
# +
dup_cols = {}
for i, c1 in enumerate(tqdm_notebook(train_enc.columns)):
for c2 in train_enc.columns[i + 1:]:
if c2 not in dup_cols and np.all(train_enc[c1] == train_enc[c2]):
dup_cols[c2] = c1
# -
dup_cols
# Don't forget to save them, as it takes a long time to find these.
import pickle  # cPickle is Python 2 only
pickle.dump(dup_cols, open('dup_cols.p', 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
# Drop from traintest.
traintest.drop(list(dup_cols.keys()), axis = 1, inplace=True)
# # Determine types
# Let's examine the number of unique values.
nunique = train.nunique(dropna=False)
nunique
# and build a histogram of those values
plt.figure(figsize=(14,6))
_ = plt.hist(nunique.astype(float)/train.shape[0], bins=100)
# Let's take a look at the features with a huge number of unique values:
mask = (nunique.astype(float)/train.shape[0] > 0.8)
train.loc[:, mask]
# The values are not float, they are integer, so these features are likely to be event counts. Let's look at another pack of features.
mask = (nunique.astype(float)/train.shape[0] < 0.8) & (nunique.astype(float)/train.shape[0] > 0.4)
train.loc[:25, mask]
# These look like counts too. The first thing to notice is the 23rd line: 99999..., -99999 values look like NaNs, so we should probably build a related feature. Second: the columns are sometimes placed next to each other, so the columns are probably grouped together and we can disentangle that.
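# One quick way to exploit those 99999-style codes is a per-column sentinel indicator. A sketch on a toy frame -- the sentinel list and column name here are made up and should be read off the actual value counts:

```python
import pandas as pd

# Hypothetical sentinel codes standing in for encoded NaNs.
SENTINELS = [-99999, 99999, 999999998]

demo = pd.DataFrame({'VAR_X': [3, -99999, 7, 999999998]})
# Binary indicator: 1 where the value is one of the sentinel codes.
demo['VAR_X_is_sentinel'] = demo['VAR_X'].isin(SENTINELS).astype(int)
print(demo['VAR_X_is_sentinel'].tolist())  # [0, 1, 0, 1]
```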
# Our conclusion: there are no floating point variables, there are some counts variables, which we will treat as numeric.
#
# And finally, let's pick one variable (in this case 'VAR_0015') from the third group of features.
train['VAR_0015'].value_counts()
cat_cols = list(train.select_dtypes(include=['object']).columns)
num_cols = list(train.select_dtypes(exclude=['object']).columns)
# # Go through
# Let's replace NaNs with something first.
train.replace('NaN', -999, inplace=True)
# Let's calculate how many times one feature is greater than the other and create a cross table out of it.
# +
# select first 42 numeric features
feats = num_cols[:42]
# build 'mean(feat1 > feat2)' plot
gt_matrix(feats,16)
# -
# Indeed, we see interesting patterns here. There are blocks of features where one is strictly greater than the other. So we can hypothesize that each column corresponds to cumulative counts, e.g. feature number one is the count in the first month, the second is the total count in the first two months, and so on. So we immediately understand what features we should generate to make tree-based models more efficient: the differences between consecutive values.
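# The consecutive-difference idea can be sketched on a toy frame with three cumulative-count columns (column names are made up):

```python
import pandas as pd

# Toy cumulative counts: month 1, months 1-2, months 1-3.
demo = pd.DataFrame({'m1': [2, 0], 'm1_2': [5, 1], 'm1_3': [9, 4]})
cum_cols = ['m1', 'm1_2', 'm1_3']

# Differences between consecutive cumulative columns recover per-period counts.
for prev, cur in zip(cum_cols, cum_cols[1:]):
    demo['diff_%s_%s' % (cur, prev)] = demo[cur] - demo[prev]
print(demo[['diff_m1_2_m1', 'diff_m1_3_m1_2']].values.tolist())  # [[3, 4], [1, 3]]
```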
# ## VAR_0002, VAR_0003
# +
hist_it(train['VAR_0002'])
plt.ylim((0,0.05))
plt.xlim((-10,1010))
hist_it(train['VAR_0003'])
plt.ylim((0,0.03))
plt.xlim((-10,1010))
# -
train['VAR_0002'].value_counts()
train['VAR_0003'].value_counts()
# We see there is something special about 12, 24 and so on, so we can create another feature, x mod 12.
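# A sketch of the x mod 12 feature on toy values:

```python
import pandas as pd

demo = pd.DataFrame({'VAR_0002': [12, 25, 36, 7]})
# Residue modulo 12 exposes the periodicity noted above.
demo['VAR_0002_mod12'] = demo['VAR_0002'] % 12
print(demo['VAR_0002_mod12'].tolist())  # [0, 1, 0, 7]
```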
# ## VAR_0004
train['VAR_0004_mod50'] = train['VAR_0004'] % 50
hist_it(train['VAR_0004_mod50'])
plt.ylim((0,0.6))
# # Categorical features
# Let's take a look at categorical features we have.
train.loc[:,cat_cols].head().T
# `VAR_0200`, `VAR_0237`, `VAR_0274` look like some geographical data, thus one could generate geography-related features; we will talk about this later in the course.
#
# There are some features that are hard to identify, but look: there are date columns `VAR_0073` -- `VAR_0179`, `VAR_0204`, `VAR_0217`. It is useful to plot one date against another to find relationships.
# +
date_cols = [u'VAR_0073','VAR_0075',
u'VAR_0156',u'VAR_0157',u'VAR_0158','VAR_0159',
u'VAR_0166', u'VAR_0167',u'VAR_0168',u'VAR_0169',
u'VAR_0176',u'VAR_0177',u'VAR_0178',u'VAR_0179',
u'VAR_0204',
u'VAR_0217']
for c in date_cols:
train[c] = pd.to_datetime(train[c],format = '%d%b%y:%H:%M:%S')
test[c] = pd.to_datetime(test[c], format = '%d%b%y:%H:%M:%S')
# +
c1 = 'VAR_0217'
c2 = 'VAR_0073'
# mask = (~test[c1].isnull()) & (~test[c2].isnull())
# sc2(test.ix[mask,c1].values,test.ix[mask,c2].values,alpha=0.7,c = 'black')
mask = (~train[c1].isnull()) & (~train[c2].isnull())
# sc2 is a small custom scatter helper used in the screencast (not defined in this notebook)
sc2(train.loc[mask,c1].values,train.loc[mask,c2].values,c=train.loc[mask,'target'].values)
# -
# We see that one date is strictly greater than the other, so the difference between them can be a good feature. Also look at the horizontal line there -- it also looks like a NaN, so I would rather create a new binary feature which will serve as an indicator that our time feature is NaN.
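# Both ideas -- the date difference and the NaN indicator -- can be sketched on a toy frame (the dates and derived column names here are made up):

```python
import pandas as pd

demo = pd.DataFrame({
    'VAR_0217': pd.to_datetime(['2012-01-10', '2012-03-01', None]),
    'VAR_0073': pd.to_datetime(['2012-01-01', '2012-02-20', '2012-02-01']),
})
# Difference in days between the two dates; NaN where either date is missing.
demo['d217_minus_d073'] = (demo['VAR_0217'] - demo['VAR_0073']).dt.days
# Binary indicator for a missing date.
demo['d217_isnull'] = demo['VAR_0217'].isnull().astype(int)
print(demo[['d217_minus_d073', 'd217_isnull']])
```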
# Source notebook: old_files_unsorted_archive/EDA_Springleaf_screencast.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p align="center">
# <img src="https://github.com/jessepisel/energy_analytics/blob/master/EA_logo.jpg?raw=true" width="220" height="240" />
#
# </p>
#
# ## GeostatsPy: Multivariate Analysis for Subsurface Data Analytics in Python
#
#
# ### <NAME>, Associate Professor, University of Texas at Austin
#
# #### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#
# ### PGE 383 Exercise: Multivariate Analysis for Subsurface Data Analytics in Python
#
# Here's a simple workflow, demonstration of multivariate analysis for subsurface modeling workflows. This should help you get started with building subsurface models that integrate uncertainty in the sample statistics.
#
# #### Bivariate Analysis
#
# Understand and quantify the relationship between two variables
#
# * example: relationship between porosity and permeability
# * how can we use this relationship?
#
# What would be the impact if we ignore this relationship and simply modeled porosity and permeability independently?
#
# * no relationship beyond constraints at data locations
# * independent away from data
# * nonphysical results, unrealistic uncertainty models
#
# #### Bivariate Statistics
#
# Pearson’s Product‐Moment Correlation Coefficient
# * Provides a measure of the degree of linear relationship.
# * We refer to it as the 'correlation coefficient'
#
# Let's review the sample variance of variable $x$. Of course, I'm truncating our notation, as $x$ is a set of samples at locations in our modeling space, $x(\bf{u_\alpha}), \, \forall \, \alpha = 0, 1, \dots, n - 1$.
#
# \begin{equation}
# \sigma^2_{x} = \frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{(n-1)}
# \end{equation}
#
# We can expand the squared term and replace one of the factors with $y$, another variable in addition to $x$.
#
# \begin{equation}
# C_{xy} = \frac{\sum_{i=1}^{n} (x_i - \overline{x})(y_i - \overline{y})}{(n-1)}
# \end{equation}
#
# We now have a measure that represents the manner in which variables $x$ and $y$ co-vary, or vary together. We can standardize the covariance by the product of the standard deviations of $x$ and $y$ to calculate the correlation coefficient.
#
# \begin{equation}
# \rho_{xy} = \frac{\sum_{i=1}^{n} (x_i - \overline{x})(y_i - \overline{y})}{(n-1)\sigma_x \sigma_y}, \, -1.0 \le \rho_{xy} \le 1.0
# \end{equation}
#
# In summary we can state that the correlation coefficient is related to the covariance as:
#
# \begin{equation}
# \rho_{xy} = \frac{C_{xy}}{\sigma_x \sigma_y}
# \end{equation}
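# The identity above is easy to verify numerically with numpy on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(73073)
x = rng.normal(size=1000)
y = 0.8 * x + rng.normal(size=1000)

# Sample covariance with the same (n-1) denominator as in the equations above.
C_xy = np.cov(x, y, ddof=1)[0, 1]
rho = C_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Identical (up to floating point) to numpy's built-in correlation coefficient.
print(abs(rho - np.corrcoef(x, y)[0, 1]) < 1e-12)  # True
```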
#
# The Pearson's correlation coefficient is quite sensitive to outliers and departure from linear behavior (in the bivariate sense). We have an alternative known as the Spearman's rank correlation coefficient.
#
# \begin{equation}
# \rho_{R_x R_y} = \frac{\sum_{i=1}^{n} (R_{x_i} - \overline{R_x})(R_{y_i} - \overline{R_y})}{(n-1)\sigma_{R_x} \sigma_{R_y}}, \, -1.0 \le \rho_{xy} \le 1.0
# \end{equation}
#
# The rank correlation applies the rank transform to the data prior to calculating the correlation coefficient. To calculate the rank transform simply replace the data values with their ranks $R_x = 1,\dots,n$, where $1$ is assigned to the minimum value and $n$ to the maximum value.
#
# \begin{equation}
# x_\alpha, \, \forall \alpha = 1,\dots, n, \, | \, x_i \ge x_j \, \forall \, i \gt j
# \end{equation}
#
# \begin{equation}
# R_{x_i} = i
# \end{equation}
#
# The correlation coefficients provide useful metrics to quantify relationships between two variables at a time. We can also consider bivariate scatter plots and matrix scatter plots to visualize multivariate data. In general, current practical subsurface modeling is bivariate, two variables at a time.
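# A small fixed example contrasting the two coefficients with scipy: a single outlier drags the Pearson coefficient well below 1, while the Spearman coefficient stays at exactly 1 because the rank ordering is unchanged. (The data here is a toy sequence, not the project dataset.)

```python
import numpy as np
from scipy import stats

x = np.arange(1.0, 11.0)   # 1, 2, ..., 10
y = x.copy()
y[-1] = 100.0              # one large outlier; ordering is preserved

pearson, _ = stats.pearsonr(x, y)
spearman, _ = stats.spearmanr(x, y)  # rank transform, then Pearson
print(round(pearson, 3), round(spearman, 3))  # 0.593 1.0
```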
#
# #### Multivariate Statistics
#
# See lecture on Multivariate Statistics, including the concepts of joint, conditional and marginal probability.
#
# #### Objective
#
# In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows.
#
# The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#
# #### Getting Started
#
# Here's the steps to get setup in Python with the GeostatsPy package:
#
# 1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
# 2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
# 3. In the terminal type: pip install geostatspy.
# 4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
#
# You will need to copy the data file to your working directory. They are available here:
#
# * Tabular data - sample_data_MV_biased.csv at https://git.io/fhgu0.
#
# There are exampled below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code.
import geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods convert to Python
# We will also need some standard packages. These should have been installed with Anaconda 3.
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # for plotting
from scipy import stats # summary statistics
import math # trig etc.
import scipy.signal as signal # kernel for moving window calculation
import random
import seaborn as sns
# #### Set the working directory
#
# I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
os.chdir("c:/PGE383") # set the working directory
# #### Loading Tabular Data
#
# Here's the command to load our comma delimited data file in to a Pandas' DataFrame object.
df = pd.read_csv('sample_data_MV_biased.csv') # load our data table (wrong name!)
# Visualizing the DataFrame would be useful and we already learned about these methods in this demo (https://git.io/fNgRW).
#
# We can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member function (with a nice and clean format, see below). With the slice we could look at any subset of the data table and with the head command, add parameter 'n=13' to see the first 13 rows of the dataset.
print(df.iloc[0:5,:]) # display the first 5 samples in the table as a preview
df.head(n=13) # we could also use this command for a table preview
# #### Summary Statistics for Tabular Data
#
# The table includes X and Y coordinates (meters), Facies 1 and 0 (1 is sandstone and 0 interbedded sand and mudstone), Porosity (fraction), and permeability as Perm (mDarcy).
#
# There are a lot of efficient methods to calculate summary statistics from tabular data in DataFrames. The describe command provides count, mean, minimum, maximum, and quartiles all in a nice data table. We use transpose just to flip the table so that features are on the rows and the statistics are on the columns.
df.describe().transpose()
# #### Visualizing Tabular Data with Location Maps
#
# It is natural to set the x and y coordinate and feature ranges manually. e.g. do you want your color bar to go from 0.05887 to 0.24230 exactly? Also, let's pick a color map for display. I heard that plasma is known to be friendly to the color blind as the color and intensity vary together (hope I got that right, it was an interesting Twitter conversation started by <NAME> from Agile if I recall correctly). We will assume a study area of 0 to 1,000m in x and y and omit any data outside this area.
xmin = 0.0; xmax = 1000.0 # range of x values
ymin = 0.0; ymax = 1000.0 # range of y values
pormin = 0.05; pormax = 0.25; # range of porosity values
permmin = 0.01; permmax = 2000.0 # range of permeability values
AImin = 2000.0; AImax = 8000.0 # range of AI values
nx = 100; ny = 100; csize = 10.0
cmap = plt.cm.plasma # color map
# Let's try out locmap. This is a reimplementation of GSLIB's locmap program that uses matplotlib. I hope you find it simpler than using matplotlib directly; if you want to get more advanced and build custom plots, look at the source. If you improve it, send me the new code.
# Now we can populate the plotting parameters and visualize the porosity data.
# +
plt.subplot(221)
GSLIB.locmap_st(df,'X','Y','Facies',xmin,xmax,ymin,ymax,0,1,'Well Data - Facies','X(m)','Y(m)','Facies (0-shale, 1-sand)',cmap)
plt.subplot(222)
GSLIB.locmap_st(df,'X','Y','Porosity',xmin,xmax,ymin,ymax,pormin,pormax,'Well Data - Porosity','X(m)','Y(m)','Porosity (fraction)',cmap)
plt.subplot(223)
GSLIB.locmap_st(df,'X','Y','Perm',xmin,xmax,ymin,ymax,permmin,permmax,'Well Data - Permeability','X(m)','Y(m)','Permeability (md)',cmap)
plt.subplot(224)
GSLIB.locmap_st(df,'X','Y','AI',xmin,xmax,ymin,ymax,AImin,AImax,'Well Data - Acoustic Impedance','X(m)','Y(m)','Acoustic Impedance (m/s x g/cm^3)',cmap)
plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=3.2, wspace=0.2, hspace=0.2)
plt.show()
# -
# #### Bivariate Analysis
#
# Let's start with some simple bivariate plotting and calculations. First, some scatter plots.
# +
plt.subplot(121)
plt.plot(df['Porosity'].values,df['Perm'].values, 'o', label='', markerfacecolor='red', markeredgecolor='black', alpha=0.2)
plt.title('Well Data Permeability vs. Porosity')
plt.xlabel('Porosity (fraction)')
plt.ylabel('Permeability (mD)')
#plt.legend()
plt.subplot(122)
plt.plot(df['AI'].values,df['Porosity'].values, 'o', label='', markerfacecolor='red', markeredgecolor='black', alpha=0.2)
plt.title('Well Data Porosity vs. Acoustic Impedance')
plt.ylabel('Porosity (fraction)')
plt.xlabel('Acoustic Impedance (m/s x g/cm^3)')
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# -
# #### Correlation and Covariance
#
# It is straightforward to calculate the covariance and correlation from the pairs of data in our dataset. Here's the covariance. Notice that the matrix is symmetric? Makes sense, as $C_{Por,Perm} = C_{Perm,Por}$. Also, note that the diagonal values ($C_{i,j}$ where $i=j$) are equal to the variances. We check porosity by calculating its variance.
print(df.iloc[:,3:7].cov()) # the covariance matrix for columns 3,4,5 and 6 and all rows
print('The variance of porosity is ' + str(round(np.var(df['Porosity'].values),6)))
# Here's the correlation coefficient.
df.iloc[:,3:7].corr()
# #### Matrix Scatter Plots
#
# If we have 3 or more variables to consider, then matrix scatter plots offer an efficient method to display the multivariate relationships, 2 variables at a time. One can identify:
#
# 1. the range, envelope of the paired data
# 2. homoscedastic and heteroscedastic behaviors
# 3. non-linear features
#
# Here's the seaborn package matrix scatter plot function, pairplot. Let's color the results by facies.
sns.pairplot(df, hue='Facies',vars=['Facies','Porosity','Perm','AI'],markers='o')
# #### Joint, Conditional and Marginals
#
# We can use kernel density estimation to estimate the joint probability density function (pdf) for the paired data, a 2D pdf! We could use this to estimate any required joint, marginal and conditional probability (care must be taken with normalization). Let's use the seaborn package 'kdeplot' function to estimate the joint pdf for porosity and acoustic impedance.
ax = sns.kdeplot(df['AI'].values,df['Porosity'].values, shade=True, n_levels = 10,cmap=cmap,cbar= True, shade_lowest = False)
ax.set_xlabel('Acoustic Impedance (m/s x g/cm^3)'); ax.set_ylabel('Porosity (fraction)'); ax.set_title('Porosity vs. Acoustic Impedance')
# I think it is useful to visualize the joint pdf together with the marginal pdfs on a single plot. We can use seaborn's 'jointplot' to accomplish this.
ax = sns.jointplot('AI','Porosity', df, kind='kde',shade = False, n_levels = 10,cmap=cmap, shade_lowest = True);
# The correlation coefficient and the p-value of the correlation coefficient (the correlation is significant at level $\alpha$ if the p-value is less than $\alpha$).
#
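As a sketch of that significance test on synthetic stand-ins for the porosity and acoustic impedance columns (with the real data you would pass `df['AI'].values` and `df['Porosity'].values` instead):

```python
import numpy as np
from scipy import stats

# hypothetical stand-ins for df['AI'] and df['Porosity']
rng = np.random.default_rng(73073)
ai = rng.normal(5000.0, 1000.0, 100)                        # acoustic impedance
por = 0.25 - 0.00002 * ai + rng.normal(0.0, 0.01, 100)      # correlated porosity

# Pearson correlation coefficient and its two-sided p-value
r, p_value = stats.pearsonr(ai, por)
print(f'correlation = {r:.3f}, p-value = {p_value:.3e}')
```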
# #### Calculating Conditional Statistics
#
# Of course, we could just calculate the conditional statistics by hand. We need to select some bins over the variable that we will condition to. Let's calculate conditional statistics of porosity given acoustic impedance. We will select 9 equally spaced bins.
AI_bins = np.linspace(2000,8000,10) # set the bin boundaries and then the centroids for plotting
AI_centroids = np.linspace((AI_bins[0]+AI_bins[1])*0.5,(AI_bins[8]+AI_bins[9])*0.5,9)
print(AI_bins) # check the boundaries
print(AI_centroids) # check the centroids
df['AI_bins'] = pd.cut(df['AI'], AI_bins,labels = AI_centroids) # cut on boundaries and label with centroids
df.head() # check the new column in the DataFrame
# Now we can use the 'groupby' function built-in to Pandas' DataFrames to extract subsets of porosity values in each bin from the DataFrame and then to calculate the conditional statistics: expectation, P90 and P10. Let's plot the result.
# +
cond_exp = df.groupby('AI_bins')['Porosity'].mean()
cond_P90 = df.groupby('AI_bins')['Porosity'].quantile(.9)
cond_P10 = df.groupby('AI_bins')['Porosity'].quantile(.1)
plt.subplot(111)
plt.plot(AI_centroids,cond_exp,color='black')
plt.plot(AI_centroids,cond_P90,'--',color='black',linewidth = 1.0)
plt.plot(AI_centroids,cond_P10,'--',color='black',linewidth = 1.0)
plt.xlabel('Acoustic Impedance (m/s x g/cm^3)')
plt.ylabel('Porosity (fraction) | Acoustic Impedance')
t = plt.title('Porosity Conditional to Acoustic Impedance')
plt.ylim(pormin,pormax)
plt.xlim(AImin,AImax)
plt.text(3200, .10, 'P10')
plt.text(3200, .15, 'Expectation')
plt.text(3200, .19, 'P90')
plt.grid(True)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# -
# Does acoustic impedance provide information about porosity?
#
# Yes; the conditional statistics clearly vary with acoustic impedance, so knowing the acoustic impedance reduces the uncertainty about porosity.
#
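One way to quantify that reduction is to compare the marginal variance of porosity with the average within-bin (conditional) variance. A sketch on synthetic stand-ins for the porosity and acoustic impedance columns (the real workflow would use the `df` and 'AI_bins' column built above):

```python
import numpy as np
import pandas as pd

# hypothetical stand-ins for the real porosity and acoustic impedance data
rng = np.random.default_rng(73073)
ai = rng.normal(5000.0, 1000.0, 500)
por = 0.25 - 0.00002 * ai + rng.normal(0.0, 0.01, 500)
d = pd.DataFrame({'AI': ai, 'Porosity': por})
d['AI_bins'] = pd.cut(d['AI'], np.linspace(1000, 9000, 9))

marginal_var = d['Porosity'].var()
# simple (unweighted) average of the within-bin variances; empty bins give
# NaN and are skipped by mean()
cond_var = d.groupby('AI_bins')['Porosity'].var().mean()
print(f'variance reduction: {1.0 - cond_var / marginal_var:.0%}')
```

A weighted average (weighting each bin by its sample count) would be the more careful estimate, but the unweighted version already shows the effect.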
# #### Comments
#
# This was a basic demonstration of multivariate analysis. A lot more could be done, for example, there are methods that reduce the dimensionality, and remove dependency to allow for independent variable modeling workflows etc.
#
# I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
#
# I hope this was helpful,
#
# *Michael*
#
# <NAME>, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#
# #### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#
| Python Workflows/Week_08a_Multivariate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# %matplotlib inline
from datetime import datetime
import matplotlib as mpl
import matplotlib.pylab
import matplotlib.font_manager as fm
from matplotlib import pyplot as plt
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 16, 14
mpl.rc('font',family='NanumGothic')
import warnings
import itertools
warnings.filterwarnings("ignore") # specify to ignore warning messages
# # ! python -m pip install statsmodels
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint, adfuller
df = pd.read_excel('./unit_time_data.xls')
df = df.replace(['농업, 임업 및 어업','광업', '제조업', '전기, 가스, 증기 및 수도사업', '하수폐기처리원료재생환경복원', '건설업', '도매 및 소매업', '운수업'], ['primary', 'mining', 'manufacture', 'energy', 'recycle', 'building', 'retail', 'transportation'])
df
in_d = 'primary'
pl_c = '강원도'
com = in_d + pl_c
df = df[(df.industry == in_d) & (df.place == pl_c)]
df = df.transpose()
df.columns =[com]
df = df[2:]
df
df[com] = pd.to_numeric(df[com])
p = d = q = range(0, 2) # Define the p, d and q parameters to take the values 0 and 1
pdq = list(itertools.product(p, d, q)) # Generate all different combinations of p, d and q triplets
pdq_x_QDQs = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))] # Generate all different combinations of seasonal P, D and Q triplets with a period of 12
# +
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
# best_model =None
for param in pdq:
for seasonal_param in pdq_x_QDQs:
mod = sm.tsa.statespace.SARIMAX(df[[com]],
order=param,
seasonal_order=seasonal_param,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
if results.aic < best_aic:
best_aic = results.aic
best_pdq = param
best_seasonal_pdq = seasonal_param
# best_model = results
print('Best ARIMA{}x{} - AIC:{}'.format(best_pdq, best_seasonal_pdq, best_aic))
# -
mod = sm.tsa.statespace.SARIMAX(df,
order=best_pdq,
seasonal_order=best_seasonal_pdq,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
# Get a forecast 48 steps (4 years of monthly data) ahead into the future
forecast = results.get_forecast(steps=48)
# forecast = results.get_forecast(steps=pd.date_range('20200802', '20230802', freq='M'))
# Get confidence intervals of forecasts
forecast_ci = forecast.conf_int()
forecast_ci['mean'] = forecast_ci.mean(axis=1) # midpoint of the confidence interval, used below as the point forecast
forecast_ci
dff = df.join(forecast_ci, how='outer')
dff
# +
ax = dff[[com]].plot(label='observed',figsize=(12, 8))
dff['mean'].plot(label='Dynamic Forecast', color='r', ax=ax)
ax.fill_between(forecast_ci.index,
forecast_ci.iloc[:, 0],
forecast_ci.iloc[:, 1], color='g', alpha=.4)
ax.fill_betweenx(ax.get_ylim(),
pd.to_datetime('2018-01-01'),
dff.index[-1],
alpha=.1, zorder=-1)
ax.set_xlabel('Time')
ax.set_ylabel('CO2 Emissions')
plt.legend()
plt.show()
# -
# # def -----------------------------------------------------------------
def show_ts(industry, place):
column = str(industry) + str(place)
df = pd.read_excel('./unit_time_data.xls')
df = df.replace(['농업, 임업 및 어업','광업', '제조업', '전기, 가스, 증기 및 수도사업', '하수폐기처리원료재생환경복원', '건설업', '도매 및 소매업', '운수업'], ['primary', 'mining', 'manufacture', 'energy', 'recycle', 'building', 'retail', 'transportation'])
df = df[(df.industry == industry) & (df.place == place)]
df = df.transpose()
df.columns =[column]
df = df[2:]
df[column] = pd.to_numeric(df[column])
    p = d = q = range(0, 2) # Define the p, d and q parameters to take the values 0 and 1
    pdq = list(itertools.product(p, d, q)) # Generate all different combinations of p, d and q triplets
    pdq_x_QDQs = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))] # Generate all different combinations of seasonal P, D and Q triplets with a period of 12
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
# best_model =None
for param in pdq:
for seasonal_param in pdq_x_QDQs:
mod = sm.tsa.statespace.SARIMAX(df[[column]],
order=param,
seasonal_order=seasonal_param,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
if results.aic < best_aic:
best_aic = results.aic
best_pdq = param
best_seasonal_pdq = seasonal_param
mod = sm.tsa.statespace.SARIMAX(df,
order=best_pdq,
seasonal_order=best_seasonal_pdq,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
    # Get a forecast 48 steps (4 years of monthly data) ahead into the future
forecast = results.get_forecast(steps=48)
# forecast = results.get_forecast(steps=pd.date_range('20200802', '20230802', freq='M'))
# Get confidence intervals of forecasts
forecast_ci = forecast.conf_int()
    forecast_ci['mean'] = forecast_ci.mean(axis=1) # midpoint of the confidence interval, used below as the point forecast
forecast_ci
dff = df.join(forecast_ci, how='outer')
dff
ax = dff[[column]].plot(label='observed',figsize=(12, 8))
dff['mean'].plot(label='Dynamic Forecast', color='r', ax=ax)
ax.fill_between(forecast_ci.index,
forecast_ci.iloc[:, 0],
forecast_ci.iloc[:, 1], color='g', alpha=.4)
ax.fill_betweenx(ax.get_ylim(),
pd.to_datetime('2018-01-01'),
dff.index[-1],
alpha=.1, zorder=-1)
ax.set_xlabel('Time')
ax.set_ylabel('CO2 Emissions')
plt.legend()
plt.show()
return dff
show_ts('primary', '경기도')
| project/ML/ML_i/practice_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''base'': conda)'
# name: python_defaultSpec_1594237964650
# ---
# # Experiments related to General Modbus Reader
import time
import struct
import logging
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
import pymodbus
# + tags=[]
host = 'EM56akvill.dyndns.org'
port = 30000
with ModbusClient(host=host, port=port) as client:
#for r in range(2080, 2090):
r = 2080
result = client.read_input_registers(r, 10, unit=1)
print(result.registers)
# + tags=[]
import base_reader
settings = base_reader.DummySettings()
device1 = ('abc.dyndns.org', 30000, dict(endian='little'))
device1_sensors = (
(2084, 'heat_rate', dict(register_type='input')),
(2086, 'total_heat', dict(datatype='uint32', reading_type='counter')),
(2105, 'ret_temp', dict(transform='val/10')),
(2104, 'sup_temp', dict(transform='val/10')),
)
device2 = ('abc.dyndns.org', 30000, dict(endian='little'))
device2_sensors = (
(2103, 'hsout', dict(transform='val/10')),
(2102, 'hsin', dict(transform='val/10')),
)
device3 = ('def.dyndns.org', 37770, dict(endian='little'))
device3_sensors = (
(30, 'heat_rate', dict(datatype='float32')),
(6, 'flow', dict(datatype='float32', transform='val/60')),
)
settings.MODBUS_TARGETS = (
(device1, device1_sensors),
(device2, device2_sensors),
(device3, device3_sensors)
)
class ModbusTCPreader(base_reader.Reader):
def read(self):
# list to hold final readings
readings = []
for device_info, sensors in self._settings.MODBUS_TARGETS:
# use the same timestamp for all of the sensors on this device
ts = time.time()
try:
try:
host, port, kwargs = device_info
except:
host, port = device_info
kwargs = {}
device_addr = kwargs.get('device_addr', 1)
endian = kwargs.get('endian', 'big')
if endian not in ('big', 'little'):
raise ValueError(f'Improper endian value for Modbus device {device_info}')
print('\n', host, port, device_addr, endian)
with ModbusClient(host=host, port=port) as client:
for sensor_info in sensors:
try:
try:
register, sensor_name, kwargs = sensor_info
except:
register, sensor_name = sensor_info
kwargs = {}
datatype = kwargs.get('datatype', 'uint16')
transform = kwargs.get('transform', None)
register_type = kwargs.get('register_type', 'holding')
reading_type = kwargs.get('reading_type', 'value')
# determine number of registers to read and the correct struct
# unpacking code based upon the data type for this sensor.
try:
reg_count, unpack_fmt = {
'uint16': (1, 'H'),
'int16': (1, 'h'),
'uint32': (2, 'I'),
'int32': (2, 'i'),
'float': (2, 'f'),
'float32': (2, 'f'),
'double': (4, 'd'),
'float64': (4, 'd'),
}[datatype]
except:
logging.exception(f'Invalid Modbus Datatype: {datatype} for Sensor {sensor_info}')
continue
# Determine the correct function to use for reading the values
try:
read_func = {
'holding': client.read_holding_registers,
'input': client.read_input_registers,
'coil': client.read_coils,
'discrete': client.read_discrete_inputs
}[register_type]
except:
logging.exception(f'Invalid Modbus register type for Sensor {sensor_info}')
continue
try:
reading_type_code = {
'value': base_reader.VALUE,
'state': base_reader.STATE,
'counter': base_reader.COUNTER
}[reading_type]
except:
logging.exception(f'Invalid Reading Type for Sensor {sensor_info}')
continue
result = read_func(register, reg_count, unit=device_addr)
if not hasattr(result, 'registers'):
raise ValueError(f'An error occurred while reading Sensor {sensor_info} from Modbus Device {device_info}')
                            # arrange the register values with the least-significant word first
registers = result.registers
# calculate the integer equivalent of the registers read
if endian == 'big':
registers = reversed(registers)
val = 0
mult = 1
for reg in registers:
val += reg * mult
mult *= 2**16
# Use the struct module to convert this number into the appropriate data type.
# First, create a byte array that encodes this unsigned number according to
# how many words it contains.
reg_count_to_pack_fmt = {
1: 'H',
2: 'I',
4: 'Q'
}
pack_fmt = reg_count_to_pack_fmt[reg_count]
packed_bytes = struct.pack(pack_fmt, val)
# unpack bytes to convert to correct datatype
val = struct.unpack(unpack_fmt, packed_bytes)[0]
if transform:
val = eval(transform)
sensor_id = f'{self._settings.LOGGER_ID}_{sensor_name}'
readings.append( (ts, sensor_id, val, reading_type_code) )
except Exception as err:
logging.exception(str(err))
continue # on to next sensor
except Exception as err:
logging.exception(str(err))
continue # on to next device
return readings
ModbusTCPreader(settings).read()
# -
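The register-to-value conversion buried in `read()` above can be exercised in isolation. A minimal sketch of the same logic (the function name `registers_to_float32` is mine, not part of `base_reader`), assuming two 16-bit registers encode an IEEE-754 float32:

```python
import struct

def registers_to_float32(registers, endian='little'):
    """Combine two 16-bit Modbus registers into a float32.

    `endian` refers to the word order of the registers, not the byte
    order inside each register.
    """
    regs = list(registers)
    if endian == 'big':
        regs = reversed(regs)  # put the least-significant word first
    # accumulate the registers into one unsigned 32-bit integer
    val = 0
    mult = 1
    for reg in regs:
        val += reg * mult
        mult *= 2 ** 16
    # reinterpret the 32-bit pattern as a float
    return struct.unpack('<f', struct.pack('<I', val))[0]

# 0x3F800000 is the float32 bit pattern for 1.0
print(registers_to_float32([0x0000, 0x3F80]))         # 1.0, little-endian word order
print(registers_to_float32([0x3F80, 0x0000], 'big'))  # 1.0, big-endian word order
```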
| experiments/test_modbus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from matplotlib import pyplot as plt
# +
sampleX = np.random.normal(loc=0, scale=1, size=1000)
plt.hist(sampleX, label='X ~ N(0, 1)', alpha=0.5)
sampleY = np.random.normal(loc=0, scale=2, size=1000)
plt.hist(sampleY, label='Y ~ N(0, 2)', alpha=0.5)
plt.legend()
# -
plt.hist(sampleX, label='X', alpha=0.5)
plt.hist(sampleY, label='Y', alpha=0.5)
plt.hist(sampleX - sampleY, label='X-Y', alpha=0.5)
plt.legend()
plt.hist(sampleX, label='X', alpha=0.5)
plt.hist(sampleY, label='Y', alpha=0.5)
plt.hist(sampleX + sampleY, label='X+Y', alpha=0.5)
plt.legend()
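For independent X and Y the variances add, Var(X ± Y) = Var(X) + Var(Y), so both X−Y and X+Y above should have a standard deviation near sqrt(1² + 2²) ≈ 2.24. A quick numerical check with a fixed seed:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)
y = rng.normal(0.0, 2.0, 100_000)

# for independent samples the variances add for both the sum and the difference
print(np.std(x - y), np.std(x + y), np.sqrt(1.0 ** 2 + 2.0 ** 2))
```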
| concepts/normal_transform.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="GTgzBEEhESPq" colab_type="text"
# #Question 3
#
# + id="w9prxPGEEXeb" colab_type="code" colab={}
# Make a lambda function that capitalizes every word of the sentence passed to it,
# and map the lambda over all the sentences in the list
lst = ["hey this i ajay","i am from chennai"]
print("\nOriginal list : \n")
print(lst)
result = list(map(lambda words: " ".join([word.capitalize() for word in words.split( )]) ,lst))
print("\nCapitalized list is : \n")
print(result)
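For comparison, `str.title()` also capitalizes each word, but it treats every non-letter as a word boundary, so it differs from the per-word `capitalize()` approach on words containing apostrophes or digits:

```python
lst = ["hey this i ajay", "i am from chennai"]

# per-word capitalize, as in the lambda above
per_word = [" ".join(w.capitalize() for w in s.split()) for s in lst]
print(per_word)

# str.title() gives the same result for these inputs
titled = list(map(str.title, lst))
print(titled)

# ...but the two differ once an apostrophe appears inside a word
print("it's fine".title())                                    # It'S Fine
print(" ".join(w.capitalize() for w in "it's fine".split()))  # It's fine
```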
| Day 5/Day5Assignment_Question_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython.display import HTML
import pprint
from ecgtools import parsers
# +
# %%time
yaml_path = '../sample_yaml/cmip6.yaml'
csv_path = '/path/to/put/output/file/cmip6_new.csv'
Parser = parsers.YAMLParser(yaml_path, csv_path=csv_path)
b = Parser.parser()
# -
b.df.columns
b.df.info()
b.save(csv_path)
| notebooks/Generic_Interface_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# import the OpenCV library
import cv2
def read_image(filename):
    # load the color image into a numpy array
    image = cv2.imread(filename)
    # find the pixel with the lowest intensity;
    # start the color at the highest possible value
    color = 255 * 3
    # get the image dimensions
    (rows, cols, depth) = image.shape
    # scan all pixels
    for row in range(rows):
        for col in range(cols):
            # access one pixel of the image
            pixel = image[row][col]
            (r, g, b) = pixel.tolist()
            if color > (r + g + b):
                color = r + g + b
    for row in range(rows):
        for col in range(cols):
            pixel = image[row][col]
            (r, g, b) = pixel.tolist()
            c = (r + g + b)
            # paint everything that is not part of a letter white
            if c != color:
                image[row][col] = (255, 255, 255)
            # paint the letter pixels black
            else:
                image[row][col] = (0, 0, 0)
    return image
# -
def recursive_bounding(image, row, col, letter_color):
    min_x = col
    min_y = row
    max_x = col
    max_y = row
    (rows, cols, depth) = image.shape
    for r in range( max(0, row - 1), min(row + 2, rows)):
        for c in range(max(0, col - 1), min(col + 2, cols)):
            if r == row and c == col:
                continue
            (r1, g1, b1) = image[r][c].tolist()
            t = (r1 + g1 + b1)
            # the neighbor does not belong to the letter, skip it
            if t != letter_color:
                continue
            # erase this pixel; we do not mark it white because
            # we will still need it in the next processing step
            image[r][c] = (20, 20, 20)
            points = recursive_bounding(image, r, c, letter_color)
            (min_x1, max_x1, min_y1, max_y1) = points
            min_x = min(min_x, min_x1, c)
            max_x = max(max_x, max_x1, c)
            min_y = min(min_y, min_y1, r)
            max_y = max(max_y, max_y1, r)
    return (min_x, max_x, min_y, max_y)
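`recursive_bounding` makes one Python call per connected pixel, so a large letter can blow past the default recursion limit. A stack-based sketch of the same 8-neighbor flood fill (keeping the (20, 20, 20) visited-marker convention; the name `iterative_bounding` is mine):

```python
import numpy as np

def iterative_bounding(image, row, col, letter_color):
    # an explicit stack replaces the call stack of the recursive version
    (rows, cols, depth) = image.shape
    min_x = max_x = col
    min_y = max_y = row
    image[row][col] = (20, 20, 20)  # mark the seed pixel as visited
    stack = [(row, col)]
    while stack:
        (y, x) = stack.pop()
        min_x = min(min_x, x); max_x = max(max_x, x)
        min_y = min(min_y, y); max_y = max(max_y, y)
        # visit the 8-neighborhood of the current pixel
        for r in range(max(0, y - 1), min(y + 2, rows)):
            for c in range(max(0, x - 1), min(x + 2, cols)):
                (r1, g1, b1) = image[r][c].tolist()
                if (r1 + g1 + b1) != letter_color:
                    continue
                image[r][c] = (20, 20, 20)  # mark as visited
                stack.append((r, c))
    return (min_x, max_x, min_y, max_y)

# tiny synthetic "letter": a black diagonal in a white image
img = np.full((4, 4, 3), 255, dtype=np.uint8)
for (y, x) in [(0, 0), (1, 1), (2, 2)]:
    img[y][x] = (0, 0, 0)
box = iterative_bounding(img, 0, 0, 0)
print(box)  # (0, 2, 0, 2)
```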
def compute_boxes(image):
    boxes = []
    letter_color = 0
    (rows, cols, depth) = image.shape
    for row in range(rows):
        for col in range(cols):
            pixel = image[row][col]
            (r, g, b) = pixel.tolist()
            t = (r + g + b)
            # not a letter, skip the pixel
            if t != letter_color:
                continue
            # compute the bounding box of this letter
            box = recursive_bounding(image, row, col, letter_color)
            # store the bounding box in the list of boxes
            boxes.append(box)
    boxes.sort()
    return boxes
def cut_boxes(image, boxes):
letters = []
for box in boxes:
(col0, col1, row0, row1) = box
letter = image[row0:row1 + 1, col0:col1 + 1]
letters.append(letter)
return letters
def break_captcha(letters, templates):
    captcha = ''
    unrec = 0
    for letter in letters:
        (rows, cols, depth) = letter.shape
        best_error = rows * cols
        best_letter = ''
        # copy the image, converting it from RGB to grayscale
        gray = cv2.cvtColor(letter, cv2.COLOR_RGB2GRAY)
        # binarize the image
        (_, gray) = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
        for char in templates.keys():
            for template in templates[char]:
                # if the image size differs from the template,
                # skip the template
                if template.shape != gray.shape:
                    continue
                xor = template ^ gray
                errors = xor.sum()
                if errors < best_error:
                    best_error = errors
                    best_letter = char
                # perfect match
                if errors == 0:
                    break
            if best_error == 0:
                break
        # did not match any letter exactly;
        # it is probably a new letter
        if best_error != 0:
            cv2.imwrite("image_%d.png" % (unrec), gray)
            unrec = unrec + 1
        captcha = captcha + best_letter
    return captcha
# +
import os
import glob
import collections
def load_templates():
    # select the .png files
    files = glob.glob('templates/*.png')
    # dictionary of images:
    # key = letter
    # value = list of images
    templates = collections.defaultdict(list)
    for file in files:
        f = os.path.basename(file)
        # the first character of the file name is the
        # corresponding letter
        letter = f[0]
        # load the image
        img = cv2.imread(file)
        # convert the image to grayscale
        img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        # binarize the image
        (_, img) = cv2.threshold(img, 245, 255, cv2.THRESH_BINARY)
        # a letter may have several templates
        templates[letter].append(img)
    return templates
# -
# %matplotlib inline
import matplotlib.pyplot as plt
file = 'a.png'
# Original image
plt.imshow(cv2.imread(file))
# Cleaned and binarized image
image = read_image(file)
plt.imshow(image)
# Letter coordinates
boxes = compute_boxes(image)
boxes
# Cropped letters
letters = cut_boxes(image, boxes)
plt.imshow(letters[0], cmap='gray')
plt.imshow(letters[1], cmap='gray')
plt.imshow(letters[2], cmap='gray')
plt.imshow(letters[3], cmap='gray')
# Load the templates
templates = load_templates()
# Break the captcha
break_captcha(letters, templates)
| notebooks/2015-12-30-quebrando-captcha-com-opencv-e-python/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="88c8f888-4eb5-438e-8428-0d6f9280aa70" _uuid="3cf23599bb2587214d3f8b50d3b512bb025159f1"
# ### Importing the needed libraries
# + _cell_guid="6b324c96-b92c-4c71-835a-cc6adb1c7a0c" _uuid="9d65868c23446ed123810c4187c0e598ce01c652"
import os
import sys
import random
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
from itertools import chain
from skimage.io import imread, imshow, imread_collection, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from keras.models import Model, load_model
from keras.layers import Input
from keras.layers.core import Dropout, Lambda
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import backend as K
import tensorflow as tf
# Set some parameters
BATCH_SIZE = 10 # larger batches train faster per epoch but use more memory
IMG_WIDTH = 128 # for faster computing on kaggle
IMG_HEIGHT = 128 # for faster computing on kaggle
IMG_CHANNELS = 3
TRAIN_PATH = '../input/stage1_train/'
TEST_PATH = '../input/stage1_test/'
warnings.filterwarnings('ignore', category=UserWarning, module='skimage')
seed = 42
# + [markdown] _cell_guid="b2464b8e-77ba-44b1-8c78-06b1e690910d" _uuid="a3c37b92fa214f1be75ac3630927555ff40674b2"
# ### 1. Preparing the data
# + _cell_guid="bb7da2b8-5921-4769-9bee-afab2135472d" _uuid="2ff390c2a99e276c65e34d9ed61208347cacafe2"
# Get train and test IDs
train_ids = next(os.walk(TRAIN_PATH))[1]
test_ids = next(os.walk(TEST_PATH))[1]
np.random.seed(10)
# + _cell_guid="c6db52ac-98df-4e0f-bab2-b83565c0fde2" _uuid="ef50f72d80920e9b53c73919994199c0c9a8c955"
# Get and resize train images and masks
X_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
Y_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, 1), dtype=np.bool)
print('Getting and resizing train images and masks ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
path = TRAIN_PATH + id_
img = imread(path + '/images/' + id_ + '.png')[:,:,:IMG_CHANNELS]
img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
X_train[n] = img
mask = np.zeros((IMG_HEIGHT, IMG_WIDTH, 1), dtype=np.bool)
for mask_file in next(os.walk(path + '/masks/'))[2]:
mask_ = imread(path + '/masks/' + mask_file)
mask_ = np.expand_dims(resize(mask_, (IMG_HEIGHT, IMG_WIDTH), mode='constant',
preserve_range=True), axis=-1)
mask = np.maximum(mask, mask_)
Y_train[n] = mask
# Get and resize test images
X_test = np.zeros((len(test_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
sizes_test = []
print('Getting and resizing test images ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(test_ids), total=len(test_ids)):
path = TEST_PATH + id_
img = imread(path + '/images/' + id_ + '.png')[:,:,:IMG_CHANNELS]
sizes_test.append([img.shape[0], img.shape[1]])
img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
X_test[n] = img
print('Done!')
# + [markdown] _cell_guid="5f70481c-7b7b-4f60-8d11-e0e36030697c" _uuid="1593b0eb0503758424ce1ec5ded8d59557d4d1df"
# ### 2. Data Augmentation
# + _cell_guid="0e78cfea-5120-4c18-80a0-fd09964e024d" _uuid="7448169cf1949461f30735a89f96b4c4003dccea"
from keras.preprocessing import image
# Creating the training Image and Mask generator
image_datagen = image.ImageDataGenerator(shear_range=0.5, rotation_range=50, zoom_range=0.2, width_shift_range=0.2, height_shift_range=0.2, fill_mode='reflect')
mask_datagen = image.ImageDataGenerator(shear_range=0.5, rotation_range=50, zoom_range=0.2, width_shift_range=0.2, height_shift_range=0.2, fill_mode='reflect')
# Keep the same seed for image and mask generators so they fit together
image_datagen.fit(X_train[:int(X_train.shape[0]*0.9)], augment=True, seed=seed)
mask_datagen.fit(Y_train[:int(Y_train.shape[0]*0.9)], augment=True, seed=seed)
x=image_datagen.flow(X_train[:int(X_train.shape[0]*0.9)],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
y=mask_datagen.flow(Y_train[:int(Y_train.shape[0]*0.9)],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
# Creating the validation Image and Mask generator
image_datagen_val = image.ImageDataGenerator()
mask_datagen_val = image.ImageDataGenerator()
image_datagen_val.fit(X_train[int(X_train.shape[0]*0.9):], augment=True, seed=seed)
mask_datagen_val.fit(Y_train[int(Y_train.shape[0]*0.9):], augment=True, seed=seed)
x_val=image_datagen_val.flow(X_train[int(X_train.shape[0]*0.9):],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
y_val=mask_datagen_val.flow(Y_train[int(Y_train.shape[0]*0.9):],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
# + _cell_guid="a6a85b26-695e-4975-b22f-80a79209c370" _uuid="22785cbdcdf3e7712ba2423a90869e44bd74e28c"
# Checking if the images fit
from matplotlib import pyplot as plt
# %matplotlib inline
imshow(x.next()[0].astype(np.uint8))
plt.show()
imshow(np.squeeze(y.next()[0].astype(np.uint8)))
plt.show()
imshow(x_val.next()[0].astype(np.uint8))
plt.show()
imshow(np.squeeze(y_val.next()[0].astype(np.uint8)))
plt.show()
# + _cell_guid="54ec711a-1380-426f-8223-b03a62c659a2" _uuid="a23269d50bba3f2521202ff835fd5a4d34c785e9"
#creating a training and validation generator that generate masks and images
train_generator = zip(x, y)
val_generator = zip(x_val, y_val)
# + [markdown] _cell_guid="ae011253-08b8-44db-8be9-c1092e171553" _uuid="17f9af12f2fc93a2e57c3cfdc3c122f97f1fa7e4"
# ### 3. Creating the U-net model
# + _cell_guid="1b9b4831-27b2-4a9d-a371-b31ef3a423e1" _uuid="1f14f1661097ea33049514f442492e0d4d44480c"
# Define IoU metric
def mean_iou(y_true, y_pred):
prec = []
for t in np.arange(0.5, 1.0, 0.05):
y_pred_ = tf.to_int32(y_pred > t)
score, up_opt = tf.metrics.mean_iou(y_true, y_pred_, 2)
K.get_session().run(tf.local_variables_initializer())
with tf.control_dependencies([up_opt]):
score = tf.identity(score)
prec.append(score)
return K.mean(K.stack(prec), axis=0)
# + _cell_guid="0f97a2d9-a9a0-4399-bbca-198cdda087b6" _uuid="ab499cefbbe8dc514c6347fe203d87eb7974adf0"
# Build U-Net model
inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = Lambda(lambda x: x / 255) (inputs)
c1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (s)
c1 = Dropout(0.1) (c1)
c1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)
c2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p1)
c2 = Dropout(0.1) (c2)
c2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)
c3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p2)
c3 = Dropout(0.2) (c3)
c3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)
c4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p3)
c4 = Dropout(0.2) (c4)
c4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
c5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p4)
c5 = Dropout(0.3) (c5)
c5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c5)
u6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u6)
c6 = Dropout(0.2) (c6)
c6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c6)
u7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u7)
c7 = Dropout(0.2) (c7)
c7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c7)
u8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u8)
c8 = Dropout(0.1) (c8)
c8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c8)
u9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u9)
c9 = Dropout(0.1) (c9)
c9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c9)
outputs = Conv2D(1, (1, 1), activation='sigmoid') (c9)
model = Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[mean_iou])
model.summary()
# + [markdown] _cell_guid="5d17d35a-753d-47c5-b1d7-7effa1af04a7" _uuid="a9378262b0194c0aa54df3e2ea5696b447f8ef83"
# ### 4. Training
# + _cell_guid="2cc45263-7607-4d9d-b0e0-88b3135ba60b" _uuid="440055f1d1feb25fe08f942d15257e2454d2022b"
# Fit model
earlystopper = EarlyStopping(patience=3, verbose=1)
checkpointer = ModelCheckpoint('model-dsbowl2018-1.h5', verbose=1, save_best_only=True)
results = model.fit_generator(train_generator, validation_data=val_generator, validation_steps=10, steps_per_epoch=250,
epochs=3, callbacks=[earlystopper, checkpointer])
# + [markdown] _cell_guid="6e12a9f1-279f-4031-b8c5-4f825f84cc13" _uuid="168a4d55c79c92cd17a398cff13876fb0b32cdbf"
# ### 5. Prediction
# + _cell_guid="ef72a295-7187-41d6-9ecf-c5ee201f000d" _uuid="f9b92b3ce2079288fe8a2ed75f3c8679ee93113f"
# Predict on train, val and test
model = load_model('model-dsbowl2018-1.h5', custom_objects={'mean_iou': mean_iou})
preds_train = model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1)
preds_val = model.predict(X_train[int(X_train.shape[0]*0.9):], verbose=1)
preds_test = model.predict(X_test, verbose=1)
# Threshold predictions
preds_train_t = (preds_train > 0.5).astype(np.uint8)
preds_val_t = (preds_val > 0.5).astype(np.uint8)
preds_test_t = (preds_test > 0.5).astype(np.uint8)
# Create list of upsampled test masks
preds_test_upsampled = []
for i in range(len(preds_test)):
preds_test_upsampled.append(resize(np.squeeze(preds_test[i]),
(sizes_test[i][0], sizes_test[i][1]),
mode='constant', preserve_range=True))
# + _cell_guid="fbad21c9-d1ef-4aee-9e16-8b340e38cd69" _uuid="b050929802713d41a75fc97a960ac6534e5cbde1"
# Perform a sanity check on some random training samples
ix = random.randint(0, len(preds_train_t) - 1)
imshow(X_train[ix])
plt.show()
imshow(np.squeeze(Y_train[ix]))
plt.show()
imshow(np.squeeze(preds_train_t[ix]))
plt.show()
# + _cell_guid="0c9fed3a-fa91-4957-833f-c2b8adf64743" _uuid="bf24083f20c4e618eeee2933dc1fd1a36413f8b7"
# Perform a sanity check on some random validation samples
ix = random.randint(0, len(preds_val_t) - 1)
imshow(X_train[int(X_train.shape[0]*0.9):][ix])
plt.show()
imshow(np.squeeze(Y_train[int(Y_train.shape[0]*0.9):][ix]))
plt.show()
imshow(np.squeeze(preds_val_t[ix]))
plt.show()
# + _cell_guid="e2e17c4a-e84e-4552-950d-a49e95393ed9" _uuid="b66a4b8ebd2a804d8d102436b0953183bdb4f30b"
# Run-length encoding stolen from https://www.kaggle.com/rakhlin/fast-run-length-encoding-python
def rle_encoding(x):
dots = np.where(x.T.flatten() == 1)[0]
run_lengths = []
prev = -2
for b in dots:
if (b>prev+1): run_lengths.extend((b + 1, 0))
run_lengths[-1] += 1
prev = b
return run_lengths
def prob_to_rles(x, cutoff=0.5):
lab_img = label(x > cutoff)
for i in range(1, lab_img.max() + 1):
yield rle_encoding(lab_img == i)
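# A quick standalone check of the column-major, 1-indexed run-length encoding
# above (the function body is restated so the snippet runs on its own):

```python
import numpy as np

def rle_check(x):
    # Pixels are numbered 1..N in column-major order (hence x.T and b + 1),
    # matching rle_encoding above.
    dots = np.where(x.T.flatten() == 1)[0]
    run_lengths = []
    prev = -2
    for b in dots:
        if b > prev + 1:
            run_lengths.extend((b + 1, 0))
        run_lengths[-1] += 1
        prev = b
    return run_lengths

mask = np.array([[0, 1],
                 [1, 1]])
# Column-major pixel order is [0, 1, 1, 1]: one run starting at pixel 2, length 3.
print(rle_check(mask))  # [2, 3]
```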
# + _cell_guid="46ff9859-1d81-4a7b-be2a-f0421a67ad79" _uuid="b27241fd5a881fa0b17711ed71b7a99b7a9b4859"
new_test_ids = []
rles = []
for n, id_ in enumerate(test_ids):
rle = list(prob_to_rles(preds_test_upsampled[n]))
rles.extend(rle)
new_test_ids.extend([id_] * len(rle))
# data science bowl 2018/data_science_bowl_2018.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# +
trained_model_dir = "results/"
template = "cifar10/ResNet18OAT-2BN/{}_e30-b100_sgd-lr0.1-m0.9-wd0.0005_cos_disc-ew-[0.0, 0.1, 0.2, 0.3, 0.4, 1.0]-rand-d128/eval/{}.txt"
def get_robust_accuracy(test_adversarial):
    path = trained_model_dir + template.format("original", test_adversarial)
    with open(path) as file:
        acc = file.readlines()[-2]
    robust_acc = float(acc.split()[-1])
    return robust_acc
def get_trained_robust_accuracy(test_adversarial):
    path = trained_model_dir + template.format(test_adversarial, test_adversarial)
    with open(path) as file:
        acc = file.readlines()[-2]
    trained_robust_acc = float(acc.split()[-1])
    return trained_robust_acc
def get_cross_robust_accuracy(train_adversarial, test_adversarial):
    path = trained_model_dir + template.format(train_adversarial, test_adversarial)
    with open(path) as file:
        acc = file.readlines()[-2]
    cross_robust_acc = float(acc.split()[-1])
    return cross_robust_acc
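# All three helpers assume each eval file ends with a summary whose
# second-to-last line has the accuracy as its final whitespace-separated token.
# A minimal sketch of that parsing assumption (the file layout here is hypothetical):

```python
def parse_accuracy(lines):
    # Second-to-last line, last token -- the layout the helpers above rely on.
    return float(lines[-2].split()[-1])

sample = ["epoch 30/30", "robust accuracy 0.8123", "done"]
print(parse_accuracy(sample))  # 0.8123
```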
# +
train = "autoattack"
test = "pgd"
trained_robust_accuracy = get_trained_robust_accuracy(test)
cross_robust_accuracy = get_cross_robust_accuracy(train, test)
print("Train Data: ", train)
print("Test Data: ", test)
print("=================")
print("RobustAccuracy({},{}): {}".format(test, test, trained_robust_accuracy))
print("RobustAccuracy({},{}): {} ".format(train, test, cross_robust_accuracy))
# +
train = "pgd"
test = "autoattack"
trained_robust_accuracy = get_trained_robust_accuracy(train)
cross_robust_accuracy = get_cross_robust_accuracy(train, test)
print("Train Data: ", train)
print("Test Data: ", test)
print("=================")
print("RobustAccuracy({},{}): {}".format(test, test, trained_robust_accuracy))
print("RobustAccuracy({},{}): {} ".format(train, test, cross_robust_accuracy))
# +
def robust_accuracies(attacks) :
accs = []
names = []
for a in attacks :
accs.append(get_robust_accuracy(a))
names.append(a)
return accs, names
def trained_robust_accuracies(attacks) :
accs = []
names = []
for a in attacks :
accs.append(get_trained_robust_accuracy(a))
names.append(a)
return accs, names
# +
# from constant import TOOLBOX_ADV_ATTACK_LIST
TOOLBOX_ADV_ATTACK_LIST = ["autoattack", "autopgd", "bim", "cw", "fgsm", "pgd", "squareattack", "deepfool", "newtonfool", "pixelattack", "spatialtransformation"]
attacks = TOOLBOX_ADV_ATTACK_LIST
# -
robust_accs, names = robust_accuracies(attacks)
trained_robust_accs, names = trained_robust_accuracies(attacks)
robust_accs
trained_robust_accs
names
df = pd.DataFrame(data={"attack": names, "robust_accuracy": robust_accs, "trained_robust_accuracy": trained_robust_accs})
df["improvement"] = df["trained_robust_accuracy"] - df["robust_accuracy"]
df
# $A(M, X_B)$ = robust_accuracy
#
# $A(M_B, X_B)$ = trained_robust_accuracy
#
# $A(M_B, X_C)$ = cross_robust_accuracy
# ## RQ1 - Motivating work
#
# We hypothesize that each adversarial attack has unique characteristics, so the test cases generated by different adversarial attacks do not target the same bugs. To show that accuracy alone is not enough for measuring the performance of adversarial attacks, we need to demonstrate that $bugs_{X_C} \nsubseteq bugs_{X_B}$ and $bugs_{X_B} \nsubseteq bugs_{X_C}$. In other words, there exist at least two adversarial example sets $X_B$ and $X_C$ such that $A(M_C, X_B) < A(M_B, X_B)$ and $A(M_B, X_C) < A(M_C, X_C)$.
def compare(a1, a2):
trained_robust_acc = get_trained_robust_accuracy(a1)
cross_robust_acc = get_cross_robust_accuracy(a2, a1)
# print("Trained Robust Accuracy: ", trained_robust_acc)
# print("Cross Robust Accuracy: ", cross_robust_acc)
return trained_robust_acc - cross_robust_acc
# +
def get_comparison_metrics(attacks) :
metrics = {}
for a1 in attacks :
m = {}
for a2 in attacks :
m[a2] = compare(a1, a2)
metrics[a1] = m
return metrics
def plot_heatmap(metrics, cmap, fpath, vmin, vmax, annot=True):
df = pd.DataFrame(data=metrics)
plt.figure(figsize=(12,9))
fig = sns.heatmap(df, cmap=cmap, vmin=vmin, vmax=vmax, annot=annot, fmt=".3f", linewidth=0.7)
# fig.set(xlabel='Train', ylabel='Test')
fig.figure.savefig(fpath, bbox_inches='tight')
plt.show()
# -
metrics = get_comparison_metrics(attacks)
plot_heatmap(metrics, "coolwarm", "plot/rq1.png", vmin=-0.13, vmax=0.13, annot=True)
# $A(M_B, X_C) < A(M_C, X_C)$ is satisfied when $A(M_C, X_C) - A(M_B, X_C) > 0$. In the heatmap above, each value represents the difference between $A(M_C, X_C)$ and $A(M_B, X_C)$. Several cells are reddish, which shows that this condition does occur.
#
# For example, if we take SpatialTransformation and PixelAttack then we get
#
#
# * $A(M'_{SpatialTransformation}, X'_{PixelAttack}) - A(M'_{PixelAttack}, X'_{PixelAttack})$ = 0.65
# * $A(M'_{PixelAttack}, X'_{SpatialTransformation}) - A(M'_{SpatialTransformation}, X'_{SpatialTransformation})$ = 0.53
#
#
# This shows that SpatialTransformation has test cases that attack different bugs than any test case in PixelAttack, and vice versa. From this heatmap, we get 26 pairs of adversarial attacks that satisfy $A(M'_2, X'_2) - A(M'_1, X'_2) > 0$ and $A(M'_1, X'_1) - A(M'_2, X'_1) > 0$.
# +
# Make Sure
train = "spatialtransformation"
test = "pixelattack"
trained_robust_accuracy = get_trained_robust_accuracy(train)
cross_robust_accuracy = get_cross_robust_accuracy(test, train)
print("Train Data: ", train)
print("Test Data: ", test)
print("=================")
print("Trained Robust Accuracy: ", trained_robust_accuracy)
print("Cross Robust Accuracy: ", cross_robust_accuracy)
# -
# $A(M'_{SpatialTransformation}, X'_{PixelAttack}) - A(M'_{PixelAttack}, X'_{PixelAttack})$ = 0.7551 - 0.1 = 0.65
# +
indexs =[]
attack_types = []
attack_the_same_bugs = "Attack The Same Bug"
attack_the_different_bugs = "Attack The Different Bug"
proof = set()
for a1 in attacks :
for a2 in attacks :
if a1 != a2 :
index = None
if a1 > a2 :
index = a1 + "-" + a2
else :
index = a2 + "-" + a1
indexs.append(index)
if metrics[a1][a2] > 0 and metrics[a2][a1] > 0 :
proof.add(index)
attack_types.append(attack_the_different_bugs)
else :
attack_types.append(attack_the_same_bugs)
# len(proof)
# proof
# +
df_attack_types = pd.DataFrame(data={"index": indexs, "attack_types": attack_types})
df_attack_types = df_attack_types.drop_duplicates().reset_index(drop=True)
df_attack_types.head()
# -
df_attack_types.groupby("attack_types").count()
plt.figure(figsize=(10,5))
fig = sns.countplot(data=df_attack_types, x="attack_types")
fpath = "plot/rq1-histogram.png"
fig.figure.savefig(fpath)
plt.show()
# +
different_bugs_metrics = {}
for a1 in attacks :
sm = {}
for a2 in attacks :
id = a1 + "-" + a2
if a1 + "-" + a2 in proof or a2 + "-" + a1 in proof :
sm[a2] = 1
else :
sm[a2] = 0
different_bugs_metrics[a1] = sm
different_bugs_metrics = pd.DataFrame(data=different_bugs_metrics)
# +
def plot_half_heatmap(data, cmap, path) :
sns.set_theme(style="white")
# Generate a mask for the upper triangle
    mask = np.triu(np.ones_like(data, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(8, 5))
# Draw the heatmap with the mask and correct aspect ratio
f = sns.heatmap(data, mask=mask, cmap=cmap, vmax=1, center=0,
square=True, linewidths=.5, cbar=False)
f.figure.savefig(path, bbox_inches='tight')
def plot_heatmap(data, cmap, path, annot=False) :
sns.set_theme(style="white")
# Draw the heatmap with the mask and correct aspect ratio
if annot :
f, ax = plt.subplots(figsize=(12, 6))
f = sns.heatmap(data, cmap=cmap, vmax=1, center=0, annot=annot, fmt=".3f",
linewidths=.5, cbar_kws={"shrink": .5})
f.figure.savefig(path, bbox_inches='tight')
else :
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(8, 5))
f = sns.heatmap(data, cmap=cmap, vmax=1, center=0,
square=True, linewidths=.5, cbar=False)
f.figure.savefig(path, bbox_inches='tight')
# Generate a custom diverging colormap
cmap = sns.diverging_palette(h_neg=240, h_pos=0,s=75, l=50, n=1, as_cmap=True)
path = "plot/rq1-attack-different-bugs.png"
plot_half_heatmap(different_bugs_metrics, cmap, path)
# -
# On the contrary, AutoAttack, AutoPGD, BIM, CW, FGSM, and PGD all have 0 values: the test cases generated by those adversarial attacks do not attack different bugs. We know that $bugs_{X_B} \subset bugs_{X_C}$ is satisfied if $A(M_C, X_B) > A(M_B, X_B)$ and $A(M_B, X_C) < A(M_C, X_C)$.
# +
subset_bugs_metrics = {}
for a1 in attacks :
sm = {}
for a2 in attacks :
id = a1 + "-" + a2
if metrics[a1][a2] < 0 and metrics[a2][a1] > 0 :
sm[a2] = -1
else :
sm[a2] = 0
subset_bugs_metrics[a1] = sm
subset_bugs_metrics = pd.DataFrame(data=subset_bugs_metrics)
# Generate a custom diverging colormap
cmap = sns.diverging_palette(h_neg=240, h_pos=0,s=75, l=50, n=1, as_cmap=True)
path = "plot/rq1-subset-bugs.png"
plot_heatmap(subset_bugs_metrics, cmap, path)
# -
# In the figure above, the horizontal axis is the first term and the vertical axis is the second term. For example, from the matrix above, $bugs_{X_{AutoPGD}} \subset bugs_{X_{AutoAttack}}$, $bugs_{X_{FGSM}} \subset bugs_{X_{PGD}}$, $bugs_{X_{DeepFool}} \subset bugs_{X_{NewtonFool}}$, etc.
# ## RQ2 - How to use BSEM? How BSEM compared to the existing metric?
#
# Given a model $M$, original data $X$, a list of $n$ adversarial attacks $AA = \{ aa_1, aa_2, aa_3, ... , aa_n \}$, and an adversarial defense technique $AD$: first, we generate adversarial examples using each adversarial attack in $AA$. For each pair of adversarial example sets generated by different adversarial attacks, we measure BSEM. Then we build a leaderboard that mimics the existing evaluation relative to the adversarial defense $AD$.
# ### One Way Relation
# +
def one_pov_relation(a1, a2):
robust_acc = get_robust_accuracy(a2)
trained_robust_acc = get_trained_robust_accuracy(a2)
cross_robust_acc = get_cross_robust_accuracy(a1, a2)
# print("RobustAccuracy(original, {}): {:.4f}".format(a2, robust_acc))
# print("RobustAccuracy({}, {}): {:.4f}".format(a2, a2, trained_robust_acc))
# print("RobustAccuracy({}, {}): {:.4f}".format(a1, a2, cross_robust_acc))
return (min(cross_robust_acc, trained_robust_acc) - robust_acc) / (trained_robust_acc - robust_acc)
def measure_relation(a1, a2) :
# print((one_pov_relation(a1, a2) + one_pov_relation(a2, a1))/2)
# return max(0, (one_pov_relation(a1, a2) + one_pov_relation(a2, a1))/2)
return max(0, one_pov_relation(a1, a2))
# -
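# A worked example of the one-way relation formula with hypothetical
# accuracies (these numbers are made up, not read from the result files):

```python
robust_acc = 0.10          # A(M, X_2): undefended model on attack 2's examples
trained_robust_acc = 0.80  # A(M_2, X_2): model hardened against attack 2
cross_robust_acc = 0.45    # A(M_1, X_2): model hardened against attack 1

relation = (min(cross_robust_acc, trained_robust_acc) - robust_acc) / (trained_robust_acc - robust_acc)
print(round(relation, 3))  # 0.5 -- hardening against attack 1 recovers half of the gain
```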
# measure_relation("spatialtransformation", "ffgsm")
measure_relation("pgd", "autoattack")
measure_relation("squareattack", "autoattack")
measure_relation("autoattack", "squareattack")
# +
#TODO:
# check the formula -> what does it mean?
# fix the number of digits after the decimal point
# +
# owr: one way relation
owr = {}
for a1 in attacks :
m = {}
for a2 in attacks :
m[a2] = measure_relation(a1, a2)
owr[a1] = m
owr = pd.DataFrame(data=owr)
# Generate a custom diverging colormap
cmap = sns.diverging_palette(h_neg=300, h_pos=12.2,s=75, l=37, n=1, as_cmap=True)
path = "plot/rq2-bsem.png"
plot_heatmap(owr, cmap, path, annot=True)
# -
leaderboard = df[["attack","robust_accuracy"]]
leaderboard.sort_values(["robust_accuracy"]).reset_index(drop=True)
# From the leaderboard, AutoAttack is the best one. But it seems that PixelAttack and SpatialTransformation attack different bugs than the other attacks: none of the attacks looks similar to PixelAttack or SpatialTransformation.
# oat-evaluation-metric.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="EfFzkeyx7wrm"
# # Simple word correlation for SERPs
#
# By <NAME>
# + [markdown] id="JGHipLx226CY"
# Set project name for Google Drive directory
# + id="MMjC06gc29Ai"
PROJECT_NAME = 'coffee machine'
# + [markdown] id="-jcyjBFfPhpA"
# # Configurations for D4S
# + [markdown] id="HzKMOVP0C5rZ"
# Set your D4S email in this variable.
# + id="vHtZzqSLNUOl"
D4S_API_EMAIL = '<EMAIL>'
LANGUAGE = 'English'
LOCATION = 'United States'
# + [markdown] id="K7yusaXQSjXP"
# Run this cell and enter your D4S API password:
# + id="YLId1rvTScyB" colab={"base_uri": "https://localhost:8080/"} outputId="c4eac9f4-ee82-46e0-e7a4-4b678b7fc3c9"
from getpass import getpass
D4S_API_PASSWORD = getpass()
# + [markdown] id="kfS7j9IG0nNV"
# # Google Drive mount
# + colab={"base_uri": "https://localhost:8080/"} id="gK57EqVA0oxO" outputId="98b6df44-01bc-47a2-d323-82b8be662632"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="SL8cL0jOPD1h"
# # Installs
# + id="qE2cURgHJNdu" colab={"base_uri": "https://localhost:8080/"} outputId="b205d3d7-db23-48a7-dc45-7390aab04a98"
# !apt -qq install chromium-chromedriver
# + id="NA_FOwHmPGfT" colab={"base_uri": "https://localhost:8080/"} outputId="77f5617b-280d-43d4-d397-0f69f8cf0708"
# !pip install pyppeteer --quiet
# + id="wgFlEm3dQBm8"
# !pip install nest-asyncio --quiet
# + [markdown] id="ijeQGo_cB8O4"
# # Imports
# + id="9sUgEys2QFAo"
import re
import os
import json
import nltk
import asyncio
import hashlib
import string
import scipy
import random
import multiprocessing
from collections import Counter
from pyppeteer import launch
from bs4 import BeautifulSoup
import nest_asyncio
from collections import defaultdict
from http.client import HTTPSConnection
from base64 import b64encode
from json import loads, dumps, dump
from datetime import datetime
from pathlib import Path
from gensim.models.word2vec import Word2Vec
from gensim.models.phrases import Phraser, Phrases
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import spacy
nest_asyncio.apply()
# + colab={"base_uri": "https://localhost:8080/"} id="p_jfocRaM3yB" outputId="6a565ee2-e16d-4979-9254-116913aeee50"
nltk.download('punkt', quiet=True)
# + colab={"base_uri": "https://localhost:8080/"} id="JJ7ubqnHM7r2" outputId="ac18fa7b-8f83-4380-cb8b-44e34b23ee8e"
cores = multiprocessing.cpu_count()
cores
# + [markdown] id="N9Bd-FCu4jzv"
# # Create project Dir
# + id="l-5gtvas4dhZ"
BASE_PATH = f'/content/drive/MyDrive/seosecretlab/{PROJECT_NAME}'
if not os.path.exists(BASE_PATH):
    os.mkdir(BASE_PATH)
# + [markdown] id="oRz9sAGxQw5A"
# # Keywords
# + [markdown] id="gFAMTS54TNQp"
# Enter one keyword per line.
# + id="C-3NM2o4QyNW"
base_keywords = """
coffee machine recommendations
coffee machine reviews
best coffee machines
"""
# + [markdown] id="MPkvG0nwTRmJ"
# We'll clean up our keywords now:
# + id="AiUjyzFYTTaD" colab={"base_uri": "https://localhost:8080/"} outputId="9fd58824-4008-4d01-c0cb-ff579a4383fb"
keywords = [kw for kw in base_keywords.split('\n') if kw.strip() != ""]
keywords
# + [markdown] id="24aPOi7QDYe0"
# # Boilerplate for D4S
# + id="IopfXIwkDaC6"
today = datetime.today().strftime('%Y-%m-%d')
today
class RestClient:
domain = "api.dataforseo.com"
def __init__(self, username, password):
self.username = username
self.password = password
def request(self, path, method, data=None):
connection = HTTPSConnection(self.domain)
try:
base64_bytes = b64encode(("%s:%s" % (self.username, self.password)).encode("ascii")).decode("ascii")
headers = {'Authorization' : 'Basic %s' % base64_bytes, 'Content-Encoding' : 'gzip'}
connection.request(method, path, headers=headers, body=data)
response = connection.getresponse()
return loads(response.read().decode())
finally:
connection.close()
def get(self, path):
return self.request(path, 'GET')
def post(self, path, data):
if isinstance(data, str):
data_str = data
else:
data_str = dumps(data)
return self.request(path, 'POST', data_str)
async def get_serp(kw):
client = RestClient(D4S_API_EMAIL, D4S_API_PASSWORD)
post_data = dict()
post_data[len(post_data)] = dict(
language_name=LANGUAGE,
location_name=LOCATION,
keyword=kw
)
try:
response = client.post("/v3/serp/google/organic/live/regular", post_data)
except:
response = None
print(f'Error getting keyword {kw} SERP')
data = None
if response:
if response["status_code"] == 20000:
data = response['tasks'][0]["result"][0]["items"]
for item in data:
item["keyword"] = kw
item["date"] = today
else:
print(f"error. Code: {response['status_code']} Message: {response['status_message']}")
return data
async def get_multiple_serp(keywords):
    tasks = []
    results = []
    for kw in keywords:
        tasks.append(asyncio.create_task(get_serp(kw)))
        # Await in batches of 10 concurrent requests
        # (the original `if limit % 10:` gathered after almost every task).
        if len(tasks) == 10:
            results.extend(await asyncio.gather(*tasks))
            tasks = []
    if len(tasks) > 0:
        results.extend(await asyncio.gather(*tasks))
    return results
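# The batching pattern above can be isolated into a small self-contained
# sketch. Here `fetch` is a stand-in for `get_serp` and the batch size is
# arbitrary; in a plain Python script `asyncio.run` suffices, whereas in Colab
# you would go through the running loop as this notebook does:

```python
import asyncio

async def fetch(i):
    await asyncio.sleep(0)  # stand-in for a real network call
    return i * i

async def gather_in_batches(items, batch_size=10):
    # Await at most batch_size coroutines at a time.
    results = []
    for start in range(0, len(items), batch_size):
        batch = [fetch(i) for i in items[start:start + batch_size]]
        results.extend(await asyncio.gather(*batch))
    return results

print(asyncio.run(gather_in_batches(list(range(25))))[:5])  # [0, 1, 4, 9, 16]
```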
# + [markdown] id="7-rLZtxNQyzG"
# # Grab SERPs using D4S
# + id="0qLn2j7MQ4-2"
loop = asyncio.get_running_loop()
results = loop.run_until_complete(get_multiple_serp(keywords))
# + id="NMkCtsbVGwVx" colab={"base_uri": "https://localhost:8080/"} outputId="fb8b3e55-1a52-4e39-b076-bd54f4d9eefb"
len(results)
# + id="_wM8kFv3HY23" colab={"base_uri": "https://localhost:8080/"} outputId="b91e0888-8787-4da4-a212-1fbb9218b9c9"
len(results[0])
# + id="INq59Gv2RCl3" colab={"base_uri": "https://localhost:8080/"} outputId="68a5917d-1c71-4fcb-8999-08cf2fa6b4a0"
{
'keyword': results[0][0]['keyword'],
'position': results[0][0]['rank_absolute'],
'url': results[0][0]['url'],
'title': results[0][0]['title'],
'description': results[0][0]['description'],
}
# + [markdown] id="OLn7IElRIp9O"
# # Scraping boilerplate
# + id="fyHPvvSDIru6"
def get_url_pathname(url):
url_hash = hashlib.md5(url.encode('utf-8'))
return f'{BASE_PATH}/{url_hash.hexdigest()}.json'
async def get_html(url):
result = None
pathname = get_url_pathname(url)
if os.path.exists(pathname):
with open(pathname, "r") as f:
result = f.read()
print(f'Loaded from cache file for {url}')
return json.loads(result)
print(f'Getting page for {url}')
browser = await launch({'executablePath':"/usr/lib/chromium-browser/chromium-browser", 'args': ["--no-sandbox"], 'headless': True, 'timeout': 3000})
page = await browser.newPage()
html = ''
try:
await page.goto(url)
await page.waitFor(2000)
html = await page.content()
except:
html = ''
finally:
await page.close()
await browser.close()
result = {
'html': html,
'url': url,
}
with open(pathname, "w") as f:
f.write(json.dumps(result))
print(f'Finished with {url}')
return result
async def scrap(urls):
    tasks = []
    results = []
    for url in urls:
        tasks.append(asyncio.create_task(get_html(url)))
        # Await in batches of 4 concurrent browser sessions
        # (the original `if limit % 4:` gathered after almost every task).
        if len(tasks) == 4:
            results.extend(await asyncio.gather(*tasks))
            tasks = []
    if len(tasks) > 0:
        results.extend(await asyncio.gather(*tasks))
    return results
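# The cache-file naming in get_url_pathname can be sanity-checked on its own:
# the MD5 hash makes any URL safe to use as a filename, and the mapping is
# deterministic. `base_path` below is a placeholder, not the notebook's path:

```python
import hashlib

def url_cache_path(url, base_path='/tmp/cache'):
    # Hash the URL so slashes, query strings, etc. never leak into the filename.
    url_hash = hashlib.md5(url.encode('utf-8')).hexdigest()
    return f'{base_path}/{url_hash}.json'

p = url_cache_path('https://example.com/reviews?page=2')
print(p.endswith('.json'))  # True
```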
# + [markdown] id="k47QzaTDQ-X1"
# # Grab HTML for our URLs
#
# + id="3hE2KA0-RC4j" colab={"base_uri": "https://localhost:8080/"} outputId="8053189d-bc9f-4754-aa1e-1701c441f8f6"
urls = []
for serp in results:
for position in serp:
urls.append(position['url'])
urls = list(set(urls))
len(urls)
# + id="MbbxMbJfJzvs"
new_loop = asyncio.get_event_loop()
_htmls = new_loop.run_until_complete(asyncio.gather(scrap(urls)))
htmls = _htmls[0]
# + id="DHjde77xKCKG" colab={"base_uri": "https://localhost:8080/"} outputId="57387fa2-6aca-4c7f-e41f-097dd0e4e004"
len(htmls)
# + id="ZIWbtywGL6vR" colab={"base_uri": "https://localhost:8080/"} outputId="1dd4c86e-5888-4bc4-aafb-60a7fbd5364d"
for html in htmls[0:5]:
print(html['html'][:150].replace('\n', ''))
# + [markdown] id="TNBS7EGnMpIS"
# # Boilerplate clean content
# + id="QPEhDs3VMq6e"
def clean_text(text):
text_lowercase = text.lower()
text = re.sub(r'[^a-zA-Z0-9-.,;\'"\n ]+', '', text_lowercase)
text = re.sub(r'[ ]+', ' ', text)
return re.sub(r'[\n]+', '\n', text)
def clean_word(word):
word_lowercase = word.lower()
return re.sub(r'[^a-zA-Z0-9-]+', '', word_lowercase)
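# For example (clean_text is restated so the check is standalone): lowercase,
# drop everything outside the small allowed charset, then collapse runs of
# spaces and newlines.

```python
import re

def clean_text(text):
    # Lowercase, strip disallowed characters, collapse whitespace runs.
    text = text.lower()
    text = re.sub(r'[^a-zA-Z0-9-.,;\'"\n ]+', '', text)
    text = re.sub(r'[ ]+', ' ', text)
    return re.sub(r'[\n]+', '\n', text)

print(repr(clean_text('Best  Coffee & Machines!\n\n2023')))  # 'best coffee machines\n2023'
```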
# + [markdown] id="1ztr20JzRIYi"
# # Extract content from our data
# + id="xkikAPBMRKam"
ignored_tags = [
"nav",
"header",
"footer",
"iframe",
"script",
"style",
"code",
"pre",
"form",
"select",
"input",
"textarea",
"video",
"object",
"svg",
"title",
"button",
"sup",
"noscript",
"aside",
"img",
"var",
"link",
]
processed_serps = []
for html in htmls:
text = ''
if html['html']:
soup = BeautifulSoup(html['html'], "html.parser")
for tag in ignored_tags:
x = soup.select(tag)
for xtag in x:
xtag.replace_with(' ')
xtag.extract()
text = soup.body.get_text(' ')
text = clean_text(text)
processed_serps.append({
'url': html['url'],
'html': html['html'],
'text': text.strip()
})
# + id="ci7Qw38sdBME" colab={"base_uri": "https://localhost:8080/"} outputId="5107694e-62e3-4e4a-8822-0eb79126d5a7"
len(processed_serps)
# + id="9yF3aL0JeUBk" colab={"base_uri": "https://localhost:8080/"} outputId="3d195d5b-278a-473e-88fa-aa119257b3f2"
for doc in processed_serps[0:5]:
print(doc['text'][:100])
# + [markdown] id="-YS5OjCSQd0b"
# # Build corpus
# + colab={"base_uri": "https://localhost:8080/"} id="aNzJmMcDQfTh" outputId="61f49efa-7e09-4ad5-c0f1-a4c0a9f1dff3"
corpus = ""
for doc in processed_serps:
corpus += doc['text'] + "\n\n"
len(corpus)
# + colab={"base_uri": "https://localhost:8080/", "height": 86} id="r9U5TQvgQwNP" outputId="cec48adf-1ad5-45bf-dc92-90867a661ec4"
corpus[:5000]
# + [markdown] id="I6Bys8BvRMoP"
# # Find top terms using Word2Vec
# + id="tO0-wcicRW5E"
sentences_list = nltk.sent_tokenize(corpus)
# + id="ZznYoQiEUTXs"
sentences = []
PUNCT = list(string.punctuation)
STOP_WORDS = set(["its", "from", "also", "not", "all", "am", "an", "and", "another", "any", "are", "as", "at", "be", "been", "being", "but", "by", "came", "can", "come", "did", "do", "for", "get", "got", "has", "had", "he", "have", "her", "here", "him", "himself", "his", "how", "if", "in", "into", "is", "it", "like", "me", "my", "of", "on", "or", "other", "our", "out", "over", "see", "still", "such", "take", "than", "that", "the", "their", "them", "then", "there", "these", "they", "this", "those", "through", "to", "too", "up", "was", "way", "we", "well", "while", "with", "would", "you", "your", "a", "i", "will", "com", "may", "every", "using", "just", "need", "want", "years", "great", "good", "privacy", "next", "know", "found", "add", "even", "use", "one", "something", "choice", "some", "more", "away", "really", "put", "instead", "start"])
for sent in sentences_list:
clean_words = []
words = nltk.word_tokenize(sent)
for word in words:
w = clean_word(word)
if w and len(w) > 1 and not w.isdigit() and w not in PUNCT and w not in STOP_WORDS:
clean_words.append(w)
if len(clean_words) > 2:
sentences.append(clean_words)
# + colab={"base_uri": "https://localhost:8080/"} id="xEzUGRYdUtCk" outputId="2f597a57-76b6-4c8e-e30b-b4fc4b4220ae"
len(sentences)
# + colab={"base_uri": "https://localhost:8080/"} id="pwZIWEV9UuFV" outputId="d3ef3e05-02c5-466d-c057-d5ae3b37a449"
[" ".join(sent) for sent in sentences[0:10]]
# + colab={"base_uri": "https://localhost:8080/"} id="u9RBAKBzVWvg" outputId="e98b5fa0-9f93-465d-b36c-f71fc1487d2c"
MIN_WORD_COUNT = 5
# note: 'common_terms' is the gensim 3.x name; gensim 4 renamed it to 'connector_words'
bigram = Phrases(sentences, min_count=MIN_WORD_COUNT, threshold=MIN_WORD_COUNT, common_terms=STOP_WORDS)
bigram_model = Phraser(bigram)
trigram = Phrases(bigram[sentences], min_count=MIN_WORD_COUNT, threshold=10, common_terms=STOP_WORDS)
trigram_model = Phraser(trigram)
phraser = trigram_model[bigram_model[sentences]]
# + colab={"base_uri": "https://localhost:8080/"} id="b4EaEoBwV7yM" outputId="bbe5f717-4109-4d24-db91-d5f57a7d79f7"
trigram_model['skip content market new coffee machine probably run across variety breville models searches'.split()]
# + id="fuaqF9MHWEMV"
def most_frequent_words(phraser, sents, num, min_word_len=1, max_word_len=1):
if max_word_len < min_word_len:
max_word_len = min_word_len
word_freq = defaultdict(int)
for sent in phraser[sents]:
for i in sent:
_len = len(i.split("_"))
if i not in STOP_WORDS and _len >= min_word_len and _len <= max_word_len:
word_freq[i] += 1
words = []
for k in sorted(word_freq, key=word_freq.get, reverse=True)[:num]:
words.append(k)
return words
def less_frequent_words(phraser, sents, num, min_word_len=1, max_word_len=1):
if max_word_len < min_word_len:
max_word_len = min_word_len
word_freq = defaultdict(int)
for sent in phraser[sents]:
for i in sent:
_len = len(i.split("_"))
if i not in STOP_WORDS and _len >= min_word_len and _len <= max_word_len:
word_freq[i] += 1
words = []
for k in sorted(word_freq, key=word_freq.get)[:num]:
words.append(k)
return words
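# The same stop-word-filtered frequency counting in miniature, using
# collections.Counter on toy sentences (the data below is made up):

```python
from collections import Counter

STOP = {"the", "a", "and"}
toy_sentences = [["best", "coffee", "machine"],
                 ["coffee", "machine", "reviews"],
                 ["the", "best", "espresso", "machine"]]
freq = Counter(w for sent in toy_sentences for w in sent if w not in STOP)
# most_common sorts by count; ties keep first-seen order.
print([w for w, _ in freq.most_common(3)])  # ['machine', 'best', 'coffee']
```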
# + colab={"base_uri": "https://localhost:8080/"} id="fki-LFoDWRW6" outputId="5659c211-8825-4695-ba4f-944c68c189eb"
most_frequent_words(trigram_model, sentences, 10, 1)
# + colab={"base_uri": "https://localhost:8080/"} id="j_s-qn3PWF9B" outputId="7c722cf9-f145-492f-bbcc-a48ddbd0bd3b"
most_frequent_words(trigram_model, sentences, 10, 2)
# + colab={"base_uri": "https://localhost:8080/"} id="gR5UxuXThLRr" outputId="2c3d681c-3a3a-474f-94cb-6a13e2c181de"
most_frequent_words(trigram_model, sentences, 10, 1, 3)
# + colab={"base_uri": "https://localhost:8080/"} id="DqWMeUI7BsZs" outputId="58ec2911-d7a0-4c8b-b766-bd3b3865b977"
less_frequent_words(trigram_model, sentences, 10, 2, 3)
# + [markdown] id="IkRXZbUZWkVl"
# # Train word2vec
# + id="o_s2RjD2WlaZ"
workers = max(1, cores - 1)
w2v_model = Word2Vec(
    size=300,  # gensim 3.x name; renamed to 'vector_size' in gensim 4
min_count=10,
workers=workers,
)
# + id="pX_z4wdWWnVX"
w2v_model.build_vocab(phraser)
# + colab={"base_uri": "https://localhost:8080/"} id="kQm_UCvFWouN" outputId="e0e7c850-f506-4085-f5e8-d58c1d5f8f57"
len(w2v_model.wv.vocab)
# + colab={"base_uri": "https://localhost:8080/"} id="jNc5kU1IWqGT" outputId="7b702c28-edf5-4030-ce90-eb6db80d3401"
W2V_EPOCHS = 100
w2v_model.train(sentences, total_examples=w2v_model.corpus_count, epochs=W2V_EPOCHS)
# + colab={"base_uri": "https://localhost:8080/"} id="8bsS8IyrWtyq" outputId="29433fa9-a075-4bb0-9434-6b4db18259b2"
w2v_model.wv.most_similar('cold', topn=25)
# + [markdown] id="1_UoDit_Xd3Q"
# # Plot
# + id="1ugxSpafXfuh"
vocab = w2v_model.wv.vocab
X = w2v_model.wv[vocab]
# + id="I62y-xxrX0yU"
tsne = TSNE(n_components=2)
X_tsne = tsne.fit_transform(X)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="MlHgr3Y9YqOZ" outputId="7d454a52-15d1-49b0-e73c-07e9d2a81cd2"
df = pd.DataFrame(X_tsne, index=vocab, columns=['x', 'y'])
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="dXkLKDc2YD77" outputId="61e00d89-b7b5-4ce5-ec23-52ab1b2f462e"
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(df['x'], df['y'])
for word, pos in df.iterrows():
ax.annotate(word, pos)
plt.show()
# + id="Rm69HvQAaDOH"
# https://stackoverflow.com/questions/56076714/gensim-plot-list-of-words-from-a-word2vec-model
def display_wordlist(model, wordlist):
    vectors = [model.wv[word] for word in wordlist if word in model.wv.vocab]
    word_labels = [word for word in wordlist if word in model.wv.vocab]
word_vec_zip = zip(word_labels, vectors)
# Convert to a dict and then to a DataFrame
word_vec_dict = dict(word_vec_zip)
df = pd.DataFrame.from_dict(word_vec_dict, orient='index')
# Use tsne to reduce to 2 dimensions
tsne = TSNE(perplexity=65,n_components=2, random_state=0)
np.set_printoptions(suppress=True)
Y = tsne.fit_transform(df)
x_coords = Y[:, 0]
y_coords = Y[:, 1]
# display plot
plt.figure(figsize=(16, 8))
plt.plot(x_coords, y_coords, 'ro')
for label, x, y in zip(df.index, x_coords, y_coords):
plt.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points')
    plt.xlim(x_coords.min()-0.00005, x_coords.max()+0.00005)
    plt.ylim(y_coords.min()-0.00005, y_coords.max()+0.00005)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 527} id="8WXih8_jaES3" outputId="19c8d084-d00d-4ccb-f9ee-30fd276aea05"
words = [word[0] for word in w2v_model.wv.most_similar('cold', topn=15)]
display_wordlist(w2v_model, words)
# + [markdown] id="GWvWxgylcgFj"
# # Save model and load with spacy
# + id="4lJZQvvvciMh"
model_pathname = f'{BASE_PATH}/spacy.word2vec.txt'
model_pathname_gzip = f'{BASE_PATH}/spacy.word2vec.txt.gz'
model_pathname_spacy = f'{BASE_PATH}/spacy.word2vec.model'
w2v_model.wv.save_word2vec_format(model_pathname)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="3KwuP7XedZAc" outputId="365d8e18-0f12-4c23-8742-09da7439fc62"
model_pathname
# + id="gPeSAeXbcw2q"
# !gzip "$model_pathname"
# + colab={"base_uri": "https://localhost:8080/"} id="bx9AwiDTdAWm" outputId="409b2b97-3e6b-4e83-9b5e-9a2e4ce2a68c"
# !python3 -m spacy init-model en "$model_pathname_spacy" --vectors-loc "$model_pathname_gzip"
# + [markdown] id="e0wfTwcbdheR"
# # Plot with Spacy
# + id="LrrQYIokdjX2"
nlp = spacy.load(model_pathname_spacy)
# + id="FSANVhL5dvGo"
spacy_word1 = nlp.vocab['coffee']
spacy_word2 = nlp.vocab['brew']
spacy_word3 = nlp.vocab['cheese']
spacy_word4 = nlp.vocab['cold']
spacy_word5 = nlp.vocab['cold-brew']
# + colab={"base_uri": "https://localhost:8080/"} id="cdeAflR_eJtm" outputId="89206c7b-5ac6-4d08-f29d-2c862f663a7c"
spacy_word1.similarity(spacy_word1)
# + colab={"base_uri": "https://localhost:8080/"} id="jgaC1oG9eD_o" outputId="ee5472bb-77c0-4de6-fff9-0ed56454d153"
spacy_word1.similarity(spacy_word2)
# + colab={"base_uri": "https://localhost:8080/"} id="TaNYMqbaeGZW" outputId="c22330e5-aaa9-407d-82ae-74a12bb703ff"
spacy_word1.similarity(spacy_word3)
# + id="8abxxI4AeT6Z"
def most_similar_spacy(word, topn=10):
allwords = [w for w in nlp.vocab if w.has_vector and w.is_lower and w.lower_ != word.lower_]
by_similarity = sorted(allwords, key=lambda w: word.similarity(w), reverse=True)
return by_similarity[:topn]
# + colab={"base_uri": "https://localhost:8080/"} id="-SoagsSreWDt" outputId="4766d5d4-6176-407b-89fe-02ec22dd874f"
[w.text for w in most_similar_spacy(spacy_word1)]
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="D2mMUaFweuur" outputId="9c90527b-77e2-4325-ae9e-60f518555740"
tsne_model = TSNE(n_components=2)
data = np.array([spacy_word1.vector, spacy_word2.vector, spacy_word3.vector, spacy_word4.vector, spacy_word5.vector])
data_2d = tsne_model.fit_transform(data)
labels = ['coffee', 'brew', 'cheese', 'cold', 'cold-brew']
plt.scatter(data_2d[:, 0], data_2d[:, 1], s=100)
for i, txt in enumerate(labels):
plt.annotate(txt, (data_2d[i,0], data_2d[i,1]), xytext=(2, 3), textcoords='offset points')
plt.show()
# + [markdown] id="tMsSMy87gozw"
# # Build a list of popular words
# + colab={"base_uri": "https://localhost:8080/"} id="-Q4oA9_Egspw" outputId="202f91c1-4326-4a2c-a9a6-efe66aa17282"
popular = most_frequent_words(trigram_model, sentences, 25, 1, 3)
popular[:10]
# + colab={"base_uri": "https://localhost:8080/"} id="AYZvkbEE8st0" outputId="d2e684bf-859d-462e-8a80-f57eeb0dbca9"
unpopular = less_frequent_words(trigram_model, sentences, 25, 1, 3)
unpopular[:10]
# + [markdown] id="tZnZoR6DRXJA"
# # Calculate word counts (first keyword only)
# + id="D005VuVflPMG"
def count_in_content(word, content):
return sum(1 for _ in re.finditer(r'\b%s\b' % re.escape(word.replace('_', ' ')), content))
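Since `count_in_content` drives every count and correlation below, here is a quick standalone check of its whole-word matching: phrase tokens use underscores (e.g. `cold_brew`), which are mapped back to spaces, and the match is case-sensitive. The sample strings are made up for illustration.

```python
import re

def count_in_content(word, content):
    # underscores join phrase tokens (e.g. "cold_brew"); map them back to
    # spaces and count whole-word, case-sensitive matches
    return sum(1 for _ in re.finditer(r'\b%s\b' % re.escape(word.replace('_', ' ')), content))

print(count_in_content('cold_brew', 'Cold brew? I love cold brew.'))  # 1 (the capitalized match is skipped)
```

Note that `\b` prevents partial-token hits: `coffee` does not match inside `coffees`.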
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="kx93lNZDsINR" outputId="e471836b-6858-440e-e3b2-4401a1a2dc15"
current = results[0]
current_keyword = current[0]['keyword']
current_keyword
# + id="Gu3muKO_-N77"
current_keyword_urls = []
for serp in current:
if serp['type'] == 'organic':
current_keyword_urls.append(serp['url'])
# + id="h-KyQE_KtpGL"
counts_by_url = dict()
for serp in processed_serps:
counts_by_url[serp['url']] = dict()
for word in all_words:
counts_by_url[serp['url']][word] = count_in_content(word, serp['text'])
# + colab={"base_uri": "https://localhost:8080/"} id="pODNKg5suMQt" outputId="4011ffff-88f8-41df-bc41-52896050f5aa"
counts_by_url[processed_serps[0]['url']][all_words[0]]
# + [markdown] id="4InyDknq0e_4"
# # Correlation per popular word (first keyword only)
# + id="8ac9tvImxOkE"
def show_plot(w, kw, i, axs, xlim=None, max_pos=100):
corr_range = [x+1 for x in list(range(0, len(current_keyword_urls)))]
corr_counts = []
for url in current_keyword_urls:
corr_counts.append(counts_by_url[url][w])
x = pd.Series(corr_range)
y = pd.Series(corr_counts)
slope, intercept, r, p, stderr = scipy.stats.linregress(x, y)
axs[i].plot(x, y, linewidth=0, marker='s', label=w)
axs[i].plot(x, intercept + slope * x)
axs[i].set_xlabel('x')
axs[i].set_ylabel('y')
axs[i].set_title(f'{w} ({max_pos})')
if xlim:
axs[i].set_xlim(xlim)
axs[i].legend(facecolor='white')
# + id="GIVENS12zhr4"
def plot_words_corr(words, max_pos):
fig, axs = plt.subplots(len(words), figsize=(4, 10))
fig.tight_layout()
for i, w in enumerate(words):
show_plot(w, current_keyword, i, axs, [0, max_pos], max_pos)
for ax in axs.flat:
ax.set(xlabel='position', ylabel='count')
# Hide x labels and tick labels for top plots and y ticks for right plots.
for ax in axs.flat:
ax.label_outer()
plt.show()
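`show_plot` fits a line of word count against SERP position with `scipy.stats.linregress`; a minimal standalone sketch of that fit, using made-up counts for one word across positions 1–10:

```python
import numpy as np
import scipy.stats

# hypothetical counts of one word across SERP positions 1-10
positions = np.arange(1, 11)
counts = np.array([12, 10, 9, 9, 7, 6, 6, 4, 3, 2])

# slope < 0 and r close to -1 would indicate the word appears
# more often on higher-ranking pages
slope, intercept, r, p, stderr = scipy.stats.linregress(positions, counts)
print(slope < 0, r < -0.9)  # True True
```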
# + colab={"base_uri": "https://localhost:8080/"} id="f5yC7Cao51wA" outputId="65817b60-5b17-4439-bf7b-8fa9725af668"
_words = popular[:6] # most popular
_words
# + colab={"base_uri": "https://localhost:8080/"} id="YlaNYCiY_wPc" outputId="0d7d5d20-16ce-4307-d897-7a5162c53355"
words = []
min_count = 50
for _word in _words:
word_counts = []
for url in current_keyword_urls:
word_counts.append(counts_by_url[url][_word])
if sum(word_counts) >= min_count and _word in vocab and _word not in STOP_WORDS:
words.append(_word)
words
# + colab={"base_uri": "https://localhost:8080/", "height": 759} id="NWi6hkIA5pHM" outputId="b0bd5097-8326-4217-e88a-5b7a7f677219"
plot_words_corr(words, 100)
# + colab={"base_uri": "https://localhost:8080/", "height": 759} id="iHXcDfpy5y2U" outputId="2ae365c1-1344-4b90-cb48-440e5144bcee"
plot_words_corr(words, 50)
# + colab={"base_uri": "https://localhost:8080/", "height": 759} id="UnE9DATl6GSG" outputId="d010eb55-87a1-48f0-b6b0-d15597282edb"
plot_words_corr(words, 10)
| seo/notebooks/SERP_Experiment_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Case 1: Playing with weather data using ulmo
# + [markdown] slideshow={"slide_type": "slide"}
# ## 1-1 Plotting Tokyo's daily maximum temperature
# + [markdown] slideshow={"slide_type": "-"}
# - ### Load ulmo and the plotting libraries
# -
import ulmo
import pandas
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] slideshow={"slide_type": "slide"}
# - ### Search NOAA's weather data for daily observations in Japan
# - ### Find the Tokyo station among them
# -
st = ulmo.ncdc.ghcn_daily.get_stations(country='JA', as_dataframe=True)
st[st.name.str.contains('TOKYO')]
# + [markdown] slideshow={"slide_type": "slide"}
# - ### Tokyo's weather data exists, so load it into a pandas DataFrame
# -
data = ulmo.ncdc.ghcn_daily.get_data('JA000047662', as_dataframe=True)
# + [markdown] slideshow={"slide_type": "slide"}
# - ### Extract only the maximum temperature, scale it correctly, and plot
# -
tm = data['TMAX'].copy()
tm['value'] = tm['value'] / 10.0  # GHCN stores temperatures in tenths of a degree C
tm['value'].plot()
# + [markdown] slideshow={"slide_type": "slide"}
# ## 1-2 Daymet weather data
# + [markdown] slideshow={"slide_type": "slide"}
# - ### ORNL Daymet: https://daymet.ornl.gov/
# - ### Daily weather data for North America at 1 km x 1 km grid resolution
# + [markdown] slideshow={"slide_type": "slide"}
# - ### From 2012 through 2013,
# - ### weather at latitude 35.9313167, longitude -84.3104124
# -
from ulmo.nasa import daymet
ornl_lat, ornl_long = 35.9313167, -84.3104124
df = daymet.get_daymet_singlepixel(longitude=ornl_long, latitude=ornl_lat,
years=[2012,2013])
df.head()
# + [markdown] slideshow={"slide_type": "slide"}
# - ### Graph the temperature variation
# - ### with 15-day rolling means of the maximum and minimum temperatures plotted together
# + slideshow={"slide_type": "-"}
fig, (ax1, ax2) = plt.subplots(2, figsize=(18, 10), sharex=True)
rolling15day = df.rolling(center=False,window=15).mean()
ax1.fill_between(rolling15day.index, rolling15day.tmin, rolling15day.tmax, alpha=0.5, lw=0)
ax1.plot(df.index, df[['tmax', 'tmin']].mean(axis=1), lw=2, alpha=0.5)
ax1.set_title('Daymet temp at ORNL', fontsize=20)
ax1.set_ylabel(u'Temp. (°C)', fontsize=20)
monthlysum = df.resample("M").sum()
ax2.bar(monthlysum.index, monthlysum.prcp, width=20,)
ax2.set_title('Daymet precip at ORNL', fontsize=20)
ax2.set_ylabel(u'Precip. (mm)', fontsize=20)
fig.tight_layout()
# + slideshow={"slide_type": "slide"}
fig
# + [markdown] slideshow={"slide_type": "slide"}
# - ### Compare Denver and Miami temperatures over the full year
# +
denver_loc = (-104.9903, 39.7392)
miami_loc = (-80.2089, 25.7753)
denver = daymet.get_daymet_singlepixel(longitude=denver_loc[0], latitude=denver_loc[1],
years=[2012, 2013, 2014])
miami = daymet.get_daymet_singlepixel(longitude=miami_loc[0], latitude=miami_loc[1],
years=[2012, 2013, 2014])
# + slideshow={"slide_type": "slide"}
sns.set_context("talk")
fig, ax1 = plt.subplots(1, figsize=(18, 10))
den_15day = denver.rolling(center=False,window=15).mean()
ax1.fill_between(den_15day.index, den_15day.tmin, den_15day.tmax,
alpha=0.4, lw=0, label='Denver', color=sns.xkcd_palette(['faded green'])[0])
ax1.set_title('Denver vs Miami temps (15 day rolling mean)', fontsize=20)
miami_15day = miami.rolling(center=False,window=15).mean()
ax1.fill_between(miami_15day.index, miami_15day.tmin, miami_15day.tmax,
alpha=0.4, lw=0, label='Miami', color=sns.xkcd_palette(['dusty purple'])[0])
ax1.set_ylabel(u'Temp. (°C)', fontsize=20)
fig.tight_layout()
plt.legend(fontsize=20)
# + [markdown] slideshow={"slide_type": "slide"}
# - ### Florida is warm year-round, but in summer Denver reaches higher maximum temperatures
# - ### Denver has a wider temperature range both within a day and across the year
# -
fig
| meteology/ulmo_pyconjp2016.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ROOT C++
# language: c++
# name: root
# ---
# # homework 1.2
# + [1. Simulate and generate data](hw1_2_1.ipynb)
# + [2. Generate the `AddVeto` class with the MakeClass method](hw1_2_2.ipynb)
# + [3. Compute the position tx, normalized time of flight ntof, and neutron energy ce](hw1_2_3.ipynb)
# + [4. Compute the deposited energy Q](hw1_2_4.ipynb)
# + [5. Analysis](hw1_2_5.ipynb)
#
#
#
| hw1_2/hw1_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing the required libraries
import numpy as np
import torch
from torch import nn, optim
# +
#input data and converting to torch tensors
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype = 'float32')
inputs = torch.from_numpy(inputs)
#target data and converting to torch tensors
targets = np.array([[366],
[486],
[558],
[219],
[470]], dtype = 'float32')
targets = torch.from_numpy(targets)
# -
#Checking the shapes
inputs.shape , targets.shape
class Model(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(3, 10)
self.fc2 = nn.Linear(10, 1)
def forward(self, x):
x = torch.relu(self.fc1(x))
x = self.fc2(x)
return x
# Instantiating the model
model = Model()
# Loss function and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# Train the model
n_epochs = 20
for it in range(n_epochs):
# zero the parameter gradients
optimizer.zero_grad()
# Forward pass
outputs = model(inputs)
loss = criterion(outputs, targets)
# Backward and optimize
loss.backward()
optimizer.step()
print(f'Epoch {it+1}/{n_epochs}, Loss: {loss.item():.4f}')
# Prediction using the trained model
preds = model(inputs)
print(preds)
| Exercise01/Exercise01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/alirezash97/Titanic/blob/master/Titanic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="DbaES1NkK6Xe" colab_type="code" outputId="83564117-d8d7-447a-b39d-2c744d881636" colab={"base_uri": "https://localhost:8080/", "height": 195}
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
trainset = pd.read_csv("/content/drive/My Drive/titanic project/train.csv")
testset = pd.read_csv("/content/drive/My Drive/titanic project/test.csv")
testset.head()
# + id="AWRocjNBLOeZ" colab_type="code" colab={}
def preprocessing(table):
table = table.rename(columns = {'Pclass' : 'TicketClass'})
table = table.drop(['Name','Ticket','Cabin'],axis =1)
table['Sex'].replace({'male': 1, 'female':2 }, inplace=True)
table['Age'].replace({float('NaN'): 29 }, inplace=True)
table['Embarked'].replace({float('NaN'): 0,'S': 1 , 'C': 2, 'Q': 3}, inplace=True)
return table
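A quick sanity check of the same replacement logic on a toy frame (the two-row data here is made up; only the `Sex` and `Age` rules from `preprocessing` are exercised):

```python
import numpy as np
import pandas as pd

# toy frame mirroring the columns touched above
toy = pd.DataFrame({'Sex': ['male', 'female'], 'Age': [22.0, np.nan]})
toy['Sex'] = toy['Sex'].replace({'male': 1, 'female': 2})
toy['Age'] = toy['Age'].replace({np.nan: 29})  # 29 stands in for the median age
print(toy['Age'].tolist())  # [22.0, 29.0]
```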
# + id="7T6zAl8OjMVe" colab_type="code" colab={}
trainset = preprocessing(trainset)
testset = preprocessing(testset)
# + id="977cJAWT-Yeh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="3558143a-38ca-4cad-c0cf-0629a6909e2f"
print(testset.head())
# + id="JMg9xAYfoCQp" colab_type="code" outputId="b6d46a84-156f-41cc-a0e8-62705b53ae1c" colab={"base_uri": "https://localhost:8080/", "height": 195}
trainset = trainset[['PassengerId','TicketClass','Sex','Age','SibSp','Parch','Fare','Embarked','Survived']]
trainset.head()
# + id="z8pyitUv27G-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6494b464-7b71-49dc-ea80-6c4e33e3ca32"
print(trainset.shape)
# + id="aIbf42Z_oV0Y" colab_type="code" colab={}
X_train = trainset.iloc[:,:8]
y_train = trainset.iloc[:,8]
# + id="0W0SLKEiMFA4" colab_type="code" outputId="35b85118-7046-4d51-d6b1-890584282d25" colab={"base_uri": "https://localhost:8080/", "height": 286}
from numpy import loadtxt
from keras.models import Sequential, Model
from keras.layers import Input, Dense
model = Sequential()
model.add(Dense(64, input_dim=8, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
# + id="_tdyq74qMIME" colab_type="code" colab={}
model.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy'])
# + id="aKPotOuWKkc4" colab_type="code" outputId="7fb23d10-5e40-4d46-e565-14ecc70e39c6" colab={"base_uri": "https://localhost:8080/", "height": 1000}
model.fit(X_train, y_train, epochs=100, batch_size=32)
# + id="4TZPOEhhKpi1" colab_type="code" colab={}
predictions = model.predict(testset, batch_size=32)
# + id="pjt-vh5M_zCr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="4a314a15-2858-4348-b76e-84f7f4cdc140"
predictions = predictions.flatten()
print(predictions[:10])
for index,i in enumerate(predictions):
if i > 0.5:
predictions[index] = 1
else:
predictions[index] = 0
print(predictions[:10])
# + id="l0TszlnR_4u2" colab_type="code" colab={}
passid = np.array(testset.iloc[:, 0:1])
passid = passid.flatten()
# + id="J1PzmwZTAjnq" colab_type="code" colab={}
predictions = pd.DataFrame({'PassengerId': passid, 'Survived': predictions}, index=[i for i in range(418)])
# + id="IVkKoupCCAwq" colab_type="code" colab={}
predictions.to_csv('Submission.csv', header=False, index=False)
| Titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
from __future__ import print_function
from sympy import symbols, log, exp, limit, KroneckerDelta, diff, \
Product, factor, Pow, Symbol, simplify, Limit, Mul, expand, init_printing, latex, collect, Add
from optionloop import OptionLoop
from IPython.display import Latex, Math
init_printing()
def __get_dci(fall_type='chem', blend_type='troe', pr_type='mix', var='E'):
# create temperature
T = symbols('T')
# create kf's
kf, kinf = symbols('k_{f} k_{\\inf}', real=True, nonnegative=True)
# create third body efficiency & volume
V, C, P = symbols('V [C] P', real=True, nonnegative=True)
Xi, alphaNS, alphaj = symbols('X_{i} \\alpha_{NS} \\alpha_{j}', real=True)
# species
m, Ns, j = symbols('m Ns j', integer=True, nonnegative=True)
# create pr
Pr = kf * Xi / kinf
R = 8.314
# create Fi & Troe params
if blend_type == 'troe':
T1, T2, T3, a = symbols('T_1 T_2 T_3 a', real=True)
Fcent = a * exp(-T / T1) + (1 - a) * exp(-T / T3) + exp(-T2 / T)
Atroe = -0.68 * log(Fcent, 10) + log(Pr, 10) - 0.4
Btroe = -1.1762 * log(Fcent, 10) - 0.14 * log(Pr, 10) + 0.806
Fi = Fcent ** (1 / (1 + (Atroe / Btroe)**2))
elif blend_type == 'sri':
a, b, c, d, e = symbols('a b c d e', real=True)
X = 1 / (log(Pr, 10) ** 2 + 1)
Fi = d * T**e * (a * exp(-b / T) + exp(-T / c)) ** X
elif blend_type == 'lind':
Fi = 1
# chemically activated form
if fall_type == 'chem':
ci = Fi / (1 + Pr)
elif fall_type == 'fall':
ci = Pr * Fi / (1 + Pr)
# now create derivative temporary products (assuming mixture)
if var == 'T':
b0, binf, e0, einf = symbols('b_0 b_{\\inf} e_0 e_{\\inf}', real=True)
if pr_type in ['mix', 'spec']:
Theta_Pr = (b0 - binf + e0 / (R * T) - einf / (R * T)) / T
if pr_type == 'mix':
theta_Pr = -C * kf * alphaNS / (T * kinf)
else:
theta_Pr = -C * kf * KroneckerDelta(m, Ns) / (T * kinf)
elif pr_type == 'unity':
Theta_Pr = (b0 - binf + e0 / (R * T) - einf / (R * T)) / T
theta_Pr = 0
elif var == 'nj':
Theta_Pr = 0
if pr_type == 'mix':
theta_Pr = alphaj - alphaNS
elif pr_type == 'unity':
theta_Pr = 0
elif pr_type == 'spec':
theta_Pr = KroneckerDelta(m, j) - KroneckerDelta(m, Ns)
elif var == 'V':
# conp derivative w.r.t. volume
if pr_type == 'mix':
Theta_Pr = -1 / V
theta_Pr = C * kf * alphaNS / (kinf * T)
elif pr_type == 'unity':
Theta_Pr = 0
theta_Pr = 0
elif pr_type == 'spec':
Theta_Pr = -1 / V
theta_Pr = C * kf * KroneckerDelta(m, Ns) / (kinf * T)
elif var == 'P':
Theta_Pr = 0
# conv derivative w.r.t. pressure
if pr_type == 'mix':
theta_Pr = kf * alphaNS / (kinf * R * T)
elif pr_type == 'unity':
theta_Pr = 0
elif pr_type == 'spec':
theta_Pr = kf * KroneckerDelta(m, Ns) / (kinf * R * T)
# now create blending function products
if blend_type == 'lind':
Theta_Fi = 0
elif blend_type == 'troe':
if var == 'T':
Theta_Fi = - Btroe / (Fcent * Pr * (Atroe**2 + Btroe**2)**2 * log(10)) * (
2 * Atroe * Fcent * (0.14 * Atroe + Btroe) * (
Pr * Theta_Pr + theta_Pr) * log(Fcent) + Pr * diff(Fcent, T) * (
2 * Atroe * (1.1762 * Atroe - 0.67 * Btroe) * log(Fcent) -
Btroe * (Atroe**2 + Btroe**2) * log(10))
)
elif var == 'nj':
Theta_Fi = -2 * Atroe * Btroe * (0.14 * Atroe + Btroe) * log(Fcent) / (
Pr * (Atroe**2 + Btroe**2)**2 * log(10))
elif var == 'V':
Theta_Fi = (-2 * Atroe * Btroe * log(Fcent) /
(Pr * (Atroe**2 + Btroe**2)**2 * log(10))) * \
(0.14 * Atroe + Btroe) * (Pr * Theta_Pr + theta_Pr)
elif var == 'P':
Theta_Fi = -2 * Atroe * Btroe * theta_Pr * (0.14 * Atroe + Btroe) * log(Fcent) / (
Pr * (Atroe**2 + Btroe**2)**2 * log(10))
elif blend_type == 'sri':
if var == 'T':
Theta_Fi = -X * (exp(-T / c) / c - a * b * exp(-b / T) / (T**2)) / (
a * exp(-b / T) + exp(-T / c)) + e / T - ((
2 * X**2 * log(a * exp(-b / T) + exp(-T / c))) / (Pr * log(10)**2) * (
(Theta_Pr * Pr + theta_Pr) * log(Pr))
)
elif var == 'nj':
Theta_Fi = -2 * X**2 * \
log(a * exp(-b / T) + exp(-T / c)) * \
log(Pr) / (Pr * log(10)**2)
elif var == 'V':
Theta_Fi = (-2 * X**2 * log(Pr) / (Pr * log(10)**2)) * (Theta_Pr * Pr + theta_Pr) * log(
(a * exp(T / c) + exp(b / T)) * exp(-T / c - b / T))
elif var == 'P':
            Theta_Fi = (-2 * X**2 * theta_Pr * log(Pr) /
                        (Pr * log(10)**2)) * log(a * exp(-b / T) + exp(-T / c))
# and finally give dci
if var == 'T':
if fall_type == 'fall':
dci = Fi * theta_Pr / (Pr + 1) + (-Pr * Theta_Pr / (Pr + 1) + Theta_Fi +
Theta_Pr - theta_Pr / (Pr + 1)) * ci
elif fall_type == 'chem':
            dci = (-Pr * Theta_Pr / (Pr + 1) +
                   Theta_Fi - theta_Pr / (Pr + 1)) * ci
elif var == 'nj':
if fall_type == 'fall':
dci = (kf * theta_Pr / (V * kinf * (Pr + 1))) * \
(Fi * (Pr * Theta_Fi + 1) - ci)
elif fall_type == 'chem':
dci = kf * theta_Pr * (Fi * Theta_Fi - ci) / (kinf * V * (Pr + 1))
elif var == 'V':
if fall_type == 'fall':
dci = Fi * theta_Pr / (Pr + 1) + (-Pr * Theta_Pr / (Pr + 1) + Theta_Fi +
Theta_Pr - theta_Pr / (Pr + 1)) * ci
elif fall_type == 'chem':
dci = (-Pr * Theta_Pr / (Pr + 1) +
Theta_Fi - theta_Pr / (Pr + 1)) * ci
elif var == 'P':
if fall_type == 'fall':
dci = Fi * theta_Pr / (Pr + 1) + \
(Theta_Fi - theta_Pr / (Pr + 1)) * ci
elif fall_type == 'chem':
dci = (Theta_Fi - theta_Pr / (Pr + 1)) * ci
return Xi, dci
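As a cross-check on the chemically-activated branch, the temperature derivative follows directly from $c_i = F_i/(1+P_r)$, writing $\Theta_{F_i} = \frac{1}{F_i}\frac{\partial F_i}{\partial T}$ and $\frac{\partial P_r}{\partial T} = P_r\,\Theta_{P_r} + \theta_{P_r}$ to match the symbols used in the code:

```latex
\frac{\partial c_i}{\partial T}
  = \frac{1}{1+P_r}\frac{\partial F_i}{\partial T}
    - \frac{F_i}{(1+P_r)^2}\frac{\partial P_r}{\partial T}
  = c_i\left(\Theta_{F_i}
    - \frac{P_r\,\Theta_{P_r} + \theta_{P_r}}{1+P_r}\right)
```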
# -
def display(arg):
return Math(latex(arg))
xi, dci = __get_dci(fall_type='chem', blend_type='troe', pr_type='mix', var='T')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='troe', pr_type='mix', var='T')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='chem', blend_type='lind', pr_type='mix', var='T')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='lind', pr_type='mix', var='T')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='chem', blend_type='sri', pr_type='mix', var='T')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='sri', pr_type='mix', var='T')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='chem', blend_type='lind', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='lind', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='chem', blend_type='troe', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='troe', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='chem', blend_type='sri', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='sri', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
xi, dci = __get_dci(fall_type='fall', blend_type='sri', pr_type='mix', var='nj')
display(simplify(dci.subs(xi, 0)))
| derivations/limittest/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Function idft
#
# ## Synopse
#
# Inverse Discrete Fourier Transform.
#
# - **f = idft(F)**
#
# - **f**: Image.
#
#
# - **F**: Image.
# + deletable=true editable=true
import numpy as np
def idft(F):
import ia898.src as ia
s = F.shape
if F.ndim == 1: F = F[np.newaxis,np.newaxis,:]
if F.ndim == 2: F = F[np.newaxis,:,:]
(p,m,n) = F.shape
A = ia.dftmatrix(m)
B = ia.dftmatrix(n)
C = ia.dftmatrix(p)
Faux = np.conjugate(A).dot(F)
Faux = Faux.dot(np.conjugate(B))
f = np.conjugate(C).dot(Faux)/(np.sqrt(p)*np.sqrt(m)*np.sqrt(n))
return f.reshape(s)
# -
# ## Examples
# +
testing = (__name__ == "__main__")
if testing:
# ! jupyter nbconvert --to python idft.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
# + [markdown] deletable=true editable=true
# ### Example 1
# -
if testing:
f = np.arange(24).reshape(4,6)
F = ia.dft(f)
g = ia.idft(F)
print(np.round(g.real))
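The same round-trip can be checked with numpy alone; note that `np.fft` puts the full $1/N$ factor on the inverse transform, whereas the unitary form used above splits $1/\sqrt{N}$ across forward and inverse:

```python
import numpy as np

f = np.arange(24).reshape(4, 6).astype(float)
F = np.fft.fft2(f)    # forward DFT (no normalization)
g = np.fft.ifft2(F)   # inverse DFT (1/N normalization)
print(np.allclose(g.real, f))  # True
```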
# + deletable=true editable=true
if False: #testing:
import matplotlib.image as mpimg
f = mpimg.imread('../data/cameraman.tif')
F = ia.dft(f)
print(F.shape)
H = ia.circle(F.shape, 50,[F.shape[0]/2,F.shape[1]/2] )
H = ia.normalize(H,[0,1])
FH = F * ia.idftshift(H)
print(ia.isdftsym(FH))
g= ia.idft(FH)
ia.adshow(f)
ia.adshow(ia.dftview(F))
ia.adshow(ia.normalize(H,[0,255]))
ia.adshow(ia.dftview(FH))
ia.adshow(ia.normalize(abs(g)))
# + [markdown] deletable=true editable=true
# ## Equation
#
# $$ \begin{matrix}
# f(x) &=& \frac{1}{N}\sum_{u=0}^{N-1}F(u)\exp(j2\pi\frac{ux}{N}) \\ & & 0 \leq x < N, 0 \leq u < N \\ \mathbf{f} &=& \frac{1}{\sqrt{N}}(A_N)^* \mathbf{F}
# \end{matrix} $$
# + [markdown] deletable=true editable=true
# $$ \begin{matrix}
# f(x,y) &=& \frac{1}{NM}\sum_{u=0}^{N-1}\sum_{v=0}^{M-1}F(u,v)\exp(j2\pi(\frac{ux}{N} + \frac{vy}{M})) \\ & & (0,0) \leq (x,y) < (N,M), (0,0) \leq (u,v) < (N,M) \\
# \mathbf{f} &=& \frac{1}{\sqrt{NM}} (A_N)^* \mathbf{F} (A_M)^*
# \end{matrix} $$
# + [markdown] deletable=true editable=true
# ## See also
#
# - `iadft`
# - `iadftview`
# - `iafftshift`
# - `iaisdftsym`
#
# ## Contribution
#
# -
| src/idft.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="9EtssVJyGkmU"
# # Translate model
# -
# We are using [this nice dataset](https://github.com/BangBOOM/Classical-Chinese)
# ## Imports
from forgebox.imports import *
from forgebox.thunder.callbacks import DataFrameMetricsCallback
from gc_utils.env import *
from datasets import load_dataset
# from fastai.text.all import *
from unpackai.nlp import *
from tqdm.notebook import tqdm
import random
import pytorch_lightning as pl
# +
import re
def remove_all_punkt(text):
"""
Removes all punctuation from Chinese text.
:param text: text to remove punctuation from
:return: text with no punctuation
"""
return re.sub(r'[^\w\s]', '', text)
# -
remove_all_punkt("亳州水军千户胡进等领骑兵渡淝水,逾荆山,与宋兵战,杀获甚众,赏钞币有差。")
# ## Config
DATA = Path(sys_loc('DATA')/"nlp"/"zh"/"cc_vs_zh")
TO_CLASSICAL = False
# ## Download data
# + [markdown] id="ZbXuwqr0KEr8"
# ## Data
# -
# ### Combine data
all_file = list(DATA.rglob("data/*"))
# +
def open_file_to_lines(file):
with open(file) as f:
lines = f.read().splitlines()
return lines
def pairing_the_file(files,kw):
pairs = []
for file in files:
if kw not in file.name:
file1 = file
file2 = f"{file}{kw}"
pairs.append((file1,file2))
return pairs
# -
pairs = pairing_the_file(all_file,"翻译")
def open_pairs(pairs):
chunks = []
for pair in tqdm(pairs, leave=False):
file1,file2 = pair
lines1 = open_file_to_lines(file1)
lines2 = open_file_to_lines(file2)
chunks.append(pd.DataFrame({"classical":lines1,"modern":lines2}))
return pd.concat(chunks).sample(frac=1.).reset_index(drop=True)
data_df = open_pairs(pairs)
df = data_df.rename(
columns = dict(
zip(["modern","classical"],
["source","target"] if TO_CLASSICAL else ["target","source",]))
)
df.head()
# ### Loading tokenizer
# + id="ukyVGg8HmSd-"
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
AutoModel,
EncoderDecoderModel
)
# we pick a Chinese BERT encoder, as a pretrained BERT is good at understanding the source text
# BERT is short for Bidirectional **Encoder** Representations from Transformers, which consists entirely of encoder blocks
ENCODER_PRETRAINED = "bert-base-chinese"
# we pick a Chinese generative model for the decoder, as the decoder is the part of the model that writes the output
DECODER_PRETRAINED = "uer/gpt2-chinese-poem"
encoder_tokenizer = AutoTokenizer.from_pretrained(ENCODER_PRETRAINED)
decoder_tokenizer = AutoTokenizer.from_pretrained(
ENCODER_PRETRAINED # notice we use the BERT's tokenizer here
)
# -
# ### Pytoch Dataset
class Seq2Seq(Dataset):
def __init__(
self, df, tokenizer, target_tokenizer,
max_len=128,
no_punkt:bool = False,
):
"""
        no_punkt: whether to randomly remove punctuation
        from the source sentence
"""
super().__init__()
self.df = df
self.tokenizer = tokenizer
self.target_tokenizer = target_tokenizer
self.max_len = max_len
self.no_punkt = no_punkt
def __len__(self, ):
return len(self.df)
def __getitem__(self, idx):
return dict(self.df.iloc[idx])
def collate(self, batch):
batch_df = pd.DataFrame(list(batch))
x, y = batch_df.source, batch_df.target
# there is a random no punctuation mode
# for source text
# as some of the classical text we get
# might be whole chunk of paragraph without
# any punctuation
if self.no_punkt:
x = list(i if random.random()>.5
else remove_all_punkt(i)
for i in x)
else:
x = list(x)
x_batch = self.tokenizer(
x,
max_length=self.max_len,
padding='max_length',
truncation=True,
return_tensors='pt',
)
y_batch = self.target_tokenizer(
list(y),
max_length=self.max_len,
padding='max_length',
truncation=True,
return_tensors='pt',
)
x_batch['decoder_input_ids'] = y_batch['input_ids']
x_batch['labels'] = y_batch['input_ids'].clone()
x_batch['labels'][x_batch['labels'] == self.tokenizer.pad_token_id] = -100
return x_batch
def dataloader(self, batch_size, shuffle=True):
return DataLoader(
self,
batch_size=batch_size,
shuffle=shuffle,
collate_fn=self.collate,
)
def split_train_valid(self, valid_size=0.1):
split_index = int(len(self) * (1 - valid_size))
cls = type(self)
shuffled = self.df.sample(frac=1).reset_index(drop=True)
train_set = cls(
shuffled.iloc[:split_index],
tokenizer=self.tokenizer,
target_tokenizer=self.target_tokenizer,
max_len=self.max_len,
no_punkt=self.no_punkt,
)
valid_set = cls(
shuffled.iloc[split_index:],
tokenizer=self.tokenizer,
target_tokenizer=self.target_tokenizer,
max_len=self.max_len,
no_punkt=self.no_punkt,
)
return train_set, valid_set
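The key step in `collate` above is masking pad positions in the labels with -100, the `ignore_index` of PyTorch's cross-entropy loss, so padding does not contribute to training. A minimal sketch with a made-up pad id:

```python
import torch

pad_token_id = 0  # hypothetical pad id for illustration
decoder_input_ids = torch.tensor([[5, 6, 7, pad_token_id, pad_token_id]])
labels = decoder_input_ids.clone()
labels[labels == pad_token_id] = -100  # ignored by nn.CrossEntropyLoss
print(labels.tolist())  # [[5, 6, 7, -100, -100]]
```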
# ### PL datamodule
class Seq2SeqData(pl.LightningDataModule):
def __init__(
self, df,
tokenizer,
target_tokenizer,
batch_size=12,
max_len=128,
no_punkt:bool=False):
super().__init__()
self.df = df
self.ds = Seq2Seq(df,
tokenizer,
target_tokenizer,
max_len=max_len,
no_punkt=no_punkt)
self.tokenizer = tokenizer
self.target_tokenizer = target_tokenizer
self.max_len = max_len
self.batch_size = batch_size
def setup(self, stage=None):
self.train_set, self.valid_set = self.ds.split_train_valid()
def train_dataloader(self):
return self.train_set.dataloader(
batch_size=self.batch_size, shuffle=True)
def val_dataloader(self):
return self.valid_set.dataloader(
batch_size=self.batch_size*2, shuffle=False)
data_module = Seq2SeqData(
df, encoder_tokenizer,
decoder_tokenizer,
batch_size=28,
max_len=256,
no_punkt=False if TO_CLASSICAL else True,)
data_module.setup()
inputs = next(iter(data_module.train_dataloader()))
inputs
# if we are translating classical Chinese to modern Chinese, we can randomly strip punctuation from half of the inputs, since many real data sources come without punctuation
encoder_tokenizer.batch_decode(
inputs.input_ids,skip_special_tokens=True
)
# + [markdown] id="92iwRu6Oqbzb"
# ### Load pretrained models
# + colab={"base_uri": "https://localhost:8080/"} id="gZkPxJVTm8Ng" outputId="dcecf16e-22fe-4c25-9ffb-aae9d75785f3"
# encoder = AutoModel.from_pretrained(ENCODER_PRETRAINED, proxies={"http":"bifrost:3128"})
# decoder = AutoModelForCausalLM.from_pretrained(DECODER_PRETRAINED, add_cross_attention=True,
# proxies={"http":"bifrost:3128"})
# + [markdown] id="pajv5ridLamp"
# ## Model
# + [markdown] id="s1zqJXDsCUw-"
# We create a seq2seq model by combining a pretrained encoder with a pretrained decoder
# -
# loading pretrained model
encoder_decoder = EncoderDecoderModel.from_encoder_decoder_pretrained(
encoder_pretrained_model_name_or_path=ENCODER_PRETRAINED,
decoder_pretrained_model_name_or_path=DECODER_PRETRAINED,
)
# + id="jBVyNeKUv6FU"
class Seq2SeqTrain(pl.LightningModule):
def __init__(self, encoder_decoder):
super().__init__()
self.encoder_decoder = encoder_decoder
def forward(self, batch):
return self.encoder_decoder(
**batch
)
def training_step(self, batch, batch_idx):
outputs = self(batch)
self.log('loss', outputs.loss)
return outputs.loss
def validation_step(self, batch, batch_idx):
outputs = self(batch)
self.log('val_loss', outputs.loss)
return outputs.loss
def configure_optimizers(self):
encoder_params = list(
{"params":param,"lr":1e-5}
for param in self.encoder_decoder.encoder.embeddings.parameters()) +\
list({"params":param,"lr":1e-5}
for param in self.encoder_decoder.encoder.encoder.parameters()) +\
list({"params":param,"lr":1e-3}
for param in self.encoder_decoder.encoder.pooler.parameters())
decoder_params = list()
for name, param in self.encoder_decoder.decoder.named_parameters():
if 'ln_cross_attn' in name:
decoder_params.append({"params":param,"lr":1e-3})
elif 'crossattention' in name:
decoder_params.append({"params":param,"lr":1e-3})
elif 'lm_head' in name:
decoder_params.append({"params":param,"lr":1e-4})
else:
decoder_params.append({"params":param,"lr":1e-5})
return torch.optim.Adam(
encoder_params + decoder_params,
lr=1e-3,
)
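The name-based learning-rate routing in `configure_optimizers` above can be sketched without torch. This is a hedged stand-in for the real parameter groups (the parameter names below are made up; the substrings and rates mirror the code above): new cross-attention layers get a fast rate, the language-model head a moderate one, and the pretrained body a slow fine-tuning rate.

```python
def lr_for(name: str) -> float:
    """Route a decoder parameter to a learning rate by name substring, as above."""
    if "ln_cross_attn" in name or "crossattention" in name:
        return 1e-3   # freshly added cross-attention: train fast
    if "lm_head" in name:
        return 1e-4   # output head: moderate rate
    return 1e-5       # pretrained body: fine-tune slowly

names = ["h.0.ln_cross_attn.weight", "h.0.crossattention.c_attn.weight",
         "lm_head.weight", "h.0.mlp.c_fc.weight"]
# in real code, "params" would hold the parameter tensors, not the names
groups = [{"params": n, "lr": lr_for(n)} for n in names]
print([g["lr"] for g in groups])  # → [0.001, 0.001, 0.0001, 1e-05]
```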
# + id="5uIjcPuXw0Fr"
module = Seq2SeqTrain(encoder_decoder)
# + [markdown] id="DBf3NTKSLcUb"
# ## Training
# +
save = pl.callbacks.ModelCheckpoint(
'/GCI/transformers/weights/cc_to_zh',
save_top_k=2,
verbose=True,
monitor='val_loss',
mode='min',
)
trainer = pl.Trainer(
gpus=[1],
max_epochs=10,
callbacks=[save],
)
# -
trainer.fit(module, datamodule=data_module)
| nbs/cc2zh_translate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in this lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions)**
# </i></small></small>
def fun():
print("something here inside fun()")
| test1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## xcube Local Data Cube Generator
#
# This notebook provides a walk-through demonstrating how to use the _local data cube generator_.
#
# An introduction to the xcube data cube generators can be found in the [Getting Started](./1_getting_started.ipynb) Notebook.
from xcube.core.gen2 import CubeGenerator
from xcube.core.gen2 import CubeGeneratorRequest
# The first example represents a simple conversion from a local NetCDF dataset (`input_config`) into a
# local Zarr dataset (`output_config`). As we only want a copy we do not want to specify any target
# cube parameters (`cube_config`).
#
# This is how a _cube generator request_ looks as a (JSON) dictionary:
request_json = {
"input_config": {
"store_id": "file",
"store_params": {
"root": "../../serve/demo"
},
"data_id": "cube.nc"
},
"cube_config": {
},
"output_config": {
"store_id": "file",
"store_params": {
"root": "."
},
"replace": True,
"data_id": "cube.zarr"
}
}
# Validate `request_json` and convert into a `CubeGeneratorRequest` object:
request = CubeGeneratorRequest.from_dict(request_json)
request
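A hand-written request dict can also be sanity-checked structurally before conversion. This is a minimal sketch (the `check_request` helper is hypothetical, and the required-key set is taken from the request above, not from the xcube schema):

```python
def check_request(request: dict) -> list:
    """Return a list of missing required keys (empty list means OK)."""
    missing = [k for k in ("input_config", "output_config") if k not in request]
    for section in ("input_config", "output_config"):
        cfg = request.get(section, {})
        if "store_id" not in cfg:
            missing.append(f"{section}.store_id")
    return missing

request_json = {
    "input_config": {"store_id": "file", "data_id": "cube.nc"},
    "output_config": {"store_id": "file", "data_id": "cube.zarr"},
}
print(check_request(request_json))  # → []
```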
# Instantiate the generator object:
generator = CubeGenerator.new()
# Get some information about the cube that would be generated by `generator` for `request`:
result = generator.get_cube_info(request)
result
cube_info = result.result
cube_info
# Now perform the actual cube generation:
result = generator.generate_cube(request)
result
cube_id = result.result.data_id
# Note, you could have used the JSON request directly, i.e. `get_cube_info(request_json)` or `generate_cube(request_json)`. You can even pass a request file path, using either JSON (`*.json`) or YAML (`*.yml`, `*.yaml`) format.
#
# Let's open the generated cube:
import xarray as xr
cube = xr.open_zarr(cube_id)
cube
cube.conc_chl.isel(time=3).plot.imshow(figsize=(20, 10))
# _This is a work in progress. More material will follow in an upcoming xcube release._
| examples/notebooks/generators/2_local.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.5
# language: julia
# name: julia-0.4
# ---
using SymPy
@vars h0 alpha0 beta0 w t k C a phi psi c
u, rho, b = symbols("u, rho, b")
h = -h0*exp(im*w*t) #Minus because plunge is downward in Theo
P_h = -rho*b*b*pi*diff(h,t,2)-2*rho*u*pi*b*C*diff(h,t,1)
Cl_h = simplify(-2*P_h/(rho*u*u)) #Minus because lift is downward in Theo
Cl_h = simplify(subs(Cl_h,b*w,u*k))
# +
#Brilliant
# -
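The symbol `C` in the expressions above is Theodorsen's lift-deficiency function C(k). A hedged numeric sketch in Python (assuming `scipy` is available) uses its Hankel-function form, C(k) = H₁⁽²⁾(k) / (H₁⁽²⁾(k) + i·H₀⁽²⁾(k)); the limits C → 1 as k → 0 (quasi-steady) and C → 1/2 as k → ∞ are a quick sanity check.

```python
import numpy as np
from scipy.special import hankel2

def theodorsen_C(k):
    """Theodorsen's lift-deficiency function C(k) for reduced frequency k > 0."""
    h0, h1 = hankel2(0, k), hankel2(1, k)
    return h1 / (h1 + 1j * h0)

print(abs(theodorsen_C(1e-4)))   # quasi-steady limit, close to 1
print(abs(theodorsen_C(100.0)))  # high-frequency limit, close to 0.5
```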
alpha = alpha0*exp(im*(w*t+phi))
P_alpha = -rho*b*b*(u*pi*diff(alpha,t,1)-pi*b*a*diff(alpha,t,2)) - 2*rho*u*b*C*pi*(u*alpha+b*(1/2-a)*diff(alpha,t,1))
Cl_alpha = simplify(-P_alpha/(rho*u*u*b)) #Minus because lift is downward in Theo
Cl_alpha = simplify(subs(Cl_alpha,b*w,u*k))
beta = beta0*exp(im*(w*t+psi))
T1 = -(2+c*c)*sqrt(1-c*c)/3+c*acos(c)
T4 = c*sqrt(1-c*c)-acos(c)
T11 = (2-c)*sqrt(1-c*c)+(1-2*c)*acos(c)
T10 = sqrt(1-c*c)+acos(c)
P_beta = -rho*b*b*(-u*T4*diff(beta,t,1) - T1*b*diff(beta,t,2)) - 2*rho*u*b*C*pi*(T10*u*beta/pi + (b*T11*diff(beta,t,1)/2)/pi)
Cl_beta = simplify(-P_beta/(rho*u*u*b)) #Minus because lift is downward in Theo
Cl_beta = simplify(subs(Cl_beta,b*w,u*k))
# +
#Let's also express it in terms of the Ts
# -
@vars t1 t4 t10 t11
P_beta = -rho*b*b*(-u*t4*diff(beta,t,1) - t1*b*diff(beta,t,2)) - 2*rho*u*b*C*pi*(t10*u*beta/pi + (b*t11*diff(beta,t,1)/2)/pi)
Cl_beta = simplify(-P_beta/(rho*u*u*b)) #Minus because lift is downward in Theo
Cl_beta = simplify(subs(Cl_beta,b*w,u*k))
#Let's do Cm_alpha and Cm_beta as well
T12 = (2+c)*sqrt(1-c*c)+(1-2*c)*acos(c)
T2 = T4*(T11+T12)
T3 = -(1-c*c)*(5*c*c+4)/8+c*(7+2*c*c)*sqrt(1-c*c)*acos(c)/4-(1/8+c*c)*acos(c)*acos(c)
T5 = -(1-c*c)+2*c*sqrt(1-c*c)*acos(c)-acos(c)*acos(c)
T6 = T2
T7 = c*(7+2*c*c)*sqrt(1-c*c)/8-(1/8+c*c)*acos(c)
T8 = -(1+2*c*c)*sqrt(1-c*c)/3+c*acos(c)
T9 = ((1-c*c)^(3/2)/3+a*T4)/2
T12 = T11+2*T4
T13 = -(T7+(c-a)*T1)/2
T14 = 1/16+a*c/2
T15 = T4 + T10
T16 = T1 - T8 -(c-a)*T4+T11/2
T17 = -2*T9 - T1 + (a-1/2)*T4
T18 = T5 - T4*T10
T19 = T4*T11
T20 = T10 - 2*sqrt(1-c*c)
M_alpha_h = -rho*b*b*(-a*pi*b*diff(h,t,2))+2*rho*u*b*b*pi*(a+1/2)*C*diff(h,t,1)
Cm_alpha_h = simplify(M_alpha_h/(rho*u*u*b))
Cm_alpha_h = simplify(subs(Cm_alpha_h,b*w,u*k))
M_alpha_alpha = -rho*b*b*(pi*(1/2-a)*u*b*diff(alpha,t,1)+pi*b*b*(1/8+a*a)*diff(alpha,t,2))+2*rho*u*b*b*pi*(a+1/2)*C*(u*alpha+b*(1/2-a)*diff(alpha,t,1))
Cm_alpha_alpha = simplify(M_alpha_alpha/(2*rho*u*u*b*b))
Cm_alpha_alpha = simplify(subs(Cm_alpha_alpha,b*w,u*k))
M_alpha_beta = -rho*b*b*(T15*u*u*beta+T16*u*b*diff(beta,t,1)+2*T13*b*b*diff(beta,t,2))+2*rho*u*b*b*pi*(a+1/2)*C*(T10*u*beta/pi+(b*T11*diff(beta,t,1)/2)/pi)
Cm_alpha_beta = simplify(M_alpha_beta/(2*rho*u*u*b*b))
Cm_alpha_beta = simplify(subs(Cm_alpha_beta,b*w,u*k))
M_beta_h = -rho*b*b*(-T1*b*diff(h,t,2))-rho*u*b*b*T12*C*diff(h,t,1)
Cm_beta_h = simplify(M_beta_h/(rho*u*u*b))
Cm_beta_h = simplify(subs(Cm_beta_h,b*w,u*k))
M_beta_alpha = -rho*b*b*(T17*u*b*diff(alpha,t,1)+2*T13*b*b*diff(alpha,t,2))-rho*u*b*b*T12*C*(u*alpha+b*(1/2-a)*diff(alpha,t,1))
Cm_beta_alpha = simplify(M_beta_alpha/(2*rho*u*u*b*b))
Cm_beta_alpha = simplify(subs(Cm_beta_alpha,b*w,u*k))
M_beta_beta = -rho*b*b*(u*u*T18*beta/pi - u*b*T19*diff(beta,t,1)/(2*pi) - T3*b*b*diff(beta,t,2)/pi) - rho*u*b*b*T12*C*(T10*u*beta/pi + b*T11*diff(beta,t,1)/(2*pi))
Cm_beta_beta = simplify(M_beta_beta/(2*rho*u*u*b*b))
Cm_beta_beta = simplify(subs(Cm_beta_beta,b*w,u*k))
Cl = Cl_h + Cl_alpha + Cl_beta
#Convert Cm_alpha and Cm_beta to symbolic expressions with 't's
@vars t2 t3 t5 t6 t7 t8 t9 t12 t13 t14 t15 t16 t17 t18 t19 t20
M_alpha_beta = -rho*b*b*(t15*u*u*beta+t16*u*b*diff(beta,t,1)+2*t13*b*b*diff(beta,t,2))+2*rho*u*b*b*pi*(a+1/2)*C*(t10*u*beta/pi+(b*t11*diff(beta,t,1)/2)/pi)
Cm_alpha_beta = simplify(M_alpha_beta/(2*rho*u*u*b*b))
Cm_alpha_beta = simplify(subs(Cm_alpha_beta,b*w,u*k))
Cm_alpha = Cm_alpha_h + Cm_alpha_alpha + Cm_alpha_beta
M_beta_h = -rho*b*b*(-t1*b*diff(h,t,2))-rho*u*b*b*t12*C*diff(h,t,1)
Cm_beta_h = simplify(M_beta_h/(rho*u*u*b))
Cm_beta_h = simplify(subs(Cm_beta_h,b*w,u*k))
M_beta_alpha = -rho*b*b*(t17*u*b*diff(alpha,t,1)+2*t13*b*b*diff(alpha,t,2))-rho*u*b*b*t12*C*(u*alpha+b*(1/2-a)*diff(alpha,t,1))
Cm_beta_alpha = simplify(M_beta_alpha/(2*rho*u*u*b*b))
Cm_beta_alpha = simplify(subs(Cm_beta_alpha,b*w,u*k))
M_beta_beta = -rho*b*b*(u*u*t18*beta/pi - (u*b*t19*diff(beta,t,1)/pi)/2 - t3*b*b*diff(beta,t,2)/pi) - rho*u*b*b*t12*C*(t10*u*beta/pi + (b*t11*diff(beta,t,1)/pi)/2)
Cm_beta_beta = simplify(M_beta_beta/(2*rho*u*u*b*b))
Cm_beta_beta = simplify(subs(Cm_beta_beta,b*w,u*k))
Cm_beta = Cm_beta_h + Cm_beta_alpha + Cm_beta_beta
# +
#Final listing of expressions
# -
Cl
Cm_alpha
Cm_beta
# +
#The End
| Notebooks/Thoedorsen_Flap_derivation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Central Limit Theorem by hand, using the continuous Rayleigh distribution
# #### 1. Imports:
import matplotlib.pyplot as plt
import scipy.stats as sts
import pandas as pd
import numpy as np
import math
# %matplotlib inline
# #### 2. Generate a sample of 1000 values of a Rayleigh-distributed random variable:
# +
rv_continuous_rayleigh = sts.rayleigh(loc=0, scale=1)
sample = rv_continuous_rayleigh.rvs(1000)
sample[:10]
# -
# #### 3. Plot a histogram of the sample and the theoretical probability density function of the random variable:
# +
# Sample histogram:
plt.hist(sample, density=True)
# Theoretical probability density
x = np.linspace(-1, 5, 1000)
pdf = rv_continuous_rayleigh.pdf(x)
plt.plot(x, pdf, label='theoretical pdf', alpha=0.5)
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
# -
# #### 4. Estimating the distribution of the sample mean for different sample sizes.
#
# For each sample size n (5, 10, 15, 30, 50), we generate 1000 samples of size n, compute the sample mean of each, and plot a histogram of the sample means; on top of each histogram we plot the density of the normal distribution that, by the CLT, approximates the sample means for that n:
def plot_rv_continuous_rayleigh(size):
sigma = 1
# Theoretical mean (expectation) and variance of the Rayleigh distribution
th_mean = math.sqrt(math.pi / 2) * sigma
th_disp = (2 - math.pi / 2)
# Build a histogram of sample means for samples of the given size
sample = list()
for i in range(0, 1000):
tmp_sample = rv_continuous_rayleigh.rvs(size)
cur_mean = sum(tmp_sample) / float(size)
sample.append(cur_mean)
    plt.hist(sample, bins=20, density=True)
    # Plot the probability density of the normal distribution (by the CLT)
norm_rv = sts.norm(th_mean, math.sqrt(th_disp / size))
x = np.linspace(0, 3, 1000)
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, label='current norm pdf', alpha=0.5)
plt.title('n = ' + str(size))
plt.legend(loc="upper right")
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
plot_rv_continuous_rayleigh(5)
plot_rv_continuous_rayleigh(10)
plot_rv_continuous_rayleigh(15)
plot_rv_continuous_rayleigh(30)
plot_rv_continuous_rayleigh(50)
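The visual check above can be complemented by a numeric one: by the CLT, the standard deviation of the sample mean should shrink like σ_X/√n, where σ_X is the Rayleigh standard deviation. A minimal sketch with numpy (seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0
th_mean = np.sqrt(np.pi / 2) * sigma     # Rayleigh mean
th_std = np.sqrt(2 - np.pi / 2) * sigma  # Rayleigh standard deviation

for n in (5, 50):
    # 10,000 samples of size n; one sample mean per row
    means = rng.rayleigh(sigma, size=(10_000, n)).mean(axis=1)
    # empirical std of the sample mean vs the CLT prediction th_std / sqrt(n)
    print(n, means.std(), th_std / np.sqrt(n))
```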
# #### Conclusion
# We have verified that the accuracy of the normal approximation to the distribution of sample means increases with **n**, as the Central Limit Theorem implies.
| mathematics-python-for-data-analysis/task-central-limit-theorem/central-limit-theorem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import tensorflow_felzenszwalb_edt
import scipy as sp
import scipy.misc
import numpy as np
import matplotlib.pylab as plt
import numpy.random as npr
import tensorflow as tf
# %matplotlib inline
# # test function
a=scipy.misc.face(gray=True).astype(float)
b=tensorflow_felzenszwalb_edt.edt1d(a*100,axis=0)
plt.imshow(b)
b=tensorflow_felzenszwalb_edt.edt1d(a*100,axis=1)
plt.imshow(b)
# # test gradient
# +
def calc_loss(f):
g=tensorflow_felzenszwalb_edt.edt1d(f,axis=0)
return g[2]**2#tf.reduce_sum(g**2)
f=tf.identity(np.array([0,4,1,6,7,8.0]))
g=tensorflow_felzenszwalb_edt.edt1d(f,axis=0)
# delta=tf.identity(npr.randn(*f.shape)*.001)
delta=np.zeros(len(f)); delta[2]=.001
delta=tf.identity(delta)
df1=calc_loss(f+delta)-calc_loss(f)
with tf.GradientTape() as t:
t.watch(f)
loss=calc_loss(f)
ggrad=t.gradient(loss,f)
df2=tf.reduce_sum(ggrad*delta)
print('finite-diff-says',df1.numpy())
print('grad-says',df2.numpy())
plt.plot(f)
plt.plot(g)
# +
def calc_loss(f):
g=tensorflow_felzenszwalb_edt.edt1d(f,axis=0)
return tf.reduce_sum(g**2)
f=tf.identity(scipy.misc.face(gray=True).astype(float))*100
delta=tf.identity(npr.randn(*f.shape)*.0000001)
df1=calc_loss(f+delta)-calc_loss(f)
with tf.GradientTape() as t:
t.watch(f)
loss=calc_loss(f)
g=t.gradient(loss,f)
df2=tf.reduce_sum(g*delta)
print('finite-diff-says',df1.numpy())
print('grad-says',df2.numpy())
# -
| test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyzing Broadband Spectra with the `assignment` Module
#
# ## Introduction
#
# In this notebook, we're going to work through how the core functionality of `PySpecTools` can be used to streamline and automate your spectral analysis. It's worth noting that `PySpecTools` and Python provide enough flexibility to adjust to your needs; whatever can't be done with `PySpecTools` natively can be automated with plain Python (e.g. `for` loops) and to a large extent `pandas` as well, particularly when you're analyzing the assignments and looking to filter out certain molecules. That is left for a subsequent notebook, as the focus here is to demonstrate how automated assignment is performed.
#
# The core functionality of assigning spectra revolves around the `pyspectools.spectra.assignment` module, and contains three main abstractions:
#
# 1. `AssignmentSession`
# - This is your main interface: holds the spectral data, and allows you to interact (plot, assign, etc) with the data.
# 2. `Transition`
# - Represents every type of spectral feature: every peak in an experiment, and every catalog entry.
# 3. `LineList`
# - A collection of spectral features: the peaks in an experiment (which in themselves are `Transition` objects), and catalogs.
#
# We will demonstrate how these pieces come together by looking at some of our published data: this notebook was used to analyze the Benzene discharge experiments reported in these two papers:
#
# <NAME>.; <NAME>.; <NAME>.; <NAME>.; <NAME>.; <NAME>.; <NAME>. Exhaustive Product Analysis of Three Benzene Discharges by Microwave Spectroscopy. J. Phys. Chem. A 2020, 124 (25), 5170–5181. https://doi.org/10.1021/acs.jpca.0c02919.
#
# Lee, <NAME>.; McCarthy, M. Study of Benzene Fragmentation, Isomerization, and Growth Using Microwave Spectroscopy. J. Phys. Chem. Lett. 2019, 10 (10), 2408–2413. https://doi.org/10.1021/acs.jpclett.9b00586.
#
# The full dataset can also be found on our [Zenodo repository](https://zenodo.org/record/3827742); notebook "4000" most closely resembles this (this is a much more heavily marked up version).
#
# We should stress that, while this is mostly automated, it does not change the fact that spectral analysis is _very much an iterative process_. You will make modifications to the way you do your analysis, and many things you won't know until you've run it at least once. The point of having this notebook is so that it is reproducible and transparent: you can always modify the code and re-run the whole notebook with the latest analysis.
# To begin the analysis, we will construct an `AssignmentSession` object using the class method, `AssignmentSession.from_ascii(...)`. This method will take your ASCII spectrum containing frequency and intensity information, and parse it using `pandas` and store it as a `DataFrame`. With all Python routines, you can call the function/method with a question mark at the end to pull up the documentation associated with that function/method:
from pyspectools.spectra.assignment import AssignmentSession, LineList
# In this case, we're setting up the session based on the Benzene data, which is a tab-delimited text file with a header. We ignore the header with `skiprows=`, and provide our own column names with the `col_names` argument. Additionally, we're going to specify the composition we expect for the experiment with the `composition` kwarg: ideally we would only include `["C", "H"]`, however we know there are atmospheric impurities like nitrogen and oxygen that get incorporated in the discharge products. This keyword _will affect Splatalogue assignments_, and exclude catalogs that contain irrelevant compositions like metal-bearing molecules.
session = AssignmentSession.from_ascii(
"chirp_data/ft2632_hanning_620.txt",
experiment=4000,
col_names=["Frequency", "Intensity"],
skiprows=1,
composition=["C", "H", "N", "O"],
verbose=False
)
# You can also adjust many of these settings after the fact; they are stored as attributes of the `Session` object within an `AssignmentSession`. For example, the `temperature` attribute sets an upper limit on the lower-state energies of assignable transitions: we will ignore all features above double this specified energy. This isn't a hard threshold, because it nominally corresponds to your experimental temperature, and depending on how prominent a molecule is, you may see higher-temperature transitions. Another useful thing to set is the maximum tolerance for uncertainty in catalog entries: we would like to reject assignments based on poorly predicted lines, which is set by the `max_uncertainty` attribute.
# +
# temperature in K
session.session.temperature = 10.
# uncertainty in MHz
session.session.max_uncertainty = 0.2
# -
# Note that frequency units are in MHz, and temperature in kelvin.
#
# The next step is to pre-process the spectrum. Our chirped-pulse data are collected using Kyle Crabtree's `blackchirp` program, and often we apply a window function to the data. If you are looking at raw FFT data, `PySpecTools` provides access to window functions defined in `scipy.signal`, which you can access in a syntax like this:
#
# ```python
# session.apply_filter("hanning")
# ```
#
# The full list of filters can be found [in the SciPy documentation](https://docs.scipy.org/doc/scipy/reference/signal.windows.html#module-scipy.signal.windows).
#
# After pre-processing, we will perform peak detection and baseline correction. This is done using the `session.find_peaks` functionality, which automates several steps based on the keyword arguments. All of the analysis in `PySpecTools` is done preferably in units of signal-to-noise ratio (SNR), which is established by fitting a baseline (a _vector_, not scalar), and dividing the entire spectrum element-wise. SNR is definitely more meaningful than a raw voltage scale typically reported.
#
# In the default way of peak finding, we use the asymmetric least-squares (ALS) method to fit a baseline (`als=True`). Essentially this can be thought of as a penalized least-squares method, with additional parameters that define how quickly the baseline can respond (you don't want to over-subtract signal). These parameters can be accessed by providing `find_peaks` with keywords arguments ([see documentation](https://laserkelvin.github.io/PySpecTools/pyspectools.spectra.html#pyspectools.spectra.assignment.AssignmentSession.find_peaks)). The `sigma` keyword then specifies the minimum SNR value to use for peak finding; note that if `als=False`, `threshold` and `sigma` are equivalent. The former specifies the absolute intensity scale to use for peak finding.
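The SNR conversion described above — fit a baseline *vector*, then divide the spectrum element-wise — can be sketched with a rolling-median baseline standing in for the ALS fit (ALS itself needs a sparse penalized solver; this stand-in is an assumption for illustration, not PySpecTools' implementation):

```python
import numpy as np

def snr_spectrum(intensity, window=101):
    """Divide a spectrum by a rolling-median baseline estimate (element-wise)."""
    half = window // 2
    padded = np.pad(intensity, half, mode="edge")
    baseline = np.array(
        [np.median(padded[i:i + window]) for i in range(len(intensity))]
    )
    return intensity / baseline

# synthetic spectrum: slowly varying baseline plus one narrow peak
x = np.linspace(0, 1, 2000)
baseline = 1.0 + 0.5 * x
spectrum = baseline.copy()
spectrum[1000] += 10.0

snr = snr_spectrum(spectrum)
print(round(snr[1000], 1))  # peak height in SNR units → 9.0
```

The median is robust to the narrow peak, so the peak survives division while the sloped baseline is flattened to ≈1.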
# Returns a pandas DataFrame containing frequency/intensity of
# every peak detected. This is also stored as an attribute;
# `AssignmentSession.peaks`
peaks = session.find_peaks(sigma=6, als=True)
# Use the `describe` method of a `DataFrame` to summarize the
# peaks information
peaks.describe()
# In the cell below, we actually manually add some lines. Automated peak detection can never be perfect, especially with blended features. You can add frequency/intensity information by providing a list of 2-tuples as an argument to the `add_ulines` method:
session.add_ulines(
[
(7483.911, 9.390),
(8773.866, 12.523),
(9200.000, 9.116),
(9200.888, 9.442),
(10258.311, 6.850),
(10259.111, 6.948),
(10262.044, 15.061),
(10843.111, 9.215),
(10928.266, 12.748),
(10959.38, 14.302),
(10978.93, 8.527),
(10979.73, 7.273),
(11454.844, 7.216),
(11547.555, 7.485),
(11548.000, 8.370),
(11550.49, 7.134),
(11561.51, 7.720),
(11940.00, 6.039),
(12476.444, 14.628),
(12475.911, 13.628),
(13558.40, 7.472),
(13609.07, 6.087),
(13751.378, 6.745),
(13792.80, 9.937),
(14839.64, 6.485),
(14919.555, 17.971),
(15248.177, 13.216),
(15249.067, 15.414),
(15557.60, 6.572),
(16581.07, 7.550),
(16706.76, 70.758),
(16707.47, 49.851),
(16710.67, 70.43661),
(16711.47, 48.40109),
(17115.02, 9.315)
]
)
# ## Running assignments
#
# With all the peaks found, we can start doing some assignments of the features! The main way this is done is by creating `LineList` objects, which are then fed to the `session.process_linelist` method as we shall see later.
#
# There are different types of `LineList` objects, depending on the source of data:
#
# 1. `from_artifacts`
# 2. `from_clock`
# 2. `from_catalog`
# 3. `from_pgopher`
# 4. `from_dataframe`
# 5. `from_lin`
# 6. `from_splatalogue_query`
# 7. `from_list`
#
# `from_artifacts` will create a specialized `LineList` that flags `Transitions` as non-molecular for book-keeping. `from_clock` is a special variant of this, where we have found that radio interference arising from arbitrary waveform generators often bleed into the resulting chirped-pulse spectrum, and exhaustively generates combinations/harmonics of the clock frequency as artifacts.
artifacts = LineList.from_artifacts(
[8000., 16000., 8125.,16250., 7065.7778, 7147.3778, 8574.9022]
)
# With the `artifacts` variable/object, you can then pass it to the `process_linelist` method of our `AssignmentSession`, and it will automatically cross-correlate every unassigned (U-line) with entries contained in your `LineList`:
session.process_linelist(linelist=artifacts)
# For molecular assignments, you could of course repeat this process and manually create individual `LineList`s; in this example, we'll take an SPCAT catalog and generate the `LineList`:
#
# ```python
# formaldehyde = LineList.from_catalog(name="formaldehyde", formula="H2CO", "catalogs/h2co.cat")
# ```
#
# However, this is incredibly time-consuming and not pretty to look at (not to mention a nightmare to update). Instead, we recommend you set up a directory containing all of your catalogs, and create an input file that stores the metadata for the catalogs so you can "batch" process all of them. In the cell below, we automated the analysis of hydrocarbon molecules (with oxygen- and nitrogen-bearing species handled separately) with a YAML file called `hydrocarbons_cat.yml`. YAML is a simple markup syntax that is both machine- and human-readable and writable. Below is a small excerpt of our file:
#
# ```yaml
# ethynylbenzene,v23:
# formula: c8h6
# filepath: h_catalogs/phenylacetylene_v23.cat
#
# ethynylbenzene,2v23:
# formula: c8h6
# filepath: h_catalogs/phenylacetylene_2v23.cat
#
# ethynylbenzene,v16:
# formula: c8h6
# filepath: h_catalogs/phenylacetylene_v16.cat
#
# buta-1,3-diynylbenzene:
# formula: c10h6
# filepath: h_catalogs/phenyldiacetylene.cat
#
# hexa-1,3,5-triynylbenzene:
# formula: c12h6
# filepath: h_catalogs/phenyltriacetylene.cat
# ```
#
# You can actually provide the `source` keyword as well, and include a BibTeX citekey. When it comes to automatic report generation, the citation will be automatically used to streamline LaTeX table generation.
#
# ```
# molecule_name:
# formula: C12H6 # formula
# source: mccarthy_benzene_2020 # citekey
# filepath: catalog/molecule.cat # filepath to the SPCAT catalog
# ```
#
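The YAML file above deserializes to a plain nested dict, and batch processing is then just a loop over its entries. A minimal sketch of the pattern (the `make_linelist` stub is a hypothetical stand-in for `LineList.from_catalog`; the entries are taken from the excerpt above):

```python
# what the YAML excerpt looks like once loaded (e.g. with yaml.safe_load)
catalogs = {
    "ethynylbenzene,v23": {"formula": "c8h6",
                           "filepath": "h_catalogs/phenylacetylene_v23.cat"},
    "buta-1,3-diynylbenzene": {"formula": "c10h6",
                               "filepath": "h_catalogs/phenyldiacetylene.cat"},
}

def make_linelist(name, formula, filepath):
    # stand-in for LineList.from_catalog(name=..., formula=..., filepath=...)
    return {"name": name, "formula": formula, "filepath": filepath}

linelists = [
    make_linelist(name, entry["formula"], entry["filepath"])
    for name, entry in catalogs.items()
]
print(len(linelists), linelists[0]["formula"])  # → 2 c8h6
```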
session.process_linelist_batch(yml_path="hydrocarbons_cat.yml")
# We repeat the same procedure for a `.lin` file, which also follows SPFIT formatting. The `from_XXX` parser is chosen based on the extension of the referenced file.
session.process_linelist_batch(yml_path="hydrocarbons_lin.yml")
# ## Finishing the analysis
#
# This basically completes the assignment process! We just have a few more steps to take to save the analysis; a `Pickle` file is saved to disk, which is then used for all the subsequent analysis (e.g. line profile, statistics). The `session.finalize_assignments()` is currently not as final as it sounds: it just prompts all the report and table generation to happen, as well as export all of the identified and unidentified data into respective folders.
session.finalize_assignments()
# The `save_session` function below then dumps the entire analysis into the folder `sessions/{experiment_ID}.pkl`, where `{experiment_ID}` is the number assigned to the experiment all the way at the beginning (`experiment=4000`).
session.save_session()
# You can then load this session back in in a separate notebook with `AssignmentSession.load_session("sessions/{experiment_ID}.pkl")`
session = AssignmentSession.load_session("sessions/4000.pkl")
# This loads in all of the information from before, including the results generated with `finalize_assignments()`. For example, the `identifications` attribute stores a `dict` which tracks each distinct species as keys, with the number of assigned lines as values:
session.identifications
# You can also view all of the assignment information by accessing the `DataFrame` stored as the `table` attribute. Below, we also demonstrate how we can sort columns based on their values, for example looking at the transitions with the highest catalog uncertainty first.
session.table.sort_values(["uncertainty"], ascending=False)
# When it comes to making plots, we might also be interested in removing the features that have already been assigned from X/Y; the `clean_spectral_assignments()` function replaces regions of the spectrum that have been assigned with white noise, to make it look natural.
session.clean_spectral_assignments()
# You can then plot the cleaned spectrum, where all of the assigned features are removed from the spectrum with `plot_assigned()`. This creates a `plotly` figure which is interactive!
#
# Note that the `plot_assigned()` function can be used at any point of notebook too; the latest spectrum with assignments overlaid will be shown.
session.plot_assigned()
# ## Conclusions
#
# This notebook completes the first analysis step, which is often the most tedious: assigning and keeping track of every spectral feature, and translating that into something that is publishable. We went through how a spectrum can be loaded and interfaced with the `AssignmentSession` class in `PySpecTools`, followed by peak finding. We then created `LineList` objects based on SPCAT catalogs, fed them to the `AssignmentSession` to process, and showed that you could do this _en masse_. Finally, the results of the analysis were saved to disk and an interactive report generated.
#
# In a future notebook, we'll take a look at what kind of things we can do with the saved `AssignmentSession`, for example chemical composition analysis, and making plots of the data for publication.
| docs/source/examples/experiment_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import xlrd
DATA_FILE = 'athlete_events_2.xlsx'
# Step 1: read in data from the .xlsx file
book = xlrd.open_workbook(DATA_FILE, encoding_override="utf-8")
sheet = book.sheet_by_index(0)
data = np.asarray([sheet.row_values(i) for i in range(1, sheet.nrows)])
n_samples = sheet.nrows - 1
# Step 2: create placeholders for input X and label Y
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
# Step 3: create weight and bias, initialized to 0
w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')
# Step 4: build model to predict Y
Y_predicted = X * w + b
# Step 5: use the square error as the loss function
loss = tf.square(Y - Y_predicted, name='loss')
# loss = utils.huber_loss(Y, Y_predicted)
# Step 6: use gradient descent with a learning rate of 0.00001 to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.00001).minimize(loss)
n_train = 2     # epoch budget, extended while the loss keeps improving
rate = 0.1      # minimum per-epoch loss improvement required to continue
cur_epoch = 1   # mean loss of the previous epoch
n_epoch = 1
total_rate = 1  # improvement in mean loss since the previous epoch
with tf.Session() as sess:
# Step 7: initialize the necessary variables, in this case, w and b
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter('./graphs/linear_reg', sess.graph)
# Step 8: train the model
while n_epoch < n_train: # train the model 100 epochs
total_loss = 0
for x, y in data:
# Session runs train_op and fetch values of loss
_, l = sess.run([optimizer, loss], feed_dict={X: x, Y:y})
total_loss += l
if n_epoch == 1 :
cur_epoch = total_loss/n_samples
n_train += 1
else :
total_rate = cur_epoch - total_loss/n_samples
if total_rate > rate:
n_train +=1
cur_epoch = total_loss/n_samples
print('Epoch {0}: {1} rate : {2}'.format(n_epoch, total_loss/n_samples,total_rate))
n_epoch += 1
# close the writer when you're done using it
writer.close()
# Step 9: output the values of w and b
w, b = sess.run([w, b])
# plot the results
X, Y = data.T[0], data.T[1]
plt.plot(X, Y, 'bo', label='Real data')
plt.plot(X, X * w + b, 'r', label='Predicted data')
plt.legend()
plt.show()
# -
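As a sanity check on a gradient-descent fit like the one above, ordinary least squares has a closed form that numpy computes directly. A minimal sketch on synthetic data (the true slope/intercept of 3.0 and 2.0 are assumptions of this example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, 200)

# fit y = w*x + b in closed form via least squares
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(w, b)  # should recover roughly 3.0 and 2.0
```

If the gradient-descent loop has converged, its `w` and `b` should agree with this closed-form solution to within the noise level.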
| ML_Linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="pknVo1kM2wI2"
# ##### Copyright 2021 The TensorFlow Authors.
# + cellView="form" id="SoFqANDE222Y"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="6x1ypzczQCwy"
# # Reading data from BigQuery with TFX and Vertex Pipelines
#
# + [markdown] id="_445qeKq8e3-"
# <div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
# <td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_bq">
# <img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
# <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_bq.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
# <td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_bq.ipynb">
# <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
# <td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_bq.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
# <td><a href="https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_bq.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Run in Google Cloud Vertex AI Workbench</a></td>
# </table></div>
#
# + [markdown] id="_VuwrlnvQJ5k"
# This notebook-based tutorial will use
# [Google Cloud BigQuery](https://cloud.google.com/bigquery) as a data source to
# train an ML model. The ML pipeline will be constructed using TFX and run on
# Google Cloud Vertex Pipelines.
#
# This notebook is based on the TFX pipeline we built in
# [Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple).
# If you have not read that tutorial yet, you should read it before proceeding
# with this notebook.
#
# [BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable,
# and cost-effective multi-cloud data warehouse designed for business agility.
# TFX can be used to read training data from BigQuery and to
# [publish the trained model](https://www.tensorflow.org/tfx/api_docs/python/tfx/extensions/google_cloud_big_query/pusher/executor/Executor)
# to BigQuery.
#
# In this tutorial, we will use the `BigQueryExampleGen` component, which reads
# data from BigQuery into TFX pipelines.
#
# + [markdown] id="cNA00n3irPgE"
# This notebook is intended to be run on
# [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) or on
# [AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks). If you
# are not using one of these, you can simply click the "Run in Google Colab"
# button above.
#
# ## Set up
# If you have completed
# [Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple),
# you will have a working GCP project and a GCS bucket, which is all we need
# for this tutorial. Please read the preliminary tutorial first if you missed it.
# + [markdown] id="WJbPaFzKrPgN"
# ### Install python packages
# + [markdown] id="QVWOEGgMrPgO"
# We will install required Python packages including TFX and KFP to author ML
# pipelines and submit jobs to Vertex Pipelines.
# + id="osJJdvmIrPgP"
# Use the latest version of pip.
# !pip install --upgrade pip
# !pip install --upgrade "tfx[kfp]<2"
# + [markdown] id="X5GiQFjprPgP"
# #### Did you restart the runtime?
#
# If you are using Google Colab, the first time that you run
# the cell above, you must restart the runtime by clicking
# the "RESTART RUNTIME" button above or using the "Runtime >
# Restart runtime ..." menu. This is because of the way that
# Colab loads packages.
# + [markdown] id="y3TRhlDvrPgQ"
# If you are not on Colab, you can restart the runtime with the following cell.
# + id="JYKpuhamrPgQ"
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# + [markdown] id="NjGI8rzFrPgQ"
# ### Log in to Google for this notebook
# If you are running this notebook on Colab, authenticate with your user account:
# + id="FY8IqqnmrPgQ"
import sys
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
# + [markdown] id="KI3AVxMPrPgQ"
# **If you are on AI Platform Notebooks**, authenticate with Google Cloud before
# running the next section, by running
# ```sh
# gcloud auth login
# ```
# **in the Terminal window** (which you can open via **File** > **New** in the
# menu). You only need to do this once per notebook instance.
# + [markdown] id="g3pkMt6zrPgQ"
# Check the package versions.
# + id="mvZS3XW2rPgR"
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
# + [markdown] id="aDtLdSkvqPHe"
# ### Set up variables
#
# We will set up some variables used to customize the pipelines below. The
# following information is required:
#
# * GCP Project id and number. See
# [Identifying your project id and number](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects).
# * GCP Region to run pipelines. For more information about the regions that
# Vertex Pipelines is available in, see the
# [Vertex AI locations guide](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
# * Google Cloud Storage Bucket to store pipeline outputs.
#
# **Enter required values in the cell below before running it**.
#
# + id="EcUseqJaE2XN"
GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS
GOOGLE_CLOUD_PROJECT_NUMBER = '' # <--- ENTER THIS
GOOGLE_CLOUD_REGION = '' # <--- ENTER THIS
GCS_BUCKET_NAME = '' # <--- ENTER THIS
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_PROJECT_NUMBER and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
# + [markdown] id="GAaCPLjgiJrO"
# Set `gcloud` to use your project.
# + id="VkWdxe4TXRHk"
# !gcloud config set project {GOOGLE_CLOUD_PROJECT}
# + id="CPN6UL5CazNy"
PIPELINE_NAME = 'penguin-bigquery'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
# + [markdown] id="szKDDoD_KipW"
# By default, Vertex Pipelines uses the default GCE VM service account of the
# format `[project-number]-<EMAIL>`. We need to
# grant this account permission to use BigQuery so that the pipeline can access
# BigQuery. We will add the 'BigQuery User' role to the account.
# + id="4aii8K3dJEyj"
# !gcloud projects add-iam-policy-binding {GOOGLE_CLOUD_PROJECT} \
# --member=serviceAccount:{GOOGLE_CLOUD_PROJECT_<EMAIL> \
# --role=roles/bigquery.user
# + [markdown] id="v3ktk1j9s1PP"
# Please see
# [Vertex documentation](https://cloud.google.com/vertex-ai/docs/pipelines/configure-project)
# to learn more about service accounts and IAM configuration.
# + [markdown] id="nH6gizcpSwWV"
# ## Create a pipeline
#
# TFX pipelines are defined using Python APIs as we did in
# [Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple).
# We previously used `CsvExampleGen` which reads data from a CSV file. In this
# tutorial, we will use
# [`BigQueryExampleGen`](https://www.tensorflow.org/tfx/api_docs/python/tfx/extensions/google_cloud_big_query/example_gen/component/BigQueryExampleGen)
# component which reads data from BigQuery.
#
# + [markdown] id="hNg73Slwn8nq"
# ### Prepare BigQuery query
#
# We will use the same
# [Palmer Penguins dataset](https://allisonhorst.github.io/palmerpenguins/articles/intro.html). However, we will read it from a BigQuery table
# `tfx-oss-public.palmer_penguins.palmer_penguins` which is populated using the
# same CSV file.
#
# If you are using Google Colab, you can examine the content of the BigQuery
# table directly.
# + id="Mb_Kj1U8pBhZ"
# docs_infra: no_execute
# %%bigquery --project {GOOGLE_CLOUD_PROJECT}
SELECT *
FROM `tfx-oss-public.palmer_penguins.palmer_penguins`
LIMIT 5
# + [markdown] id="arvdbM5jpjNm"
# All features were already normalized to 0~1 except `species` which is the
# label. We will build a classification model which predicts the `species` of
# penguins.
#
# `BigQueryExampleGen` requires a query to specify which data to fetch. Because
# we will use all the fields of all rows in the table, the query is quite simple.
# You can also specify field names and add `WHERE` conditions as needed according
# to the
# [BigQuery Standard SQL syntax](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax).
# + id="7AwysGAVnfJA"
QUERY = "SELECT * FROM `tfx-oss-public.palmer_penguins.palmer_penguins`"
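# The query can be narrowed before it ever reaches `BigQueryExampleGen`. As a hedged illustration (this variant is hypothetical and not used by the rest of the tutorial), selecting specific fields with a `WHERE` condition is just a different query string:

```python
# A hypothetical narrowed query: explicit field list plus a WHERE condition.
# Only the query string changes; BigQueryExampleGen would be constructed
# exactly the same way as with QUERY above.
FIELDS = ['species', 'culmen_length_mm', 'culmen_depth_mm',
          'flipper_length_mm', 'body_mass_g']
QUERY_FILTERED = (
    'SELECT {} '
    'FROM `tfx-oss-public.palmer_penguins.palmer_penguins` '
    'WHERE body_mass_g IS NOT NULL'
).format(', '.join(FIELDS))
print(QUERY_FILTERED)
```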
# + [markdown] id="lOjDv93eS5xV"
# ### Write model code.
#
# We will use the same model code as in the
# [Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple).
# + id="aES7Hv5QTDK3"
_trainer_module_file = 'penguin_trainer.py'
# + id="Gnc67uQNTDfW"
# %%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also be derived from the TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
# + [markdown] id="-LsYx8MpYvPv"
# Copy the module file to GCS so that it can be accessed from the pipeline
# components. Because model training happens on GCP, we need to upload this
# model definition.
#
# Otherwise, you might want to build a container image including the module file
# and use the image to run the pipeline.
# + id="rMMs5wuNYAbc"
# !gsutil cp {_trainer_module_file} {MODULE_ROOT}/
# + [markdown] id="w3OkNz3gTLwM"
# ### Write a pipeline definition
#
# We will define a function to create a TFX pipeline. We need to use
# `BigQueryExampleGen`, which takes `query` as an argument. One more change from
# the previous tutorial is that we need to pass `beam_pipeline_args`, which is
# passed to components when they are executed. We will use `beam_pipeline_args`
# to pass additional parameters to BigQuery.
#
# + id="M49yYVNBTPd4"
from typing import List, Optional
def _create_pipeline(pipeline_name: str, pipeline_root: str, query: str,
module_file: str, serving_model_dir: str,
beam_pipeline_args: Optional[List[str]],
) -> tfx.dsl.Pipeline:
"""Creates a TFX pipeline using BigQuery."""
# NEW: Query data in BigQuery as a data source.
example_gen = tfx.extensions.google_cloud_big_query.BigQueryExampleGen(
query=query)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a file destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components,
# NEW: `beam_pipeline_args` is required to use BigQueryExampleGen.
beam_pipeline_args=beam_pipeline_args)
# + [markdown] id="mJbq07THU2GV"
# ## Run the pipeline on Vertex Pipelines.
#
# We will use Vertex Pipelines to run the pipeline as we did in
# [Simple TFX Pipeline for Vertex Pipelines Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple).
#
# + [markdown] id="7mp0AkmrPdUb"
# We also need to pass `beam_pipeline_args` for the `BigQueryExampleGen`. It
# includes configs such as the name of the GCP project and the temporary storage
# location for the BigQuery execution.
# + id="fAtfOZTYWJu-"
import os
# We need to pass some GCP related configs to BigQuery. This is currently done
# using `beam_pipeline_args` parameter.
BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
'--project=' + GOOGLE_CLOUD_PROJECT,
'--temp_location=' + os.path.join('gs://', GCS_BUCKET_NAME, 'tmp'),
]
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
query=QUERY,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR,
beam_pipeline_args=BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS))
# + [markdown] id="fWyITYSDd8w4"
# The generated definition file can be submitted using the Google Cloud `aiplatform` client.
# + id="tI71jlEvWMV7"
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
display_name=PIPELINE_NAME)
job.run(sync=False)
# + [markdown] id="L3k9f5IVQXcQ"
# Now you can visit 'Vertex AI > Pipelines' in
# [Google Cloud Console](https://console.cloud.google.com/) to see the
# progress.
| site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_bq.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
import random
import numpy as np
from numpy import *  # the functions below use mat, shape, ones, exp, array, arange
import matplotlib.pyplot as plt
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
def get_data(file_path):
data = load_svmlight_file(file_path)
return data[0], data[1]
def loadDataSet():
dataMat = []; labelMat = []
fr = open('/Users/hakuri/Desktop/testSet.txt')
for line in fr.readlines():
lineArr = line.strip().split()
dataMat.append([1.0, float(lineArr[0]), float(lineArr[1])])
labelMat.append(int(lineArr[2]))
return dataMat,labelMat
def sigmoid(inX):
return 1.0/(1+exp(-inX))
def gradAscent(dataMatIn, classLabels):
dataMatrix = mat(dataMatIn) #convert to NumPy matrix
labelMat = mat(classLabels).transpose() #convert to NumPy matrix
m,n = shape(dataMatrix)
alpha = 0.001
maxCycles = 500
weights = ones((n,1))
for k in range(maxCycles): #heavy on matrix operations
h = sigmoid(dataMatrix*weights) #matrix mult
error = (labelMat - h) #vector subtraction
weights = weights + alpha * dataMatrix.transpose()* error #matrix mult
return weights
def GetResult():
    # NOTE: plotBestFit assumes the 3-parameter model trained on the
    # 2-feature testSet data; the a9a svmlight data below has 123 features
    # and cannot be plotted this way, so we train on the testSet data instead.
    # train_file_path = './a9a.txt'
    # validation_file_path = './a9a.t'
    # dataMat, labelMat = get_data(train_file_path)
    dataMat, labelMat = loadDataSet()
    weights = gradAscent(dataMat, labelMat)
    plotBestFit(weights)
def plotBestFit(weights):
dataMat,labelMat=loadDataSet()
dataArr = array(dataMat)
n = shape(dataArr)[0]
xcord1 = []; ycord1 = []
xcord2 = []; ycord2 = []
for i in range(n):
if int(labelMat[i])== 1:
xcord1.append(dataArr[i,1]); ycord1.append(dataArr[i,2])
else:
xcord2.append(dataArr[i,1]); ycord2.append(dataArr[i,2])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(xcord1, ycord1, s=30, c='red', marker='s')
ax.scatter(xcord2, ycord2, s=30, c='green')
    x = arange(-3.0, 3.0, 0.1)
    # decision boundary: w0 + w1*x1 + w2*x2 = 0  =>  x2 = (-w0 - w1*x1) / w2
    y = (-float(weights[0][0]) - float(weights[1][0]) * x) / float(weights[2][0])
    ax.plot(x, y)
    plt.xlabel('X1'); plt.ylabel('X2')
    plt.show()
if __name__=='__main__':
GetResult()
# -
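# The same update rule as `gradAscent` can be sanity-checked without NumPy on a tiny made-up, linearly separable set. Note the numerically stable sigmoid below, which avoids the overflow that `1/(1+exp(-inX))` can hit for large negative inputs; the data and helper names are illustrative only.

```python
import math

def stable_sigmoid(z):
    # Equivalent to 1/(1+exp(-z)), but exp is never called on a large positive argument.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def toy_grad_ascent(data, labels, alpha=0.1, cycles=200):
    # data: rows of [bias, x]; labels: 0/1.  Same gradient-ascent update as
    # gradAscent above, written with plain lists instead of numpy matrices.
    w = [1.0, 1.0]
    for _ in range(cycles):
        for row, label in zip(data, labels):
            h = stable_sigmoid(sum(wi * xi for wi, xi in zip(w, row)))
            err = label - h
            w = [wi + alpha * err * xi for wi, xi in zip(w, row)]
    return w

data = [[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]]
labels = [0, 0, 1, 1]
w = toy_grad_ascent(data, labels)
print(w)  # the slope weight w[1] should come out clearly positive
```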
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/satyajitghana/TSAI-DeepNLP-END2.0/blob/main/04_RNN_LSTM/04_LSTM_IMDB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="4trWBrarVO9N" outputId="1ebd0369-a251-45eb-cbf9-0131e37db363"
# ! nvidia-smi
# + id="4SPhj6gnAnT2"
import torch
from torchtext.legacy import data
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(tokenize = 'spacy',
tokenizer_language = 'en_core_web_sm',
include_lengths = True)
LABEL = data.LabelField(dtype = torch.float)
# + colab={"base_uri": "https://localhost:8080/"} id="lwn4oStE6PzV" outputId="6c010554-1463-48ce-c528-4d76011d5274"
from torchtext.legacy import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
# + colab={"base_uri": "https://localhost:8080/"} id="5DLJ86m56Xdn" outputId="a11dca14-ff00-4613-b4d8-9e13e8bad2be"
print(f'Number of training examples: {len(train_data)}')
print(f'Number of testing examples: {len(test_data)}')
# + colab={"base_uri": "https://localhost:8080/"} id="iXTWwqXA6rP2" outputId="6c3e4f43-2ba3-4aba-d5d7-a02b522741e0"
print(vars(train_data.examples[0]))
# + id="3HMVqiZd6tR0"
import random
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
# + colab={"base_uri": "https://localhost:8080/"} id="uOeQ6KpP7M-0" outputId="40fdf0bd-ecde-4b64-9114-88b3a829dd8b"
print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')
# + id="KixkM1jQ7TB-"
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data, max_size = MAX_VOCAB_SIZE)
LABEL.build_vocab(train_data)
# + colab={"base_uri": "https://localhost:8080/"} id="hD4SFKnc7g0D" outputId="f029f071-f012-41f5-8390-d1cc6141df86"
print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}")
print(f"Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}")
# + colab={"base_uri": "https://localhost:8080/"} id="ttKvFTCQ7isK" outputId="b52fb395-5148-417f-e3c7-1abec226ea05"
print(TEXT.vocab.freqs.most_common(20))
# + colab={"base_uri": "https://localhost:8080/"} id="fZXIsIV47mlI" outputId="09c18a61-872e-4956-9cd4-06ba1c5c128e"
print(TEXT.vocab.itos[:10])
# + id="vmbx3T9-7x4g" colab={"base_uri": "https://localhost:8080/"} outputId="cd4447e5-5654-46d5-dfd1-073cdee3acb4"
print(LABEL.vocab.stoi)
# + id="B3gBfP6mEJ_0"
BATCH_SIZE = 128
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
sort_within_batch=True, # necessary for packed_padded_sequence
device = device)
# + colab={"base_uri": "https://localhost:8080/"} id="hpaWK7HQ4wiA" outputId="bb2ab651-1bb4-4d6b-8004-2cddfb33782f"
print('Train')
for batch in train_iterator:
print(f'Text matrix size: {batch.text[0].size()}')
print(f'Target vector size: {batch.label.size()}')
break
print('\nValid:')
for batch in valid_iterator:
print(f'Text matrix size: {batch.text[0].size()}')
print(f'Target vector size: {batch.label.size()}')
break
print('\nTest:')
for batch in test_iterator:
print(f'Text matrix size: {batch.text[0].size()}')
print(f'Target vector size: {batch.label.size()}')
break
# + id="f5mSNdbyYKpt"
import torch.nn as nn
class LSTM(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
super().__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, text, text_length):
#[sentence len, batch size] => [sentence len, batch size, embedding size]
embedded = self.embedding(text)
packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, text_length.to('cpu'))
#[sentence len, batch size, embedding size] =>
# output: [sentence len, batch size, hidden size]
# hidden: [1, batch size, hidden size]
packed_output, (hidden, cell) = self.lstm(packed)
return self.fc(hidden.squeeze(0)).view(-1)
# + id="x0_X5kSwENad"
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 128
HIDDEN_DIM = 256
OUTPUT_DIM = 1
model = LSTM(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
# + colab={"base_uri": "https://localhost:8080/"} id="VdGb8dKBEO2x" outputId="d013baf8-2d18-44ca-8ec0-4c4766713bc5"
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
# + id="AAeEtXiJEQCj"
import torch.optim as optim
optimizer = optim.Adam(model.parameters(), lr=1e-4)
# NOTE: DO NOT USE SGD HERE, ONLY ADAM
# + id="0Utp4-qAERRG"
criterion = nn.BCEWithLogitsLoss()
# + id="PyAXf58FESdL"
model = model.to(device)
criterion = criterion.to(device)
# + id="w4yNiGXQETh9"
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
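# A torch-free sketch of the same rounding logic, with hand-checkable numbers (illustrative only): a positive logit maps through the sigmoid to a probability above 0.5 and therefore rounds to class 1.

```python
import math

def binary_accuracy_plain(logits, ys):
    # sigmoid, threshold at 0.5, then fraction correct -- mirrors binary_accuracy above
    correct = 0
    for logit, y in zip(logits, ys):
        prob = 1.0 / (1.0 + math.exp(-logit))
        pred = 1.0 if prob >= 0.5 else 0.0
        correct += (pred == y)
    return correct / len(ys)

# 2.3 -> 1 (hit), -1.1 -> 0 (hit), 0.4 -> 1 (miss), -3.0 -> 0 (hit):
print(binary_accuracy_plain([2.3, -1.1, 0.4, -3.0], [1.0, 0.0, 0.0, 0.0]))  # -> 0.75
```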
# + id="N1iGJW1wEUrL"
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
text, text_length = batch.text
logits = model(text, text_length)
loss = criterion(logits, batch.label)
acc = binary_accuracy(logits, batch.label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
# + id="HNQxQS3tEWUW"
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
text, text_length = batch.text
predictions = model(text, text_length)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
# + id="DVM8MtV6EYIw"
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
# + colab={"base_uri": "https://localhost:8080/"} id="yJ5KZmM4EZXW" outputId="f193b179-58e8-434e-fe38-109fb3ab6db6"
N_EPOCHS = 30
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
# + colab={"base_uri": "https://localhost:8080/"} id="qIiKAJMaEbKO" outputId="646bedb9-6e66-48a2-f54b-e91780e42fa6"
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
# + id="G024NssCEcj0"
| 04_RNN_LSTM/04_LSTM_IMDB.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
import cv2 as cv
import os
from glob import glob
# -
camera_id = 0
out_folder = '/tmp/imgs/'
# +
from reachy.utils.vision import BackgroundVideoCapture
cap = BackgroundVideoCapture(camera_id)
# -
_, img = cap.read()
plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB))
# +
import ipywidgets as widgets
from IPython.display import display, clear_output
button = widgets.Button(description="Snapshot!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
    os.makedirs(out_folder, exist_ok=True)  # make sure the output folder exists
    n = len(glob(os.path.join(out_folder, '*.jpg'))) + 1
    path = os.path.join(out_folder, f'img_{n}.jpg')
    _, img = cap.read()
    cv.imwrite(path, img)
    with output:
        clear_output()
        print(f'Snapshot saved as {path}')
button.on_click(on_button_clicked)
# -
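# One caveat with the callback above: deriving the next index from the current file count can reuse a name if earlier snapshots were deleted. A hedged alternative (illustrative, not wired into the widget) derives the name from a timestamp instead:

```python
import os
from datetime import datetime

def snapshot_path(folder):
    # Timestamp-based names cannot clash with previously deleted snapshots,
    # unlike `len(glob('*.jpg')) + 1`.
    stamp = datetime.now().strftime('%Y%m%d_%H%M%S_%f')
    return os.path.join(folder, f'img_{stamp}.jpg')

print(snapshot_path('/tmp/imgs'))
```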
| software/notebooks/Record images from camera.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Forecasting - Facebook Prophet
# https://facebook.github.io/prophet/
#
# https://research.fb.com/blog/2017/02/prophet-forecasting-at-scale/
#
# https://peerj.com/preprints/3190.pdf
# +
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
mpl.rcParams['figure.figsize'] = (16, 10)
pd.set_option('display.max_rows', 500)
import plotly.graph_objects as go
# +
# Attention: fbprophet might have problems with the holidays package;
# downgrade holidays via: pip install 'holidays==0.9.12'
from fbprophet import Prophet
# -
plt.style.use('fivethirtyeight')
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
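# The MAPE helper above is easy to sanity-check with a plain-Python version on numbers small enough to verify by hand:

```python
def mape_plain(y_true, y_pred):
    # mean of |(true - pred) / true|, in percent -- mirrors the numpy helper above
    errs = [abs((t - p) / t) for t, p in zip(y_true, y_pred)]
    return 100.0 * sum(errs) / len(errs)

# 10% error on both points, so the mean is 10%:
print(mape_plain([100.0, 200.0], [110.0, 180.0]))  # -> 10.0
```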
# # Trivial Forecast (rolling mean)
# +
# the final trivial model is at the end of this file
df = pd.DataFrame({'X': np.arange(0,10)}) # generate an input df
df['y'] = df['X'].rolling(3).mean() # rolling window of 3; the mean is the trivial forecast
# -
df
# # Small data set
df_all = pd.read_csv('../data/processed/COVID_small_flat_table.csv',sep=';')
df=df_all[['date','Germany']]
df=df.rename(columns={'date': 'ds',
'Germany': 'y'})
# +
ax = df.set_index('ds').plot(figsize=(12, 8),
logy=True)
ax.set_ylabel('Daily Number of confirmed cases')
ax.set_xlabel('Date')
plt.show()
# +
# set the uncertainty interval to 95% (the Prophet default is 80%)
#my_model = Prophet(interval_width=0.95) # piecewise linear model
my_model = Prophet(growth='logistic') # logistic model
# -
# the column 'cap' is only mandatory for the logistic model
df['cap']=1000000.
my_model.fit(df)
# +
# define the periods and the frequency 'D'== days
future_dates = my_model.make_future_dataframe(periods=7, freq='D')
future_dates['cap']=1000000. # only mandatory for the logistic model
future_dates.tail()
# +
# predict according to the scikit-learn standard
forecast = my_model.predict(future_dates)
# -
my_model.plot(forecast,
uncertainty=True ); # since fbprohet is rendering the output
# +
import plotly.offline as py
from fbprophet.plot import plot_plotly
fig = plot_plotly(my_model, forecast) # This returns a plotly Figure
fig.update_layout(
width=1024,
height=900,
xaxis_title="Time",
yaxis_title="Confirmed infected people (source johns hopkins csse, log-scale)",
)
fig.update_yaxes(type="log",range=[1.1,5.5])
py.iplot(fig)
# -
forecast.sort_values(by='ds').head()
my_model.plot_components(forecast);
forecast[['ds','trend']].set_index('ds').plot(figsize=(12, 8),logy=True)
# # Cross-Validation
from fbprophet.diagnostics import cross_validation
df_cv = cross_validation(my_model,
initial='40 days', # we take the first 40 days for training
period='1 days', # every day a new prediction run
horizon = '7 days') # we predict 7 days into the future
df_cv.sort_values(by=['cutoff','ds'])[0:12]
df_cv.head()
from fbprophet.diagnostics import performance_metrics
df_p = performance_metrics(df_cv)
# the performance table shows the results for all horizons
df_p
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape',)
# # Diagonalplot
#
# ### gives a good understanding for the under and over estimation w.r.t. magnitude
# +
horizon='7 days'
df_cv['horizon']=df_cv.ds-df_cv.cutoff
date_vec=df_cv[df_cv['horizon']==horizon]['ds']
y_hat=df_cv[df_cv['horizon']==horizon]['yhat']
y=df_cv[df_cv['horizon']==horizon]['y']
# -
df_cv_7=df_cv[df_cv['horizon']==horizon]
df_cv_7.tail()
type(df_cv['horizon'][0])
# +
fig, ax = plt.subplots(1, 1)
ax.plot(np.arange(max(y)),np.arange(max(y)),'--',label='diagonal')
ax.plot(y,y_hat,'-',label=horizon) # horizon is a np.timedelta object
ax.set_title('Diagonal Plot')
ax.set_ylim(10, max(y))
ax.set_xlabel('truth: y')
ax.set_ylabel('prediction: y_hat')
ax.set_yscale('log')
ax.set_xlim(10, max(y))
ax.set_xscale('log')
ax.legend(loc='best',
prop={'size': 16});
# -
# # Trivial Forecast
#
# Example trivial forecast, predicting 7 days into the future
def mean_absolute_percentage_error(y_true, y_pred):
    ''' MAPE calculation '''
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
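# The MAPE helper can be sanity-checked on made-up numbers; the snippet below mirrors the formula term by term (values are purely illustrative):

```python
import numpy as np

# Hypothetical truth/prediction pair: a 10% error on the first point, 5% on the second
y_true, y_pred = np.array([100.0, 200.0]), np.array([110.0, 190.0])
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print(mape)  # -> 7.5
```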
parse_dates=['date']
df_all = pd.read_csv('../data/processed/COVID_small_flat_table.csv',sep=';',parse_dates=parse_dates)
df_trivial=df_all[['date','Germany']]
df_trivial=df_trivial.rename(columns={'date': 'ds',
'Germany': 'y'})
# ### One of the standard forecasts is a rolling mean
#
# Another standard forecast is the exponentially-weighted moving average;
# see the pandas `Series.ewm` method (the old `pandas.ewma` function is deprecated)
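# A minimal sketch of that exponentially-weighted alternative; in current pandas the old `ewma` function lives on as the `Series.ewm` method. `span=3` is an arbitrary illustrative choice:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 4.0, 8.0])
ewm_mean = s.ewm(span=3).mean()   # exponentially-weighted: recent values weigh more
roll_mean = s.rolling(3).mean()   # plain rolling mean, for comparison
print(ewm_mean.iloc[-1], roll_mean.iloc[-1])
```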
df_trivial['y_mean_r3']=df_trivial.y.rolling(3).mean() # take the average of 3 days
# +
# the result has to be shifted according to the prediction horizon (here 7 days)
df_trivial['cutoff']=df_trivial['ds'].shift(7)
df_trivial['y_hat']=df_trivial['y_mean_r3'].shift(7)
df_trivial['horizon']=df_trivial['ds']-df_trivial['cutoff']
print('MAPE: '+str(mean_absolute_percentage_error(df_trivial['y'].iloc[12:], df_trivial['y_hat'].iloc[12:])))
df_trivial
# -
| notebooks/Modeling_forecast_facebook_prophet_1.0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import Memory_Collaborative_Filtering as mem
import sqlite3 as sql
import sklearn
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix
import pickle
wrangled_path = 'C:/Users/arjun/Dropbox/Georgetown Data Science -- Team Amazon/wrangled_amazon_data.db'
wrangled_data = mem.import_wrangled_data(wrangled_path)
wrangled_data['star_rating'] = wrangled_data['star_rating'].astype(int)
wrangled_review_data = wrangled_data.dropna()
wrangled_review_data = wrangled_review_data.drop_duplicates()
wrangled_review_data = wrangled_review_data.reset_index(drop=True)
product_features = pd.concat([wrangled_review_data[['star_rating']],
wrangled_review_data[['helpful_votes']],
wrangled_review_data[['review_length']],
pd.get_dummies(wrangled_review_data[['author']])
], axis=1)
nbrs = NearestNeighbors(n_neighbors=6, algorithm = 'ball_tree').fit(product_features)
distances2, indices2 = nbrs.kneighbors()
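# Calling `kneighbors()` with no arguments, as above, returns for every training row the distances and indices of its nearest neighbors, excluding the row itself. A toy 1-D illustration:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

toy = np.array([[0.0], [0.1], [5.0]])      # three 1-D points
toy_nbrs = NearestNeighbors(n_neighbors=1).fit(toy)
toy_dist, toy_idx = toy_nbrs.kneighbors()  # no argument: neighbors of the training rows themselves
print(toy_idx.ravel())                     # each row's nearest *other* row
```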
def get_index_from_name(name, review_data):
    return review_data[review_data["product_title"]==name].index.tolist()[0]
def print_similar_books(review_data, distance_matrix, index_matrix, query=None, id=None, model=None):
    if id:
        for neighbor_id in index_matrix[id][1:]:
            print(review_data.iloc[neighbor_id]["product_title"])
    if query:
        found_id = get_index_from_name(query, review_data)
        for neighbor_id in index_matrix[found_id][1:]:
            print(review_data.iloc[neighbor_id]["product_title"])
for book in test_recs['product_title']:
    print(book + " recommendations:")
    print_similar_books(wrangled_review_data, distances2, indices2, query=book, model=nbrs)
print_similar_books(wrangled_review_data, distances2, indices2, query="Trial Run (Fault Lines)", model=nbrs)
| Test_Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
# ## Load data
#
df_us_confirmed_case = pd.read_csv('RAW_us_confirmed_cases.csv')
df_us_confirmed_case
df_us_mask_mandate = pd.read_csv('U.S._State_and_Territorial_Public_Mask_Mandates_From_April_10__2020_through_August_15__2021_by_County_by_Day.csv')
df_us_mask_mandate
df_mask_use_by_county = pd.read_csv('mask-use-by-county.csv')
df_mask_use_by_county
# ### Collect the needed data and clean it up
state = 'Florida'
state_abbr = 'FL'
county = 'Palm Beach'
FIPS = 12099
# clean up data
# drop unneeded columns
df_us_confirmed_case_dropped = df_us_confirmed_case.drop(columns=['UID', 'iso2', 'iso3', 'code3', 'Country_Region', 'Lat', 'Long_', 'Combined_Key'])
df_us_confirmed_case_dropped=df_us_confirmed_case_dropped.fillna(0)
df_us_confirmed_case_dropped['FIPS'] = df_us_confirmed_case_dropped.FIPS.apply(lambda x: "{:05d}".format(int(float(x))))
df_us_confirmed_case_dropped
# +
# Melt the confirmed-case data so each row represents the confirmed cases for one day
id_vars = [
'Province_State',
'Admin2',
'FIPS',
]
df_us_confirmed_case_transformed =pd.melt(df_us_confirmed_case_dropped, id_vars = id_vars, var_name ='date', value_name = "cases")
df_us_confirmed_case_transformed['date'] = pd.to_datetime(df_us_confirmed_case_transformed['date'])
df_us_confirmed_case_transformed
# +
df_palm_beach_confirmed_case_transformed = df_us_confirmed_case_transformed[(df_us_confirmed_case_transformed['Admin2'] == county) & (df_us_confirmed_case_transformed['Province_State'] == state)].reset_index(drop=True)
df_palm_beach_confirmed_case_transformed
# -
# create FIPS column for easier join
df_us_mask_mandate['FIPS_State'] = df_us_mask_mandate.FIPS_State.apply(lambda x: "{:02d}".format(int(x)))
df_us_mask_mandate['FIPS_County'] = df_us_mask_mandate.FIPS_County.apply(lambda x: "{:03d}".format(int(x)))
df_us_mask_mandate['FIPS'] = df_us_mask_mandate['FIPS_State'] + df_us_mask_mandate['FIPS_County']
df_us_mask_mandate
# drop unneeded columns
df_us_mask_mandate_dropped = df_us_mask_mandate.drop(columns=['FIPS_State', 'FIPS_County', 'order_code', 'Source_of_Action', 'URL', 'Citation'])
df_us_mask_mandate_dropped
# +
df_palm_beach_mask_mandate = df_us_mask_mandate_dropped[df_us_mask_mandate['County_Name'] == 'Palm Beach County'].reset_index(drop =True)
df_palm_beach_mask_mandate
# -
df_palm_beach_mask_mandate.Face_Masks_Required_in_Public.unique()
df_mask_use_palm_beach = df_mask_use_by_county[df_mask_use_by_county['COUNTYFP'] == FIPS].reset_index(drop =True)
df_mask_use_palm_beach
# ### Palm Beach has no mask mandate
df_mask_use_palm_beach_transformed = df_mask_use_palm_beach.drop(columns=['COUNTYFP'])
df_mask_use_palm_beach_transformed = pd.melt(df_mask_use_palm_beach_transformed, var_name='Response', value_name="Proportion")
df_mask_use_palm_beach_transformed
# ### Visualize
#
# +
import matplotlib.pyplot as plt
import matplotlib.lines as lines
from datetime import datetime
from matplotlib.dates import date2num
# +
fig, ax = plt.subplots(figsize=(15,10))
ax.bar(df_mask_use_palm_beach_transformed['Response'], df_mask_use_palm_beach_transformed['Proportion'], color ='C0')
ax.set_title("Estimated Prevalence of Mask Wearing in Palm Beach County, FL")
ax.set_ylabel("Proportion")
plt.show()
# +
# accumulated covid cases
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['cases'], color='C0')
ax.set_title("Accumulated Covid Cases, Palm Beach County, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Number of Covid Cases")
plt.show()
# -
# daily new covid cases
df_palm_beach_confirmed_case_transformed['new_cases'] = df_palm_beach_confirmed_case_transformed['cases'].diff()
df_palm_beach_confirmed_case_transformed['new_cases_moving_average_7_days'] = df_palm_beach_confirmed_case_transformed['new_cases'].rolling(window=7).mean().round()
df_palm_beach_confirmed_case_transformed
# +
# daily new covid cases
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['new_cases'], color='C0')
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['new_cases_moving_average_7_days'], color='C1')
line0 = lines.Line2D([0], [0], label='Daily New COVID Cases', color='C0')
line1 = lines.Line2D([0], [0], label='7 days Moving average New COVID Cases', color='C1')
plt.legend(handles=[line0,line1])
ax.set_title("Daily New Covid Cases & 7-day Average, Palm Beach County, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Number of Covid Cases")
plt.show()
# +
# infection rate = daily new case / population
population = 1492191
df_palm_beach_confirmed_case_transformed['daily_infection_rate'] = df_palm_beach_confirmed_case_transformed['new_cases'].apply(lambda x: x * 1.0 / population)
df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'] = df_palm_beach_confirmed_case_transformed['new_cases_moving_average_7_days'].apply(lambda x: x * 1.0 / population)
df_palm_beach_confirmed_case_transformed
# +
# daily new covid cases
fig, ax = plt.subplots(figsize=(15,10))
#ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['daily_infection_rate'], color='C0')
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'], color='C1')
#line0 = lines.Line2D([0], [0], label='Daily Infection rate', color='C0')
line1 = lines.Line2D([0], [0], label='7 days Moving average infection rate', color='C1')
plt.legend(handles=[
#line0,
line1])
ax.set_title("7 days moving average infection rate, Palm Beach County, FL (Daily new cases / Population)")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate")
plt.show()
# -
# ### Since Palm Beach County doesn't have a mask-mandate policy, try to find a comparable county that does
# +
#
df_palm_beach_confirmed_case_transformed['daily_infection_rate_diff'] =df_palm_beach_confirmed_case_transformed['daily_infection_rate'].diff()
df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days_diff'] =df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'].diff()
df_palm_beach_confirmed_case_transformed
# +
fig, ax = plt.subplots(figsize=(15,10))
#ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['daily_infection_rate_diff'], color='C0')
ax.plot(df_palm_beach_confirmed_case_transformed['date'], df_palm_beach_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days_diff'], color='C1')
#line0 = lines.Line2D([0], [0], label='Daily Infection rate Diff', color='C0')
line1 = lines.Line2D([0], [0], label='7 days Moving average infection rate Diff', color='C1')
plt.legend(handles=[
#line0,
line1])
ax.set_title("7 days moving average infection rate diff, Palm Beach County, FL")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate Diff")
plt.show()
# -
# ### The infection rate shows little change, especially in the 7-day moving average
# ## Try to find a different county and compare
# Find a county with similar mask-wearing behavior that did have a mandate: Spotsylvania County, VA (FIPS 51177)
#
FIPS_other = 51177
df_mask_use_other = df_mask_use_by_county[df_mask_use_by_county['COUNTYFP'] == FIPS_other].reset_index(drop =True)
df_mask_use_other
# +
df_val_verde_mask_mandate = df_us_mask_mandate_dropped[df_us_mask_mandate['FIPS'] == "{:05d}".format(FIPS_other)].reset_index(drop =True)
df_val_verde_mask_mandate
# -
df_val_verde_mask_mandate.Face_Masks_Required_in_Public.unique()
# +
df_val_verde_mask_mandate_yes = df_val_verde_mask_mandate[df_val_verde_mask_mandate['Face_Masks_Required_in_Public'] == 'Yes'].reset_index(drop =True)
df_val_verde_mask_mandate_yes
# +
df_spotsylvania_confirmed_case_transformed = df_us_confirmed_case_transformed[(df_us_confirmed_case_transformed['FIPS'] == "{:05d}".format(FIPS_other))].reset_index(drop=True)
df_spotsylvania_confirmed_case_transformed
# -
Population_other = 136215
df_spotsylvania_confirmed_case_transformed['new_cases'] = df_spotsylvania_confirmed_case_transformed['cases'].diff()
df_spotsylvania_confirmed_case_transformed['new_cases_moving_average_7_days'] = df_spotsylvania_confirmed_case_transformed['new_cases'].rolling(window=7).mean().round()
df_spotsylvania_confirmed_case_transformed['daily_infection_rate'] = df_spotsylvania_confirmed_case_transformed['new_cases'].apply(lambda x: x * 1.0 / Population_other)
df_spotsylvania_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'] = df_spotsylvania_confirmed_case_transformed['new_cases_moving_average_7_days'].apply(lambda x: x * 1.0 / Population_other)
df_spotsylvania_confirmed_case_transformed['daily_infection_rate_diff'] =df_spotsylvania_confirmed_case_transformed['daily_infection_rate'].diff()
df_spotsylvania_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days_diff'] =df_spotsylvania_confirmed_case_transformed['daily_infection_rate_new_cases_moving_average_7_days'].diff()
df_spotsylvania_confirmed_case_transformed
df_palm_beach_confirmed_case_transformed.columns
# +
joine_df=df_palm_beach_confirmed_case_transformed.merge(df_spotsylvania_confirmed_case_transformed, left_on='date', right_on='date',
suffixes=('_palm_beach', '_spotsylvania'))
joine_df.columns
# +
#
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_diff_palm_beach'], color='C0')
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_diff_spotsylvania'], color='C1')
line0 = lines.Line2D([0], [0], label='Change in 7 days Moving average infection rate Palm Beach,FL', color='C0')
line1 = lines.Line2D([0], [0], label='Change in 7 days Moving average infection rate Spotsylvania, VA', color='C1')
span = ax.axvspan(date2num(datetime(2020,5,29)), date2num(datetime(2021,5,14)),color="C2", label = 'mask mandate in Spotsylvania County')
plt.legend(handles=[
line0,
line1,
span])
ax.set_title("Change in 7 days moving average infection rate, Palm Beach, FL vs Spotsylvania, VA")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate Diff")
plt.show()
# +
#
fig, ax = plt.subplots(figsize=(15,10))
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_palm_beach'], color='C0')
ax.plot(joine_df['date'], joine_df['daily_infection_rate_new_cases_moving_average_7_days_spotsylvania'], color='C1')
line0 = lines.Line2D([0], [0], label='7 days Moving average infection rate Palm Beach', color='C0')
line1 = lines.Line2D([0], [0], label='7 days Moving average infection rate Spotsylvania', color='C1')
span = ax.axvspan(date2num(datetime(2020,5,29)), date2num(datetime(2021,5,14)),color="C2", label = 'mask mandate in Spotsylvania County')
plt.legend(handles=[
line0,
line1,
span])
ax.set_title("7 days moving average infection rate, Palm Beach, FL vs Spotsylvania, VA")
ax.set_xlabel("Date")
ax.set_ylabel("Infection Rate")
plt.show()
| A4-common analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
from tqdm import tqdm
from data_load import get_batch, load_vocab
from hyperparams import Hyperparams as hp
from modules import *
from networks import TextEnc, AudioEnc, AudioDec, Attention, SSRN
import tensorflow as tf
from utils import *
import sys
# -
# !pip install numba==0.47.0
| train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id='Urbanization_Using_NDBI_top'></a>
# # Urbanization Using NDBI
# <hr>
#
# ## Background
# Among the many urbanization indices, the Normalized Difference Built-Up Index (NDBI) is one of the most commonly used. This notebook shows how to use NDBI in the context of the Open Data Cube.
#
# The formula for NDBI for Landsat is as follows:
#
# $$ NDBI = \frac{(SWIR - NIR)}{(SWIR + NIR)}$$
#
# Note that for arid environments, the Dry Built-Up Index (DBI) may perform better than NDBI, which struggles with arid environments and some kinds of buildings. DBI requires the TIR band of Landsat 8.
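# The formula is straightforward to apply to raw band arrays; here is a minimal sketch with made-up reflectance values (the analysis below uses the `utils.data_cube_utilities` NDBI helper instead):

```python
import numpy as np

# Hypothetical SWIR/NIR reflectances for three pixels
swir = np.array([0.30, 0.25, 0.10])
nir = np.array([0.20, 0.28, 0.35])
ndbi = (swir - nir) / (swir + nir)   # values > 0 suggest built-up surfaces
print(ndbi)
```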
#
# <br>
#
# ## Index
#
# * [Import Dependencies and Connect to the Data Cube](#Urbanization_Using_NDBI_import)
# * [Choose Platform and Product](#Urbanization_Using_NDBI_plat_prod)
# * [Define the Extents of the Analysis](#Urbanization_Using_NDBI_define_extents)
# * [Load Data from the Data Cube](#Urbanization_Using_NDBI_retrieve_data)
# * [Show RGB Representation of the Area](#Urbanization_Using_NDBI_rgb)
# * [Urbanization Analysis](#Urbanization_Using_NDBI_analysis)
# ## <span id="Urbanization_Using_NDBI_import">Import Dependencies and Connect to the Data Cube [▴](#Urbanization_Using_NDBI_top)</span>
# +
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import matplotlib.pyplot as plt
import xarray as xr
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.dc_rgb import rgb
from utils.data_cube_utilities.urbanization import NDBI
from utils.data_cube_utilities.vegetation import NDVI
from utils.data_cube_utilities.dc_water_classifier import NDWI
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
# -
# ## <span id="Urbanization_Using_NDBI_plat_prod">Choose Platform and Product [▴](#Urbanization_Using_NDBI_top)</span>
# Get available products
products_info = dc.list_products()
print("LANDSAT 7 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_7"]
print("LANDSAT 8 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_8"]
# **Choose the platforms and products**
# +
# These are the platforms (satellites) and products (datacube sets)
# used for this demonstration.
platform = 'LANDSAT_8'
product = 'ls8_usgs_sr_scene'
collection = 'c1'
level = 'l2'
# -
# ## <span id="Urbanization_Using_NDBI_define_extents">Define the Extents of the Analysis [▴](#Urbanization_Using_NDBI_top)</span>
# +
# Kumasi, Ghana
# lat = (6.597724,6.781856)
# lon = (-1.727843,-1.509147)
# Accra, Ghana
lat = (5.5162, 5.6338)
lon = (-0.2657, -0.1373)
time_range = ("2019-01-01", "2019-12-31")
# -
# **Visualize the selected area**
display_map(lat, lon)
# ## <span id="Urbanization_Using_NDBI_retrieve_data">Load Data from the Data Cube [▴](#Urbanization_Using_NDBI_top)</span>
# +
desired_bands = ['red','green','nir','swir1', 'swir2', 'pixel_qa'] # needed by ndvi, ndwi, ndbi and cloud masking
desired_bands = desired_bands + ['blue'] # blue is needed for a true color visualization purposes
landsat_ds = dc.load(product = product,
platform = platform,
lat = lat,
lon = lon,
time = time_range,
measurements = desired_bands,
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})
# +
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform,
collection=collection, level=level)
# -
landsat_ds = landsat_ds.where(clean_mask)
# ## <span id="Urbanization_Using_NDBI_rgb">Show RGB Representation of the Area [▴](#Urbanization_Using_NDBI_top)</span>
median_composite = landsat_ds.median('time')
plt.figure(figsize=(8,8))
median_composite[['red', 'green', 'blue']].to_array().plot.imshow(vmin=0, vmax=2500)
plt.show()
# ## <span id="Urbanization_Using_NDBI_analysis">Urbanization Analysis [▴](#Urbanization_Using_NDBI_top)</span>
#
# > **NDWI, NDVI, NDBI**
# You will very rarely have urban classification and water classifications apply to the same pixel. For urban analysis, it may make sense to compute not just urban classes, but classes that are unlikely to co-occur with urbanization, such as vegetation (e.g. NDVI) or water (e.g. NDWI).
ndbi = NDBI(median_composite) # Urbanization
ndvi = NDVI(median_composite) # Dense Vegetation
ndwi = NDWI(median_composite) # High Concentrations of Water
plt.figure(figsize=(8,8))
ndvi.plot(cmap = "Greens")
plt.show()
plt.figure(figsize=(8,8))
ndwi.plot(cmap = "Blues")
plt.show()
plt.figure(figsize=(8,8))
ndbi.plot(cmap = "Reds")
plt.show()
# > **Merge into one Dataset**
# > If your data-arrays share the same set of coordinates, and you feel that you'll be using these values together in the future, you should consider merging them into an `xarray.Dataset`.
urbanization_dataset = xr.merge((ndvi.rename('NDVI'), ndwi.rename('NDWI'), ndbi.rename('NDBI')))
urbanization_dataset
# >**Building a False Color Composite**
# > If you have three weakly correlated measurements, place each measurement on the red, green, and blue channels and visualize them together as a false-color composite.
plt.figure(figsize=(8,8))
urbanization_dataset[["NDBI", "NDVI", "NDWI"]].to_array().plot.imshow(vmin=0, vmax=1)
plt.show()
# >**Analyze the False Color Image**
# > Pixels that belong strongly to a single class light up in that class's color channel. In this example, NDVI maps to green, NDWI to blue, and NDBI to red.
# > **Validate urbanization using other imagery**
# > Double-check the results using high-resolution imagery, and compare it to the false-color mosaic.
# <br>
display_map(latitude = lat ,longitude = lon)
| notebooks/urbanization/Urbanization_Using_NDBI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # In Depth - Decision Trees and Forests
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
# Here we'll explore a class of algorithms based on decision trees.
# Decision trees at their root are extremely intuitive. They
# encode a series of "if" and "else" choices, similar to how a person might make a decision.
# However, which questions to ask, and how to proceed for each answer is entirely learned from the data.
#
# For example, if you wanted to create a guide to identifying an animal found in nature, you
# might ask the following series of questions:
#
# - Is the animal bigger or smaller than a meter long?
# + *bigger*: does the animal have horns?
# - *yes*: are the horns longer than ten centimeters?
# - *no*: is the animal wearing a collar?
# + *smaller*: does the animal have two or four legs?
# - *two*: does the animal have wings?
# - *four*: does the animal have a bushy tail?
#
# and so on. This binary splitting of questions is the essence of a decision tree.
# One of the main benefits of tree-based models is that they require little preprocessing of the data.
# They can work with variables of different types (continuous and discrete) and are invariant to scaling of the features.
#
# Another benefit is that tree-based models are "nonparametric", which means they don't have a fixed set of parameters to learn. Instead, a tree model can become more and more flexible if given more data.
# In other words, the number of free parameters grows with the number of samples and is not fixed, as it is for example in linear models.
#
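# The question sequence above can be sketched directly as nested `if`/`else` logic; the thresholds and labels below are made up purely to make the structure concrete:

```python
def classify_animal(length_m, has_horns, horn_length_cm):
    """Hand-written 'decision tree' mirroring the guide above (labels are illustrative)."""
    if length_m > 1.0:                 # bigger than a meter?
        if has_horns:                  # does it have horns?
            return "long-horned" if horn_length_cm > 10 else "short-horned"
        return "large, hornless"
    return "small animal"

print(classify_animal(1.5, True, 25))  # -> long-horned
```

# A learned tree discovers splits like these from data instead of having them hand-written.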
# ## Decision Tree Regression
# A decision tree regressor is a simple binary tree model, similar in spirit to
# nearest neighbor regression. It can be used as follows:
# +
from figures import make_dataset
x, y = make_dataset()
X = x.reshape(-1, 1)
plt.figure()
plt.xlabel('Feature X')
plt.ylabel('Target y')
plt.scatter(X, y);
# +
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth=5)
reg.fit(X, y)
X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))
y_fit_1 = reg.predict(X_fit)
plt.figure()
plt.plot(X_fit.ravel(), y_fit_1, color='blue', label="prediction")
plt.plot(X.ravel(), y, '.k', label="training data")
plt.legend(loc="best");
# -
# A single decision tree allows us to estimate the signal in a non-parametric way,
# but clearly has some issues. In some regions, the model shows high bias and
# under-fits the data.
# (seen in the long flat lines which don't follow the contours of the data),
# while in other regions the model shows high variance and over-fits the data
# (reflected in the narrow spikes which are influenced by noise in single points).
# ## Decision Tree Classification
# Decision tree classification works very similarly, by assigning all points within a leaf the majority class in that leaf:
#
# +
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from figures import plot_2d_separator
X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
plt.figure()
plot_2d_separator(clf, X, fill=True)
plt.scatter(X_train[:, 0], X_train[:, 1], c=np.array(['b', 'r'])[y_train], s=60, alpha=.7, edgecolor='k')
plt.scatter(X_test[:, 0], X_test[:, 1], c=np.array(['b', 'r'])[y_test], s=60, edgecolor='k');
# -
# There are many parameters that control the complexity of a tree, but the one that might be easiest to understand is the maximum depth. This limits how finely the tree can partition the input space, or how many "if-else" questions can be asked before deciding which class a sample lies in.
#
# This parameter is important to tune for trees and tree-based models. The interactive plot below shows what underfitting and overfitting look like for this model. Having a ``max_depth`` of 1 is clearly an underfit model, while a depth of 7 or 8 clearly overfits. The maximum depth a tree can be grown to for this dataset is 8, at which point each leaf only contains samples from a single class. This is known as all leaves being "pure."
#
# In the interactive plot below, the regions are assigned blue and red colors to indicate the predicted class for that region. The shade of the color indicates the predicted probability for that class (darker = higher probability), while yellow regions indicate an equal predicted probability for either class.
# # %matplotlib inline
from figures import plot_tree_interactive
plot_tree_interactive()
# Decision trees are fast to train, easy to understand, and often lead to interpretable models. However, single trees often tend to overfit the training data. Playing with the slider above you might notice that the model starts to overfit even before it has a good separation between the classes.
#
# Therefore, in practice it is more common to combine multiple trees to produce models that generalize better. The most common methods for combining trees are random forests and gradient boosted trees.
#
# ## Random Forests
# Random forests are simply many trees, built on different random subsets (drawn with replacement) of the data, and using different random subsets (drawn without replacement) of the features for each split.
# This makes the trees different from each other, and makes them overfit to different aspects. Then, their predictions are averaged, leading to a smoother estimate that overfits less.
#
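# As a rough sketch of that averaging effect (dataset and hyperparameters chosen arbitrarily, not part of the interactive demo), a forest typically generalizes at least as well as a single fully-grown tree:

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

Xm, ym = make_moons(n_samples=400, noise=0.3, random_state=0)
Xm_tr, Xm_te, ym_tr, ym_te = train_test_split(Xm, ym, random_state=0)

# one unpruned tree vs. an ensemble of 100 trees on the same split
tree_score = DecisionTreeClassifier(random_state=0).fit(Xm_tr, ym_tr).score(Xm_te, ym_te)
forest_score = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xm_tr, ym_tr).score(Xm_te, ym_te)
print(tree_score, forest_score)
```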
from figures import plot_forest_interactive
plot_forest_interactive()
# ## Selecting the Optimal Estimator via Cross-Validation
# +
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200)
parameters = {'max_features':['sqrt', 'log2', 10],
'max_depth':[5, 7, 9]}
clf_grid = GridSearchCV(rf, parameters, n_jobs=-1)
clf_grid.fit(X_train, y_train)
# -
clf_grid.score(X_train, y_train)
clf_grid.score(X_test, y_test)
# ## Another option: Gradient Boosting
# Another ensemble method that can be useful is *boosting*: here, rather than
# looking at 200 (say) parallel estimators, we construct a chain of 200 estimators
# which iteratively refine the results of the previous estimator.
# The idea is that by sequentially applying very fast, simple models, we can get a
# total model error which is better than any of the individual pieces.
# +
from sklearn.ensemble import GradientBoostingRegressor
clf = GradientBoostingRegressor(n_estimators=100, max_depth=5, learning_rate=.2)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
# -
# <div class="alert alert-success">
# <b>EXERCISE: Cross-validating Gradient Boosting</b>:
# <ul>
# <li>
# Use a grid search to optimize the `learning_rate` and `max_depth` for a Gradient Boosted
# Decision tree on the digits data set.
# </li>
# </ul>
# </div>
# +
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset, apply grid-search
# +
# # %load solutions/18_gbc_grid.py
# -
# ## Feature importance
#
# Both RandomForest and GradientBoosting objects expose a `feature_importances_` attribute when fitted. This attribute is one of the most powerful features of these models: it quantifies how much each feature contributes to the performance gains in the nodes of the different trees.
# +
X, y = X_digits[y_digits < 2], y_digits[y_digits < 2]
rf = RandomForestClassifier(n_estimators=300, n_jobs=1)
rf.fit(X, y)
print(rf.feature_importances_) # one value per feature
# -
plt.figure()
plt.imshow(rf.feature_importances_.reshape(8, 8), cmap=plt.cm.viridis, interpolation='nearest')
| notebooks/18.In_Depth-Trees_and_Forests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="NP_Xq8p7WubK" colab_type="text"
# In this notebook we demonstrate how to train a doc2vec model on your custom corpus.
# + id="jifk_HHmvVWf" colab_type="code" outputId="a1f9b5ac-b706-4dd1-88db-35d41a52131a" colab={"base_uri": "https://localhost:8080/", "height": 69}
import warnings
warnings.filterwarnings('ignore')
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
from pprint import pprint
import nltk
nltk.download('punkt')
# + id="rLKY1nkIvluD" colab_type="code" colab={}
data = ["dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"]
tagged_data = [TaggedDocument(words=word_tokenize(doc.lower()), tags=[str(i)]) for i, doc in enumerate(data)]
# + id="rPssbTNiwEd9" colab_type="code" outputId="b634cee4-5f3b-4823-c467-84dee2915ba5" colab={"base_uri": "https://localhost:8080/", "height": 85}
tagged_data
# + id="dcxU67TCwSd8" colab_type="code" colab={}
#dbow
model_dbow = Doc2Vec(tagged_data,vector_size=20, min_count=1, epochs=2,dm=0)
# + id="QLkpnvcTx6T9" colab_type="code" outputId="b46139d3-bba7-4458-be90-99645fe30cbc" colab={"base_uri": "https://localhost:8080/", "height": 102}
print(model_dbow.infer_vector(['man','eats','food'])) # feature vector for "man eats food"
# + id="1KzwAgUJzQLW" colab_type="code" outputId="46706354-9687-41d6-ceaa-cbe865bddf7c" colab={"base_uri": "https://localhost:8080/", "height": 102}
model_dbow.wv.most_similar("man",topn=5) # top 5 most similar words
# + id="myGVWgudz9mW" colab_type="code" outputId="542a19b6-c925-4263-f8df-50ec5ab87f9c" colab={"base_uri": "https://localhost:8080/", "height": 34}
model_dbow.wv.n_similarity(["dog"],["man"])
# + id="1i2Vv2uY4kqg" colab_type="code" outputId="0a74bbf1-4053-4399-a8de-9e0f2404a59d" colab={"base_uri": "https://localhost:8080/", "height": 170}
#dm
model_dm = Doc2Vec(tagged_data, min_count=1, vector_size=20, epochs=2,dm=1)
print("Inference Vector of man eats food\n ",model_dm.infer_vector(['man','eats','food']))
print("Most similar words to man in our corpus\n",model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ",model_dm.wv.n_similarity(["dog"],["man"]))
# + [markdown] id="NmmM7yI31gMn" colab_type="text"
# What happens when we compare between words which are not in the vocabulary?
# + id="YS-mciDx0ZiA" colab_type="code" outputId="dcffc03f-b499-4266-e525-5c5a10d5fca6" colab={"base_uri": "https://localhost:8080/", "height": 306}
model_dm.wv.n_similarity(['covid'],['man'])
# + id="17qDsxIpGx5n" colab_type="code" colab={}
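# A gensim `KeyedVectors` lookup raises a `KeyError` for an out-of-vocabulary word such as `covid` above. A minimal sketch of guarding similarity queries, using a hypothetical toy dictionary of numpy vectors as a stand-in for the trained model (the values below are made up, not what gensim would learn):

```python
import numpy as np

# Hypothetical stand-in for a trained model's word vectors.
word_vectors = {
    "man": np.array([0.1, 0.9]),
    "dog": np.array([0.2, 0.8]),
}

def safe_similarity(w1, w2, vectors):
    """Return cosine similarity, or None if either word is out of vocabulary."""
    if w1 not in vectors or w2 not in vectors:
        return None
    a, b = vectors[w1], vectors[w2]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(safe_similarity("dog", "man", word_vectors))    # high similarity
print(safe_similarity("covid", "man", word_vectors))  # None: 'covid' is OOV
```

# With a real model the same membership test works against `model.wv`, e.g. `"covid" in model_dm.wv`.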
# Source notebook: Ch3/08_Training_Dov2Vec_using_Gensim.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DOWNLOADING SPATIAL DATA FROM JSON
# ##### COURSE: ENVIRONMENTAL DATA ANALYSIS WITH PYTHON I
# ##### AUTHOR: https://github.com/marvinjonathcn
# - #### NOTE: The following code was developed in Python 3.8.5. Install the required libraries before running the scripts.
# +
# IMPORT THE LIBRARIES NEEDED TO RUN THE SCRIPT
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests # WEB REQUESTS
import re # CLEANING THE PAGE SOURCE
import json # WORKING WITH JSON
# +
import folium
m = folium.Map(location=[-12.08221, -76.93931])
folium.Marker([-12.08221, -76.93931], popup='EMA VH - AGRARIA').add_to(m)
m
# +
# MAKE A REQUEST TO FETCH THE CONTENT OF A WEB PAGE OVER HTTP
r = requests.get("https://www.senamhi.gob.pe/mapas/mapa-estaciones-2/?")
print(r.text)
# +
# SPLIT THE TEXT AT THE POSITIONS OF CERTAIN WORDS OR PHRASES
s = r.text
s_split1 = s.split('var PruebaTest = [')[1]
s_split2 = s_split1.split('\n ]')[0]
s_split2
# +
# SPLIT THE JSON RECORDS ON LINE BREAKS
values = [str(i) for i in s_split2.split('\n')]
values = values[1:]
values[52]
# +
# CONVERT EACH RECORD TO JSON AND THEN TO A DATAFRAME
df_list_1 = [i[:-1] for i in values]
df_list_2 = [json.loads(i) for i in df_list_1]
df_list_3 = [pd.DataFrame.from_dict(i, orient="index") for i in df_list_2]
df_list_4 = [df.transpose() for df in df_list_3]
df_list_4[2]
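# The parsing step above can be sketched on a tiny hard-coded string instead of the live SENAMHI page (the two records below are hypothetical; the real page embeds one JSON object per line, each followed by a trailing comma):

```python
import json

# Hypothetical stand-in for the JavaScript array scraped from the page.
raw = ('{"nom":"EMA VH - AGRARIA","lat":-12.08221,"lon":-76.93931},\n'
       '{"nom":"OTRA ESTACION","lat":-12.05,"lon":-77.04},')

records = []
for line in raw.split('\n'):
    line = line.strip()
    if not line:
        continue
    records.append(json.loads(line.rstrip(',')))  # drop the trailing comma, then parse

print(records[0]['nom'])  # EMA VH - AGRARIA
```

# With a list of dicts like `records`, `pd.DataFrame(records)` would also build the table in one step, instead of creating one single-row DataFrame per record and concatenating them.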
# +
# CONCATENATE THE DATAFRAMES IN THE LIST INTO A SINGLE DATAFRAME
df = pd.concat(df_list_4)
df.head(15)
# -
# EXPORT THE DATA FILE
df.to_csv(r'D:\ARCGIS\ARCGIS-PRO\ARGISPRO-I\stations_senamhi_df.csv', index = False, header=True)
# Source notebook: CLASE2/CONVERTIR GEOJSON A CSV - PYTHON.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: "Python 3.7 (Intel\xAE oneAPI)"
# language: python
# name: c009-intel_distribution_of_python_3_oneapi-beta05-python
# ---
# # Subgroups
# ##### Sections
# - [What are Subgroups?](#What-are-Subgroups?)
# - [How a Subgroup Maps to Graphics Hardware](#How-a-Subgroup-Maps-to-Graphics-Hardware)
# - _Code:_ [Subgroup info](#Subgroup-info)
# - _Code:_ [Subgroup shuffle operations](#Subgroup-shuffle-operations)
# - _Code:_ [Subgroup Collectives](#Subgroup-Collectives)
# ## Learning Objectives
# - Understand advantages of using Subgroups in Data Parallel C++ (DPC++)
# - Take advantage of Subgroup collectives in ND-Range kernel implementation
# - Use Subgroup Shuffle operations to avoid explicit memory operations
# ## What are Subgroups?
# On many modern hardware platforms, __a subset of the work-items in a work-group__ is executed simultaneously or with additional scheduling guarantees. Such a subset of work-items is called a subgroup. Leveraging subgroups helps __map execution to low-level hardware__ and may help achieve higher performance.
# ## Subgroups in ND-Range Kernel Execution
# Parallel execution with an ND-range kernel groups work-items so that they map to hardware resources. This helps to __tune applications for performance__.
#
# The execution range of an ND-range kernel is divided into __work-groups__, __subgroups__ and __work-items__ as shown in picture below.
# 
# ## How a Subgroup Maps to Graphics Hardware
# | | |
# |:---:|:---|
# | __Work-item__ | Represents the individual instances of a kernel function. |
# | __Work-group__ | The entire iteration space is divided into smaller groups called work-groups, work-items within a work-group are scheduled on a single compute unit on hardware. |
# | __Subgroup__ | A subset of work-items within a work-group that are executed simultaneously, may be mapped to vector hardware. (DPC++) |
#
# The picture below shows how work-groups and subgroups map to __Intel® Gen11 Graphics Hardware__.
# 
# ## Why use Subgroups?
# - Work-items in a sub-group can __communicate directly using shuffle operations__, without explicit memory operations.
# - Work-items in a sub-group can synchronize using sub-group barriers and __guarantee memory consistency__ using sub-group memory fences.
# - Work-items in a sub-group have access to __sub-group collectives__, providing fast implementations of common parallel patterns.
# ## sub_group class
# The subgroup handle can be obtained from the nd_item using the __get_sub_group()__ method:
# ```cpp
# ONEAPI::sub_group sg = item.get_sub_group();
# ```
# Once you have the subgroup handle, you can query for more information about the subgroup, do shuffle operations or use collective functions.
# ## Subgroup info
# The subgroup handle can be queried for more information, such as the number of work-items in a subgroup or the number of subgroups in a work-group, which developers will need when implementing kernel code using subgroups:
# - __get_local_id()__ returns the index of the work-item within its subgroup
# - __get_local_range()__ returns the size of sub_group
# - __get_group_id()__ returns the index of the subgroup
# - __get_group_range()__ returns the number of subgroups within the parent work-group
#
#
# ```cpp
# h.parallel_for(nd_range<1>(64,64), [=](nd_item<1> item){
# /* get sub_group handle */
# ONEAPI::sub_group sg = item.get_sub_group();
# /* query sub_group and print sub_group info once per sub_group */
# if(sg.get_local_id()[0] == 0){
# out << "sub_group id: " << sg.get_group_id()[0]
# << " of " << sg.get_group_range()[0]
# << ", size=" << sg.get_local_range()[0]
# << endl;
# }
# });
# ```
# ### Lab Exercise: Subgroup Info
# The DPC++ code below demonstrates subgroup query methods to print sub-group info. Inspect the code; no modifications are necessary:
# 1. Inspect the code cell below and click run ▶ to save the code to file
# 2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
# +
# %%writefile lab/sub_group_info.cpp
//==============================================================
// Copyright © 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
// =============================================================
#include <CL/sycl.hpp>
using namespace sycl;
static const size_t N = 64; // global size
static const size_t B = 64; // work-group size
int main() {
queue q;
std::cout << "Device : " << q.get_device().get_info<info::device::name>() << std::endl;
q.submit([&](handler &h) {
//# setup sycl stream class to print standard output from device code
auto out = stream(1024, 768, h);
//# nd-range kernel
h.parallel_for(nd_range<1>(N, B), [=](nd_item<1> item) {
//# get sub_group handle
ONEAPI::sub_group sg = item.get_sub_group();
//# query sub_group and print sub_group info once per sub_group
if (sg.get_local_id()[0] == 0) {
out << "sub_group id: " << sg.get_group_id()[0] << " of "
<< sg.get_group_range()[0] << ", size=" << sg.get_local_range()[0]
<< endl;
}
});
}).wait();
}
# -
# #### Build and Run
# Select the cell below and click run ▶ to compile and execute the code:
# ! chmod 755 q; chmod 755 run_sub_group_info.sh; if [ -x "$(command -v qsub)" ]; then ./q run_sub_group_info.sh; else ./run_sub_group_info.sh; fi
# _If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel:
# "Kernel->Restart Kernel and Clear All Outputs" and compile the code samples again_.
# ## Sub-group shuffle operations
# One of the most useful features of subgroups is the ability to __communicate directly between individual work-items__ without explicit memory operations.
#
# Shuffle operations enable us to remove work-group local memory usage from our kernels and/or to __avoid unnecessary repeated accesses to global memory__.
#
# The code below uses `shuffle_xor` to swap the values of two work-items:
#
# ```cpp
# h.parallel_for(nd_range<1>(N,B), [=](nd_item<1> item){
# ONEAPI::sub_group sg = item.get_sub_group();
# size_t i = item.get_global_id(0);
# /* Shuffles */
# //data[i] = sg.shuffle(data[i], 2);
# //data[i] = sg.shuffle_up(0, data[i], 1);
# //data[i] = sg.shuffle_down(data[i], 0, 1);
# data[i] = sg.shuffle_xor(data[i], 1);
# });
#
# ```
#
# <img src="assets/shuffle_xor.png" alt="shuffle_xor" width="300"/>
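# Outside of DPC++, the index arithmetic behind `shuffle_xor` can be sketched in plain numpy. This only illustrates the data movement (work-item `i` exchanges its value with work-item `i XOR mask`), not the hardware mechanism:

```python
import numpy as np

data = np.arange(8)

# With mask=1, i XOR 1 pairs each even index with its odd neighbour,
# so gathering by the xor'ed index swaps adjacent elements.
idx = np.arange(len(data)) ^ 1   # [1, 0, 3, 2, 5, 4, 7, 6]
swapped = data[idx]

print(swapped)  # [1 0 3 2 5 4 7 6]
```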
# ### Lab Exercise: Subgroup Shuffle
# The code below uses subgroup shuffle to swap items in a subgroup. You can try other shuffle operations or change the fixed constant in the shuffle function.
#
# The DPC++ code below demonstrates sub-group shuffle operations. Inspect the code; no modifications are necessary:
#
# 1. Inspect the code cell below and click run ▶ to save the code to file.
#
# 2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
# +
# %%writefile lab/sub_group_shuffle.cpp
//==============================================================
// Copyright © 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
// =============================================================
#include <CL/sycl.hpp>
using namespace sycl;
static const size_t N = 256; // global size
static const size_t B = 64; // work-group size
int main() {
queue q;
std::cout << "Device : " << q.get_device().get_info<info::device::name>() << std::endl;
//# initialize data array using usm
int *data = static_cast<int *>(malloc_shared(N * sizeof(int), q));
for (int i = 0; i < N; i++) data[i] = i;
for (int i = 0; i < N; i++) std::cout << data[i] << " ";
std::cout << std::endl << std::endl;
q.parallel_for(nd_range<1>(N, B), [=](nd_item<1> item) {
ONEAPI::sub_group sg = item.get_sub_group();
size_t i = item.get_global_id(0);
    //# swap adjacent items in array using sub_group shuffle_xor
data[i] = sg.shuffle_xor(data[i], 1);
}).wait();
for (int i = 0; i < N; i++) std::cout << data[i] << " ";
free(data, q);
return 0;
}
# -
# #### Build and Run
# Select the cell below and click run ▶ to compile and execute the code:
# ! chmod 755 q; chmod 755 run_sub_group_shuffle.sh; if [ -x "$(command -v qsub)" ]; then ./q run_sub_group_shuffle.sh; else ./run_sub_group_shuffle.sh; fi
# _If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel:
# "Kernel->Restart Kernel and Clear All Outputs" and compile the code samples again_.
# ## Subgroup Collectives
# The collective functions provide implementations of closely-related common parallel patterns.
#
# Providing these implementations as library functions, instead of leaving them to each developer, __increases developer productivity__ and gives implementations the ability to __generate highly optimized code__ for individual target devices.
#
# ```cpp
# h.parallel_for(nd_range<1>(N,B), [=](nd_item<1> item){
# ONEAPI::sub_group sg = item.get_sub_group();
# size_t i = item.get_global_id(0);
# /* Collectives */
# data[i] = reduce(sg, data[i], ONEAPI::plus<>());
# //data[i] = reduce(sg, data[i], ONEAPI::maximum<>());
# //data[i] = reduce(sg, data[i], ONEAPI::minimum<>());
# });
#
# ```
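# The effect of the `plus<>` collective can be sketched outside DPC++ with numpy. This is illustrative only; the sub-group size of 4 below is an assumption, and real hardware performs the reduction within each sub-group in parallel:

```python
import numpy as np

SUBGROUP = 4                      # assumed sub-group size for illustration
data = np.arange(1, 13)           # 12 work-items -> 3 sub-groups

# reduce(sg, x, plus<>()) gives every work-item the sum over its sub-group;
# mimic that by summing each row and broadcasting it back.
groups = data.reshape(-1, SUBGROUP)
sums = groups.sum(axis=1, keepdims=True)         # one sum per sub-group
result = np.broadcast_to(sums, groups.shape).ravel()

print(result)  # [10 10 10 10 26 26 26 26 42 42 42 42]
```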
# ### Lab Exercise: Subgroup Collectives
# The code below uses subgroup collectives to add all items in a subgroup. You can change "_plus_" to "_maximum_" and check output.
#
# The DPC++ code below demonstrates sub-group collectives. Inspect the code; no modifications are necessary:
#
# 1. Inspect the code cell below and click run ▶ to save the code to file.
#
# 2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
# +
# %%writefile lab/sub_group_collective.cpp
//==============================================================
// Copyright © 2020 Intel Corporation
//
// SPDX-License-Identifier: MIT
// =============================================================
#include <CL/sycl.hpp>
using namespace sycl;
static const size_t N = 256; // global size
static const size_t B = 64; // work-group size
int main() {
queue q;
std::cout << "Device : " << q.get_device().get_info<info::device::name>() << std::endl;
//# initialize data array using usm
int *data = static_cast<int *>(malloc_shared(N * sizeof(int), q));
for (int i = 0; i < N; i++) data[i] = 1 + i;
for (int i = 0; i < N; i++) std::cout << data[i] << " ";
std::cout << std::endl << std::endl;
q.parallel_for(nd_range<1>(N, B), [=](nd_item<1> item) {
ONEAPI::sub_group sg = item.get_sub_group();
size_t i = item.get_global_id(0);
//# Adds all elements in sub_group using sub_group collectives
int sum = reduce(sg, data[i], ONEAPI::plus<>());
//# write sub_group sum in first location for each sub_group
if (sg.get_local_id()[0] == 0) {
data[i] = sum;
} else {
data[i] = 0;
}
}).wait();
for (int i = 0; i < N; i++) std::cout << data[i] << " ";
free(data, q);
return 0;
}
# -
# #### Build and Run
# Select the cell below and click run ▶ to compile and execute the code:
# ! chmod 755 q; chmod 755 run_sub_group_collective.sh; if [ -x "$(command -v qsub)" ]; then ./q run_sub_group_collective.sh; else ./run_sub_group_collective.sh; fi
# _If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel:
# "Kernel->Restart Kernel and Clear All Outputs" and compile the code samples again_.
# ## Summary
# Subgroups allow kernel programming that maps execution to low-level hardware and may help achieve higher performance.
# <html><body><span style="color:green"><h1>Survey</h1></span></body></html>
#
# [We would appreciate any feedback you’d care to give, so that we can improve the overall training quality and experience. Thanks! ](https://intel.az1.qualtrics.com/jfe/form/SV_574qnSw6eggbn1z)
# <html><body><span style="color:Red"><h1>Reset Notebook</h1></span></body></html>
#
# ##### Should you be experiencing any issues with your notebook or just want to start fresh run the below cell.
#
#
# + jupyter={"source_hidden": true}
from IPython.display import display, Markdown, clear_output
import ipywidgets as widgets
button = widgets.Button(
description='Reset Notebook',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='This will update this notebook, overwriting any changes.',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
out = widgets.Output()
def on_button_clicked(_):
# "linking function with output"
with out:
# what happens when we press the button
clear_output()
# !rsync -a --size-only /data/oneapi_workshop/oneAPI_Essentials/04_DPCPP_Sub_Groups/ ~/oneAPI_Essentials/04_DPCPP_Sub_Groups
print('Notebook reset -- now click reload on browser.')
# linking button and function together using a button's method
button.on_click(on_button_clicked)
# displaying button and its output together
widgets.VBox([button,out])
# Source notebook: DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/04_DPCPP_Sub_Groups/Sub_Groups.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from numpy.random import randn as rn
np.random.seed(101) # To make a fixed state after generating the random series
matrix_data = rn(5,4) # 5 rows, 4 columns
row_labels = ['A','B','C','D','E'] # for the 5 rows
col_headings = ['W','X','Y','Z'] # for the 4 columns
df = pd.DataFrame(data=matrix_data, index = row_labels, columns = col_headings)
df
# ## Creating a new column:
df['New'] = df['X'] + df['Y']
df
type(df['New'])
# ## Deleting a column:
df = df.drop('New',axis=1) #Axis = 1 for columns, Axis = 0 for rows/index
df
#Dropping multiple columns:
df['NewXY'] = df['X'] + df['Y']
df['NewYZ'] = df['Y'] + df['Z']
df['NewXZ'] = df['X'] + df['Z']
df
df = df.drop(["NewXY","NewYZ","NewXZ"],axis=1)
df
# ## Deleting a row:
df = df.drop('A') #By default the axis = 0 which is for the index
df
# ### Using inplace=True, which acts like df = df.drop(...) without writing the assignment:
df.drop('B')
df #The df is not affected
#Therefore, we can use inplace that acts as df = df.drop(---)
df.drop('B',axis=0,inplace = True)
df
# # Indexing and slicing:
df
# ## For the columns:
df['X']
type(df['X'])
df.X
type(df.X)
df[['X']]
type(df[['X']])
df[['X','Z']]
type(df[['X','Z']])
# ## For the rows:
# .loc selects by label (we replaced the default integer index 0,1,2,... with the labels 'A','B',...,'E', so we select rows by those labels)
df.loc['C'] # loc = location for the label(rows)
df.iloc[2] #i = index, loc = index at that loc, since indexes start from 0, here C's index = 2
df.loc[['B','C']]
df.iloc[[1,2]] #iloc takes integer positions, not strings
# ## For a particular element(s) by addressing both row and column:
#
df.loc['C','Y']
#Type-I
df.loc['B':'E':2,'X'::2]
#Type-II
df.loc[['B','D'],['X','Z']]
# # Randomly getting df by re-executing the command as shown:
#np.random.seed(101)
matrix_data = rn(5,4) # 5 rows, 4 columns
row_labels = ['A','B','C','D','E'] # for the 5 rows
col_headings = ['W','X','Y','Z'] # for the 4 columns
df = pd.DataFrame(data=matrix_data, index = row_labels, columns = col_headings)
df
#np.random.seed(101)
matrix_data = rn(5,4) # 5 rows, 4 columns
row_labels = ['A','B','C','D','E'] # for the 5 rows
col_headings = ['W','X','Y','Z'] # for the 4 columns
df = pd.DataFrame(data=matrix_data, index = row_labels, columns = col_headings)
df
np.random.seed(101) # To make a fixed state after generating the random series
matrix_data = rn(5,4) # 5 rows, 4 columns
row_labels = ['A','B','C','D','E'] # for the 5 rows
col_headings = ['W','X','Y','Z'] # for the 4 columns
df = pd.DataFrame(data=matrix_data, index = row_labels, columns = col_headings)
df
np.random.seed(101) # To make a fixed state after generating the random series
matrix_data = rn(5,4) # 5 rows, 4 columns
row_labels = ['A','B','C','D','E'] # for the 5 rows
col_headings = ['W','X','Y','Z'] # for the 4 columns
df = pd.DataFrame(data=matrix_data, index = row_labels, columns = col_headings)
df
# # Comparison operators:
df>0
df.loc[['A','B','C']]>0
# # Q. Replacing the negative values:
df
booldf = df>0
bool2df = df<0
df[booldf]
df[booldf] = "Pos"
df[bool2df] = "Neg"
df
# # Creating matrix data:
mat = np.matrix("22,66,140;42,70,148;30,62,125;35,68,160")
mat
row_label = ['A','B','C','D']
col_head = ['Age','Height','Weight']
df = pd.DataFrame(data = mat, index = row_label, columns = col_head)
df
# ## To extract info from the matrix col wise:
df['Height']>65
df1 = df['Height'][df['Height']>65]
df1
# ## To extract info from the matrix col wise plus other col data:
df1 = df[df['Height']>65]
df1
df
# # Extracting info using Logical operators:
booldf1 = df['Height']>65
booldf2 = df['Weight']>145
df[(booldf1)&(booldf2)]
booldf1
# # Extract info from a matrix by operating on one column but excluding it in the result:
df[booldf1]
df[booldf1][['Age','Weight']]
# # Reset the index(labels) back to 0,1,2...:
df.reset_index()
# Now we have an extra column called index, the original indices have been replaced
# # Reset the index(labels) back to 0,1,2.. and drop the extra column index_name generated :
df.reset_index(drop = True)
df #df itself is unchanged: reset_index returned a new DataFrame (no inplace=True)
# # Creating a new column using the string .split() method:
df['Profession'] = "Teacher Engineer Doctor Nurse".split()
df
# ## Replace the index by using set_index(..) method:
df.set_index("Profession")
# # Multi-indexing:
outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
higher_index = list(zip(outside,inside))
higher_index
type(higher_index)
higher_index = pd.MultiIndex.from_tuples(higher_index)
higher_index
# +
type(higher_index)
# -
# ### np.round(matrix_name/any_number,round_till_this_digit) method:
# +
#rn is the alias for the random given at the starting of this class (check Day 20)
np.random.seed(101)
df1 = pd.DataFrame(data = np.round(rn(6,3),2),index = higher_index,columns = ['A','B','C'])
# -
df1
# +
#CHECK THE .round method:
np.random.seed(101)
df2 = pd.DataFrame(data = np.round(rn(6,3),5),index = higher_index,columns = ['A','B','C'])
# -
df2
pd.__version__
# # Indexing and slicing:
df1.loc['G1']
df1.loc['G2']
df1.loc['G1'].loc[[1,3],['A','C']]
df2.loc['G2'].loc[[2],['B']]
df1.loc['G1'].loc[[1,3]][['A','C']]
df1
# ## Giving the names to the outside and inside indices:
df1.index.names = ["outside","inner"]
df1
# # Day 20:
import pandas as pd
import numpy as np
# # Random number distribution: by using Normal Distribution [check]:
from numpy.random import randn as rn
np.random.seed(101) # To make a fixed state after generating the random series
matrix_data = rn(5,4) # 5 rows, 4 columns
row_labels = ['A','B','C','D','E'] # for the 5 rows
col_headings = ['W','X','Y','Z'] # for the 4 columns
df = pd.DataFrame(data=matrix_data, index = row_labels, columns = col_headings)
df
df #Seeding will ensure that result remains the same
# Source notebook: Data-Science-HYD-2k19/Topic-Wise/NUMPY/all/3. Normal Distribution.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Programming assignment (Linear models, Optimization)
#
# In this programming assignment you will implement a linear classifier and train it with modifications of stochastic gradient descent, using numpy.
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import sys
sys.path.append("..")
import grading
grader = grading.Grader(assignment_key="<KEY>",
all_parts=["xU7U4", "HyTF6", "uNidL", "ToK7N", "GBdgZ", "dLdHG"])
# token expires every 30 min
COURSERA_TOKEN = "Your Token will be here"
COURSERA_EMAIL = "Your EMail will be here"
# ## Two-dimensional classification
#
# To make things more intuitive, let's solve a 2D classification problem with synthetic data.
# +
with open('train.npy', 'rb') as fin:
X = np.load(fin)
with open('target.npy', 'rb') as fin:
y = np.load(fin)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired, s=20)
plt.show()
# -
# # Task
#
# ## Features
#
# As you can see, the data above isn't linearly separable, so we should add features (or use a non-linear model). Note that the decision boundary between the two classes has the form of a circle, so we can add quadratic features to make the problem linearly separable. The idea is illustrated in the image below:
#
# 
def expand(X):
"""
Adds quadratic features.
This expansion allows your linear model to make non-linear separation.
For each sample (row in matrix), compute an expanded row:
[feature0, feature1, feature0^2, feature1^2, feature0*feature1, 1]
:param X: matrix of features, shape [n_samples,2]
:returns: expanded features of shape [n_samples,6]
"""
X_expanded = np.zeros((X.shape[0], 6))
for i,row in enumerate(X):
X_expanded[i] = np.array([row[0], row[1], row[0]**2, row[1]**2, row[0]*row[1], 1])
return X_expanded
X_expanded = expand(X)
X[0]
X_expanded[0]
# Here are some tests for your implementation of `expand` function.
# +
# simple test on random numbers
dummy_X = np.array([
[0,0],
[1,0],
[2.61,-1.28],
[-0.59,2.1]
])
# call your expand function
dummy_expanded = expand(dummy_X)
# what it should have returned: x0 x1 x0^2 x1^2 x0*x1 1
dummy_expanded_ans = np.array([[ 0. , 0. , 0. , 0. , 0. , 1. ],
[ 1. , 0. , 1. , 0. , 0. , 1. ],
[ 2.61 , -1.28 , 6.8121, 1.6384, -3.3408, 1. ],
[-0.59 , 2.1 , 0.3481, 4.41 , -1.239 , 1. ]])
#tests
assert isinstance(dummy_expanded,np.ndarray), "please make sure you return numpy array"
assert dummy_expanded.shape == dummy_expanded_ans.shape, "please make sure your shape is correct"
assert np.allclose(dummy_expanded,dummy_expanded_ans,1e-3), "Something's out of order with features"
print("Seems legit!")
# -
# ## Logistic regression
#
# To classify objects we will obtain probability of object belongs to class '1'. To predict probability we will use output of linear model and logistic function:
#
# $$ a(x; w) = \langle w, x \rangle $$
# $$ P( y=1 \; \big| \; x, \, w) = \dfrac{1}{1 + \exp(- \langle w, x \rangle)} = \sigma(\langle w, x \rangle)$$
#
import math
def probability(X, w):
"""
Given input features and weights
return predicted probabilities of y==1 given x, P(y=1|x), see description above
Don't forget to use expand(X) function (where necessary) in this and subsequent functions.
:param X: feature matrix X of shape [n_samples,6] (expanded)
:param w: weight vector w of shape [6] for each of the expanded features
:returns: an array of predicted probabilities in [0,1] interval.
"""
    z = np.dot(X, w)
    return 1.0 / (1.0 + np.exp(-z))
dummy_weights = np.linspace(-1, 1, 6)
ans_part1 = probability(X_expanded[:1, :], dummy_weights)[0]
print(ans_part1)
## GRADED PART, DO NOT CHANGE!
grader.set_answer("xU7U4", ans_part1)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# In logistic regression the optimal parameters $w$ are found by cross-entropy minimization:
#
# Loss for one sample: $$ l(x_i, y_i, w) = - \left[ {y_i \cdot log P(y_i = 1 \, | \, x_i,w) + (1-y_i) \cdot log (1-P(y_i = 1\, | \, x_i,w))}\right] $$
#
# Loss for many samples: $$ L(X, \vec{y}, w) = {1 \over \ell} \sum_{i=1}^\ell l(x_i, y_i, w) $$
#
#
def compute_loss(X, y, w):
"""
Given feature matrix X [n_samples,6], target vector [n_samples] of 1/0,
and weight vector w [6], compute scalar loss function L using formula above.
Keep in mind that our loss is averaged over all samples (rows) in X.
"""
# TODO:<your code here>
l = X.shape[0]
a = probability(X, w)
cross_entropy = y*np.log(a) +(1-y)*np.log(1-a)
cost = -np.sum(cross_entropy)/float(l)
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
# use output of this cell to fill answer field
ans_part2 = compute_loss(X_expanded, y, dummy_weights)
ans_part2
## GRADED PART, DO NOT CHANGE!
grader.set_answer("HyTF6", ans_part2)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# Since we train our model with gradient descent, we should compute gradients.
#
# To be specific, we need a derivative of loss function over each weight [6 of them].
#
# $$ \nabla_w L = {1 \over \ell} \sum_{i=1}^\ell \nabla_w l(x_i, y_i, w) $$
#
# We won't be giving you the exact formula this time — instead, try figuring out a derivative with pen and paper.
#
# As usual, we've made a small test for you, but if you need more, feel free to check your math against finite differences (estimate how $L$ changes if you shift $w$ by $10^{-5}$ or so).
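# The finite-difference check mentioned above can be sketched as follows. This is a standalone example on a small random problem (not the course data): the analytic gradient $X^T(\sigma(Xw) - y)/\ell$ is compared against central differences of the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))
y = rng.integers(0, 2, size=20)
w = rng.normal(size=6)

def loss(w):
    p = 1.0 / (1.0 + np.exp(-X.dot(w)))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad(w):
    p = 1.0 / (1.0 + np.exp(-X.dot(w)))
    return X.T.dot(p - y) / len(y)

# Estimate each partial derivative by shifting one weight by +/- eps.
eps = 1e-5
numeric = np.array([
    (loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
    for e in np.eye(6)
])

print(np.max(np.abs(numeric - grad(w))))  # should be tiny (around 1e-10)
```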
def compute_grad(X, y, w):
"""
Given feature matrix X [n_samples,6], target vector [n_samples] of 1/0,
    and weight vector w [6], compute the vector [6] of derivatives of L over each weight.
Keep in mind that our loss is averaged over all samples (rows) in X.
"""
    m = X.shape[0]
    A = probability(X, w)  # predicted probabilities
    dZ = A - y             # dL/d(linear output), one value per sample
    dW = np.dot(dZ, X) / float(m)
    return dW
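# As suggested above, you can sanity-check an analytic gradient against central finite differences. Below is a minimal, self-contained sketch on a toy quadratic loss (the names here are illustrative, not part of the assignment); with the course's own functions you would instead pass `lambda v: compute_loss(X_expanded, y, v)` and compare against `compute_grad`.

```python
import numpy as np

def numeric_grad(loss_fn, w, eps=1e-5):
    """Central-difference estimate of d loss / d w_j for each weight."""
    grad = np.zeros_like(w, dtype=float)
    for j in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[j] += eps
        w_minus[j] -= eps
        grad[j] = (loss_fn(w_plus) - loss_fn(w_minus)) / (2 * eps)
    return grad

# toy loss with a known gradient: L(w) = ||w||^2 / 2, so grad L = w
w_check = np.array([1.0, -2.0, 0.5])
analytic = w_check
numeric = numeric_grad(lambda v: 0.5 * np.sum(v ** 2), w_check)
print(np.max(np.abs(analytic - numeric)))  # should be ~0 (rounding error only)
```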
# use output of this cell to fill answer field
ans_part3 = np.linalg.norm(compute_grad(X_expanded, y, dummy_weights))
## GRADED PART, DO NOT CHANGE!
grader.set_answer("uNidL", ans_part3)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# Here's an auxiliary function that visualizes the predictions:
# +
from IPython import display
h = 0.01
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
def visualize(X, y, w, history):
"""draws classifier prediction with matplotlib magic"""
Z = probability(expand(np.c_[xx.ravel(), yy.ravel()]), w)
Z = Z.reshape(xx.shape)
plt.subplot(1, 2, 1)
plt.contourf(xx, yy, Z, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.subplot(1, 2, 2)
plt.plot(history)
plt.grid()
ymin, ymax = plt.ylim()
plt.ylim(0, ymax)
display.clear_output(wait=True)
plt.show()
# -
visualize(X, y, dummy_weights, [0.5, 0.5, 0.25])
# ## Training
# In this section we'll use the functions you wrote to train our classifier using stochastic gradient descent.
#
# You can try changing hyperparameters such as the batch size and learning rate to find the best ones, but use our hyperparameters when filling in the answers.
# ## Mini-batch SGD
#
# Stochastic gradient descent just takes a random batch of $m$ samples on each iteration, calculates a gradient of the loss on it and makes a step:
# $$ w_t = w_{t-1} - \eta \dfrac{1}{m} \sum_{j=1}^m \nabla_w l(x_{i_j}, y_{i_j}, w_{t-1}) $$
#
#
# +
# please use np.random.seed(42), eta=0.1, n_iter=100 and batch_size=4 for deterministic results
np.random.seed(42)
w = np.array([0, 0, 0, 0, 0, 1])
eta= 0.1 # learning rate
n_iter = 100
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12, 5))
for i in range(n_iter):
ind = np.random.choice(X_expanded.shape[0], batch_size)
loss[i] = compute_loss(X_expanded, y, w)
if i % 10 == 0:
visualize(X_expanded[ind, :], y[ind], w, loss)
    # Keep in mind that compute_grad already averages over the batch for you!
    gradient = compute_grad(X_expanded[ind, :], y[ind], w)
    w = w - eta * gradient
visualize(X, y, w, loss)
plt.clf()
# +
# use output of this cell to fill answer field
ans_part4 = compute_loss(X_expanded, y, w)
# -
## GRADED PART, DO NOT CHANGE!
grader.set_answer("ToK7N", ans_part4)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# ## SGD with momentum
#
# Momentum is a method that helps accelerate SGD in the relevant direction and dampens oscillations, as can be seen in the image below. It does this by adding a fraction $\alpha$ of the update vector of the past time step to the current update vector.
# <br>
# <br>
#
# $$ \nu_t = \alpha \nu_{t-1} + \eta\dfrac{1}{m} \sum_{j=1}^m \nabla_w l(x_{i_j}, y_{i_j}, w_t) $$
# $$ w_t = w_{t-1} - \nu_t$$
#
# <br>
#
#
# (figure omitted: SGD trajectories with and without momentum)
#
# +
# please use np.random.seed(42), eta=0.05, alpha=0.9, n_iter=100 and batch_size=4 for deterministic results
np.random.seed(42)
w = np.array([0, 0, 0, 0, 0, 1])
eta = 0.05 # learning rate
alpha = 0.9 # momentum
nu = np.zeros_like(w)
n_iter = 100
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12, 5))
for i in range(n_iter):
ind = np.random.choice(X_expanded.shape[0], batch_size)
loss[i] = compute_loss(X_expanded, y, w)
if i % 10 == 0:
visualize(X_expanded[ind, :], y[ind], w, loss)
    gradient = compute_grad(X_expanded[ind, :], y[ind], w)
    nu = alpha * nu + eta * gradient
    w = w - nu
visualize(X, y, w, loss)
plt.clf()
# +
# use output of this cell to fill answer field
ans_part5 = compute_loss(X_expanded, y, w)
# -
## GRADED PART, DO NOT CHANGE!
grader.set_answer("GBdgZ", ans_part5)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# ## RMSprop
#
# Implement the RMSprop algorithm, which uses a moving average of squared gradients to adapt the learning rate:
#
# $$ G_j^t = \alpha G_j^{t-1} + (1 - \alpha) g_{tj}^2 $$
# $$ w_j^t = w_j^{t-1} - \dfrac{\eta}{\sqrt{G_j^t + \varepsilon}} g_{tj} $$
# +
# please use np.random.seed(42), eta=0.1, alpha=0.9, n_iter=100 and batch_size=4 for deterministic results
np.random.seed(42)
w = np.array([0, 0, 0, 0, 0, 1.])
eta = 0.1 # learning rate
alpha = 0.9 # moving average of gradient norm squared
G = 0
g2 = None # we start with None so that you can update this value correctly on the first iteration
eps = 1e-8
n_iter = 100
batch_size = 4
loss = np.zeros(n_iter)
plt.figure(figsize=(12,5))
for i in range(n_iter):
ind = np.random.choice(X_expanded.shape[0], batch_size)
loss[i] = compute_loss(X_expanded, y, w)
if i % 10 == 0:
visualize(X_expanded[ind, :], y[ind], w, loss)
    gradient = compute_grad(X_expanded[ind, :], y[ind], w)
    g2 = gradient ** 2
    G = alpha * G + (1 - alpha) * g2
    w = w - eta * gradient / np.sqrt(G + eps)
visualize(X, y, w, loss)
plt.clf()
# -
# use output of this cell to fill answer field
ans_part6 = compute_loss(X_expanded, y, w)
## GRADED PART, DO NOT CHANGE!
grader.set_answer("dLdHG", ans_part6)
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# week01_pa.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import time
import numpy as np
import pandas as pd
from sklearn.metrics import f1_score, roc_curve, auc, roc_auc_score, precision_recall_curve, recall_score, precision_score, confusion_matrix, average_precision_score
import matplotlib.pyplot as plt
# %matplotlib inline
# +
# data_features consists of lumisections from JetHT, which are preprocessed:
# 1) samples with low lumi were deleted:
# nonempty = np.where(data["lumi"] >= 0.01)[0]
# data = data.iloc[nonempty]
# 2) columns with std=0 were removed
# cols = data.select_dtypes([np.number]).columns
# std = data[cols].std()
# cols_to_drop = std[std==0].index
# data = data.drop(cols_to_drop, axis=1)
# 3) standard scaler was applied for all features
data_features = pd.read_hdf('/home/olgako/data/data_features_JetHT.hdf5', "data")
labels = pd.read_hdf('/home/olgako/data/labels_JetHT.hdf5', 'labels')
# -
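# The three preprocessing steps listed in the comments above can be sketched end-to-end on synthetic data. The column names below are made up for illustration (the real table comes from the JetHT files), and step 3 uses the sample standard deviation for simplicity, where sklearn's `StandardScaler` uses the population one.

```python
import numpy as np
import pandas as pd

# synthetic stand-in for the lumisection table (column names are hypothetical)
rng = np.random.RandomState(0)
data = pd.DataFrame({'lumi': rng.uniform(0, 1, 100),
                     'feat_a': rng.normal(size=100),
                     'feat_b': np.zeros(100)})  # constant column -> std == 0

# 1) drop samples with very low luminosity
data = data.iloc[np.where(data['lumi'] >= 0.01)[0]]

# 2) drop numeric columns whose std is 0
cols = data.select_dtypes([np.number]).columns
std = data[cols].std()
data = data.drop(std[std == 0].index, axis=1)

# 3) scale features to zero mean and unit variance
scaled = (data - data.mean()) / data.std()
print(list(data.columns))  # -> ['lumi', 'feat_a']
```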
one_fifth = data_features.shape[0] // 5
step = data_features.shape[0] // 10
whole = data_features.shape[0]
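# The training loop further down slides a test window of `one_fifth` samples through the data in steps of `step`, giving nine overlapping folds. A toy illustration of that split on 10 samples (the variable names here are illustrative, chosen not to clash with the notebook's):

```python
import numpy as np

# n samples, a test window of n // 5 samples, sliding forward by n // 10 per fold
n, win, stride = 10, 2, 1
folds = []
for i in range(3):
    test_idx = np.arange(i * stride, win + i * stride)
    train_idx = sorted(set(range(n)) - set(test_idx))  # everything else trains
    folds.append((test_idx.tolist(), train_idx))
print(folds[0])  # ([0, 1], [2, 3, 4, 5, 6, 7, 8, 9])
```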
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
data_features=scaler.fit_transform(data_features)
def get_error_df(X_test, predictions, y_test, mode='allmean', n_highest=100):
    """Collect per-sample reconstruction errors and true labels in one DataFrame."""
    squared_error = np.power(np.asarray(X_test) - np.asarray(predictions), 2)
    if mode == 'allmean':
        # average the squared error over all features
        mse = np.mean(squared_error, axis=1)
    elif mode == 'topn':
        # average only the n_highest largest squared errors of each sample
        temp = np.partition(-squared_error, n_highest)
        mse = np.mean(-temp[:, :n_highest], axis=1)
    else:
        raise ValueError("mode must be 'allmean' or 'topn'")
    return pd.DataFrame({'reconstruction_error': mse,
                         'true_class': y_test})
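# The `topn` mode above relies on `np.partition` to pull out the largest squared errors without a full sort. A minimal demonstration of that trick on a hand-made error matrix:

```python
import numpy as np

errors = np.array([[0.1, 5.0, 0.2, 3.0],
                   [2.0, 0.3, 4.0, 0.1]])
n_highest = 2
# negate so the n_highest largest values land in the first n_highest columns
temp = np.partition(-errors, n_highest)
top_n = -temp[:, :n_highest]
print(np.mean(top_n, axis=1))  # mean of the 2 largest errors per row: [4. 3.]
```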
# +
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import tensorflow as tf
tf.set_random_seed(0)
from keras import backend as K
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
K.set_session(sess)
# -
from keras.models import Model, load_model
from keras.optimizers import Adam
from keras.layers import Input, Dense, Activation
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras import regularizers
from keras.layers.advanced_activations import PReLU, LeakyReLU
from sklearn.utils import shuffle
import h5py
input_dim = data_features.shape[1]
encoding_dim = 50
def buildAE():
input_layer = Input(shape=(input_dim, ))
encoder = Dense(encoding_dim, activation='linear')(input_layer)
encoder = LeakyReLU(alpha=.1)(encoder)
decoder = Dense(input_dim, activation='sigmoid')(encoder)
return Model(inputs=input_layer, outputs=decoder)
buildAE().summary()
nb_epoch = 20
batch_size = 512
# +
plt.figure(figsize=(10, 10))
imp_features = []
for i in range(9):
indx_test = np.arange(i*step, one_fifth+i*step)
indx_train = list(set(range(whole))-set(indx_test))
y_train = np.array(labels.iloc[indx_train], 'float32')
y_test = np.array(labels.iloc[indx_test], 'float32')
X_train = np.array(data_features[indx_train], 'float32')
X_test = np.array(data_features[indx_test], 'float32')
start_time = time.time()
autoencoder = buildAE()
autoencoder.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy')
autoencoder.fit(X_train[y_train==0.],
X_train[y_train==0.],
epochs=nb_epoch,
batch_size=batch_size,
shuffle=True,
validation_split=0.1,
verbose=1,
initial_epoch=0)
print("--- %s seconds ---" % (time.time() - start_time))
predictions = autoencoder.predict(X_test)
error_df = get_error_df(pd.DataFrame(X_test), pd.DataFrame(predictions), y_test, mode='topn', n_highest = 300)
fpr, tpr, _ = roc_curve(error_df.true_class, error_df.reconstruction_error)
average_precision = average_precision_score(error_df.true_class, error_df.reconstruction_error)
auc_score = roc_auc_score(error_df.true_class, error_df.reconstruction_error)
    percent = np.sum(y_test) / float(len(y_test))
    plt.plot(fpr, tpr, label='frame: %d, AUC = %.3lf, average_precision = %.3lf, anomalies percentage = %.3lf'
             % (i, auc_score, average_precision, percent))
plt.legend(loc='lower right', fontsize=10)
plt.xlabel('FPR', fontsize=10)
plt.ylabel('TPR', fontsize=10)
plt.show()
# -
from evaluation import *
ps, rs = perfomance(error_df.true_class, error_df.reconstruction_error)
# notebooks/test-split-AE.ipynb