# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=[]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=[]
# # Hugging Face - Naas drivers integration
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Hugging%20Face/Hugging_Face_Naas_drivers_integration.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=[]
# **Tags:** #huggingface #nlp #api #models #transformers #sales #ai #text
# + [markdown] papermill={} tags=[]
# **Author:** [<NAME>](https://www.linkedin.com/in/gbhatia30/)
# + [markdown] papermill={} tags=[]
# In this notebook, you will be able to explore the Hugging Face transformers package with minimal technical knowledge thanks to Naas low-code drivers.<br>
# Hugging Face is an immensely popular Python library providing pretrained models that are extraordinarily useful for a variety of natural language processing (NLP) tasks.
# + [markdown] papermill={} tags=[]
# ## How it works
# Naas drivers Hugging Face formulas follow this format:
# ```
# huggingface.get(task, model, tokenizer)(inputs)
# ```
# The supported tasks are the following:
#
# - text-generation (model: GPT2)
# - summarization (model: t5-small)
# - fill-mask (model: distilroberta-base)
# - text-classification (model: distilbert-base-uncased-finetuned-sst-2-english)
# - feature-extraction (model: distilbert-base-cased)
# - token-classification (model: dslim/bert-base-NER)
# - question-answering
# - translation
#
# We simply use the [Hugging Face API](https://huggingface.co/models) under the hood to access the models.
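Under the hood, `huggingface.get(...)` builds the pipeline once and returns a callable that can then be applied to many inputs. A toy sketch of that two-step calling pattern (illustrative only, not the real driver code):

```python
# A toy sketch of the get(task, model, tokenizer)(inputs) calling
# convention -- NOT the real naas_drivers implementation. get() does
# the (expensive) setup once and returns a reusable callable.
def get(task, model=None, tokenizer=None):
    def run(inputs):
        # the real driver would run the loaded model here
        return {"task": task, "model": model, "inputs": inputs}
    return run

generate = get("text-generation", model="gpt2", tokenizer="gpt2")
out = generate("What is the most important thing in your life right now?")
```

Reusing the returned callable avoids paying the model-loading cost on every call.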
# + [markdown] papermill={} tags=[]
# ## Input
# + [markdown] papermill={} tags=[]
# ### Import library
# + papermill={} tags=[]
from naas_drivers import huggingface
# + [markdown] papermill={} tags=[]
# ### Text Generation
# + papermill={} tags=[]
huggingface.get("text-generation", model="gpt2", tokenizer="gpt2")("What is the most important thing in your life right now?")
# + [markdown] papermill={} tags=[]
# ## Model
# + [markdown] papermill={} tags=[]
# ### Text Summarization
# Summarize the given text; the maximum length (number of tokens/words) is set to 200.
# + papermill={} tags=[]
huggingface.get("summarization", model="t5-small", tokenizer="t5-small")('''
There will be fewer and fewer jobs that a robot cannot do better.
What to do about mass unemployment this is gonna be a massive social challenge and
I think ultimately we will have to have some kind of universal basic income.
I think some kind of a universal basic income is going to be necessary
now the output of goods and services will be extremely high
so with automation they will they will come abundance there will be or almost everything will get very cheap.
The harder challenge much harder challenge is how do people then have meaning like a lot of people
they find meaning from their employment so if you don't have if you're not needed if
there's not a need for your labor how do you what's the meaning if you have meaning
if you feel useless these are much that's a much harder problem to deal with.
''')
# + [markdown] papermill={} tags=[]
# ### Text Classification
# Basic sentiment analysis on a text.<br>
# Returns a "label" (NEGATIVE/POSITIVE) and a confidence score between 0 and 1.
# + papermill={} tags=[]
huggingface.get("text-classification",
model="distilbert-base-uncased-finetuned-sst-2-english",
tokenizer="distilbert-base-uncased-finetuned-sst-2-english")('''
It was a weird concept. Why would I really need to generate a random paragraph?
Could I actually learn something from doing so?
All these questions were running through her head as she pressed the generate button.
To her surprise, she found what she least expected to see.
''')
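The returned structure can then be consumed like any Python list of dicts. A sketch assuming the typical Hugging Face output shape (the values below are made up):

```python
# Hypothetical text-classification output: a list of {'label', 'score'}
# dicts, as Hugging Face pipelines typically return. Values are made up.
result = [{"label": "POSITIVE", "score": 0.9987}]

top = result[0]
verdict = "{} ({:.1%} confidence)".format(top["label"], top["score"])
```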
# + [markdown] papermill={} tags=[]
# ### Fill Mask
# + [markdown] papermill={} tags=[]
# Fill in the blank (`<mask>`) in a given sentence with multiple proposals. <br>
# Each proposal has a score (the model's confidence), a token (the proposed word as a vocabulary id), and a token_str (the proposed word)
# + papermill={} tags=[]
huggingface.get("fill-mask",
model="distilroberta-base",
tokenizer="distilroberta-base")('''
It was a beautiful <mask>.
''')
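Each proposal in the returned list can be ranked by its score. A sketch with made-up proposal values, assuming the usual Hugging Face fill-mask output shape:

```python
# Hypothetical fill-mask output: each proposal has a score (confidence),
# a token (vocabulary id), and a token_str (the proposed word).
proposals = [
    {"score": 0.41, "token": 1809, "token_str": " day"},
    {"score": 0.23, "token": 3034, "token_str": " night"},
    {"score": 0.07, "token": 5885, "token_str": " dream"},
]

best = max(proposals, key=lambda p: p["score"])
sentence = "It was a beautiful {}.".format(best["token_str"].strip())
```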
# + [markdown] papermill={} tags=[]
# ### Feature extraction
# This generates a word embedding (extracts numerical features from the text).<br>
# Output is a list of numerical values.
# + papermill={} tags=[]
huggingface.get("feature-extraction", model="distilbert-base-cased", tokenizer="distilbert-base-cased")("Life is a super cool thing")
# + [markdown] papermill={} tags=[]
# ### Token classification
# Token classification is essentially named-entity recognition (NER): names, locations, or any other "entity" in the text can be detected.<br>
#
# | Entity abbreviation | Description |
# |--------------|------------------------------------------------------------------------------|
# | O | Outside of a named entity |
# | B-MIS | Beginning of a miscellaneous entity right after another miscellaneous entity |
# | I-MIS | Miscellaneous entity |
# | B-PER | Beginning of a person’s name right after another person’s name |
# | I-PER | Person’s name |
# | B-ORG | Beginning of an organization right after another organization |
# | I-ORG | Organization |
# | B-LOC | Beginning of a location right after another location |
# | I-LOC | Location |
#
#
# Full documentation: https://huggingface.co/dslim/bert-base-NER<br>
# + [markdown] papermill={} tags=[]
# ## Output
# + [markdown] papermill={} tags=[]
# ### Display result
# + papermill={} tags=[]
huggingface.get("token-classification", model="dslim/bert-base-NER", tokenizer="dslim/bert-base-NER")('''
My name is Wolfgang and I live in Berlin
''')
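The B-/I- prefixes in the table above follow the standard BIO tagging scheme, so adjacent tagged tokens can be merged back into whole entities. A minimal, driver-independent sketch of that merging, using hand-written tags for the sentence above:

```python
# Merge BIO-tagged tokens into (entity_text, entity_type) spans.
# The token/tag pairs below are illustrative, not real model output.
def merge_bio(tokens, tags):
    entities, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # "O" closes any open entity
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["My", "name", "is", "Wolfgang", "and", "I", "live", "in", "Berlin"]
tags   = ["O", "O", "O", "B-PER", "O", "O", "O", "O", "B-LOC"]
entities = merge_bio(tokens, tags)
```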
# notebook: Hugging Face/Hugging_Face_Naas_drivers_integration.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit ('py38')
# metadata:
# interpreter:
# hash: c90655714ffb9d69d923b994a9a952fe34a64a031cf84cba975d9030ad2b344f
# name: python3
# ---
from py2neo import Graph
from igraph import Graph as IGraph
from pprint import pprint
graph = Graph('bolt://127.0.0.1:7687', password='<PASSWORD>')
# Query TSMC (台積電, code 2330)
query = '''
match (m:Stock{code:'2330'}) return m
'''
graph.run(query).data()
# +
# Total number of distinct Concept (概念股) categories
query = '''
match (n:Concept) return count(distinct(n))
'''
graph.run(query)
# +
# How many TSMC vice presidents also serve as directors/supervisors/managers or major shareholders at other companies
query = '''
MATCH (m:Stock{code:'2330'})<-[:employ_of{jobs:'副總'}]-(n:Person)-[:employ_of]->(q:Stock)
RETURN n, count(distinct(n))
'''
graph.run(query).data()
# -
# How many independent directors does TSMC have
query = '''
MATCH (m:Stock{name:'台積電'})<-[:employ_of{jobs:'獨立董事'}]-(n:Person)
RETURN n, count(distinct(n))
'''
graph.run(query).data()
# How many stocks in the '電腦及週邊設備業' (computers and peripherals) industry are also AI concept stocks
query = '''
MATCH (:Concept{name:'AI人工智慧'})<-[:concept_of]-(m:Stock)-[:industry_of]->(:Industry{name:'電腦及週邊設備業'})
RETURN m, count(distinct(m))
'''
graph.run(query).data()
# List all 仁寶 (Compal) vice presidents with zero shareholdings
query = '''
MATCH (:Stock{name:'仁寶'})<-[:employ_of{jobs:'副總', stock_num:0}]-(p1:Person)
RETURN p1, count(distinct(p1))
'''
graph.run(query).data()
# Shareholding statistics for 仁寶 (Compal) vice presidents
query = '''
MATCH (:Stock{name:'仁寶'})<-[emp:employ_of{jobs:'副總'}]-(p1:Person)
WITH emp.stock_num AS num
RETURN min(num) AS min, max(num) AS max, avg(num) AS avg_num, stdev(num) AS stdev
'''
graph.run(query).data()
# What proportion of 仁寶 (Compal) vice presidents hold zero shares
query = '''
MATCH (:Stock{name:'仁寶'})<-[:employ_of{jobs:'副總', stock_num:0}]-(p1:Person)
MATCH (:Stock{name:'仁寶'})<-[:employ_of{jobs:'副總'}]-(p2:Person)
RETURN count(distinct(p1))*1.0/ count(distinct(p2)) as ratio
'''
graph.run(query)
# What proportion of 仁寶 (Compal) vice presidents hold fewer than 1000 shares
query = '''
MATCH (:Stock{name:'仁寶'})<-[emp:employ_of{jobs:'副總'}]-(p1:Person)
MATCH (:Stock{name:'仁寶'})<-[:employ_of{jobs:'副總'}]-(p2:Person)
WHERE emp.stock_num < 1000
RETURN count(distinct(p1))*1.0/ count(distinct(p2)) as ratio
'''
graph.run(query)
# Stocks that the day-trading broker 美林 (Merrill Lynch) net bought by more than 1000 lots
query = '''
MATCH (d:Dealer{name:'美林'})-[bs:buy_or_sell]->(s:Stock)
WHERE bs.amount>1000
RETURN *
'''
graph.run(query)
# Shortest paths between TSMC and MediaTek (發哥) through concept stocks
query = '''
MATCH (a:Stock {code:'2330'}), (b:Stock {code:'2454'})
MATCH p=allShortestPaths((a)-[:concept_of*]-(b))
WITH [node IN nodes(p) where node:Concept | node.name] AS concept
RETURN concept
'''
graph.run(query).data()
# +
# Top 10 stocks by degree (ignoring direction and relationship type)
# 彰銀 (Chang Hwa Bank) ranks high mainly because of its many board members
query = '''
MATCH (s:Stock)
RETURN s.name AS stock, size( (s)-[]-() ) AS degree
ORDER BY degree DESC LIMIT 10
'''
graph.run(query).data()
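The `size((s)-[]-())` expression counts each node's relationships; the same degree statistic can be computed client-side from an exported edge list. A small sketch with toy data (not the real graph):

```python
from collections import Counter

# Count node degree from an undirected edge list (toy stock-concept data).
edges = [("2330", "AI"), ("2330", "5G"), ("2454", "AI"), ("2317", "AI")]
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
top = degree.most_common(2)  # highest-degree nodes first
```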
# +
# Top 35 persons related to the most stocks
# 涂水城 is a major shareholder
query = '''
MATCH (p:Person)
RETURN p.name AS person, size( (p)-[]-(:Stock) ) AS degree
ORDER BY degree DESC LIMIT 35
'''
graph.run(query).data()
# +
# Top 35 directors by total shareholdings
query = '''
MATCH (p:Person)-[r:employ_of]-(:Stock)
RETURN p.name AS person, sum(r.stock_num) as stockNum
ORDER BY stockNum DESC LIMIT 35
'''
graph.run(query).data()
# +
# %%time
# PageRank: The size of each node is proportional to the number and size of the other nodes pointing to it in the network.
# Use PageRank to find the top 20 stocks involved across all concept-stock categories
query = '''
MATCH (s:Stock)-[r:concept_of]->(c:Concept)
RETURN s.name, c.name
'''
ig = IGraph.TupleList(graph.run(query), weights=True)
pg = ig.pagerank()
pgvs = []
for p in zip(ig.vs, pg):
    pgvs.append({"name": p[0]["name"], "pg": p[1]})
write_clusters_query = '''
UNWIND $nodes AS n
MATCH (s:Stock) WHERE s.name = n.name
SET s.pagerank = n.pg
'''
graph.run(write_clusters_query, nodes=pgvs)
query = '''
MATCH (s:Stock)--(st:StockType{name:'股票'})
WHERE s.pagerank<>'none'
RETURN s.name AS name, s.pagerank AS pagerank ORDER BY pagerank DESC LIMIT 20
'''
graph.run(query).data()
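PageRank itself needs nothing graph-library-specific: it is a power iteration over normalized neighbor ranks. A minimal pure-Python sketch on a toy stock-concept graph (the igraph call above does the same thing, faster):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank for an undirected adjacency dict."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {}
        for v in nodes:
            # teleport term plus the rank shares flowing in from neighbors
            new[v] = (1 - damping) / n + damping * sum(
                rank[u] / len(adj[u]) for u in adj[v]
            )
        rank = new
    return rank

# Toy graph: concept 'AI' is linked to three stocks, '5G' to one.
adj = {
    "2330": ["AI", "5G"], "2454": ["AI"], "2317": ["AI"],
    "AI": ["2330", "2454", "2317"], "5G": ["2330"],
}
ranks = pagerank(adj)
```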
# + tags=[]
# %%time
# Community detection on stocks, based on the concept categories each stock belongs to
clusters = IGraph.community_walktrap(ig, steps=3).as_clustering()
nodes = [{"name": node["name"]} for node in ig.vs]
for node in nodes:
    idx = ig.vs.find(name=node["name"]).index
    node["community"] = clusters.membership[idx]
write_clusters_query = '''
UNWIND $nodes AS n
MATCH (s:Stock) WHERE s.name = n.name
SET s.community = toInteger(n.community)
'''
graph.run(write_clusters_query, nodes=nodes)
query = '''
MATCH (s:Stock)
WITH s.community AS cluster, collect(s.name) AS members
WHERE cluster<>'none'
RETURN cluster, members ORDER BY cluster ASC
'''
graph.run(query).data()[0]
# +
# %%time
# Use PageRank to find the top 20 directors involved across all stocks
query = '''
MATCH (p:Person)-[r:employ_of]->(s:Stock)
RETURN p.name, s.name
'''
ig = IGraph.TupleList(graph.run(query), weights=True)
pg = ig.pagerank()
pgvs = []
for p in zip(ig.vs, pg):
    pgvs.append({"name": p[0]["name"], "pg": p[1]})
write_clusters_query = '''
UNWIND $nodes AS n
MATCH (p:Person) WHERE p.name = n.name
SET p.pagerank = n.pg
'''
graph.run(write_clusters_query, nodes=pgvs)
query = '''
MATCH (p:Person)
RETURN p.name AS name, p.pagerank AS pagerank ORDER BY pagerank DESC LIMIT 20
'''
graph.run(query).data()
# -
len(pgvs)
# +
# %%time
# Community detection on directors, based on the stocks each one serves
# Check whether some directors fall into the same cluster,
# i.e. jointly serve as directors/major shareholders of the same stocks
# In theory, only a few people sit on the boards of multiple companies
clusters = IGraph.community_walktrap(ig, steps=2).as_clustering()
nodes = [{"name": node["name"]} for node in ig.vs]
for node in nodes:
    idx = ig.vs.find(name=node["name"]).index
    node["community"] = clusters.membership[idx]
write_clusters_query = '''
UNWIND $nodes AS n
MATCH (p:Person) WHERE p.name = n.name
SET p.community = toInteger(n.community)
'''
graph.run(write_clusters_query, nodes=nodes)
query = '''
MATCH (p:Person)
WITH p.community AS cluster, collect(p.name) AS members
WHERE cluster<>'none'
RETURN cluster, members ORDER BY cluster ASC
'''
graph.run(query).data()[0]
# -
person_clusters = graph.run(query).data()
len(person_clusters)
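Walktrap derives communities from random-walk similarity; the coarsest possible grouping, for comparison, is just the connected components of the bipartite person-stock graph. A union-find sketch with toy data (not the real graph):

```python
# Union-find connected components on a toy person-stock edge list.
# Two directors end up in the same component iff they are linked
# through a chain of shared boards.
def components(edges):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

edges = [("alice", "StockA"), ("bob", "StockA"), ("carol", "StockB")]
comps = components(edges)
```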
# +
write_stock_weight_query = '''
MATCH (p:Person)-[r:employ_of]->(s:Stock)
RETURN p.name as person_name, r.stock_num as stock_num, s.name as stock_name
'''
data = graph.run(write_stock_weight_query).data()
# +
stock_query = '''
MATCH (p:Person)-[r:employ_of]->(s:Stock)
RETURN DISTINCT s.name as stock_name
'''
from collections import defaultdict
total_stock_nums = defaultdict(int)
stock_name = graph.run(stock_query).data()
for person in data:
    total_stock_nums[person['stock_name']] += person['stock_num']
for person in data:
    person['stock_ratio'] = person['stock_num'] / total_stock_nums[person['stock_name']]
# -
total_stock_nums['台積電'] # 9692484
# +
# %%time
## Weight of each Person-Stock employ_of edge: the director's holdings (numerator) over the stock's total director holdings (denominator)
write_stock_weight_query = '''
UNWIND $nodes AS n
MATCH (p:Person)-[r:employ_of]->(s:Stock)
WHERE p.name = n.person_name AND s.name = n.stock_name
SET r.stock_ratio = n.stock_ratio
'''
graph.run(write_stock_weight_query, nodes=data).data()
# -
# Plot the relationships between semiconductor-industry stocks and their directors, limited to 1000 entities
query = '''
MATCH (s:Stock)-[:industry_of]->(Industry{name:'半導體業'})
WITH s
MATCH (p:Person)-[r:employ_of]->(s)
RETURN * LIMIT 1000
'''
graph.run(query).to_table()[:5]
# notebook: src/cypher_script.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="XZQ2ZrKZGbOi"
# # Install and import dependencies
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 180182, "status": "ok", "timestamp": 1612746758649, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15104894116977045422"}, "user_tz": 300} id="kW6Ip9aWF_Wr" outputId="21a2558b-a789-4c46-ce72-2bc002c565ea"
from google.colab import files
import json
import os
import csv
import io
import pandas as pd
import numpy as np
import copy
import spacy
PATH_TO_LICENSE_KEY = ''
#import license keys from drive
with open(PATH_TO_LICENSE_KEY) as f:
    license_keys = json.load(f)
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
# Install Java
# ! apt-get update -qq
# ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
# ! java -version
# Install pyspark
# ! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
# ! pip install --ignore-installed spark-nlp==$sparknlp_version
# ! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.types import *
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
# + [markdown] id="--4ukn3ZG6-N"
# # Define pipeline elements
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 795784, "status": "ok", "timestamp": 1612747374258, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15104894116977045422"}, "user_tz": 300} id="Z8GSUtr2G88p" outputId="168bf02d-8ba3-4e19-b96c-ba1ffe0b4b34"
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings_clinical = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
ner_jsl = NerDLModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter_diagnosis = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['Diagnosis'])
chunk_embeddings = ChunkEmbeddings()\
.setInputCols(["ner_chunk", "embeddings"])\
.setOutputCol("chunk_embeddings")
c2doc = Chunk2Doc().setInputCols("ner_chunk").setOutputCol("ner_chunk_doc")
sbiobert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
sbert_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
pipeline= Pipeline(
stages = [
document_assembler,
sentence_detector,
tokenizer,
word_embeddings_clinical,
ner_jsl,
ner_converter_diagnosis,
chunk_embeddings,
c2doc,
sbiobert_embedder,
sbert_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = sparknlp.base.LightPipeline(pipeline_model)
# + [markdown] id="26JMoQAFIabW"
# # Define functions
# + id="un1bjnoQIZJH"
def get_codes(light, code, text, url):
    '''
    example call: get_codes(light_pipeline, 'icd10cm_code', FEED_TEXT, FEED_URL)
    '''
    full_light_result = light.fullAnnotate(text)
    urls = []
    chunks = []
    begin = []
    end = []
    sent = []
    codes = []
    results = []
    resolutions = []
    res_distances = []
    for chunk, code in zip(full_light_result[0]['ner_chunk'], full_light_result[0][code]):
        urls.append(url)
        chunks.append(chunk.result)
        begin.append(chunk.begin)
        end.append(chunk.end)
        sent.append(chunk.metadata['sentence'])
        codes.append(code.result)
        results.append(code.metadata['all_k_results'])
        resolutions.append(code.metadata['all_k_resolutions'])
        res_distances.append(code.metadata['all_k_distances'])
    df = pd.DataFrame({'url': urls,
                       'chunks': chunks,
                       'begin': begin,
                       'end': end,
                       'sent': sent,
                       'code': codes,
                       'results': results,
                       'resolutions': resolutions,
                       'res_distances': res_distances})
    return df
# + id="GzrK11yrUw8t"
import itertools
def run_pipeline(feed):
    r = []
    for index, row in feed.iterrows():
        url = row['url']
        text = row['text']
        er_results = get_codes(light_pipeline, 'icd10cm_code', text, url)
        r.append(er_results)
    # return concatenated pandas dataframes
    df = pd.concat(r)
    return df
# + [markdown] id="qxE_qPIQJu5u"
# # Import and process feed data
#
# - If you are running this on Google Colab (recommended), it might be best to export the feed data from gfm.db into a few smaller files based on runtime restrictions for your license.
#
# - Here, we exported the columns "url" and "fund_description" from all data into 4 separate .json files (~25k records in each file)
#
# Example code:
#
# ```
# feed = feed[['url','fund_description']]
# feed.rename(columns={'fund_description':'text'}, inplace=True)
# dfs = np.array_split(feed, 4)
#
# PATH_TO_DATA_FOR_COLAB = ''
#
# for i in range(4):
# with open(PATH_TO_DATA_FOR_COLAB + 'feed_chunk_' + str(i) + '.json', 'w', encoding='utf-8') as file:
# dfs[i].to_json(file, orient="records", force_ascii=False)
#
# ```
#
# + id="hr8jgTAAJz0V"
#read in chunk and analyze one at a time
PATH_TO_CHUNK = ''
chunk_n = 0
with open(PATH_TO_CHUNK + 'feed_chunk_' + str(chunk_n) + '.json') as json_file:
    feed = json.load(json_file)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 797149, "status": "ok", "timestamp": 1612747380477, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15104894116977045422"}, "user_tz": 300} id="8NAQyKVQDMX6" outputId="181774f3-5905-4c30-f029-351a97a45503"
feed = pd.DataFrame(feed)
# + [markdown] id="yz2Fx61UkvzO"
# ### Text preprocessing
# + [markdown] id="hM6qlnJCIx1d"
# The Spark tokenizer does not reliably split tokens on punctuation that is not followed by whitespace, e.g. "end.Beginning".
#
# So we preprocess the text manually to split tokens on . , ! ?
# + id="GQsGQqJAJhwz"
import re
def CustomTokenize(df):
    r = []
    for i in range(len(df)):
        string = re.sub(r'(?<=[.,!?])(?=\S)', r' ', df['text'][i])
        r.append(string)
    return r
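The lookbehind/lookahead pair inserts a space only where punctuation is glued to the next character, leaving already-spaced text untouched. A quick standalone check of the idea (note that decimals like 3.14 would also be split):

```python
import re

# Insert a space after . , ! or ? when the next character is not whitespace.
def split_glued_punctuation(s):
    return re.sub(r'(?<=[.,!?])(?=\S)', ' ', s)

sample = "end.Beginning of a new sentence,no space after comma"
fixed = split_glued_punctuation(sample)
```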
# + id="NcChk7b7MtRo"
feed.loc[:,'text_clean'] = CustomTokenize(feed)
del feed['text']
feed = feed.rename(columns={'text_clean':'text'})
# + [markdown] id="rmuR0tYGV3gM"
# # Run pipeline
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 2963160, "status": "ok", "timestamp": 1612808476758, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15104894116977045422"}, "user_tz": 300} id="VCrN83P0V13p" outputId="4c52855e-57da-458d-fb52-86d49fbdf5af"
# %time r = run_pipeline(feed)
# + executionInfo={"elapsed": 1150, "status": "ok", "timestamp": 1612808477899, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15104894116977045422"}, "user_tz": 300} id="n1jy3NtgV-Lu"
EXPORT_PATH = ''
r.to_csv(EXPORT_PATH + 'feed_chunk_' + str(chunk_n) + '.csv', index=False)
# + id="AnOzANFsa_la"
# notebook: code/08-Spark-JSL.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this example, the exact analytical trajectory of a charged particle moving in a uniform electric field is compared with the results of a numerical simulation.
# In a uniform electric field, a charged particle moves with uniform acceleration directed along the field.
#
# \begin{align}
# & \mathbf{v} = \mathbf{v}_0 + \frac{q}{m} \mathbf{E}_0 t
# \\
# & \mathbf{r} = \mathbf{r}_0 + \mathbf{v}_0 t + \frac{1}{2} \frac{q}{m} \mathbf{E}_0 t^2
# \end{align}
# The part of the config file regarding the single particle is similar to
# ['single particle in free space'](https://github.com/epicf/ef/wiki/Ex1:-Single-Particle-In-Free-Space).
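Before running the full simulation, the kinematic formulas above can be sanity-checked in a few lines of plain Python, independent of the ef package: an explicit-Euler integration of dv/dt = (q/m)E0, dz/dt = v converges to the analytic z(t). Toy unit values, with r0 = v0 = 0:

```python
# Explicit-Euler integration of dv/dt = (q/m) * E0, dz/dt = v,
# compared against the analytic z(t) = (1/2)(q/m) E0 t^2.
q_over_m, E0 = 1.0, 1.0
dt, steps = 1e-4, 10000            # integrate up to t = 1
z, v = 0.0, 0.0
for _ in range(steps):
    z += v * dt
    v += q_over_m * E0 * dt
t = steps * dt
z_analytic = 0.5 * q_over_m * E0 * t**2
error = abs(z - z_analytic)        # first-order Euler error, ~O(dt)
```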
from ef.config.components import *
timegrid = TimeGridConf(total=6e-09, save_step=6e-11, step=6e-12)
mesh = SpatialMeshConf((5, 5, 15), (.5, .5, 1.5))
source = ParticleSourceConf("emit_single_particle", Box((.1, .1, .1), (.01, .01, .01)), 1, 0,
(6e-19, 6e-19, 1.77e-18), 0., 4.8e-10, 9.8e-28)
file_config = OutputFileConf("single_particle_electric_field_")
field = ExternalElectricFieldUniformConf("uniform_z_field", (0, 0, 6))
from ef.config.config import Config
from ef.config.visualizer import Visualizer3d
conf = Config(time_grid=timegrid, spatial_mesh=mesh, sources=[source],
external_fields=[field], output_file=file_config)
conf.visualize_all(Visualizer3d())
from ef.runner import Runner
Runner(conf.make(), output_writer=file_config.make()).start()
# +
import os, glob
import h5py
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the '3d' projection)
def main():
    num = extract_num_trajectory_from_out_files()
    an = eval_an_trajectory_at_num_time_points(num)
    plot_trajectories(num, an)
def extract_num_trajectory_from_out_files():
    out_files = find_necessary_out_files()
    num_trajectory = []
    for f in out_files:
        num_trajectory.append(extract_time_pos_mom(f))
    num_trajectory = remove_empty_and_sort_by_time(num_trajectory)
    num_trajectory = np.array(num_trajectory,
                              dtype=[('t', 'float'),
                                     ('x', 'float'), ('y', 'float'), ('z', 'float'),
                                     ('px', 'float'), ('py', 'float'), ('pz', 'float')])
    return num_trajectory
def remove_empty_and_sort_by_time(num_trajectory):
    removed_empty = [x for x in num_trajectory if x]
    sorted_by_time = sorted(removed_empty, key=lambda x: x[0])
    return sorted_by_time
def find_necessary_out_files():
    os.chdir("./")
    h5files = []
    for file in glob.glob("single_particle_electric_field_[0-9][0-9][0-9][0-9][0-9][0-9][0-9].h5"):
        h5files.append(file)
    return h5files
def extract_time_pos_mom(h5file):
    h5 = h5py.File(h5file, mode="r")
    t = h5["/TimeGrid"].attrs["current_time"][0]
    t_pos_mom = ()
    if len(h5["/ParticleSources/emit_single_particle/particle_id"]) > 0:
        x = h5["/ParticleSources/emit_single_particle/position_x"][0]
        y = h5["/ParticleSources/emit_single_particle/position_y"][0]
        z = h5["/ParticleSources/emit_single_particle/position_z"][0]
        px = h5["/ParticleSources/emit_single_particle/momentum_x"][0]
        py = h5["/ParticleSources/emit_single_particle/momentum_y"][0]
        pz = h5["/ParticleSources/emit_single_particle/momentum_z"][0]
        t_pos_mom = (t, x, y, z, px, py, pz)
    h5.close()
    return t_pos_mom
def eval_an_trajectory_at_num_time_points(num_trajectory):
    global mass, charge, E0
    mass, charge, x0, y0, z0, px0, py0, pz0 = get_mass_charge_and_initial_pos_and_mom()
    E0 = eval_field_amplitude()
    an_trajectory = np.empty_like(num_trajectory)
    for i, t in enumerate(num_trajectory['t']):
        x, y, z = coords(t, x0, y0, z0, px0, py0, pz0)
        px, py, pz = momenta(t, px0, py0, pz0)
        an_trajectory[i] = (t, x, y, z, px, py, pz)
    return an_trajectory
def get_mass_charge_and_initial_pos_and_mom():
    initial_out_file = "single_particle_electric_field_0000000.h5"
    h5 = h5py.File(initial_out_file, mode="r")
    m = h5["/ParticleSources/emit_single_particle"].attrs["mass"][0]
    q = h5["/ParticleSources/emit_single_particle"].attrs["charge"][0]
    x0 = h5["/ParticleSources/emit_single_particle/position_x"][0]
    y0 = h5["/ParticleSources/emit_single_particle/position_y"][0]
    z0 = h5["/ParticleSources/emit_single_particle/position_z"][0]
    px0 = h5["/ParticleSources/emit_single_particle/momentum_x"][0]
    py0 = h5["/ParticleSources/emit_single_particle/momentum_y"][0]
    pz0 = h5["/ParticleSources/emit_single_particle/momentum_z"][0]
    h5.close()
    return m, q, x0, y0, z0, px0, py0, pz0
def eval_field_amplitude():
    initial_out_file = "single_particle_electric_field_0000000.h5"
    h5 = h5py.File(initial_out_file, mode="r")
    E0 = h5["ExternalFields/uniform_z_field"].attrs["electric_uniform_field_z"]
    h5.close()
    return E0
def momenta(t, px0, py0, pz0):
    global mass, charge, E0
    px = px0
    py = py0
    pz = pz0 + charge * E0 * t
    return px, py, pz
def coords(t, x0, y0, z0, px0, py0, pz0):
    global mass, charge, E0
    x = x0 + px0 / mass * t
    y = y0 + py0 / mass * t
    z = z0 + pz0 / mass * t + 1 / 2 * charge / mass * E0 * t * t
    return x, y, z
def plot_trajectories(num, an):
    plot_3d(num, an)
    plot_2d(num, an)
    plot_kin_en(num, an)
def plot_3d(num, an):
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    ax.plot(num['x'], num['y'], num['z'], '.r', label="Num")
    ax.plot(an['x'], an['y'], an['z'], label="An")
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
    ax.set_zlabel('Z')
    plt.legend(loc='upper left', title="3d")
    plt.show()
def plot_2d(num, an):
    plt.figure(1)
    # XY
    plt.subplot(131)
    plt.plot(num['x'], num['y'],
             linestyle='', marker='o',
             label="Num")
    plt.plot(an['x'], an['y'],
             linestyle='-', marker='', lw=2,
             label="An")
    plt.legend(loc='upper right', title="XY")
    # XZ
    plt.subplot(132)
    plt.plot(num['x'], num['z'],
             linestyle='', marker='o',
             label="Num")
    plt.plot(an['x'], an['z'],
             linestyle='-', marker='', lw=2,
             label="An")
    plt.legend(loc='upper right', title="XZ")
    # YZ
    plt.subplot(133)
    plt.plot(num['y'], num['z'],
             linestyle='', marker='o',
             label="Num")
    plt.plot(an['y'], an['z'],
             linestyle='-', marker='', lw=2,
             label="An")
    plt.legend(loc='upper right', title="YZ")
    plt.show()
def plot_kin_en(num, an):
    global mass
    E_num = (num['px']**2 + num['py']**2 + num['pz']**2) / (2 * mass)
    E_an = (an['px']**2 + an['py']**2 + an['pz']**2) / (2 * mass)
    t = num['t']
    plt.figure()
    axes = plt.gca()
    axes.set_xlabel("t [s]")
    axes.set_ylabel("E [erg]")
    # axes.set_ylim([min(E_an.min(), E_num.min()),
    #                max(E_an.max(), E_num.max())])
    line, = plt.plot(t, E_num, 'o')
    line.set_label("Num")
    line, = plt.plot(t, E_an, ls='solid', lw=3)
    line.set_label("An")
    plt.legend(loc='upper right')
    plt.show()
main()
# notebook: examples/single_particle_in_electric_field/single_particle_in_uniform_electric_field.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Linear** Classifier
# # 1 SVM Classifier
# ## 1.1 load data
import numpy as np
import matplotlib.pyplot as plt
from data_util import load_CIFAR10
# %matplotlib inline
X_train, y_train, X_test, y_test = load_CIFAR10()
print('train data shape:', X_train.shape)
print('train label shape:', y_train.shape)
print('test data shape:', X_test.shape)
print('test label shape:', y_test.shape)
# display the images
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()
# ## 1.2 Take samples
# +
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print ('Train data shape: ', X_train.shape)
print ('Train labels shape: ', y_train.shape)
print ('Validation data shape: ', X_val.shape)
print ('Validation labels shape: ', y_val.shape)
print ('Test data shape: ', X_test.shape)
print ('Test labels shape: ', y_test.shape)
# -
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print (mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# +
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print( X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
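The bias trick works because w·x + b = [w, b]·[x, 1]: appending a constant-1 feature lets the bias term live inside the weight matrix W. A tiny pure-Python check of that identity:

```python
# Show that appending a constant-1 feature folds the bias into the weights:
# score = w . x + b  ==  [w, b] . [x, 1]
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x = [2.0, -1.0, 0.5]
w = [0.3, 0.7, -0.2]
b = 1.5

score_with_bias = dot(w, x) + b
score_bias_trick = dot(w + [b], x + [1.0])
```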
# +
from linear_svm import svm_loss_naive
W = np.random.randn(10, 3073) * 0.0001
loss, grad = svm_loss_naive(W, X_dev.T, y_dev, 0.00001)
print ('loss: %f' % (loss, ))
# -
# use the svm
from linear_classifier import LinearSVM
svm = LinearSVM()
svm.fit(X_dev.T, y_dev, learning_rate=1e-3, reg=1e3,verbose=True)
pred = svm.predict(X_test.T)
print('accuracy is ', (pred==y_test).sum()/len(y_test))
# ## 1.3 Validation
learning_rates = [1.4e-7, 1.5e-7, 1.6e-7]
regularization_strengths = [(1+i*0.1)*1e4 for i in range(-3,3)] + [(2+0.1*i)*1e4 for i in range(-3,3)]
results = []
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
svm = LinearSVM()
svm.fit(X_train.T, y_train, learning_rate, regularization_strength, num_iters=1000)
pred = svm.predict(X_val.T)
accuracy = (pred==y_val).sum()/len(y_val)
        results.append('learning_rate is ' + str(learning_rate) + ', regularization_strength is ' + str(regularization_strength)
                       + ': accuracy is ' + str(accuracy))
for value in results:
print(value)
# From the results above, we see that the accuracy is highest for the hyperparameters
# **learning rate** 1.5e-7 and **regularization strength** 21000.
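# A small, hypothetical refinement of the validation loop above: keeping the
# results as (learning_rate, regularization_strength, accuracy) tuples instead of
# formatted strings lets us select the best setting programmatically. The
# accuracies below are illustrative, not from an actual run.

```python
def best_hyperparams(results):
    """Return the (learning_rate, reg, accuracy) tuple with the highest accuracy."""
    return max(results, key=lambda r: r[2])

# Illustrative values only:
demo_results = [(1.4e-7, 2.1e4, 0.35), (1.5e-7, 2.1e4, 0.39), (1.6e-7, 2.1e4, 0.37)]
best_lr, best_reg, best_acc = best_hyperparams(demo_results)
```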
svm = LinearSVM()
svm.fit(X_train.T, y_train, 1.5e-7, 21000, num_iters=1000)
pred = svm.predict(X_test.T)
print('test data accuracy is ', (pred==y_test).sum()/len(y_test))
# # 2 Softmax Classifier
from linear_softmax import softmax_loss_naive
from linear_classifier import LinearSoftmax
softmax = LinearSoftmax()
softmax.fit(X_dev.T, y_dev, reg=2100)
pred = softmax.predict(X_test.T)
print('accuracy is ', (pred==y_test).sum()/len(y_test))
learning_rates = [1.4e-7, 1.5e-7, 1.6e-7]
regularization_strengths = [(1+i*0.1)*1e4 for i in range(-3,3)] + [(2+0.1*i)*1e4 for i in range(-3,3)]
results = []
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
softmax = LinearSoftmax()
softmax.fit(X_train.T, y_train, learning_rate, regularization_strength, num_iters=1000)
pred = softmax.predict(X_val.T)
accuracy = (pred==y_val).sum()/len(y_val)
        results.append('learning_rate is ' + str(learning_rate) + ', regularization_strength is ' + str(regularization_strength)
                       + ': accuracy is ' + str(accuracy))
for value in results:
print(value)
# Source: assignment1/LinearClassifer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Advanced Tutorial for the MFF Package <br/>
#
# In this jupyter notebook, most of the uses of the MFF package will be thoroughly analysed and explained.
# + slideshow={"slide_type": "slide"}
# Standard Imports
import logging
import numpy as np
import matplotlib.pyplot as plt
logging.basicConfig(level=logging.ERROR) # Verbosity selection: INFO, WARNING or ERROR
# import ASE
from ase.io import read
from ase.io import write
from ase import Atoms
# Import MFF
from mff import utility, configurations, models, calculators
from pathlib import Path
# Atoms visualization Tool
from IPython.display import HTML
def atoms_to_html(atoms):
    'Return the HTML representation of the atoms object as a string'
from tempfile import NamedTemporaryFile
with NamedTemporaryFile('r+', suffix='.html') as ntf:
atoms.write(ntf.name, format='html')
ntf.seek(0)
html = ntf.read()
return html
plt.rcParams['figure.figsize'] = [12, 8]
# + [markdown] slideshow={"slide_type": "slide"}
# # First System: Nickel 19 nanoparticle
# + [markdown] slideshow={"slide_type": "slide"}
# ### Import the .xyz file and convert it to local atomic environment data
# +
directory = Path('data/Ni_19/') # Path to the directory that contains the trajectory file, and where all of the files will be saved
filename = directory / 'movie.xyz' # name of the trajectory file
cutoff = 6.4 # Cutoff to be used to carve local atomic environments, in Angstrom
data = configurations.generate_and_save(filename, cutoff, forces_label='forces', energy_label='energy') # Generate and save the compressed "data" object that contains all the information
elements, confs, forces, energies, global_confs = configurations.load_and_unpack(directory, cutoff) # Extract the information from the zipped data object which is located in the directory
# + slideshow={"slide_type": "slide"}
atoms = read(filename, index = 0)
atoms.set_positions(atoms.get_positions() - 12*np.ones_like(atoms.get_positions()))
nickel = atoms_to_html(atoms)
HTML(nickel)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Generate a 2-body GP and train it using force data only
# +
ntr = 200
ntest = 100
ntest_en = 50
sigma = 0.5
noise = 0.001
theta = 0.5
# Choose data at random from the force database and split into training/test data
ind_f = np.random.choice(np.arange(len(confs)), ntr+ntest, replace = False)
ind_tr, ind_test = ind_f[:ntr], ind_f[ntr:]
# Choose data at random from the energy database for testing
ind_test_en = np.random.choice(np.arange(len(global_confs)), ntest_en, replace = False)
# Initialize the model
m1 = models.TwoBodySingleSpeciesModel(elements, cutoff, sigma, theta, noise)
# Fit using local atomic environment and force data using 2 cores
m1.fit(confs[ind_tr], forces[ind_tr], ncores = 2)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Test the accuracy of the GP-FF on force and energy data
# -
MAEC, MAEF, SMAEF, MF, RMSEF = utility.test_forces(m1, confs[ind_test], forces[ind_test], plot = True, ncores = 2)
# + slideshow={"slide_type": "slide"}
MAE, SMAE, RMSE_e = utility.test_energies(m1, global_confs[ind_test_en], energies[ind_test_en], plot = True, ncores = 2)
# + [markdown] slideshow={"slide_type": "slide"}
# The accuracy on the force vectors is good, but we are not really taking into account the energy of the system. <br>
# It is therefore better to fit using a large amount of force data and a small amount of energy data, in order to give the GP some reference points for the energetic values, and not only for its gradient (the force).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Generate a 2-body GP and train it using force and energy data to improve its accuracy on energy predictions
# +
ntr = 200
ntest = 100
ntr_en = 1
ntest_en = 50
sigma = 0.5
noise = 0.001
theta = 0.5
# Choose data at random from the force database and split into training/test data
ind_f = np.random.choice(np.arange(len(confs)), ntr+ntest, replace = False)
ind_tr, ind_test = ind_f[:ntr], ind_f[ntr:]
# Choose data at random from the energy database for testing
ind_e = np.random.choice(np.arange(len(global_confs)), ntr_en+ntest_en, replace = False)
ind_tr_en, ind_test_en = ind_e[:ntr_en], ind_e[ntr_en:]
# Initialize the model
m2 = models.TwoBodySingleSpeciesModel(elements, cutoff, sigma, theta, noise)
# Fit using local atomic environment and force data using 2 cores
m2.fit_force_and_energy(confs[ind_tr], forces[ind_tr], global_confs[ind_tr_en], energies[ind_tr_en], ncores = 2)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Test the accuracy of the second GP-FF on force and energy data
# -
MAEC, MAEF, SMAEF, MF, RMSEF = utility.test_forces(m2, confs[ind_test], forces[ind_test], plot = True, ncores = 2)
# + slideshow={"slide_type": "slide"}
MAE, SMAE, RMSE_e = utility.test_energies(m2, global_confs[ind_test_en], energies[ind_test_en], plot = True, ncores = 2)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Visualize the 2-body FFs
# +
distances = np.linspace(1.8, cutoff, 100)
en = np.zeros(len(distances))
for i, c in enumerate(distances):
conf = np.array([[[[c, 0, 0, elements[0], elements[0]]]]])
en[i] = m1.predict_energy(conf, ncores=1)
plt.plot(distances, en)
plt.plot(distances, np.zeros(len(distances)), 'k--')
plt.ylabel('Pair Energy [eV]')
plt.xlabel(r'Pair Distance [$\AA$]')
plt.show()
# + slideshow={"slide_type": "slide"}
distances = np.linspace(1.8, cutoff, 100)
en = np.zeros(len(distances))
for i, c in enumerate(distances):
conf = np.array([[[[c, 0, 0, elements[0], elements[0]]]]])
en[i] = m2.predict_energy(conf, ncores=1)
plt.plot(distances, en)
plt.plot(distances, np.zeros(len(distances)), 'k--')
plt.ylabel('Pair Energy [eV]')
plt.xlabel(r'Pair Distance [$\AA$]')
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Map the GP-FFs into M-FFs
# +
m1.build_grid(1.0, 100, ncores = 2)
m2.build_grid(1.0, 100, ncores = 2)
m1.save(directory / "models")
m2.save(directory / "models")
# + [markdown] slideshow={"slide_type": "slide"}
# ### Setup a Simulation in ASE
# +
# Additional ASE imports
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.velocitydistribution import Stationary
from ase.md.velocitydistribution import ZeroRotation
from ase.md.verlet import VelocityVerlet
from ase.md.langevin import Langevin
from ase import units
from ase.io import extxyz
# Global Variables Definition
gamma = 0.001 # For Langevin Dynamics
temp = 500 # K
dt = 1.0 # fs
cell_length = 10.0 # Angstrom
atoms = read(filename, index=0, format='extxyz')    # Read the input file and set up the initial geometry from the xyz file
atoms.set_positions(atoms.get_positions() - 0.5*np.array([cell_length, cell_length, cell_length])) # Center atoms in the cell
atoms.set_cell([[cell_length, 0.0, 0.0], [0, cell_length, 0], [0, 0, cell_length]]) # Set the simulation cell if not present
atoms.set_pbc([False, False, False])
# + slideshow={"slide_type": "slide"}
def printenergy(a=atoms):
"""Function to print the potential, kinetic and total energy"""
epot = a.get_potential_energy() / len(a)
ekin = a.get_kinetic_energy() / len(a)
print('Energy per atom: Epot = %.3feV Ekin = %.3feV (T=%3.0fK) '
'Etot = %.3feV' % (epot, ekin, ekin / (1.5 * units.kB), epot + ekin))
def savexyz(a=atoms):
    """Append the current configuration to the output trajectory file"""
    with open(output_name, "a") as this_traj:
        extxyz.write_extxyz(this_traj, [a])
# + [markdown] slideshow={"slide_type": "slide"}
# ### Launch Simulation With m1
# +
calc1 = calculators.TwoBodySingleSpecies(cutoff, m1.grid) # Initialize calculator module
atoms.set_calculator(calc1) # Tell ASE how to get the forces ( AKA choose the potential)
# Setup the momenta and velocities
dyn = VelocityVerlet(atoms, dt * units.fs) # The dynamics will be a Verlet one
MaxwellBoltzmannDistribution(atoms, 2.0 * temp * units.kB) # Initialize velocities according to MB distribution at double T because of equipartition theorem
ZeroRotation(atoms) # Stop global rotation
Stationary(atoms) # Stop global movement
# + slideshow={"slide_type": "slide"}
steps = 1000
output_name = "data/Ni_19/trajectory_mff_1.xyz" # Output name
this_traj = open(output_name, "w")
dyn.attach(savexyz, interval=100) # Save coordinates every interval steps
dyn.attach(printenergy, interval=100)  # Print information every interval steps
dyn.run(steps) # Have fun
this_traj.close()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Launch Simulation With m2
# +
calc2 = calculators.TwoBodySingleSpecies(cutoff, m2.grid) # Initialize calculator module
atoms.set_calculator(calc2) # Tell ASE how to get the forces ( AKA choose the potential)
# Setup the momenta and velocities
dyn = VelocityVerlet(atoms, dt * units.fs) # The dynamics will be a Verlet one
MaxwellBoltzmannDistribution(atoms, 2.0 * temp * units.kB) # Initialize velocities according to MB distribution at double T because of equipartition theorem
ZeroRotation(atoms) # Stop global rotation
Stationary(atoms) # Stop global movement
# + slideshow={"slide_type": "slide"}
steps = 1000
output_name = "data/Ni_19/trajectory_mff_2.xyz" # Output name
this_traj = open(output_name, "w")
dyn.attach(savexyz, interval=100) # Save coordinates every interval steps
dyn.attach(printenergy, interval=100)  # Print information every interval steps
dyn.run(steps) # Have fun
this_traj.close()
# + [markdown] slideshow={"slide_type": "slide"}
# # Second System: Fe with grain boundaries
# + slideshow={"slide_type": "slide"}
directory = Path('data/Fe/') # Path to the directory that contains the trajectory file, and where all of the files will be saved
filename = directory / 'movie.xyz' # name of the trajectory file
cutoff = 4.45 # Cutoff to be used to carve local atomic environments, in Angstrom
elements, confs, forces, energies, global_confs = configurations.load_and_unpack(directory, cutoff) # Extract the information from the zipped data object which is located in the directory
# -
atoms = read(filename, index = 0)
iron = atoms_to_html(atoms)
HTML(iron)
# + slideshow={"slide_type": "slide"}
ntr = 200
ntest = 100
ntr_en = 1
ntest_en = 50
sigma = 0.5
noise = 0.001
theta = 0.5
# Choose data at random from the force database and split into training/test data
ind_f = np.random.choice(np.arange(len(confs)), ntr+ntest, replace = False)
ind_tr, ind_test = ind_f[:ntr], ind_f[ntr:]
# Choose data at random from the energy database for testing
ind_e = np.random.choice(np.arange(len(global_confs)), ntr_en+ntest_en, replace = False)
ind_tr_en, ind_test_en = ind_e[:ntr_en], ind_e[ntr_en:]
# Load the model
m3 = utility.load_model(directory / "models" / "MODEL_combined_ntr_200.json")
# + slideshow={"slide_type": "slide"}
MAEC, MAEF, SMAEF, MF, RMSEF = utility.test_forces(m3, confs[ind_test], forces[ind_test], plot = True, ncores = 2)
# Source: tutorials/Advanced_tutorial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# examples from the article https://habr.com/ru/post/531940/
from spacy.lang.ru import Russian
nlp = Russian()
doc = nlp("Съешь ещё этих мягких французских булок, да выпей чаю. А лучше 2.")
token = doc[0]
print(token.text)
span = doc[3:6]
print(span.text)
print("is_alpha: ", [(token,token.is_alpha) for token in doc])
print("is_punct: ", [(token,token.is_punct) for token in doc])
print("like_num: ", [(token,token.like_num) for token in doc])
# last words before a period
for token in doc:
if token.i+1 < len(doc):
next_token = doc[token.i+1]
if next_token.text == ".":
print(token.text)
# +
#python -m spacy download en_core_web_sm
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("New Apple MacBook set launch tomorrow")
for token in doc:
token_text = token.text
token_pos = token.pos_
token_dep = token.dep_
token_head = token.head.text
print(f"{token_text:<12}{token_pos:<10}" \
f"{token_dep:<10}{token_head:<12}")
# -
from spacy import displacy
displacy.render(doc, style='dep', jupyter=True)
# explain what the tag names mean
print(spacy.explain("aux"))
print(spacy.explain("PROPN"))
# +
# base form (lemma) of each word
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("I saw a movie yesterday")
print(' '.join([token.lemma_ for token in doc]))
# +
# named entities
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
print(ent.text, ent.label_)
# -
print(spacy.explain("GPE"))
print(spacy.explain("ORG"))
from spacy import displacy
displacy.render(doc, style='ent', jupyter=True)
# +
# pattern-matching example
import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [
{"IS_DIGIT": True},
{"LOWER": {"REGEX": "(fifa|icc)"}},
{"LOWER": "cricket", "OP": "?"},
{"LOWER": "world"},
{"LOWER": "cup"}
]
matcher.add("fifa_pattern", [pattern])  # list-of-patterns signature (spaCy v2.2+ and v3)
doc = nlp("2018 ICC Cricket World Cup: Afghanistan won!")
matches = matcher(doc)
for match_id, start, end in matches:
matched_span = doc[start:end]
print(matched_span)
# -
nlp = spacy.load("en_core_web_sm") # larger models, e.g. en_core_web_md, give better similarity results
doc1 = nlp("I like burgers")
doc2 = nlp("I like pizza")
print(doc1.similarity(doc2))
# +
# spaCy provides a number of built-in pipeline components (tokenizer, named
# entity recognizer), but it also lets you define your own. Components are
# essentially functions called in sequence: each one takes a document as input,
# modifies it, and returns it. New components can be added with add_pipe:
import spacy
def length_component(doc):
doc_length = len(doc)
print(f"This document is {doc_length} tokens long.")
return doc
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(length_component, first=True)
print(nlp.pipe_names)
doc = nlp("This is a sentence.")
# -
# Source: Spacy-Test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import glob
import os
import matplotlib.pyplot as plt
import json
import numpy as np
from cycler import cycler
from matplotlib.pyplot import cm
# output_folder = "tmp_output"
output_folder = "/mnt/d/Projects/covid-19/sensitivity"
all_scenarios = next(os.walk(output_folder))[1]
# +
def get_param_data(param, scenarios, output_folder):
config_count = len(glob.glob(f'{output_folder}/{scenarios[0]}/*.json'))
df_list = []
for scenario in scenarios:
for i in range(config_count):
config_path = f'{output_folder}/{scenario}/config_{i}.json'
run_path = f'{output_folder}/{scenario}/run_{i}.csv'
with open(config_path) as config_file:
run_config = json.load(config_file)
if run_config['sensitivity_target'] == param:
df = pd.read_csv(run_path, index_col=None, header=0)
df['Strategy'] = scenario
df[param] = run_config['config'][param]
df_list.append(df)
return pd.concat(df_list, axis=0, ignore_index=True)
def plots_for_param(param, df, scenarios, param_label=None, plot_tests=True, save_as=None):
if param_label is None:
param_label = param
colors = cm.rainbow(np.linspace(0, 1, len(scenarios)))
fig = plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
for i, scenario in enumerate(scenarios):
plt.plot(df[df['Strategy']==scenario][param], df[df['Strategy']==scenario]['Reduced R'], c=colors[i], label=scenario);
plt.xlabel(param_label);
plt.ylabel('R');
plt.legend()
plt.subplot(1, 2, 2)
for i, scenario in enumerate(scenarios):
plt.plot(df[df['Strategy']==scenario][param], df[df['Strategy']==scenario]['Tests Needed'], c=colors[i], label=scenario);
plt.xlabel(param_label);
plt.ylabel('Number of tests needed');
plt.legend()
plt.suptitle(f'Sensitivity to {param_label}')
if save_as is not None:
plt.savefig(save_as)
def table_for_param(param, df, param_label=None, as_latex=True):
df = df[[param, 'Reduced R', 'Tests Needed']]
if param_label is None:
param_label = param
df = df.rename(columns={param: param_label, 'Reduced R': 'R'})
if as_latex:
return df.to_latex()
else:
return df
# example usage
df = get_param_data("app_cov", ["L4", "L3"], "tmp_output")
plots_for_param("app_cov", df, param_label="App coverage", scenarios=["L4", "L3"], save_as=None)
table_for_param("app_cov", df, param_label="App coverage", as_latex=False)
# -
parameters = [
{'name': 'testing_delay', 'label': 'Time needed to get test result',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'manual_trace_delay', 'label': 'Time needed to trace contacts without an app',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'app_cov', 'label': 'App uptake',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'trace_adherence', 'label': 'Policy adherence to quarantine on being traced as a contact',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'app_report_prob', 'label': 'Probability of reporting symptoms via app',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'manual_report_prob', 'label': 'Probability of reporting symptoms without app',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'latent_period', 'label': 'Period from getting infected to being infectious',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'met_before_o', 'label': 'Probability the case person met contacts in Other before',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'max_contacts', 'label': 'Max contacts a person can have a day',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
{'name': 'wfh_prob', 'label': 'WFH probability',
'show_plot': True, 'show_table': False,
'plot_pareto': False, 'save_as': None},
]
# +
scenarios_to_plot = all_scenarios
for param in parameters:
df = get_param_data(param['name'], scenarios_to_plot, output_folder)
if param['show_plot']:
plots_for_param(param['name'], df, param_label=param['label'], scenarios=scenarios_to_plot,
save_as=param['save_as'])
if param['show_table']:
        print(table_for_param(param['name'], df, param_label=param['label'], as_latex=param.get('as_latex', True)))  # 'as_latex' is optional in the parameter dicts above
# -
# Source: tti/notebooks/plots/plots.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Large sum
#
# # Problem 13
# Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
#
# 37107287533902102798797998220837590246510135740250
# 46376937677490009712648124896970078050417018260538
# 74324986199524741059474233309513058123726617309629
# 91942213363574161572522430563301811072406154908250
# 23067588207539346171171980310421047513778063246676
# 89261670696623633820136378418383684178734361726757
# 28112879812849979408065481931592621691275889832738
# 44274228917432520321923589422876796487670272189318
# 47451445736001306439091167216856844588711603153276
# 70386486105843025439939619828917593665686757934951
# 62176457141856560629502157223196586755079324193331
# 64906352462741904929101432445813822663347944758178
# 92575867718337217661963751590579239728245598838407
# 58203565325359399008402633568948830189458628227828
# 80181199384826282014278194139940567587151170094390
# 35398664372827112653829987240784473053190104293586
# 86515506006295864861532075273371959191420517255829
# 71693888707715466499115593487603532921714970056938
# 54370070576826684624621495650076471787294438377604
# 53282654108756828443191190634694037855217779295145
# 36123272525000296071075082563815656710885258350721
# 45876576172410976447339110607218265236877223636045
# 17423706905851860660448207621209813287860733969412
# 81142660418086830619328460811191061556940512689692
# 51934325451728388641918047049293215058642563049483
# 62467221648435076201727918039944693004732956340691
# 15732444386908125794514089057706229429197107928209
# 55037687525678773091862540744969844508330393682126
# 18336384825330154686196124348767681297534375946515
# 80386287592878490201521685554828717201219257766954
# 78182833757993103614740356856449095527097864797581
# 16726320100436897842553539920931837441497806860984
# 48403098129077791799088218795327364475675590848030
# 87086987551392711854517078544161852424320693150332
# 59959406895756536782107074926966537676326235447210
# 69793950679652694742597709739166693763042633987085
# 41052684708299085211399427365734116182760315001271
# 65378607361501080857009149939512557028198746004375
# 35829035317434717326932123578154982629742552737307
# 94953759765105305946966067683156574377167401875275
# 88902802571733229619176668713819931811048770190271
# 25267680276078003013678680992525463401061632866526
# 36270218540497705585629946580636237993140746255962
# 24074486908231174977792365466257246923322810917141
# 91430288197103288597806669760892938638285025333403
# 34413065578016127815921815005561868836468420090470
# 23053081172816430487623791969842487255036638784583
# 11487696932154902810424020138335124462181441773470
# 63783299490636259666498587618221225225512486764533
# 67720186971698544312419572409913959008952310058822
# 95548255300263520781532296796249481641953868218774
# 76085327132285723110424803456124867697064507995236
# 37774242535411291684276865538926205024910326572967
# 23701913275725675285653248258265463092207058596522
# 29798860272258331913126375147341994889534765745501
# 18495701454879288984856827726077713721403798879715
# 38298203783031473527721580348144513491373226651381
# 34829543829199918180278916522431027392251122869539
# 40957953066405232632538044100059654939159879593635
# 29746152185502371307642255121183693803580388584903
# 41698116222072977186158236678424689157993532961922
# 62467957194401269043877107275048102390895523597457
# 23189706772547915061505504953922979530901129967519
# 86188088225875314529584099251203829009407770775672
# 11306739708304724483816533873502340845647058077308
# 82959174767140363198008187129011875491310547126581
# 97623331044818386269515456334926366572897563400500
# 42846280183517070527831839425882145521227251250327
# 55121603546981200581762165212827652751691296897789
# 32238195734329339946437501907836945765883352399886
# 75506164965184775180738168837861091527357929701337
# 62177842752192623401942399639168044983993173312731
# 32924185707147349566916674687634660915035914677504
# 99518671430235219628894890102423325116913619626622
# 73267460800591547471830798392868535206946944540724
# 76841822524674417161514036427982273348055556214818
# 97142617910342598647204516893989422179826088076852
# 87783646182799346313767754307809363333018982642090
# 10848802521674670883215120185883543223812876952786
# 71329612474782464538636993009049310363619763878039
# 62184073572399794223406235393808339651327408011116
# 66627891981488087797941876876144230030984490851411
# 60661826293682836764744779239180335110989069790714
# 85786944089552990653640447425576083659976645795096
# 66024396409905389607120198219976047599490197230297
# 64913982680032973156037120041377903785566085089252
# 16730939319872750275468906903707539413042652315011
# 94809377245048795150954100921645863754710598436791
# 78639167021187492431995700641917969777599028300699
# 15368713711936614952811305876380278410754449733078
# 40789923115535562561142322423255033685442488917353
# 44889911501440648020369068063960672322193204149535
# 41503128880339536053299340368006977710650566631954
# 81234880673210146739058568557934581403627822703280
# 82616570773948327592232845941706525094512325230608
# 22918802058777319719839450180888072429661980811197
# 77158542502016545090413245809786882778948721859617
# 72107838435069186155435662884062257473692284509516
# 20849603980134001723930671666823555245252804609722
# 53503534226472524250874054075591789781264330331690
long = 371072875339021027987979982208375902465101357402504637693767749000971264812489697007805041701826053874324986199524741059474233309513058123726617309629919422133635741615725224305633018110724061549082502306758820753934617117198031042104751377806324667689261670696623633820136378418383684178734361726757281128798128499794080654819315926216912758898327384427422891743252032192358942287679648767027218931847451445736001306439091167216856844588711603153276703864861058430254399396198289175936656867579349516217645714185656062950215722319658675507932419333164906352462741904929101432445813822663347944758178925758677183372176619637515905792397282455988384075820356532535939900840263356894883018945862822782880181199384826282014278194139940567587151170094390353986643728271126538299872407844730531901042935868651550600629586486153207527337195919142051725582971693888707715466499115593487603532921714970056938543700705768266846246214956500764717872944383776045328265410875682844319119063469403785521777929514536123272525000296071075082563815656710885258350721458765761724109764473391106072182652368772236360451742370690585186066044820762120981328786073396941281142660418086830619328460811191061556940512689692519343254517283886419180470492932150586425630494836246722164843507620172791803994469300473295634069115732444386908125794514089057706229429197107928209550376875256787730918625407449698445083303936821261833638482533015468619612434876768129753437594651580386287592878490201521685554828717201219257766954781828337579931036147403568564490955270978647975811672632010043689784255353992093183744149780686098448403098129077791799088218795327364475675590848030870869875513927118545170785441618524243206931503325995940689575653678210707492696653767632623544721069793950679652694742597709739166693763042633987085410526847082990852113994273657341161827603150012716537860736150108085700914993951255702819874600437535829035317434717326932123578154982629742552737307949537597651053059469660676831565743771674018752758890280257173322961917666871381993181104877019027125267680276078003013678680992525463401061632866526362702185404977055856299465806362379931407462559622407448690823117497779236546625724692332281091714191430288197103288597806669760892938638285025333403344130655780161278159218150055618688364684200904702305308117281643048762379196984248725503663878458311487696932154902810424020138335124462181441773470637832994906362596664985876182212252255124867645336772018697169854431241957240991395900895231005882295548255300263520781532296796249481641953868218774760853271322857231104248034561248676970645079952363777424253541129168427686553892620502491032657296723701913275725675285653248258265463092207058596522297988602722583319131263751473419948895347657455011849570145487928898485682772607771372140379887971538298203783031473527721580348144513491373226651381348295438291999181802789165224310273922511228695394095795306640523263253804410005965493915987959363529746152185502371307642255121183693803580388584903416981162220729771861582366784246891579935329619226246795719440126904387710727504810239089552359745723189706772547915061505504953922979530901129967519861880882258753145295840992512038290094077707756721130673970830472448381653387350234084564705807730882959174767140363198008187129011875491310547126581976233310448183862695154563349263665728975634005004284628018351707052783183942588214552122725125032755121603546981200581762165212827652751691296897789322381957343293399464375019078369457658833523998867550616496518477518073816883786109152735792970133762177842752192623401942399639168044983993173312731329241857071473495669166746876346609150359146775049951867143023521962889489010242332511691361962662273267460800591547471830798392868535206946944540724768418225246744171615140364279822733480555562148189714261791034259864720451689398942217982608807685287783646182799346313767754307809363333018982642090108488025216746708832151201858835432238128769527867132961247478246453863699300904931036361976387803962184073572399794223406235393808339651327408011116662789198148808779794187687614423003098449085141160661826293682836764744779239180335110989069790714857869440895529906536404474255760836599766457950966602439640990538960712019821997604759949019723029764913982680032973156037120041377903785566085089252167309393198727502754689069037075394130426523150119480937724504879515095410092164586375471059843679178639167021187492431995700641917969777599028300699153687137119366149528113058763802784107544497330784078992311553556256114232242325503368544248891735344889911501440648020369068063960672322193204149535415031288803395360532993403680069777106505666319548123488067321014673905856855793458140362782270328082616570773948327592232845941706525094512325230608229188020587773197198394501808880724296619808111977715854250201654509041324580978688277894872185961772107838435069186155435662884062257473692284509516208496039801340017239306716668235552452528046097225350353422647252425087405407559178978126433033169 0 if False else long  # (kept on one line; the literal was wrapped across lines in the original dump)
len(str(long))
l_str = str(long)
print(l_str)
ll = [int(d) for d in l_str]
len(ll)
def add_digits(l):
s = 0
for i in l:
s += i
return s
add_digits(ll)
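# Note: the problem asks for the first ten digits of the *sum* of the one hundred
# numbers, which Python's arbitrary-precision integers handle directly. A minimal
# sketch, shown with just the first two addends from the list above:

```python
addends = [
    37107287533902102798797998220837590246510135740250,
    46376937677490009712648124896970078050417018260538,
    # ...the remaining 98 numbers from the problem statement
]
first_ten = str(sum(addends))[:10]
```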
# Source: solutions/S0013.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Latent Factor DCMM Analysis
# This example highlights several advanced modeling strategies.
#
# **Dynamic Count Mixture Model**
#
# A Dynamic Count Mixture Model (DCMM) is the combination of Bernoulli and Poisson DGLMs. The Bernoulli DGLM models the probability of a zero outcome, while the Poisson models the outcome, conditional on it being non-zero (Berry and West, 2019):
# \begin{equation} \label{eqn-dcmm}
# z_t \sim Ber(\pi_t) \text{ and } y_t \mid z_t =
# \begin{cases}
# 0, & \text{if } z_t = 0,\\
# 1 + x_t, \quad x_t \sim Po(\mu_t), & \text{if }z_t = 1
# \end{cases}
# \end{equation}
# where $\pi_t$ and $\mu_t$ vary according to the dynamics of independent Bernoulli and Poisson DGLMs respectively:
# \begin{equation*}
# \text{logit}(\pi_t) = \mathbf{F}_{ber, t}^{'}\boldsymbol{\theta}_{ber, t} \qquad \text{and} \qquad \text{log}(\mu_t) = \mathbf{F}_{Po, t}^{'} \boldsymbol{\theta}_{Po, t}
# \end{equation*}
# A DCMM is useful to capture a higher prevalence of $0$ outcomes than in a standard Poisson DGLM.
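# The observation equation above can be simulated directly. The sketch below
# is not PyBATS code; it is a minimal NumPy illustration of how a DCMM
# generates zero-inflated counts (the `pi` and `mu` values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dcmm(pi, mu, size):
    """Draw from the DCMM observation model: 0 w.p. 1-pi, else 1 + Poisson(mu)."""
    z = rng.binomial(1, pi, size)   # Bernoulli DGLM outcome
    x = rng.poisson(mu, size)       # shifted Poisson part
    return np.where(z == 1, 1 + x, 0)

y = sample_dcmm(pi=0.4, mu=2.0, size=10_000)
# Roughly 60% of the draws are zero, and every non-zero outcome is at least 1.
```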
# **Latent Factors**
#
# The second concept is the use of *latent factors* for multiscale modeling. This example features simulated data from a retail sales setting. There is data for an item, and also for total sales at the store. The total sales are smoother and more predictable than the item level sales. A model is fit to the total sales which includes a day-of-week seasonal effect. This seasonal effect is extracted from the model on total sales, and used as a predictor in the item level model.
#
# Modeling effects at different levels of a hierarchy - aka multiscale modeling - can improve forecast accuracy, because the sales of an individual item can be very noisy, making it difficult to learn accurate patterns at that level.
# **Copula Forecasting**
#
# The final concept is Copula forecasting. In this example we are focusing on forecasting sales *1* through *14* days into the future. To do this, we will simulate from the joint forecast distribution *1:14* days into the future. This is accomplished in an efficient manner through the use of a Copula model, which accounts for dependence in the forecasts across days.
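# PyBATS performs the copula step internally; purely as an illustration, a toy
# Gaussian copula producing dependent count samples across horizons might look
# like the sketch below (the marginal means and latent correlation are made-up
# values, not taken from this analysis):

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(1)

def gaussian_copula_poisson(mu, corr, nsamps):
    """Draw dependent Poisson samples across horizons via a Gaussian copula."""
    k = len(mu)
    cov = np.full((k, k), corr) + (1 - corr) * np.eye(k)   # unit diagonal
    z = rng.multivariate_normal(np.zeros(k), cov, size=nsamps)
    u = norm.cdf(z)                        # correlated uniform marginals
    return poisson.ppf(u, mu).astype(int)  # invert each Poisson marginal

samps = gaussian_copula_poisson(mu=np.array([3.0, 3.0, 3.0]), corr=0.8, nsamps=5000)
```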
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pybats_nbdev.analysis import analysis, analysis_dcmm
from pybats_nbdev.latent_factor import seas_weekly_lf
from pybats_nbdev.shared import load_dcmm_latent_factor_example
from pybats_nbdev.plot import plot_data_forecast
from pybats_nbdev.point_forecast import median
from pybats_nbdev.loss_functions import MAD
# We start by loading the simulated retail sales data.
#
# The dataframe 'totaldata' has the total daily sales in a store, along with a predictor, which represents a measure of average price.
#
# The dataframe 'data' has the sales of a single item, along with a predictor which represents the daily change in price.
data = load_dcmm_latent_factor_example()
totaldata, data = data.values()
totaldata['Y'] = np.log(totaldata['Y'] + 1)
totaldata.head()
data.head()
# Here we define hyper parameters for the analysis:
# - *rho* is a discount factor which calibrates the forecast distribution. A value of *rho* less than $1$ increases the forecast variance.
#
# - *k* is the number of days ahead to forecast.
#
# - *nsamps* is the number of forecast samples to draw.
#
# - *prior_length* is the number of days of data to use when defining the priors for model coefficients.
#Define hyper parameters
rho = .2
k = 14 # Number of days ahead that we will forecast
nsamps = 100
prior_length = 21
# Next, we define the window of time that we want to forecast over. For each day within this time period, the model will sample from the path (joint) forecast distribution *1:k* days into the future.
# Define forecast range for final year of data
T = len(totaldata)
forecast_end_date = totaldata.index[-k]
forecast_start_date = forecast_end_date - pd.DateOffset(days=365)
# The first analysis is run on the 'totaldata', which is used to learn a latent factor. In this case, we fit a normal DLM to the log of total sales in order to learn the weekly season effect, which is stored in the latent factor.
# Get multiscale signal (a latent factor) from higher level log-normal model
latent_factor = analysis(totaldata['Y'].values, totaldata['X'].values, k,
forecast_start_date, forecast_end_date, dates=totaldata.index,
seasPeriods=[7], seasHarmComponents=[[1,2,3]],
family="normal", ret=['new_latent_factors'], new_latent_factors= [seas_weekly_lf],
prior_length=prior_length)
# The second analysis fits a DCMM to 'data', which is the simulated sales of a single item. The latent factor we derived above is used as a predictive signal for the sales of this individual item. On each day within the forecasting period, a copula is used to draw joint forecast samples *1:k* days ahead.
# +
# Update and forecast the model
mod, forecast_samples = analysis_dcmm(data['Y'].values, data['X'].values.reshape(-1,1), k,
forecast_start_date, forecast_end_date,
prior_length=prior_length, nsamps=nsamps, rho=rho,
latent_factor=latent_factor, dates=totaldata.index)
forecast = median(forecast_samples)
# -
# Finally we can examine the results, first by plotting both the *1-* and *14-* step ahead forecasts. The point forecast is the median (blue line), and the forecast samples provide easy access to 95% credible intervals.
# +
# Plotting 1-step ahead forecasts
horizon = 1
plot_length = 50
fig, ax = plt.subplots(figsize=(8,4))
start_date = forecast_end_date + pd.DateOffset(horizon - plot_length)
end_date = forecast_end_date + pd.DateOffset(horizon - 1)
ax = plot_data_forecast(fig, ax, data.loc[start_date:end_date].Y,
forecast[-plot_length:,horizon - 1],
forecast_samples[:,-plot_length:,horizon - 1],
data.loc[start_date:end_date].index,
linewidth = 2)
# +
# Plotting 14-step ahead forecasts
horizon = 14
plot_length = 50
fig, ax = plt.subplots(figsize=(8,4))
start_date = forecast_end_date + pd.DateOffset(horizon - plot_length)
end_date = forecast_end_date + pd.DateOffset(horizon - 1)
ax = plot_data_forecast(fig, ax, data.loc[start_date:end_date].Y,
forecast[-plot_length:,horizon - 1],
forecast_samples[:,-plot_length:,horizon - 1],
data.loc[start_date:end_date].index,
linewidth = 2)
# -
# To evaluate the point forecasts, we can look at the mean absolute deviation (MAD) between the forecast median and the observations across forecast horizons. Interestingly, there is only a small increase in the MAD for longer forecast horizons.
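# PyBATS ships its own `MAD` helper (imported above); as a reference, the
# metric it computes is simply the mean absolute error between observations
# and point forecasts:

```python
import numpy as np

def mad(y, f):
    """Mean absolute deviation between observations y and point forecasts f."""
    return np.mean(np.abs(np.asarray(y) - np.asarray(f)))

mad([2, 0, 5], [1, 0, 7])  # (1 + 0 + 2) / 3 = 1.0
```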
# Mean absolute deviation at increasing forecast horizons
horizons = list(range(1, k+1))
list(map(lambda k: MAD(data.loc[forecast_start_date + pd.DateOffset(k-1):forecast_end_date + pd.DateOffset(k-1)].Y,
forecast[:,k-1]),
horizons))
| examples/DCMM Latent Factor Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Census income classification with scikit-learn
#
# This example uses the standard <a href="https://archive.ics.uci.edu/ml/datasets/Adult">adult census income dataset</a> from the UCI machine learning data repository. We train a k-nearest neighbors classifier using scikit-learn and then explain the predictions.
import sklearn
import shap
# ## Load the census data
X,y = shap.datasets.adult()
X["Occupation"] *= 1000 # to show the impact of feature scale on KNN predictions
X_display,y_display = shap.datasets.adult(display=True)
X_train, X_valid, y_train, y_valid = sklearn.model_selection.train_test_split(X, y, test_size=0.2, random_state=7)
# ## Train a k-nearest neighbors classifier
#
# Here we just train directly on the data, without any normalizations.
knn = sklearn.neighbors.KNeighborsClassifier()
knn.fit(X_train, y_train)
# ### Explain predictions
#
# Normally we would use a logit link function to allow the additive feature inputs to better map to the model's probabilistic output space, but KNNs can produce infinite log odds ratios, so we don't for this example.
#
# It is important to note that Occupation is the dominant feature in the 1000 predictions we explain. This is because it has larger variations in value than the other features and so it impacts the k-nearest neighbors calculations more.
# +
f = lambda x: knn.predict_proba(x)[:,1]
med = X_train.median().values.reshape((1,X_train.shape[1]))
explainer = shap.Explainer(f, med)
shap_values = explainer(X_valid.iloc[0:1000,:])
# -
shap.plots.waterfall(shap_values[0])
# A summary beeswarm plot is an even better way to see the relative impact of all features over the entire dataset. Features are sorted by the sum of their SHAP value magnitudes across all samples.
shap.plots.beeswarm(shap_values)
# A heatmap plot provides another global view of the model's behavior, this time with a focus on population subgroups.
shap.plots.heatmap(shap_values)
# ## Normalize the data before training the model
#
# Here we retrain a KNN model on standardized data.
# normalize data
dtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))
X_train_norm = X_train.copy()
X_valid_norm = X_valid.copy()
for k,dtype in dtypes:
m = X_train[k].mean()
s = X_train[k].std()
X_train_norm[k] -= m
X_train_norm[k] /= s
X_valid_norm[k] -= m
X_valid_norm[k] /= s
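# As an aside, the per-column loop above is close to what scikit-learn's
# StandardScaler does. One difference worth knowing: pandas' `.std()` uses
# ddof=1, while StandardScaler standardizes with the population std (ddof=0),
# so the two results differ slightly. A minimal sketch on toy data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_demo = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = StandardScaler()
X_norm_demo = scaler.fit_transform(X_demo)  # per-column zero mean, unit (population) std
```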
knn_norm = sklearn.neighbors.KNeighborsClassifier()
knn_norm.fit(X_train_norm, y_train)
# ### Explain predictions
#
# When we explain predictions from the new KNN model we find that Occupation is no longer the dominant feature; instead, more predictive features, such as marital status, drive most predictions. This is a simple example of how explaining why your model is making its predictions can uncover problems in the training process.
# +
f = lambda x: knn_norm.predict_proba(x)[:,1]
med = X_train_norm.median().values.reshape((1,X_train_norm.shape[1]))
explainer = shap.Explainer(f, med)
shap_values_norm = explainer(X_valid_norm.iloc[0:1000,:])
# -
# With a summary plot we see that marital status is the most important feature on average, but other features (such as capital gain) can have more impact on a particular individual.
shap.summary_plot(shap_values_norm, X_valid.iloc[0:1000,:])
# A dependence scatter plot shows how the number of years of education increases the chance of making over 50K annually.
shap.plots.scatter(shap_values_norm[:,"Education-Num"])
# <hr>
# Have an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!
| notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/JerKeller/2022_ML_Earth_Env_Sci/blob/main/Daphnia_Class.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="jGpPP6148Xl4"
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten,Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.applications.vgg19 import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
import numpy as np
import pandas as pd
import os
import cv2
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/"} id="m20nOyhNC7E8" outputId="5169aedd-ea14-4ebc-cdb7-5642971ddc46"
from google.colab import drive
drive.mount('/content/drive')
# + id="yFYHoO3f92Vc"
train_path="/content/drive/MyDrive/Colab Notebooks/FinalData/Train"
test_path="/content/drive/MyDrive/Colab Notebooks/FinalData/Test"
validation_path="/content/drive/MyDrive/Colab Notebooks/FinalData/Val"
# + id="Bb2VuY7t_QLN"
IMAGE_SIZE = [224, 224]
# + id="3e7rgrul-K1j"
x_train=[]
for folder in os.listdir(train_path):
sub_path=train_path+"/"+folder
for img in os.listdir(sub_path):
image_path=sub_path+"/"+img
img_arr=cv2.imread(image_path)
img_arr=cv2.resize(img_arr,(224,224))
x_train.append(img_arr)
# + id="eTnyBdWN_Rre"
x_test=[]
for folder in os.listdir(test_path):
sub_path=test_path+"/"+folder
for img in os.listdir(sub_path):
image_path=sub_path+"/"+img
img_arr=cv2.imread(image_path)
img_arr=cv2.resize(img_arr,(224,224))
x_test.append(img_arr)
# + id="my0bdJiPHQT5"
x_val=[]
for folder in os.listdir(validation_path):
sub_path=validation_path+"/"+folder
for img in os.listdir(sub_path):
image_path=sub_path+"/"+img
img_arr=cv2.imread(image_path)
img_arr=cv2.resize(img_arr,(224,224))
x_val.append(img_arr)
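# The three loading loops above repeat the same logic. One way to consolidate
# them is a small helper; this is a refactoring sketch, not code from the
# original notebook, and the `read`/`resize` hooks are injectable so the
# function can be exercised without real image files:

```python
import os
import numpy as np

def load_images(root, size=(224, 224), read=None, resize=None):
    """Collect every image under root/<class>/ as one resized array."""
    if read is None or resize is None:
        import cv2  # only needed for the default readers
        read = read or cv2.imread
        resize = resize or (lambda arr: cv2.resize(arr, size))
    images = []
    for folder in sorted(os.listdir(root)):
        sub_path = os.path.join(root, folder)
        for name in os.listdir(sub_path):
            images.append(resize(read(os.path.join(sub_path, name))))
    return np.array(images)
```

# With this helper, the three cells above reduce to
# `x_train = load_images(train_path)` etc.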
# + id="R4ITofpmHtm2"
train_x=np.array(x_train)
test_x=np.array(x_test)
val_x=np.array(x_val)
# + colab={"base_uri": "https://localhost:8080/"} id="Uqn8uGX6Hwdw" outputId="91bec4fb-5408-4ea7-da49-e1e0c0bee01c"
train_x.shape,test_x.shape,val_x.shape
# + id="bwg-RLKTPucn"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# + id="_q0imAKJQRj0"
train_x=train_x/255  # scale pixel values to [0, 1]
test_x=test_x/255
val_x=val_x/255
# + colab={"base_uri": "https://localhost:8080/"} id="6Yz0c2E5PwqV" outputId="d0ba3c31-f693-401a-c7fc-53c1db13917b"
train_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)
val_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory(train_path,
target_size = (224, 224),
batch_size = 32,
class_mode = 'sparse')
test_set = test_datagen.flow_from_directory(test_path,
target_size = (224, 224),
batch_size = 32,
class_mode = 'sparse')
val_set = val_datagen.flow_from_directory(validation_path,
target_size = (224, 224),
batch_size = 32,
class_mode = 'sparse')
# + colab={"base_uri": "https://localhost:8080/"} id="ygYtq-vuQ0Vz" outputId="e500bae2-05e2-4328-8c95-7639cce44873"
training_set.class_indices
# + colab={"base_uri": "https://localhost:8080/"} id="ID3MsccRQ3gc" outputId="9025cf7f-d445-464c-f918-baaa4a820708"
train_y=training_set.classes
test_y=test_set.classes
val_y=val_set.classes
train_y.shape,test_y.shape,val_y.shape
# + colab={"base_uri": "https://localhost:8080/"} id="NR7xahpJRDsk" outputId="bb68c76f-2bd2-4822-b320-c39b08a5642e"
vgg = VGG19(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
# + id="qZyBaI0cRI9m"
for layer in vgg.layers:
layer.trainable = False
# + id="VdcceoT-RMtq"
x = Flatten()(vgg.output)
prediction = Dense(3, activation='softmax')(x)
# + colab={"base_uri": "https://localhost:8080/"} id="KYqz9RmjRQiO" outputId="b9201675-eb50-4cc0-9f70-96c8b3406f02"
# create a model object
model = Model(inputs=vgg.input, outputs=prediction)
# view the structure of the model
model.summary()
# + id="pUwzb1iYRcMq"
model.compile(
loss='sparse_categorical_crossentropy',
optimizer="adam",
metrics=['accuracy']
)
# + id="rseQlqAqRe_N"
from tensorflow.keras.callbacks import EarlyStopping
early_stop=EarlyStopping(monitor='val_loss',mode='min',verbose=1,patience=5)
# + colab={"base_uri": "https://localhost:8080/"} id="Cz13NRi9Rir6" outputId="abb52439-3908-4772-a1e3-4555e2d85da9"
history = model.fit(
train_x,
train_y,
validation_data=(val_x,val_y),
epochs=10,
callbacks=[early_stop],
batch_size=32,shuffle=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="FPNdw6-Jf781" outputId="5194f6eb-1522-4f96-8d21-b7f17b556fd6"
# loss
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.legend()
plt.title("Training/Validation Loss")
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.savefig('loss.png')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="hwK46YXcgGBy" outputId="e3811fc3-7ab9-48fa-cd66-6ba1c74c272e"
# accuracies
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.legend()
plt.title("Training/Validation Accuracy")
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.savefig('accuracy.png')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="QTrjeGNzgSFW" outputId="3d6a5d63-8779-4e6f-c923-9f541fa3364c"
model.evaluate(test_x,test_y,batch_size=32)
# + id="k3_6S7tPg5RJ"
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
import warnings
warnings.filterwarnings('always') # "error", "ignore", "always", "default", "module" or "once"
# + colab={"base_uri": "https://localhost:8080/"} id="JGanPVmHhGdk" outputId="c694000f-7cd4-45e5-c69c-7f6e9f347a1c"
y_pred=model.predict(test_x)
y_pred=np.argmax(y_pred,axis=1)
accuracy_score(y_pred,test_y)
# + colab={"base_uri": "https://localhost:8080/"} id="3GrziCewh9K2" outputId="601ec7ab-dbe6-4603-ea84-e8b928cd64c9"
print(classification_report(y_pred,test_y))
# + colab={"base_uri": "https://localhost:8080/"} id="VR8witfgiiDA" outputId="b5b20feb-ed6f-4e76-cd23-f3b3f3c2af86"
confusion_matrix(y_pred,test_y)
# + id="Kn9NE-HRpe7y"
from sklearn.metrics import ConfusionMatrixDisplay
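# The import above is never used in the notebook; a natural next step is to
# render the confusion matrix. The labels below are placeholders -- in this
# notebook they would come from `training_set.class_indices`:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs in scripts too
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Hypothetical demo labels for a 3-class problem
y_true_demo = np.array([0, 1, 2, 2, 1, 0])
y_pred_demo = np.array([0, 1, 2, 1, 1, 0])
cm = confusion_matrix(y_true_demo, y_pred_demo)
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot(cmap="Blues")
plt.close("all")
```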
| Daphnia_Class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
reviews=pd.read_csv('S9_pos_clean_reviews.csv')
words=''
# reviews is a DataFrame; iterate over the review texts in the first column
# (iterating the DataFrame itself would only yield column names)
for doc in reviews.iloc[:, 0].astype(str):
    words += ' ' + doc
words=words.strip()
from collections import Counter
cf=Counter(word_tokenize(words))
cf.most_common(100)
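# The raw top-100 list is usually dominated by high-frequency function words.
# A sketch of filtering before counting -- the stopword set here is a tiny
# illustrative stand-in (a real analysis might use nltk's stopword list):

```python
from collections import Counter

stop = {"the", "a", "and", "is", "it"}  # hypothetical mini stopword set
tokens = "the phone is great and the battery is great".split()
cf_demo = Counter(t for t in tokens if t not in stop and len(t) > 2)
cf_demo.most_common(1)  # [('great', 2)]
```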
import matplotlib.pyplot as plt
from wordcloud import WordCloud
def wordcloud_draw(data, color = 'white'):
wordcloud = WordCloud(
background_color=color,
width=2500,
height=2000
).generate(data)
plt.figure(1,figsize=(13, 13))
plt.imshow(wordcloud)
plt.title('WordCloud_Neutral')
plt.savefig('neut_WC.jpg')
plt.close()
wordcloud_draw(words)
| Samsung S9/.ipynb_checkpoints/Positive_WordCloud_Code-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # Edit diff rather than edit distance
#
# OCR correction is hard, because in a typical corpus there are various sources of variations in words:
#
# * morphological variation: `gaan`, `gaat`, `gegaan`, `vergaan`, `ging`
# * spelling variation: `aan`, `aen`, `coninc`, `coningh`
# * OCR mistakes: `verteert`, `verteerl`, `schip`, `scbip`
#
# If we use edit *distances* to find candidates for OCR corrections, we have too little information.
# OCR mistakes can be frequent, more so than certain spelling variations and morphological variants.
# So it is impossible to separate them on purely quantitative measures.
#
# In this notebook we add more *quality* to the ways in which words differ from each other.
# The basic intuition is that we encode changes between two words in such a way that the same kind of change
# is always represented by the same code, no matter in which word pair the change occurs.
#
# For example, the change between the following pairs
#
# * `haalt` - `gehaald`
# * `rent` - `gerend`
# * `boer` - `geboerd`
#
# is identical and can be coded as `- t| + |ge.d|`
#
# This shorthand can be read as:
#
# * remove the final `t`
# * add initial `ge`
# * add final `d`.
# + [markdown] tags=[]
# # Application to Daghregisters
#
# We apply this idea to volume 4 of the Daghregisters.
#
# Usually, it does not scale to compute all possible word comparisons.
# But we can make that much lighter by restricting ourselves to
# a subset of non-rare, non-short words, because the bulk of morphological
# and spelling variation will be witnessed by such a subset.
# Moreover, within that subset we can restrict ourselves to those pairs with a limited edit distance.
# Computing the distance is much cheaper than computing the diff itself, so we reserve the diff
# computation to a relatively small subset of the space of word pairs, as we shall see.
#
# Then we list these diff-codes by frequency, like this:
#
# ```
# # freq diff examples
# ---------------------------------------------------------------------------------------------------------
# 1: 148 - + n| Bengale~Bengalen, Chineese~Chineesen, Chinese~Chinesen
# 2: 112 - + en| Chinees~Chineesen, Christen~Christenen, Engels~Engelsen
# 3: 79 - + e| Atchins~Atchinse, Chinees~Chineese, Engels~Engelse
# 4: 61 - + de| becomen~becomende, bedragen~bedragende, begon~begonde
# 5: 61 - e + Chineese~Chinese, Chineesen~Chinesen, Deenen~Denen
# 6: 53 - + s| Atchin~Atchins, Berckelangh~Berckelanghs, Coninc~Conincs
# 7: 47 - en| + t| adviseeren~adviseert, antwoorden~antwoordt, arriveeren~arriveert
# 8: 36 - |ge + gehouden~houden, gekeert~keert, geladen~laden
# 9: 32 - + |ver antwoorden~verantwoorden, bieden~verbieden, brant~verbrant
# 10: 28 - + e Nederlanderen~Neederlanderen, Nederlanders~Neederlanders, Nederlantse~Neederlantse
# 11: 28 - c + g Almachtich~Almachtigh, affgevaerdicht~affgevaerdight, cleynicheden~cleynigheden
# ```
#
# The idea is that the low-frequency diffs are unimportant, but that the higher-frequency diffs are either
# morphology, spelling variation, or frequent OCR errors.
#
# Having all these diffs in a compact overview is useful.
#
# We can classify these diffs into morphology/variation/ocr-error.
#
# Then we can go through all low-frequency words and diff them with neighbouring higher frequency words and see
# whether they are a morphology/spelling/ocr variant of a higher frequency word.
#
# Depending on what is the case, we can take action.
# -
# In order to produce a handy diff between words, we use the function
# [editops(word1, word2)](https://rawgit.com/ztane/python-Levenshtein/master/docs/Levenshtein.html)
# of the
# [Levenshtein module](https://pypi.org/project/python-Levenshtein/#documentation).
#
# It gives a sequence of edit operations to change the word1 into word2.
# We take that sequence and represent it by identifying which pieces of word1 must be left out and which pieces of word2
# must be added.
# We also mark whether these pieces occur at the begin or end of word1/word2.
# If several non-adjacent pieces have to be added or deleted we separate them by a `.`
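# If the Levenshtein package is unavailable, the standard library's `difflib`
# exposes comparable edit operations. The sketch below illustrates the same
# idea on one of the pairs from the introduction; it is not the code used
# further down:

```python
from difflib import SequenceMatcher

# Each opcode is (tag, i1, i2, j1, j2) over source and destination slices,
# playing a similar role to Levenshtein.editops triples.
ops = SequenceMatcher(None, "haalt", "gehaald").get_opcodes()
for tag, i1, i2, j1, j2 in ops:
    print(tag, "haalt"[i1:i2], "->", "gehaald"[j1:j2])
```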
#
# You might need to do
# %pip3 install python-levenshtein
# %pip3 install text-fabric
# %pip3 install numpy
# %pip3 install matplotlib
# +
import sys
import os
import collections
from math import log2
from Levenshtein import distance, editops
import numpy as np
import matplotlib.pyplot as plt
from tf.app import use
from tf.core.helpers import unexpanduser
# -
# # Graphics of frequency distributions
#
# We use matplotlib to show the distribution of the diffs.
# %matplotlib inline
plt.rcParams.update({'figure.figsize':(7,5), 'figure.dpi':100})
def showDist(freqs, maxBins=100, title="Frequencies", xitem="log-frequency", yitem="words"):
    values = np.fromiter((log2(f) for f in freqs.values()), float)
fig, ax = plt.subplots()
ax.hist(values, bins=maxBins)
plt.gca().set(title=title, ylabel=yitem, xlabel=xitem)
plt.show()
nHapax = sum(1 for f in freqs.values() if f == 1)
print(f"{len(freqs)} {yitem} of which {nHapax} with frequency 1")
# # Corpus loading
#
# We load volume 4 of the Daghregisters, which we have in Text-Fabric format.
# Text-Fabric downloads it and loads it into memory as a result of the following call.
# (If you run it a second time, it will go much faster, because in the first run several
# features will be precomputed for the corpus and stored in binary form).
#
# You get an extra global var `F` which gives you access to the feature information of the corpus.
# A = use("CLARIAH/wp6-daghregisters:clone", checkout="clone", hoist=globals())
A = use("CLARIAH/wp6-daghregisters", hoist=globals())
# We fetch all words.
#
# We walk over all slot nodes, and for each slot node we retrieve the feature `letters` which contains the text of the
# word at that node.
#
# We then store the word in a dictionary, keyed by its text, and valued by the sequence of slot nodes
# that have the same text.
# +
wordOccs = collections.defaultdict(list)
for w in range(1, F.otype.maxSlot + 1):
wordOccs[F.letters.v(w)].append(w)
print(f"{F.otype.maxSlot} word occurrences of {len(wordOccs)} distinct words")
# -
# This is all we need from Text-Fabric for now.
#
# Should we need context information of certain words, the nodes provide a handle to that.
#
# Here is an example.
exampleWords = ("Laola", "Laala")
exampleOccs = [wordOccs[word] for word in exampleWords]
exampleOccs
# Here are the pages/lines of these occurrences:
for occs in exampleOccs:
for occ in occs:
line = L.u(occ, otype="line")[0]
A.plain(line, highlights={occ})
# # Common words
#
# We want to grab the words that are reasonably long (at least `SIZE_THR` characters)
# and not too rare (at least `FREQ_THR` occurrences)
FREQ_THR = 6
SIZE_THR = 5
DIST_THR = 5
WORDS_COMMON = sorted(
word
for (word, occs) in wordOccs.items()
if len(occs) >= FREQ_THR and len(word) >= SIZE_THR
)
print(f"{len(WORDS_COMMON)} common words")
# # Comparing words
#
# We are going to compare the common words pairwise, and we store all pairs that are not too far apart
# (their edit distance at most `DIST_THR`).
# +
WORD_USED = set()
WORD_MATRIX = collections.defaultdict(dict)
nWords = len(WORDS_COMMON)
total = nWords * (nWords - 1) // 2
print(f"Computing {total} comparisons")
k = 0
c = 0
nPairs = 0
chunkSize = int(round(total / 100))
for i in range(nWords - 1):
word1 = WORDS_COMMON[i]
for j in range(i + 1, nWords):
if c == chunkSize:
c = 0
sys.stdout.write(f"\r{k:>9} = {int(round(k / chunkSize)):>3} %")
k += 1
c += 1
word2 = WORDS_COMMON[j]
dist = distance(word1, word2)
if dist <= DIST_THR:
nPairs += 1
WORD_MATRIX[word1][word2] = dist
WORD_USED.add(word1)
WORD_USED.add(word2)
sys.stdout.write(f"\r{k:>9} = {int(round(k / chunkSize)):>3} %")
print(f"\nStored {nPairs} word pairs between {len(WORD_USED)} words")
# -
# # Coding the diff
#
# Here is the tricky part: we code the difference between a pair of words as a bunch of strings
# in which they differ.
# +
OPS = dict(replace="#", insert="+", delete="-")
def codeDiff(source, dest):
ops = collections.defaultdict(list)
for (op, iS, iD) in editops(source, dest):
abb = OPS[op]
if abb == "#":
ops["-"].append(iS)
ops["+"].append(iD)
elif abb == "+":
ops[abb].append(iD)
elif abb == "-":
ops[abb].append(iS)
materialMin = ""
prevI = len(source)
endI = len(source) - 1
for i in sorted(ops["-"]):
pre = "|" if i == 0 else "." if i > prevI + 1 else ""
        post = "|" if i == endI else ""
materialMin += f"{pre}{source[i]}{post}"
prevI = i
materialPlus = ""
prevI = len(dest)
endI = len(dest) - 1
for i in sorted(ops["+"]):
pre = "|" if i == 0 else "." if i > prevI + 1 else ""
        post = "|" if i == endI else ""
materialPlus += f"{pre}{dest[i]}{post}"
prevI = i
return (materialMin, materialPlus)
# -
# ## Examples
#
# This function works for arbitrary word pairs, let's see a few examples:
# +
examples = """
bakken gebakken
geloven verloven
haalt verhaald
nemen genomen
schip scbip
""".strip().split("\n")
for example in examples:
(word1, word2) = example.split()
print(f"{word1:<15} ==> {word2:<15} : {codeDiff(word1, word2)}")
# -
# # Many diffs
#
# Now we will compute the diffs for all word pairs we have collected.
# + tags=[]
WORD_DIFF = collections.defaultdict(list)
print(f"Computing {nPairs} diffs between word pairs")
k = 0
c = 0
chunkSize = int(round(nPairs / 100))
for (word1, words2) in WORD_MATRIX.items():
for word2 in words2:
if c == chunkSize:
c = 0
sys.stdout.write(f"\r{k:>9} = {int(round(k / chunkSize)):>3} %")
k += 1
c += 1
diff = codeDiff(word1, word2)
WORD_DIFF[diff].append((word1, word2))
sys.stdout.write(f"\r{k:>9} = {int(round(k / chunkSize)):>3} %")
print(f"\n{len(WORD_DIFF)} distinct differences")
# -
# Here is the distribution of the diffs:
showDist(
{d: len(pairs) for (d, pairs) in WORD_DIFF.items()},
maxBins=100, title="Word differences", xitem="log-frequency", yitem="differences",
)
# As expected, most differences are not interesting, since we have compared arbitrary words with
# each other.
# But if the same kind of change occurs more often, the frequency of that change will rise above
# the background noise.
#
# Let's see the top 100 diffs.
def showDiffs(fro, to):
for (i, ((d, a), pairs)) in enumerate(sorted(WORD_DIFF.items(), key=lambda x: (-len(x[1]), x[0]))[fro:to]):
freq = len(pairs)
exampleRep = ", ".join("~".join(pair) for pair in pairs[0:3])
print(f"{i + 1:>3}: {freq:>3} -{d:>5} + {a:<5} {exampleRep}")
showDiffs(0, 100)
# ## Observations
#
# Number 1 and 2 correspond to the regular plural in Dutch by appending `n` or `en`.
#
# Up till 8 it is all inflection of nouns, adjectives and verbs.
#
# 9 deals with the productive prefix `ver` of many words.
#
# 11 is a spelling variation between `gh` and `ch`.
#
# 12 also seems to be a spelling variation, but `Backer` vs `Becker` could be a real difference.
#
# 20 is a frequent OCR error, confusion between `l` and `t`
#
# 57 is an OCR error, and a rather frequent one.
#
# 83 is strong verb morphology: the past tense of a strong verb
#
# 85 is not a variation but a matter of real differences between words.
# ## The tail
#
# We show a bit of the long tail to show that little of importance happens here.
showDiffs(-50, None)
# # Back to the top
#
# To finish off,
# we show more examples from the top.
#
# If you skim through it, you see many instances of morphology that happens in 17th century Dutch that you might not be aware of.
#
# And, when the frequency lowers to 3 and below, the cases become more and more tenuous.
showDiffs(100, 500)
| programs/diffanalysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Model - Logistics Regression
# Let us build some intuition around the Loan Data
#Load the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Default Variables
# %matplotlib inline
plt.rcParams['figure.figsize'] = (16,9)
plt.rcParams['font.size'] = 18
plt.style.use('fivethirtyeight')
pd.set_option('display.float_format', lambda x: '%.2f' % x)
#Load the dataset
df = pd.read_csv("../data/loan_data_clean.csv")
df.head()
# ## Logistic Regression - Two Variables: `age` and `years`
from sklearn.linear_model import LogisticRegression
# Define the features
X = df.loc[:,('age', 'years')]
# Define the target
y = df['default']
# Initiate the model
clf_logistic_2 = LogisticRegression()
#Fit the model
clf_logistic_2.fit(X,y)
# Calculate the Accuracy Score
clf_logistic_2.score(X,y)
# Calculate the predictions
y_pred = clf_logistic_2.predict(X)
# Calculate the predicted probabilities (column 0 is the probability of class 0)
y_proba = clf_logistic_2.predict_proba(X)[:,0]
# ### Plot the Decision Boundaries
x1_min, x1_max = X.iloc[:,0].min(), X.iloc[:,0].max()
x2_min, x2_max = X.iloc[:,1].min(), X.iloc[:,1].max()
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, (x1_max - x1_min)/100),
np.arange(x2_min, x2_max, (x2_max - x2_min)/100))
xx = np.c_[np.ones(xx1.ravel().shape[0]), xx1.ravel(), xx2.ravel()]
Z = clf_logistic_2.predict_proba(np.c_[xx1.ravel(), xx2.ravel()])[:,0]
Z = Z.reshape(xx1.shape)
cs = plt.contourf(xx1, xx2, Z, cmap=plt.cm.viridis, alpha = 0.3)
plt.scatter(x = X.iloc[:,0], y = X.iloc[:,1], c = y, s = 50, cmap=plt.cm.magma)
plt.colorbar(cs)
plt.xlabel('age')
plt.ylabel('years')
# Exercise: What is the range of the predicted probabilities
# Exercise: What is the accuracy measure if you change the cut-off threshold
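# A sketch of the cut-off exercise. Since the loan CSV is not bundled here,
# `make_classification` stands in for the data; the mechanics of sweeping the
# threshold are the same for the fitted model above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_demo, y_demo = make_classification(n_samples=500, random_state=0)
clf_demo = LogisticRegression().fit(X_demo, y_demo)
proba = clf_demo.predict_proba(X_demo)[:, 1]  # P(class 1)

for cutoff in (0.3, 0.5, 0.7):
    acc = ((proba >= cutoff).astype(int) == y_demo).mean()
    print(f"cutoff={cutoff}: accuracy={acc:.3f}")
```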
# ## Logistic Regression - All Variables
# +
# Preprocess the data
# -
df = pd.read_csv("../data/loan_data_clean.csv")
df.head()
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df.grade = le.fit_transform(df.grade)
le.classes_
df.head()
df.ownership = le.fit_transform(df.ownership)
# ?le
le.classes_
df.head()
# +
# Build the Model
# -
df['amount_log'] = np.log10(df.amount)
df['income_log'] = np.log10(df.income)
df['age_log'] = np.log10(df.age)
df['years_log'] = np.log10(df.years + 1)
df.years_log.hist()
df.head()
X = df[['amount_log', 'interest', 'grade', 'years_log', 'ownership', 'income_log', 'age_log' ]]
y = df.default
from sklearn.linear_model import LogisticRegressionCV
clf = LogisticRegressionCV(Cs = [0.001, 0.01, 0.1, 1, 10], class_weight="balanced", penalty='l1',
verbose =1, cv=5, solver="liblinear" )
# +
#from sklearn.feature_selection import SelectFromModel
# +
#model = SelectFromModel(clf)
# -
clf.fit(X,y)
clf.C_
clf.coef_
y_pred = clf.predict(X)
y_pred
X.head()
clf.predict_proba(X)
def P(z):
return 1/(1+np.exp(-z))
z=0.12*df.interest + 0.12 * df.grade - 0.33 * df.income_log
#z=0.12*df.interest + 0.12 * df.grade + 0.02 * df.ownership - 0.33 * df.income_log
y_pred_easy = P(z)
y_pred_easy.tail()
# +
# Calculate the accuracy
# -
from sklearn import metrics
metrics.roc_auc_score(y, y_pred)
metrics.accuracy_score(y, y_pred)
metrics.confusion_matrix(y_pred=y_pred, y_true=y)
y.value_counts()
# ## Calculate the error metric
# - Accuracy
# - Precision
# - Recall
# - Sensitivity
# - Specificity
# - Receiver Operating Curve
# - Area Under the Curve
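# Each metric above is available in `sklearn.metrics`; a minimal sketch
# (synthetic labels so it runs stand-alone — in the notebook you would pass
# `y`, `y_pred` and the predicted probabilities instead):

```python
import numpy as np
from sklearn import metrics

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=200)
y_score = np.clip(0.6 * y_true + 0.5 * rng.uniform(size=200), 0, 1)  # noisy scores
y_hat = (y_score >= 0.5).astype(int)

print("accuracy   :", metrics.accuracy_score(y_true, y_hat))
print("precision  :", metrics.precision_score(y_true, y_hat))
print("recall     :", metrics.recall_score(y_true, y_hat))    # a.k.a. sensitivity
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_hat).ravel()
print("specificity:", tn / (tn + fp))
fpr, tpr, _ = metrics.roc_curve(y_true, y_score)              # receiver operating curve
print("AUC        :", metrics.roc_auc_score(y_true, y_score))
```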
# ## Choosing the Error Metric
# What is a good error metric to choose in this case?
# ## Regularization - L1 and L2
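# A minimal sketch of the L1/L2 contrast (synthetic data with mostly irrelevant
# features — not the loan data above): L1 tends to drive irrelevant coefficients
# exactly to zero, while L2 only shrinks them, which is why L1 also acts as a
# feature-selection device.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(300, 10))          # only the first two columns matter
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=300) > 0).astype(int)

l1 = LogisticRegression(penalty='l1', solver='liblinear', C=0.1).fit(X, y)
l2 = LogisticRegression(penalty='l2', solver='liblinear', C=0.1).fit(X, y)

print("non-zero coefficients, L1:", int(np.sum(np.abs(l1.coef_) > 1e-6)))
print("non-zero coefficients, L2:", int(np.sum(np.abs(l2.coef_) > 1e-6)))
```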
# ## Feature Selection
# Source notebook: reference/Module-02c-reference.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from matplotlib.ticker import ScalarFormatter
from ipm_util import log_to_dataframe # https://github.com/JiaweiZhuang/ipm_util
from brokenaxes import brokenaxes
# -
# # Read logs
# ls -lh ./logs/ipm_profiling/
impi_log_list = [
'./logs/ipm_profiling/N{0}n{1}_intelmpi-EFA_4x5met_nooutput.ipm.xml'.format(N, N*36)
for N in [4, 8, 16, 32]
]
impi_log_list
# %time df_impi_list = [log_to_dataframe(log) for log in impi_log_list]
ompi_log_list = [
'./logs/ipm_profiling/N{0}n{1}_openmpi_4x5met_nooutput.ipm.xml'.format(N, N*36)
for N in [4, 8, 16, 32]
]
ompi_log_list
# %time df_ompi_list = [log_to_dataframe(log) for log in ompi_log_list]
df_impi_list[0].tail()
df_impi_list[0].tail().mean().sort_values()
# # Show statistics
# +
major_columns = [
'MPI_Barrier', 'MPI_Bcast', 'MPI_Wait', 'MPI_Allreduce', 'MPI_Scatterv']
def mean_df(df_list):
'''Average over all ranks; extract major columns'''
df_mean = pd.concat([df[major_columns].mean() for df in df_list], axis=1).T
df_mean.index = [144, 288, 576, 1152]
df_mean.index.name = 'core'
return df_mean
mean_impi = mean_df(df_impi_list)
mean_impi
# +
mean_ompi = mean_df(df_ompi_list)
mean_ompi
# -
# # Compare MPI calls
plt.rcParams['font.size'] = 14
mean_impi.iloc[0].plot.barh()
# +
# https://github.com/bendichter/brokenaxes/issues/11#issuecomment-373650734
bax = brokenaxes(xlims=((0, 2000), (13000, 14000)), wspace=0.4)
bax.barh(mean_impi.columns[::-1], mean_impi.values[0][::-1])
bax.set_xlabel('\n time (s)')
bax.set_title('IPM profiling (IntelMPI-EFA, 144 cores)')
# -
mean_impi.T.plot.barh(logx=True)
df_compare = pd.concat([mean_impi.iloc[-1], mean_ompi.iloc[-1]], axis=1,
keys=['IntelMPI-EFA', 'OpenMPI-TCP'])
df_compare.index.name = ''
df_compare
(df_compare/3600).iloc[::-1, ::-1].plot.barh(legend='reverse', color=['C1', 'C0'], figsize=[4.5, 4])
plt.title('(a) IPM profiling on GEOS-Chem (1152 cores)')
plt.xlabel('Time (hours)')
plt.savefig('gchp-all-mpi.png', dpi=300, bbox_inches='tight')
# # Plot MPI scaling
# +
fig, ax = plt.subplots(1, 1, figsize=[4.5, 4])
plot_kwargs = dict(ax=ax, linestyle='-', marker='o', linewidth=2.5, markersize=7.0, alpha=0.8)
(mean_impi['MPI_Bcast']/3600).plot(**plot_kwargs, label='IntelMPI-EFA')
(mean_ompi['MPI_Bcast']/3600).plot(**plot_kwargs, label='OpenMPI-TCP')
marker_list = ['o','o']
linestyle_list = ['-',':']
for i, line in enumerate(ax.get_lines()[:3]):
line.set_marker(marker_list[i])
line.set_linestyle(linestyle_list[i])
ax.set_title('(b) MPI_Bcast time in GEOS-Chem')
ax.legend()
ax.grid()
ax.set_xscale('log')
ax.xaxis.set_major_formatter(ScalarFormatter())
ax.minorticks_off()
ax.set_xticks(mean_impi.index)
ax.set_xlim(130, 1300)
ax.set_xlabel('Number of cores')
ax.set_ylabel('Time (hours)')
ax.set_ylim(0, 1.1)
fig.savefig('gchp_mpi_bcast.png', dpi=300, bbox_inches='tight')
# -
# Source notebook: notebooks/plot_ipm_profiling.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # In this draft script I will try to build optimization workers with homogeneous interfaces
# ___
#
# ### It is built on top of the [mlrose](https://github.com/gkhayes/mlrose) package
#
#
# +
import mlrose
import numpy as np
import logging
import tsplib95
# import pandas as pd
RANDOM_SEED = 1
np.random.seed(seed=RANDOM_SEED)
# -
# ### Function for generating the problem.
# ##### NOTE: The function should return the instance of initialized mlrose problem.
# This part of the notebook will be extended by adding other problem instances
# +
def GenerteNQueensProblem(n: int = 8):
problem = mlrose.DiscreteOpt(length=n, fitness_fn=mlrose.Queens(), maximize=False, max_val=n)
return problem
def GenerateTSPProblem(instance_name: str="a280.tsp"):
    tsplib_problem = tsplib95.load_problem(instance_name if instance_name.startswith('/') else "TSPProblems/" + instance_name)
cities_coordinates = list(tsplib_problem.node_coords.values())
problem = mlrose.TSPOpt(length=tsplib_problem.dimension, coords=cities_coordinates, maximize=False)
return problem
# -
# #### Simulated annealing
def RunSASolver(problem, init_state, budget, mh_parameters):
assert "schedule" in mh_parameters.keys()
assert "max_attempts" in mh_parameters.keys()
best_state, best_fitness = mlrose.simulated_annealing(problem,
max_iters=budget,
init_state=init_state,
random_state=RANDOM_SEED,
**mh_parameters)
return best_state, best_fitness
# #### Stochastic hill climbing
def RunSCH(problem, init_state, budget, mh_parameters):
results = mlrose.random_hill_climb(problem,
init_state=init_state,
restarts=budget,
random_state=RANDOM_SEED,
**mh_parameters)
return results
# #### Genetic algorithm
def RunGA(problem, init_state, budget, mh_parameters):
best_state, best_fitness = mlrose.genetic_alg(problem,
max_iters=budget,
random_state=RANDOM_SEED,
**mh_parameters)
return best_state, best_fitness
# #### Running the algorithms
# Just for testing purposes.
# +
sa_mh_parameters = {
'schedule': mlrose.GeomDecay(),
'max_attempts': 1000,
}
sch_mh_parameters = {
'max_attempts': 100
}
gen_mh_parameters = {'mutation_prob': 0.3, 'max_attempts': 30}
n_queen_problem = GenerteNQueensProblem(19)
tsp_problem = GenerateTSPProblem('/media/sem/B54BE5B22C0D3FA8/TUD/Master/metaheuristics_library/TSPProblems/kroA100.tsp')
budget = 30000
iterations = 1
for problem in (n_queen_problem, tsp_problem):
for mh, params in zip((
RunSASolver,
RunSCH,
RunGA,
), (
sa_mh_parameters,
sch_mh_parameters,
gen_mh_parameters,
)):
        init_state = np.array(list(range(problem.length)))  # length must match the problem dimension
for iteration in range(iterations):
solution, solution_fitness = mh(problem, init_state, budget=budget // iterations, mh_parameters=params)
print("%s:%s, solution: %s, fitness: %s." % (mh.__name__, iteration, solution, solution_fitness))
init_state = solution
problem.reset()
print("problem reset")
# -
optimal = [0, 1, 241, 242, 243, 240, 239, 238, 237, 236, 235, 234, 233, 232, 231, 230, 245, 244, 246, 249, 250, 229, 228, 227, 226, 225, 224, 223, 222, 221, 220, 219, 218, 217, 216, 215, 214, 213, 212, 211, 210, 209, 206, 205, 204, 203, 202, 201, 200, 197, 196, 195, 194, 193, 192, 191, 190, 189, 188, 187, 186, 185, 184, 183, 182, 181, 180, 175, 179, 178, 149, 177, 176, 150, 151, 155, 152, 154, 153, 128, 129, 130, 19, 20, 127, 126, 125, 124, 123, 122, 121, 120, 119, 118, 156, 157, 158, 159, 174, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 171, 170, 172, 173, 106, 105, 104, 103, 102, 101, 100, 99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 108, 107, 109, 110, 111, 87, 86, 112, 113, 114, 116, 115, 85, 84, 83, 82, 81, 80, 79, 78, 77, 76, 75, 74, 73, 72, 71, 70, 69, 68, 67, 66, 65, 64, 63, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 58, 62, 61, 117, 60, 59, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 21, 24, 22, 23, 13, 14, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 276, 275, 274, 273, 272, 271, 270, 15, 16, 17, 18, 131, 132, 133, 269, 268, 134, 135, 267, 266, 136, 137, 138, 148, 147, 146, 145, 144, 198, 199, 143, 142, 141, 140, 139, 265, 264, 263, 262, 261, 260, 259, 258, 257, 256, 253, 252, 207, 208, 251, 254, 255, 248, 247, 277, 278, 2, 279]
print(solution_fitness)
print(problem.fitness_fn.evaluate(optimal))
# Source notebook: mlrose_notebook/mlrose_notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
import numpy as np
import pymc3 as pm
from pyei.data import Datasets
from pyei.two_by_two import TwoByTwoEI
from pyei.goodmans_er import GoodmansER
from pyei.goodmans_er import GoodmansERBayes
from pyei.r_by_c import RowByColumnEI
from pyei.plot_utils import tomography_plot
from pyei.plot_utils import plot_precinct_scatterplot
# -
# # Plotting overview
#
# PyEI has a number of plots available. Here is a list of available plots.
#
# Plotting methods for any fitted EI model where inference involves sampling -- i.e. all approaches except the (non-Bayesian) Goodman's ER
#
# - Summary plots for distributions of **polity-wide** voter preferences
# - `plot`
# - `plot_kde` (2 by 2)
# - `plot_kdes` ($r$ by $c$)
# - `plot_boxplot` (2 by 2)
# - `plot_boxplots` ($r$ by $c$)
# - `plot_intervals` (2 by 2)
#
# - Plots of polarization
# - `plot_polarization_kde` (2 by 2)
# - `plot_polarization_kdes` ($r$ by $c$)
#
# For all approaches except Goodman's ER and the Bayesian Goodman's ER (which do not generate samples for each precinct)
#
# - Plots of **precinct-level** voter preferences
# - `precinct_level_plot`
# - `plot_intervals_by_precinct`
#
#
# `GoodmansER` objects also have a plot method
# - `plot`
#
# Additional plotting utilities: for tomography plotting and comparing precinct-level posterior means:
#
# - `plot_utils.tomography_plot`
# - `plot_precinct_scatterplot`
#
# We can also visualize the Bayesian models (see Visualizing Models, below).
#
# Below we show examples of these available plots (as well as loading example data and fitting example models).
#
# At the end of the notebook, we show how to save plots as files.
# # Load example data
# +
# Example 2x2 data
santa_clara_data = Datasets.Santa_Clara.to_dataframe()
group_fraction_2by2 = np.array(santa_clara_data["pct_e_asian_vote"])
votes_fraction_2by2 = np.array(santa_clara_data["pct_for_hardy2"])
demographic_group_name_2by2 = "e_asian"
candidate_name_2by2 = "Hardy"
# Example rxc data (here r=c=3)
group_fractions_rbyc = np.array(santa_clara_data[['pct_ind_vote', 'pct_e_asian_vote', 'pct_non_asian_vote']]).T
votes_fractions_rbyc = np.array(santa_clara_data[['pct_for_hardy2', 'pct_for_kolstad2', 'pct_for_nadeem2']]).T
candidate_names_rbyc = ["Hardy", "Kolstad", "Nadeem"]
demographic_group_names_rbyc = ["ind", "e_asian", "non_asian"]
# Data we'll use in both 2x2 and rbyc
precinct_pops = np.array(santa_clara_data["total2"])
precinct_names = santa_clara_data['precinct']
# -
# # Tomography_plot
#
# For the 2 by 2 case. If
# - $X^i$ is the fraction of voters in the demographic group of interest (in Precinct i) (from `group_fraction`)
# - $T^i$ is the fraction of votes for the candidate of group of interest (in Precinct i) (from `votes_fraction`)
# - $b_1^i$ is the fraction of voters in the demographic group of interest that vote for the candidate of interest (in Precinct i)
# - $b_2^i$ is the fraction of voters in the complement of the demographic group of interest that vote for the candidate of interest (in Precinct i)
#
# then what is sometimes called the **accounting identity** tells us that:
# $$ T^i = b_1^i X^i + b_2^i (1-X^i)$$
#
# (that is, the total votes for a candidate come from summing their votes from each of the two groups).
#
# We can visualize this linear relationship between $b_1^i$ and $b_2^i$ by plotting them against each other. If we draw the line segment of feasible $(b_1^i, b_2^i)$ pairs for each precinct, we get what has been called a **tomography plot**. (Note that this plot does not depend on a fitted model.)
tomography_plot(group_fraction_2by2, votes_fraction_2by2, demographic_group_name=demographic_group_name_2by2, candidate_name=candidate_name_2by2)
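# The accounting identity itself is easy to check numerically; a small stand-alone
# sketch with made-up precinct values (hypothetical numbers, not the Santa Clara data):

```python
import numpy as np

X = np.array([0.2, 0.5, 0.8])    # group fraction X^i per precinct
b1 = np.array([0.7, 0.6, 0.9])   # support within the group
b2 = np.array([0.3, 0.4, 0.2])   # support within the complement

T = b1 * X + b2 * (1 - X)        # candidate's total vote share per precinct
print(T)

# For a fixed precinct (X^i, T^i), every (b1, b2) pair on the line
# b2 = (T - b1 * X) / (1 - X) is consistent with the observed data --
# the tomography plot draws exactly that segment for each precinct.
```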
# # Fit example models
#
# The rest of the plots are applicable to fitted models.
# +
# Create a TwobyTwoEI object
ei_2by2 = TwoByTwoEI(model_name="king99_pareto_modification", pareto_scale=8, pareto_shape=2)
# Fit the model
ei_2by2.fit(group_fraction_2by2,
votes_fraction_2by2,
precinct_pops,
demographic_group_name=demographic_group_name_2by2,
candidate_name=candidate_name_2by2,
precinct_names=precinct_names,
)
# +
# Create a RowByColumnEI object
ei_rbyc = RowByColumnEI(model_name='multinomial-dirichlet-modified', pareto_shape=100, pareto_scale=100)
# Fit the model
ei_rbyc.fit(group_fractions_rbyc,
votes_fractions_rbyc,
precinct_pops,
demographic_group_names=demographic_group_names_rbyc,
candidate_names=candidate_names_rbyc,
#precinct_names=precinct_names,
)
# +
# Create a GoodmansER object
goodmans_er = GoodmansER()
# Fit the model
goodmans_er.fit(
group_fraction_2by2,
votes_fraction_2by2,
demographic_group_name=demographic_group_name_2by2,
candidate_name=candidate_name_2by2
)
# +
# Create a GoodmansERBayes object
bayes_goodman_ei = GoodmansERBayes("goodman_er_bayes", weighted_by_pop=True, sigma=1)
# Fit the model
bayes_goodman_ei.fit(
group_fraction_2by2,
votes_fraction_2by2,
precinct_pops,
demographic_group_name=demographic_group_name_2by2,
candidate_name=candidate_name_2by2
)
# -
# # Summary plots for distributions of polity-wide voter preferences
#
# Available plotting methods for fitted `TwoByTwoEI`, `RowByColumnEI`, `GoodmansERBayes` objects
#
# - `plot`
# - `plot_kde` (2 by 2)
# - `plot_kdes` ($r$ by $c$)
# - `plot_boxplot` (2 by 2)
# - `plot_boxplots` ($r$ by $c$)
# - `plot_intervals` (2 by 2)
ei_2by2.plot() # Summary plot
ei_2by2.plot_kde()
ei_2by2.plot_boxplot()
ei_2by2.plot_intervals()
ei_rbyc.plot()
# **For r by c kde and boxplots, specify whether you'd like to organize the plots by group or by candidate**
ei_rbyc.plot_kdes(plot_by="group")
ei_rbyc.plot_kdes(plot_by="candidate")
ei_rbyc.plot_boxplots(plot_by="group")
ei_rbyc.plot_boxplots(plot_by="candidate")
# # Plots of polarization
#
# Available for fitted `TwoByTwoEI`, `RowByColumnEI`, `GoodmansERBayes` objects.
#
# These plots visualize the estimated difference in the levels of support for a candidate between two demographic groups. Thus, they give insight into the polarization of those two groups with respect to a particular candidate.
#
# We can specify a threshold and show the probability that the difference in support is at least as large as that threshold.
#
# Alternatively, we can specify a percentile, and show the associated credible interval of the difference in voter support levels.
# For example, we might specify the percentile of 95, and be returned something like: **there is a 95% probability that the difference in support for the candidate is in [.34, .73].** The fact that the interval is far from 0 (where 0 represents no difference in support for the candidate between the two groups), suggests that the two groups have very different levels of support for the candidate.
ei_2by2.plot_polarization_kde(threshold=0.4, show_threshold=True) #set show_threshold to false to just view the kde
ei_2by2.plot_polarization_kde(percentile=95, show_threshold=True) #set show_threshold to false to just view the kde
# For the r by c case, we need to specify the two groups we'd like to compare, and the candidate of interest
ei_rbyc.plot_polarization_kde(percentile=95, groups=['ind', 'e_asian'], candidate='Kolstad', show_threshold=True) #set show_threshold to false to just view the kde
# # Plots of precinct-level voter preferences
#
# Available for `TwoByTwoEI`, `RowByColumnEI` objects.
# *(Note that neither Goodman's ER approach produces precinct-level estimates)*
#
# - `precinct_level_plot`
# - `plot_intervals_by_precinct`
ei_2by2.precinct_level_plot()
# +
#ei_rbyc.precinct_level_plot()
# -
# # GoodmansER plots
#
# Like `TwoByTwoEI` and `RowByColumnEI` objects, `GoodmansER` and `GoodmansERBayes` objects have a plot method that shows a summary plot; however, since the Goodman models rest on a linear regression, their summary plots visualize that regression and so look different from the other summary plots.
#
# Note that the y-coordinate of the point of intersection of the line with $x=1$ gives the estimated support of the demographic group of interest for the candidate of interest. Note that the y-coordinate of the point of intersection of the line with $x=0$ gives the estimated support of the complement of the demographic group of interest for the candidate of interest.
goodmans_er.plot()
bayes_goodman_ei.plot()
# # plot_precinct_scatterplot
#
# This is a method for comparing the outputs of two different ei methods by comparing the point estimates that the two methods give for each precinct.
ei_2by2_alternative = TwoByTwoEI(model_name='truncated_normal')
ei_2by2_alternative.fit(group_fraction_2by2,
votes_fraction_2by2,
precinct_pops,
demographic_group_name=demographic_group_name_2by2,
candidate_name=candidate_name_2by2,
precinct_names=precinct_names,
)
plot_precinct_scatterplot([ei_2by2, ei_2by2_alternative], run_names=['king-pareto', 'truncated normal'], candidate='Hardy')
# # Visualizing models
#
# We can use pymc3's `model_to_graphviz` function to visualize the models themselves, by showing the associated graphical model.
#
# Available for `TwoByTwoEI`, `RowByColumnEI`, and `GoodmansERBayes` objects
model = ei_2by2.sim_model
pm.model_to_graphviz(model)
model = bayes_goodman_ei.sim_model
pm.model_to_graphviz(model)
# # Saving plots
#
# PyEI's plotting functions return matplotlib Axes objects. One simple way to save a figure that you've generated in a notebook is to use matplotlib.pyplot's `savefig` method. We provide an example here, in case you are new to matplotlib.
# +
import matplotlib.pyplot as plt
goodmans_er.plot()
plt.savefig('goodman_er_testfig.png', dpi=300) # Creates a file named `goodman_er_testfig.png`
# Source notebook: pyei/intro_notebooks/Plotting_with_PyEI.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
from torchtext import data
from torchtext import datasets
import random
# !python -m spacy download en
SEED = 1
TEXT = data.Field(tokenize='spacy')
LABEL = data.LabelField(dtype=torch.float)
# +
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state=random.seed(SEED))
# +
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size=BATCH_SIZE,
device=device)
# -
train_iterator
item = train_data.__getitem__(0)
item.text
import torch
import torch.nn as nn
embedding = nn.Embedding(1000, 100)
TEXT.vocab_cls
import torchtext
glove = torchtext.vocab.GloVe(name='6B', dim=100)
# Source notebook: examples/bidaf/flask server/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Plot of complex model solutions
# This notebook plots the solution figure for a specific solution from the set of results of the multiple inversion notebook for the data produced by the complex model.
# +
import matplotlib as mpb
# show the figures in windows
# show all the matplotlib backends
#mpb.rcsetup.all_backends
# force matplotlib to use the 'Qt5Agg' backend
#mpb.use(arg='Qt5Agg', force=True)
# +
import cPickle as pickle
import matplotlib.pyplot as plt
# importing my functions
import sys
sys.path.insert(0, '../../code')
import mag_polyprism_functions as mfun
import plot_functions as pf
# +
# importing the pickle file of results
result_path = 'results/multiple-outcrop-3366665/'
with open(result_path+'inversion.pickle') as w:
inversion = pickle.load(w)
# importing the true model
with open('model.pickle') as w:
model = pickle.load(w)
# -
model['z0']
inversion['regularization']
inversion['initial_dz']
# directory to save the figures and filename
filename = '../../manuscript/figures/dipping-solution.png'
#filename = ''
inversion['results'][8][0][-1]*5
# # Results
pf.plot_solution(inversion['x'], inversion['y'],
inversion['z'], inversion['results'][8][3],
inversion['results'][8][2][-1],
inversion['results'][8][2][0],
(13,10), 200,(0.6, 0.95),
[7, 38, 12, -167, 12, 21], [-1,6,-4,3],
model['prisms'],filename)
# Application to complex model data. (a) residual data given by the difference between the noise-corrupted data and the predicted data (not shown) produced by the estimated model. The inset in (a) shows the histogram of the residuals and the Gaussian curve (dashed line) whose mean and standard deviation are, respectively, $\mu = 0.09$ nT and $\sigma=6.66$ nT. (b) perspective view of the initial approximate (red prisms) and the true model (blue prisms). (c) and (d) comparison between the estimated source (red prisms) and the true model (blue prisms) in perspective views.
# Source notebook: code/dipping/plot_solutions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basics of MLP
# - Objective: create vanilla neural networks (i.e., Multilayer perceptrons) for simple regression/classification tasks with Keras
# ## MLP Structures
# - Each MLP model consists of one input layer, several hidden layers, and one output layer
# - Number of neurons in each layer is not limited
# <img src="http://cs231n.github.io/assets/nn1/neural_net.jpeg" style="width: 300px"/>
# <br>
# <center>**MLP with one hidden layer**</center>
# - Number of input neurons: 3
# - Number of hidden neurons: 4
# - Number of output neurons: 2
#
# <img src="http://cs231n.github.io/assets/nn1/neural_net2.jpeg" style="width: 500px"/>
# <br>
# <center>**MLP with two hidden layers**</center>
# - Number of input neurons: 3
# - Number of hidden neurons: (4, 4)
# - Number of output neurons: 1
#
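# The structures above can be sketched as a plain NumPy forward pass (random
# weights just to show the shapes; a 3 -> 4 -> 4 -> 1 network, matching the
# two-hidden-layer figure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
sizes = [3, 4, 4, 1]                                  # input, two hidden, output
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(5, 3))                           # batch of 5 samples, 3 features
a = x
for W, b in zip(weights, biases):
    a = sigmoid(a @ W + b)                            # one dense layer + activation
print(a.shape)                                        # (5, 1)
```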
# ## MLP for Regression tasks
# - When the target (**y**) is continuous (real)
# - For loss function and evaluation metric, mean squared error (MSE) is commonly used
from keras.datasets import boston_housing
(X_train, y_train), (X_test, y_test) = boston_housing.load_data()
# ### Dataset Description
# - Boston housing dataset has total 506 data instances (404 training & 102 test)
# - 13 attributes (features) to predict "the median values of the houses at a location"
# - Doc: https://keras.io/datasets/
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# ### 1. Creating a model
# - Keras model object can be created with Sequential class
# - At the outset, the model is empty per se. It is completed by **'adding'** additional layers and compilation
# - Doc: https://keras.io/models/sequential/
from keras.models import Sequential
model = Sequential()
# ### 1-1. Adding layers
# - Keras layers can be **added** to the model
# - Adding layers are like stacking lego blocks one by one
# - Doc: https://keras.io/layers/core/
from keras.layers import Activation, Dense
# Keras model with two hidden layer with 10 neurons each
model.add(Dense(10, input_shape = (13,))) # Input layer => input_shape should be explicitly designated
model.add(Activation('sigmoid'))
model.add(Dense(10)) # Hidden layer => only output dimension should be designated
model.add(Activation('sigmoid'))
model.add(Dense(10)) # Hidden layer => only output dimension should be designated
model.add(Activation('sigmoid'))
model.add(Dense(1)) # Output layer => output dimension = 1 since it is regression problem
# This is an equivalent, more compact way to add the same layers (run only one of the two blocks)
model.add(Dense(10, input_shape = (13,), activation = 'sigmoid'))
model.add(Dense(10, activation = 'sigmoid'))
model.add(Dense(10, activation = 'sigmoid'))
model.add(Dense(1))
# ### 1-2. Model compile
# - Keras model should be "compiled" prior to training
# - Types of loss (function) and optimizer should be designated
# - Doc (optimizers): https://keras.io/optimizers/
# - Doc (losses): https://keras.io/losses/
from keras import optimizers
sgd = optimizers.SGD(lr = 0.01) # stochastic gradient descent optimizer
model.compile(optimizer = sgd, loss = 'mean_squared_error', metrics = ['mse']) # for regression problems, mean squared error (MSE) is often employed
# ### Summary of the model
model.summary()
# ### 2. Training
# - Training the model with training data provided
model.fit(X_train, y_train, batch_size = 50, epochs = 100, verbose = 1)
# ### 3. Evaluation
# - Keras model can be evaluated with evaluate() function
# - Evaluation results are contained in a list
# - Doc (metrics): https://keras.io/metrics/
results = model.evaluate(X_test, y_test)
print(model.metrics_names) # list of metric names the model is employing
print(results) # actual figure of metrics computed
print('loss: ', results[0])
print('mse: ', results[1])
# ## MLP for classification tasks
# - When the target (**y**) is discrete (categorical)
# - For loss function, cross-entropy is used and for evaluation metric, accuracy is commonly used
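# A hand-rolled sketch of the binary cross-entropy loss and the accuracy metric
# (for illustration only; Keras computes these internally):

```python
import numpy as np

def binary_crossentropy(y_true, y_prob, eps=1e-7):
    p = np.clip(y_prob, eps, 1 - eps)          # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1])   # predicted P(class = 1)

loss = binary_crossentropy(y_true, y_prob)
accuracy = np.mean((y_prob >= 0.5).astype(int) == y_true)
print(round(loss, 4), accuracy)                # lower loss / higher accuracy is better
```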
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
whole_data = load_breast_cancer()
X_data = whole_data.data
y_data = whole_data.target
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size = 0.3, random_state = 7)
# ### Dataset Description
# - Breast cancer dataset has total 569 data instances (212 malign, 357 benign instances)
# - 30 attributes (features) to predict the binary class (M/B)
# - Doc: http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html#sklearn.datasets.load_breast_cancer
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# ### 1. Creating a model
# - Same with regression model at the outset
from keras.models import Sequential
model = Sequential()
# ### 1-1. Adding layers
# - Keras layers can be **added** to the model
# - Adding layers are like stacking lego blocks one by one
# - It should be noted that as this is a classification problem, sigmoid layer (softmax for multi-class problems) should be added
# - Doc: https://keras.io/layers/core/
# Keras model with two hidden layer with 10 neurons each
model.add(Dense(10, input_shape = (30,))) # Input layer => input_shape should be explicitly designated
model.add(Activation('sigmoid'))
model.add(Dense(10)) # Hidden layer => only output dimension should be designated
model.add(Activation('sigmoid'))
model.add(Dense(10)) # Hidden layer => only output dimension should be designated
model.add(Activation('sigmoid'))
model.add(Dense(1)) # Output layer => output dimension = 1 for binary classification
model.add(Activation('sigmoid'))
# This is an equivalent, more compact way to add the same layers (run only one of the two blocks)
model.add(Dense(10, input_shape = (30,), activation = 'sigmoid'))
model.add(Dense(10, activation = 'sigmoid'))
model.add(Dense(10, activation = 'sigmoid'))
model.add(Dense(1, activation = 'sigmoid'))
# ### 1-2. Model compile
# - Keras model should be "compiled" prior to training
# - Types of loss (function) and optimizer should be designated
# - Doc (optimizers): https://keras.io/optimizers/
# - Doc (losses): https://keras.io/losses/
from keras import optimizers
sgd = optimizers.SGD(lr = 0.01) # stochastic gradient descent optimizer
model.compile(optimizer = sgd, loss = 'binary_crossentropy', metrics = ['accuracy'])
# ### Summary of the model
model.summary()
# ### 2. Training
# - Training the model with training data provided
model.fit(X_train, y_train, batch_size = 50, epochs = 100, verbose = 1)
# ### 3. Evaluation
# - Keras model can be evaluated with evaluate() function
# - Evaluation results are contained in a list
# - Doc (metrics): https://keras.io/metrics/
results = model.evaluate(X_test, y_test)
print(model.metrics_names) # list of metric names the model is employing
print(results) # actual figure of metrics computed
print('loss: ', results[0])
print('accuracy: ', results[1])
# Source notebook: keras-tutorials/1. MLP/1-Basics-of-MLP.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/suyash091/Cardiovascular-Disease-Prediction/blob/master/Cardiovascular%20inference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="sds-Zr7iZ2-l" colab_type="code" colab={}
from sklearn.externals import joblib  # deprecated in scikit-learn >= 0.23; use `import joblib` instead
import pandas as pd
import numpy as np
clf = joblib.load('Cardiomodel.pkl')
# + id="FehWom3CbKt0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 337} outputId="640e27b1-3597-4ae8-b7bd-1ce7f6abca74"
#inp = list(map(float, [18393, 2, 168, 62.0, 110, 80, 1, 1, 0, 0, 1]))
inp = list(map(float, '20228 1 156 85.0 140 90 3 1 0 0 1'.split()))  # numeric features, not strings
inp = np.array(inp).reshape((1, -1))
output = clf.predict(inp)
# Source notebook: Cardiovascular inference.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
########################### Reflection ###########################
# The current pipeline uses all the helper functions initially provided. I also
# implemented a few functions such as load_image() and save_image().
# Could not complete the line-averaging part; I will work on it a bit more.
# +
#importing some useful packages
from os import listdir
from os.path import isfile, join
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
# %matplotlib inline
# -
def load_image(image_path):
"""load the image from the given path."""
return mpimg.imread(image_path)
def save_image(dir_name, name, img):
"""save the image in the directory"""
#cv2.imwrite(join(dir_name, name), img)
plt.imsave(join(dir_name, name), img)
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
# +
def draw_lines(img, lines, color=[255, 0, 0], thickness=9):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# print ("its a new image")
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
# left_slope = []
# right_slope = []
# for line in lines:
# for x1,y1,x2,y2 in line:
# m,b = np.polyfit(np.array([x1,x2]),np.array([y1,y2]),1)
# if m < 0:
# left_slope.append(m)
# elif m > 0:
# right_slope.append(m)
# print (m,b,line)
# print ("This is the end")
# print (sum(right_slope)/(len(right_slope)+1))
# print (sum(left_slope)/(len(left_slope)+1))
# -
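The line-averaging step the reflection marks as unfinished could be sketched like this — a minimal sketch, not part of the original pipeline (`average_lane_lines` and its `min_abs_slope` threshold are hypothetical names): separate segments by slope sign, fit x as a function of y per side with `np.polyfit`, then extrapolate to the top and bottom of the lane region.

```python
import numpy as np

def average_lane_lines(lines, y_bottom, y_top, min_abs_slope=0.3):
    """Separate Hough segments by slope sign, fit one straight line per side
    (x as a function of y), and extrapolate between y_top and y_bottom.
    Returns [[x1, y1, x2, y2], ...] with up to two averaged lines."""
    left_pts, right_pts = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue                      # skip vertical segments
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < min_abs_slope:
                continue                      # drop near-horizontal noise
            bucket = left_pts if slope < 0 else right_pts
            bucket.extend([(x1, y1), (x2, y2)])
    averaged = []
    for pts in (left_pts, right_pts):
        if len(pts) < 2:
            continue
        xs, ys = zip(*pts)
        m, b = np.polyfit(ys, xs, 1)          # x = m*y + b
        averaged.append([int(round(m * y_bottom + b)), y_bottom,
                         int(round(m * y_top + b)), y_top])
    return averaged
```

The result can be fed to `draw_lines` by wrapping each entry as a one-element list, e.g. `draw_lines(img, [[l] for l in averaged])`.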
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# +
# Python 3 has support for cool math symbols.
def weighted_img(edges, img, initial_img, α=0.8, β=1., λ=0.):
    """
    `initial_img` is the output of hough_lines(): a blank (all black) image
    with lines drawn on it. `img` is the original image before any processing.
    (`edges` is currently unused; it is kept for call-site compatibility.)
    The result image is computed as follows:
        initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, λ)
# -
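The blend that `weighted_img` delegates to `cv2.addWeighted` is just the per-pixel formula `initial_img * α + img * β + λ`. A quick numeric check of that formula in plain NumPy (synthetic 2×2 "images"; this mirrors the arithmetic, ignoring OpenCV's uint8 saturation):

```python
import numpy as np

base = np.full((2, 2), 100.0)    # stands in for the original frame
drawn = np.full((2, 2), 50.0)    # stands in for the drawn-lines image
alpha, beta, lam = 0.8, 1.0, 0.0
# per-pixel blend: initial_img * α + img * β + λ
blended = drawn * alpha + base * beta + lam
# 50*0.8 + 100*1.0 + 0 = 140 at every pixel
assert (blended == 140.0).all()
```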
def process_img(image_path):
    # load from disk, then run the same pipeline used for video frames
    image = load_image(image_path)
    return process_image(image)
def process_image(image):
gray_image = grayscale(image)
blur_gray = gaussian_blur(gray_image, 5)
edges = canny(blur_gray, 50, 150)
imshape = image.shape
vertices = np.array([[(80,imshape[0]),(450, 310),(imshape[1]-470, 310), (imshape[1]-70,imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
lines = hough_lines(masked_edges, 1, np.pi/180, 30, 100, 160)
final_image = weighted_img(edges,image,lines)
return final_image
def process_images(input_dir,output_dir):
names = listdir(input_dir)
for name in names:
final = process_img(input_dir+name)
save_image(output_dir, "final_"+name, final)
process_images("test_images/","test_images_output")
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
white_output = 'test_videos_output/solidWhiteRight.mp4'
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
clip2 = VideoFileClip("test_videos/solidYellowLeft.mp4")
yellow_clip = clip2.fl_image(process_image) #NOTE: this function expects color images!!
# %time yellow_clip.write_videofile(yellow_output, audio=False)
| Basic_Lane_Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp softwares.inhouse.r
# -
# # inhouserscript
# +
# export
import os
from pybiotools4p.softwares.base import Base, modify_cmd
# +
# export
class Inhouerscript(Base):
def __init__(self, software, fd):
super(Inhouerscript, self).__init__(software)
self._default = fd
if '/' in software:
bin = os.path.dirname(software) + '/'
else:
bin = ''
self._deseq2 = bin + 'deseq2.R'
def cmd_version(self):
        '''
        :return: a shell command that echoes this wrapper's repr and runs the software
        '''
return 'echo {repr} ;{software}'.format(
repr=self.__repr__(),
software=self._software
)
@modify_cmd
    def cmd_deg_deseq2(self, count, clinical, control, prefix):
        '''
        Build the deseq2.R command line from the count matrix, clinical
        table, control group label, and output prefix.
        '''
return r'''
{software} {count} \
{clinical} \
{control} \
{prefix}
'''.format(
software=self._deseq2,
**locals()
)
def __repr__(self):
return 'inhouserscript:' + self._software
def __str__(self):
return '''
In-house Rscripts rep, https://github.com/btrspg/bioinfo-rscripts
'''
# +
import configparser
config=configparser.ConfigParser()
config.read('pybiotools4p/default.ini')
ihr=Inhouerscript(config['software']['ihr'],config['hisat2'])
# -
ihr
print(ihr)
ihr.cmd_version()
print(ihr.cmd_deg_deseq2('count','clinical','control','prefix'))
| 08_inhouse_rscript.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="tuMCRm56ml43" colab_type="text"
# ## Installing Required Packages
# + id="PE_GFGqMnlMC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 730} executionInfo={"status": "ok", "timestamp": 1596806842591, "user_tz": -270, "elapsed": 22694, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02059390058057528509"}} outputId="58d18c31-bc0d-4841-e570-f86d27783134"
# !pip install SpeechRecognition wavio ffmpeg-python gtts
# !mkdir sounds
# !wget https://raw.githubusercontent.com/myprogrammerpersonality/Voice_Recognition/master/Template.csv
# + [markdown] id="3Aoz9iOqm13b" colab_type="text"
# ## Import Packages and Define Functions
# + id="JTN2053injmU" colab_type="code" colab={}
import scipy
from scipy.io.wavfile import read as wav_read
import io
from IPython.display import HTML, Audio, clear_output
from google.colab.output import eval_js
from base64 import b64decode
import numpy as np
import ffmpeg
import IPython.display as ipd
from IPython.display import Javascript
import speech_recognition as sr
import matplotlib.pyplot as plt
import time
import wavio
import pandas as pd
from gtts import gTTS #Import Google Text to Speech
RECORD = """
const sleep = time => new Promise(resolve => setTimeout(resolve, time))
const b2text = blob => new Promise(resolve => {
const reader = new FileReader()
reader.onloadend = e => resolve(e.srcElement.result)
reader.readAsDataURL(blob)
})
var record = time => new Promise(async resolve => {
stream = await navigator.mediaDevices.getUserMedia({ audio: true })
recorder = new MediaRecorder(stream)
chunks = []
recorder.ondataavailable = e => chunks.push(e.data)
recorder.start()
await sleep(time)
recorder.onstop = async ()=>{
blob = new Blob(chunks)
text = await b2text(blob)
resolve(text)
}
recorder.stop()
})
"""
output_html = """<style>
fieldset {{
font-family: sans-serif;
border: 5px solid #1F497D;
background: #ddd;
border-radius: 5px;
padding: 15px;
}}
fieldset legend {{
background: #1F497D;
color: #fff;
padding: 5px 10px ;
font-size: 32px;
border-radius: 10px;
box-shadow: 0 0 0 5px #ddd;
margin-left: 20px;
}}
</style>
<section style="margin: 15px;">
<fieldset style="min-height:100px;">
<legend><b> {} </b> </legend>
<label> <h1 style="font-size: 80px;float: top;">{} ==> Sample {}</h1><br/> </label>
</fieldset>"""
def record(sec=3, file_name = 'temp.wav', verbose=False):
if verbose: print('Start Recording :')
display(Javascript(RECORD))
s = eval_js('record(%d)' % (sec*1000))
b = b64decode(s.split(',')[1])
process = (ffmpeg
.input('pipe:0')
.output('pipe:1', format='wav')
.run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True, quiet=True, overwrite_output=True))
output, err = process.communicate(input=b)
    riff_chunk_size = len(output) - 8
    # Patch bytes 4:8 of proc.stdout with the actual size of the RIFF chunk,
    # encoded as four little-endian bytes.
    riff = output[:4] + riff_chunk_size.to_bytes(4, 'little') + output[8:]
sr, audio = wav_read(io.BytesIO(riff))
if verbose: print('Recording Finished')
return audio, sr
def hearing(step_sec = 5, key_word = 'go', stop_word = 'stop', verbose = False):
key = key_word.lower()
key_stop = stop_word.lower()
num = 0
while True:
num += 1
if verbose: print(f'Round{num}')
# Part 1: Recording
t1 = time.time()
audio, sound_rate = record(sec=step_sec, verbose=False)
# Part 2: Saving Audio File
t2 = time.time()
wavio.write('sound.wav', audio, sound_rate)
# Part 3: Try to Recognize and Check for Key_Word
t3 = time.time()
r = sr.Recognizer()
        with sr.AudioFile('sound.wav') as source:  # sr.WavFile is a deprecated alias
audio = r.record(source)
        try:
            text = r.recognize_google(audio)
            text = text.lower()
            if verbose >= 2: print(f'You Said :{text}')
            if key in text:
                return 1
            if key_stop in text:
                return 0
        except (sr.UnknownValueError, sr.RequestError):
            # speech was unintelligible or the API was unreachable; keep listening
            pass
if verbose:print(f'Part 1 {t2-t1}')
if verbose:print(f'Part 2 {t3-t2}')
if verbose:print(f'Part 3 {time.time()-t3}')
# + [markdown] id="b8iOQBUYnA5z" colab_type="text"
# ## Text to Speech
# + id="fk4b9fLZeH2H" colab_type="code" colab={}
data = pd.read_csv('Template.csv')
main_dict = {}
for name in data['Metabolite']:
vols = list(data[data['Metabolite']==name].iloc[:,1:].values[0])
main_dict[name] = [vols, sorted(range(len(vols)), key=lambda k: vols[k])]
for name in main_dict.keys():
    tts = gTTS('Start Aliquoting {}'.format(name)) # provide the string to convert to speech
    # note: gTTS writes MP3 data even with a .wav extension; the notebook's
    # Audio player copes with it either way
    tts.save('sounds/{}.wav'.format(name))
    for i, vol in enumerate(main_dict[name][0]):
        tts = gTTS('{} in Sample {}'.format(vol, i + 1))
        tts.save('sounds/{}_{}.wav'.format(name, i))
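The `sorted(range(len(vols)), key=lambda k: vols[k])` expression stored in `main_dict` above is an argsort: it yields the sample indices in increasing order of volume. The pattern in isolation:

```python
vols = [30.0, 10.0, 20.0]
# indices of vols, ordered by increasing value (an argsort)
order = sorted(range(len(vols)), key=lambda k: vols[k])
# replaying vols in that order yields the sorted volumes
ranked = [vols[k] for k in order]
```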
# + [markdown] id="ZE0QK82anFxM" colab_type="text"
# ## Main Part
# + id="eKtnGBQKgHY-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 629} executionInfo={"status": "ok", "timestamp": 1596808904483, "user_tz": -270, "elapsed": 65383, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02059390058057528509"}} outputId="91043c14-f2d3-4d01-a4b4-42b1e1f372ca"
# sorted version within each metabolite
for name in main_dict.keys():
print('Start Aliquoting ', name)
display(Audio('sounds/{}.wav'.format(name), autoplay=True))
display(HTML(output_html.format(name, "#", "#")))
time.sleep(4)
clear_output(wait=True)
time.sleep(2)
for i in range(len(main_dict[name][0])):
display(Audio('sounds/{}_{}.wav'.format(name, main_dict[name][1][i]), autoplay=True))
display(HTML(output_html.format(name, main_dict[name][0][main_dict[name][1][i]], main_dict[name][1][i]+1)))
if hearing(step_sec=5, key_word='go', stop_word='stop', verbose=2):
pass
else:
clear_output(wait=True)
break
clear_output(wait=True)
| My_Try_Stream v4.1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise November, 4<sup>th</sup> 2019 : Predicting and Recommending Flight Ticket Purchases for a Given Time and Destination, Based on Delay Times
# Author : <NAME>
# The end goal of this workflow is to recommend flights that are not delayed to customers, for December 2018.
#
# The stages of this workflow are:
#
# 1. Preparation
# 2. EDA and Feature Engineering
# 3. Building the Recommendation Algorithm
# 4. Modelling Delay for December
# 5. A Model-Based Recommendation Algorithm
# ## Preparation
#
# The preparation stage consists of:
#
# a. Importing modules and data
# <br>b. Data overview
# <br>c. Data preparation
# ### Importing Modules and Data
# +
# import the required modules
# core modules for data wrangling, computation, and directory handling
import pandas as pd
import numpy as np
from scipy import stats
import os
os.chdir('D:/Titip')
from sklearn.utils.extmath import cartesian
# modules for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# modules for web scraping
from bs4 import BeautifulSoup as bs
import requests
import time
import lxml.html as lh
# modules for text manipulation
import re
from collections import defaultdict
# modules for date manipulation
from datetime import datetime, timedelta
# modules for scaling and clustering
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
# modules for modelling
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn import metrics
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
import statsmodels.formula.api as smf
# +
# load the data
df_train = pd.read_csv('./training_dataset.csv')
df_test = pd.read_csv('./test_dataset.csv')
# -
# ### Data Overview
# +
def resumetable(df):
print(f"Dataset Shape: {df.shape}")
summary = pd.DataFrame(df.dtypes,columns=['dtypes'])
summary = summary.reset_index()
summary['Name'] = summary['index']
summary = summary[['Name','dtypes']]
summary['Perc Missing'] = df.isnull().sum().values / len(df) * 100
summary['Uniques'] = df.nunique().values
summary['First Value'] = df.loc[0].values
summary['Second Value'] = df.loc[1].values
summary['Third Value'] = df.loc[2].values
for name in summary['Name'].value_counts().index:
summary.loc[summary['Name'] == name, 'Entropy'] = round(stats.entropy(df[name].value_counts(normalize=True), base=2),2)
return summary
print('Summary of the training data:')
display(resumetable(df_train))
print('Summary of the test data:')
display(resumetable(df_test))
# -
# ### Data Preparation
# +
# combine the training and test data
df_train['status'] = 'train'
df_test['status'] = 'test'
df_comb = pd.concat([df_train, df_test], axis=0)
del df_comb['id']
# drop arrival_airport_gate: roughly 90% of its values are missing
del df_comb['arrival_airport_gate']
# drop columns that contain only a single unique value
df_comb.drop(['departure_airport_country','departure_airport_region','departure_airport_timezone'], axis = 1, inplace = True)
# rename the flight_equipment_iata column
df_comb = df_comb.rename(columns = {'flight_equipment_iata' : 'flight_equipment_data'})
# airline is already represented by airline_name, so it can be dropped
del df_comb['airline']
# the departure and arrival airport names are already represented by the
# airport codes and cities, so they can be dropped
del df_comb['departure_airport_name']
del df_comb['arrival_airport_name']
# drop flight_duration: it does not make sense as a predictor
# (the scheduled flight duration is the sensible choice)
del df_comb['flight_duration']
# lower-case the destination city to avoid case-sensitivity issues
df_comb['arrival_airport_city'] = df_comb['arrival_airport_city'].str.lower()
# -
# ## Exploratory Data Analysis and Feature Engineering
#
# The features to be built here are:
#
# 1. Flight type (premium / regular)
# 2. Grouping rare airline_name values into "Others"
# 3. Flight date, day, and hour, plus the scheduled flight duration
# 4. Flight distance (web scraping)
# 5. Splitting the gate into its letter and number parts
# 6. Dropping the terminal data
# 7. Specific region
# 8. Aircraft type
# 9. Holiday flag
# 10. Time-of-day buckets (morning, midday, afternoon, evening, early morning)
# 11. Distance and flight-duration clusters
# 12. Domestic (within Malaysia) vs international flight
# 13. Aircraft speed
# 14. Number and density of flights per time slot per airport (scraping)
# 15. Weather (external data)
# +
# according to https://en.wikipedia.org/wiki/Flight_number,
# the flight number carries meaning, so only its digits are kept
# and the flight type is then derived from the number
df_comb['number'] = df_comb.number.str.extract(r'(\d+)')
df_comb['number'] = df_comb.number.astype(int)
def label(number):
    # numbers below 1000 are typically the premium routes
    if number < 1000:
        value = 'Premium'
    else:
        value = 'Regular'
    return value
df_comb['number'] = df_comb['number'].apply(label)
df_comb = df_comb.rename(columns={'number': 'type'})
df_comb['type'] = df_comb['type'].str.lower()
# +
# there are many airline names; visualise the mean delay for each airline_name
d1 = df_comb[(df_comb['status'] == 'train')].groupby('airline_name')['delay'].agg('mean').reset_index()
d2 = df_comb[(df_comb['status'] == 'train')]['airline_name'].value_counts().reset_index().rename(columns = {'index' : 'airline_name','airline_name':'count'})
d3 = pd.merge(d1,d2, on = 'airline_name', how = 'inner')
d3
# +
# scatter plot of count against delay
plt.figure(figsize = (10,6))
sns.scatterplot(x = 'count', y = 'delay', data=d3)
# -
display(d3[d3['delay'] > 25].sort_values(by = 'count', ascending = False))
# The smaller the number of records for a given airline name, the higher its delay tends to be. We will set a threshold and relabel the airline names that fall under it.
# +
# relabel the airlines identified above as Others
others = d3[d3['delay'] > 25].sort_values(by = 'count', ascending = False)['airline_name'].tolist()
df_comb.loc[df_comb['airline_name'].isin(others),'airline_name'] = 'Others'
# +
# extract the date, the day of the week, and the hour from
# scheduled_departure_time and scheduled_arrival_time; also take the difference
# between scheduled arrival and departure as the true scheduled flight duration
df_comb['date_dept'] = df_comb.scheduled_departure_time.apply(lambda x : x.split()[0])
df_comb['day_dept'] = [t.dayofweek for t in pd.DatetimeIndex(df_comb.scheduled_departure_time)]
df_comb['hour_dept'] = [t.hour for t in pd.DatetimeIndex(df_comb.scheduled_departure_time)]
df_comb['day_arr'] = [t.dayofweek for t in pd.DatetimeIndex(df_comb.scheduled_arrival_time)]
df_comb['hour_arr'] = [t.hour for t in pd.DatetimeIndex(df_comb.scheduled_arrival_time)]
df_comb['scheduled_duration'] = (pd.to_datetime(df_comb['scheduled_arrival_time']) - pd.to_datetime(df_comb['scheduled_departure_time']))
# total_seconds() keeps any day component that .dt.components would split out
df_comb['scheduled_duration'] = df_comb['scheduled_duration'].dt.total_seconds() / 60
df_comb.drop(['scheduled_departure_time','scheduled_arrival_time'], axis = 1, inplace = True)
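One detail worth knowing about converting a timedelta to minutes: `.dt.components` splits days out into a separate column, while `.dt.total_seconds()` keeps the full span. A small check with synthetic timestamps:

```python
import pandas as pd

arr = pd.to_datetime(pd.Series(["2018-10-02 02:30"]))
dep = pd.to_datetime(pd.Series(["2018-10-01 23:00"]))
dur = arr - dep                           # a 3 h 30 min timedelta
# total_seconds() keeps the whole duration, days included
minutes = dur.dt.total_seconds() / 60
# components splits the span into days / hours / minutes / ... columns
hours_part = dur.dt.components.loc[0, "hours"]
```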
# +
# scrape the distance for every airport pair in the data from https://www.prokerala.com/ (web scraping)
df_comb['departure_airport_code'] = df_comb['departure_airport_code'].str.lower()
df_comb['arrival_airport_code'] = df_comb['arrival_airport_code'].str.lower()
df_dist = df_comb[['departure_airport_code','arrival_airport_code']].drop_duplicates().reset_index(drop = True)
df_dept = df_dist['departure_airport_code'].tolist()
df_arr = df_dist['arrival_airport_code'].tolist()
distances = []
for i in range(len(df_dept)):
dept_code = str(df_dept[i])
arr_code = str(df_arr[i])
url = f"https://www.prokerala.com/travel/airports/distance/from-{dept_code}/to-{arr_code}/"
response = requests.get(url)
soup = bs(response.content, 'html.parser')
imp = [element.text for element in soup.find_all('div', {'class': 'tc'})]
    if imp:
        distance = re.findall(r"(\d+(\.\d+)?)", imp[1])[1][0]
        distances.append(float(distance))
    else:
        distances.append(np.nan)
time.sleep(2)
df_dist['distances'] = pd.DataFrame(distances)
df_comb = pd.merge(df_comb, df_dist, on = ['departure_airport_code','arrival_airport_code'], how = 'inner')
df_comb = df_comb.drop(['departure_airport_code','arrival_airport_code'], axis = 1)
# +
# split departure_airport_gate into its letter and number parts
df_comb['dept_gate_num'] = df_comb['departure_airport_gate'].str.replace(r'([A-Z]+)', '', regex=True)
df_comb['dept_gate_alpha'] = df_comb['departure_airport_gate'].str.extract('([A-Z]+)')
del df_comb['departure_airport_gate']
# +
# drop departure_airport_terminal and arrival_airport_terminal: their values
# follow no pattern and the information is represented by other predictors
del df_comb['departure_airport_terminal']
del df_comb['arrival_airport_terminal']
# +
# extract the specific region from arrival_airport_timezone, then drop the
# region and timezone columns, since the specific region now represents them
df_comb['arrival_specific_region'] = df_comb['arrival_airport_timezone'].apply(lambda x : x.split("/")[1])
del df_comb['arrival_airport_region']
del df_comb['arrival_airport_timezone']
# +
# extract the aircraft type from flight_equipment_name
df_comb['type_of_plane'] = df_comb.flight_equipment_name.apply(lambda x : str(x).split(" ")[0])
del df_comb['flight_equipment_name']
# +
# flag whether the departure date falls around a holiday
# the file below holds the Malaysian holiday dates for 2018
holiday = pd.read_csv('C:/Users/rangga.pertama/Downloads/holiday_2018.csv')
holiday['month'] = pd.DatetimeIndex(holiday['Date']).month
holiday = holiday[holiday['month'].isin([10,11,12])]
# +
# continue building the around-holiday flag
holiday['Date'] = pd.to_datetime(holiday['Date'])
holiday['date_before'] = holiday['Date'] - timedelta(days = 1)
holiday['date_after'] = holiday['Date'] + timedelta(days = 1)
list_date = holiday['Date'].astype(str).tolist()
list_date_before = holiday['date_before'].astype(str).tolist()
list_date_after = holiday['date_after'].astype(str).tolist()
def is_around_holiday(date):
    # a distinct name avoids shadowing the `holiday` DataFrame above
    if (date in list_date) or (date in list_date_before) or (date in list_date_after):
        value = 'yes'
    else:
        value = 'no'
    return value
df_comb['around_holiday'] = df_comb["date_dept"].apply(is_around_holiday)
# +
# bucket the departure hour into early morning, morning, midday, afternoon,
# and evening (the category labels stay in Indonesian because later cells
# match on them); a distinct function name avoids shadowing the time module
def time_of_day(x):
    if 0 <= x <= 4:
        value = 'dinihari'   # early morning
    elif 5 <= x <= 10:
        value = 'pagi'       # morning
    elif 11 <= x <= 15:
        value = 'siang'      # midday
    elif 16 <= x <= 20:
        value = 'sore'       # afternoon
    else:
        value = 'malam'      # evening (hours 21-23)
    return value
df_comb['cat_time_dept'] = df_comb['hour_dept'].apply(time_of_day)
# +
# cluster the distances with k-means
# distances still contain NaNs, so impute them first with the mean distance of the arrival_specific_region
display(df_comb.loc[df_comb['distances'].isnull()]['arrival_specific_region'].value_counts())
display(df_comb.groupby(['arrival_specific_region']).agg('mean')['distances'])
# +
# finish the imputation, then cluster
df_comb.loc[(df_comb['distances'].isnull()) & (df_comb['arrival_specific_region'] == 'Kuala_Lumpur'),'distances'] = 677.677027
df_comb.loc[(df_comb['distances'].isnull()) & (df_comb['arrival_specific_region'] == 'Makassar'), 'distances'] = 1969.265502
df_comb.loc[(df_comb['distances'].isnull()) & (df_comb['arrival_specific_region'] == 'Tehran'),'distances'] = 5875.555776
df_comb.loc[(df_comb['distances'].isnull()) & (df_comb['arrival_specific_region'] == 'Jakarta'),'distances'] = 2940.378251
df_comb['distances'] = MinMaxScaler().fit_transform(df_comb[['distances']])
kmeans = KMeans(n_clusters=3, max_iter=600, algorithm = 'auto')
kmeans.fit(df_comb[['distances']])
idx = np.argsort(kmeans.cluster_centers_.sum(axis=1))
lut = np.zeros_like(idx)
lut[idx] = np.arange(3)
df_comb['distances_scaled'] = pd.DataFrame(lut[kmeans.labels_])
# +
# cluster scheduled_duration with k-means
# scheduled_duration still contains NaNs, so impute it first
df_comb.loc[(df_comb['scheduled_duration'].isnull()) & (df_comb['arrival_airport_city'] == 'penang'),'scheduled_duration'] = df_comb[df_comb['arrival_airport_city']=='penang']['scheduled_duration'].mean()
df_comb.loc[(df_comb['scheduled_duration'].isnull()) & (df_comb['arrival_airport_city'] == 'cairns'), 'scheduled_duration'] = df_comb[df_comb['arrival_airport_city']=='cairns']['scheduled_duration'].mean()
df_comb.loc[(df_comb['scheduled_duration'].isnull()) & (df_comb['arrival_airport_city'] == 'guam'), 'scheduled_duration'] = df_comb['scheduled_duration'].median()
df_comb['scheduled_duration'] = MinMaxScaler().fit_transform(df_comb[['scheduled_duration']])
kmeans = KMeans(n_clusters=3, max_iter=600, algorithm = 'auto')
kmeans.fit(df_comb[['scheduled_duration']])
idx = np.argsort(kmeans.cluster_centers_.sum(axis=1))
lut = np.zeros_like(idx)
lut[idx] = np.arange(3)
df_comb['duration_scaled'] = pd.DataFrame(lut[kmeans.labels_])
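The `argsort`/`lut` pair used after each `KMeans` fit relabels the arbitrary cluster ids so that 0 becomes the cluster with the smallest center and 2 the largest. The pattern in isolation, with synthetic centers and labels:

```python
import numpy as np

# pretend k-means returned these 1-D centers and labels
centers = np.array([[0.9], [0.1], [0.5]])    # cluster 0 is largest, cluster 1 smallest
labels = np.array([0, 1, 2, 1, 0])
idx = np.argsort(centers.sum(axis=1))        # cluster ids ordered by center value: [1, 2, 0]
lut = np.zeros_like(idx)
lut[idx] = np.arange(3)                      # lut[old_id] -> rank of that cluster's center
relabelled = lut[labels]                     # ids now ordered: 0 = smallest center
```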
# +
# derive the expected speed of each flight from its distance and scheduled duration
df_comb['speed'] = df_comb['distances'] / df_comb['scheduled_duration']
# +
# count the flights, and their density, per time-of-day category per departure day and airport
flight = df_comb.groupby(['date_dept','departure_airport_city','cat_time_dept'])['airline_name'].count().reset_index().rename(columns = {'airline_name':'cnt_flight'})
def dens(flight) :
if flight['cat_time_dept'] == 'dinihari' :
values = flight['cnt_flight'] / 4
elif flight['cat_time_dept'] == 'pagi' :
values = flight['cnt_flight'] / 5
elif flight['cat_time_dept'] == 'siang' :
values = flight['cnt_flight'] / 5
elif flight['cat_time_dept'] == 'sore' :
values = flight['cnt_flight'] / 4
elif flight['cat_time_dept'] == 'malam' :
values = flight['cnt_flight'] / 2
return values
flight['dens_flight'] = flight.apply(dens,axis = 1)
df_comb = pd.merge(df_comb, flight, on = ['date_dept','departure_airport_city','cat_time_dept'], how = 'inner')
# +
# membuat pengkategorian penerbangan keluar dari malaysia atau di dalam malaysia
def inout(df_comb) :
if df_comb['arrival_airport_country'] == 'MY' :
value = 1
elif df_comb['arrival_airport_country'] != 'MY' :
value = 0
return value
df_comb['in_MY'] = df_comb.apply(inout, axis = 1)
# +
# scrape the flight_equipment_iata data from http://www.flugzeuginfo.net/table_accodes_iata_en.php
url = "http://www.flugzeuginfo.net/table_accodes_iata_en.php"
page = requests.get(url)
doc = lh.fromstring(page.content)
tr_elements = doc.xpath('//tr')
# +
# check the number of columns in each of the first few tr_elements
display([len(T) for T in tr_elements[:12]])
# then collect the data row by row
col = []
i = 0
for t in tr_elements[0]:
i+=1
name=t.text_content()
col.append((name,[]))
display(col)
for j in range(1,len(tr_elements)):
T=tr_elements[j]
if len(T)!=4:
break
i=0
for t in T.iterchildren():
data=t.text_content()
if i>0:
try:
data=int(data)
except:
pass
col[i][1].append(data)
i+=1
display([len(C) for (title,C) in col])
Dict={title:column for (title,column) in col}
df_iata=pd.DataFrame(Dict)
# -
df_iata = df_iata.rename(columns = {'IATA':'flight_equipment_data'})
df_comb = pd.merge(df_comb, df_iata[['flight_equipment_data','Wake']], on = 'flight_equipment_data', how = 'inner')
# +
# load the weather data taken from https://rp5.ru/Weather_in_the_world
df_weather = pd.read_csv('D:/Titip/weather_data_comp.csv')
# -
df_weather['WW'] = df_weather['WW'].str.strip()
df_weather['WW'] = df_weather['WW'].map({'Haze.' : 'Haze', 'Clouds generally dissolving or becoming less developed.' : 'Cloudy',
'State of sky on the whole unchanged.' : 'Normal',
'Clouds generally forming or developing.' : 'Cloudy',
'Lightning visible, no thunder heard.' : 'Lightning',
'Mist.' : 'Fog',
'Rain, not freezing, continuous, slight at time of observation.' : 'Rain',
'Thunderstorm, slight or moderate, without hail, but with rain and/or snow at time of observation.' : 'Thunderstorm',
'Precipitation within sight, reaching the ground or the surface of the sea, but distant, i.e. estimated to be more than 5 km from the station.' : 'Fog',
'Rain (not freezing) not falling as shower(s).' : 'Rain',
'Thunderstorm, but no precipitation at the time of observation.' : 'Thunderstorm',
'Rain, not freezing, intermittent, slight at time of observation.' : 'Rain',
'Thunderstorm (with or without precipitation).' : 'Thunderstorm',
'Slight rain at time of observation. Thunderstorm during the preceding hour but not at time of observation.' : 'Rain',
'Shower(s) of rain.' : 'Rain',
'Rain shower(s), slight.' : 'Rain',
'Rain, not freezing, continuous, moderate at time of observation.' : 'Rain',
'Rain shower(s), moderate or heavy.' : 'Rain',
'Fog or ice fog.' : 'Fog',
'Rain, not freezing, continuous, heavy at time of observation.' : 'Rain',
'Rain, not freezing, intermittent, moderate at time of observation.' : 'Rain',
'Moderate or heavy rain at time of observation. Thunderstorm during the preceding hour but not at time of observation.' : 'Rain',
'Thunderstorm, heavy, without hail, but with rain and/or snow at time of observation.' : 'Thunderstorm',
'Fog or ice fog, sky visible, has begun or has become thicker during the preceding hour.' : "Fog",
'Rain, not freezing, intermittent, heavy at time of observation.' : 'Rain',
'Fog or ice fog, sky visible (has become thinner during the preceding hour).' : 'Fog',
'Fog or ice fog, sky visible, no appreciable change during the preceding hour.' : 'Fog',
'Drizzle, not freezing, intermittent, slight at time of observation.' : 'Fog',
'Rain shower(s), violent.' : 'Rain'})
# +
# categorise the object-typed weather columns
df_weather['N'] = df_weather['N'].str.rstrip()
df_weather['N'] = df_weather['N'].map({'90 or more, but not 100%' : 'high', '100%.' : 'very high', '70 – 80%.' : 'medium',
'60%.' : 'low'})
# -
df_weather['DD'] = df_weather['DD'].str[22:]
del df_weather['Cl']
df_weather['H'] = df_weather['H'].str.rstrip()
df_weather['H'] = df_weather['H'].map({'Less than 50' : 'very_low', '50-100' : 'low','100-200' : 'medium',
'200-300' : 'high', '300-600' : 'very_high', '600-1000' : 'extremely_high'})
del df_weather['Nh']
def merge_hour(x):
    # map the departure hour to the nearest 3-hourly weather observation slot;
    # a distinct function name avoids shadowing the time module
    if x in [23, 22, 21]:
        value = 23
    elif x in [20, 19, 18]:
        value = 20
    elif x in [17, 16, 15]:
        value = 17
    elif x in [14, 13, 12]:
        value = 14
    elif x in [11, 10, 9]:
        value = 11
    elif x in [8, 7, 6]:
        value = 8
    elif x in [5, 4, 3]:
        value = 5
    else:
        value = 2   # hours 2, 1, 0
    return value
df_weather
df_comb['time_merg'] = df_comb['hour_dept'].apply(merge_hour)
df_weather = df_weather.rename(columns = {'time' : 'time_merg','city' : 'departure_airport_city','date' : 'date_dept'})
df_weather
df_comb = pd.merge(df_comb, df_weather, on = ['date_dept','time_merg','departure_airport_city'], how = 'inner')
del df_comb['time_merg']
# ## Building the Recommendation Algorithm
#
# Here we build a recommendation algorithm that picks the flight with the shortest delay for each departure date, departure hour, destination city, and flight type. It uses the training data (October and November), and takes the departure date, departure hour, destination city, and type as inputs.
#
# The technique is essentially aggregation: the mean of the delays observed under the given input conditions. If we only took the records that exactly match the input, however, the aggregate would rest on few rows and might not be representative. So we first look for other variables related to the date, hour, and destination city, and visualise their relationship with delay.
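The aggregation idea can be sketched on toy data. Column names follow the notebook; the broadened filter (same time-of-day category and specific region) and the ranking logic here are an illustrative assumption, not the final algorithm:

```python
import pandas as pd

toy = pd.DataFrame({
    "airline_name": ["A", "A", "B", "B", "C"],
    "cat_time_dept": ["pagi", "pagi", "pagi", "sore", "pagi"],
    "arrival_specific_region": ["Jakarta"] * 5,
    "delay": [30.0, 20.0, 5.0, 40.0, 15.0],
})

def recommend(df, cat_time, region, top_n=2):
    """Rank airlines by mean delay over flights matching the broadened
    conditions: same time-of-day category and same specific region."""
    match = df[(df["cat_time_dept"] == cat_time)
               & (df["arrival_specific_region"] == region)]
    return (match.groupby("airline_name")["delay"].mean()
                 .sort_values().head(top_n))

best = recommend(toy, "pagi", "Jakarta")
```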
df_train = df_comb[df_comb['status'] == 'train']
df_test = df_comb[df_comb['status'] == 'test']
# +
# first remove delay values that are extreme outliers (more than 6 IQRs beyond the quartiles)
iqr_1 = np.quantile(df_train['delay'], 0.25)
iqr_3 = np.quantile(df_train['delay'], 0.75)
df_train_wo = df_train[(df_train['delay'] < iqr_3 + 6*(iqr_3 - iqr_1)) & (df_train['delay'] > iqr_1 - 6*(iqr_3 - iqr_1))]
del df_train_wo['status']
del df_test['status']
# +
# around_holiday is the variable related to the date;
# compare the delay between 'yes' and 'no' on around_holiday for each flight type
plt.figure(figsize = (10,6))
ax = sns.boxplot(x = 'around_holiday', y = 'delay', hue = 'type',
data = df_train_wo)
# +
# the date is also tied to the day of the week, so check how that relates to delay as well
plt.figure(figsize=(10,6))
ax = sns.violinplot(x="day_dept", y="delay", hue="type",
data=df_train_wo, palette="Set2", split=True, scale="count")
# -
# We can see that the day of the week and whether the date falls around a holiday have no significant effect on the delay level. So for a given date, there is no need to restrict to rows with the same day of the week or the same around-holiday status.
# +
# now check the variable related to the hour, namely the time-of-day category
plt.figure(figsize = (10,6))
sns.pointplot(x="cat_time_dept", y="delay", hue="type", data=df_train_wo,
palette={"regular": "green", "premium": "red"},
markers=["*", "o"], linestyles=["-", "--"])
# -
# The differences between time-of-day categories turn out to be quite significant, so we can take all rows whose time category matches the input.
# +
# check the variables related to the destination city
# for example arrival_specific_region
plt.figure(figsize = (20,6))
ax = sns.boxplot(x = 'arrival_specific_region', y = 'delay', hue = 'type',
data = df_train_wo.sort_values(by = ['delay'], ascending = False).head(50000))
# -
# Each specific region differs significantly (judging from the visualization), so we can take rows whose specific region matches the input destination city.
# +
# compare delay across distance categories
plt.figure(figsize = (10,6))
ax = sns.boxenplot(x = 'distances_scaled', y = 'delay', hue = 'type',
data = df_train_wo)
# -
plt.figure(figsize = (10,6))
ax = sns.boxenplot(x = 'duration_scaled', y = 'delay', hue = 'type',
data = df_train_wo)
# Delays turn out to be frequent on short-distance flights with long durations, so matching the input's distance and duration categories is useful as well.
# +
# look at the relationship between dept_gate_num / dept_gate_alpha and delay
plt.figure(figsize = (10,6))
ax = sns.scatterplot(x="dept_gate_num", y="delay", hue="type",
data=df_train_wo)
# -
plt.figure(figsize = (10,6))
ax = sns.catplot(x="dept_gate_alpha", y="delay", hue="type",
data=df_train_wo)
# There is no pattern between gate and delay. So to find the smallest mean delay for the recommendation, we will only match the time category, specific region, distance, and duration to the input.
# +
# input
berangkat = '2018-10-06' # departure date in year-month-day format
jam = 6 # departure hour
kota_tujuan = 'jakarta' # destination city
jenis = 'regular' # flight type ('regular' or 'premium')
# +
# recommend airlines for the input above:
# airlines with the smallest mean delay, grouped by the hour, region, distance, and duration categories matching the input
df_dum = df_train_wo[(df_train_wo['date_dept'] == berangkat) & (df_train_wo['hour_dept'] == jam) & (df_train_wo['arrival_airport_city'] == kota_tujuan) & (df_train_wo['type'] == jenis)]
cat_jam = df_dum['cat_time_dept'].values[0]
reg_spes = df_dum['arrival_specific_region'].values[0]
jar = df_dum['distances_scaled'].values[0]
dur = df_dum['duration_scaled'].values[0]
df_dum = df_train_wo[(df_train_wo['date_dept'] == berangkat) & (df_train_wo['cat_time_dept'] == cat_jam) & (df_train_wo['distances_scaled'] == jar) & \
(df_train_wo['duration_scaled'] == dur) & (df_train_wo['type'] == jenis)].groupby('airline_name')['delay'].agg('mean').reset_index().sort_values(by = ['delay'], ascending = True).reset_index(drop = True)
# keep only airlines from the shortest-delay list that actually have a flight matching the input
list_air = list(df_dum['airline_name'])
list_air_inp = []
for i in range(len(list_air)):
    name = list_air[i]
    list_dum_dest = list(df_train_wo[(df_train_wo['airline_name'] == name) & (df_train_wo['date_dept'] == berangkat) & (df_train_wo['type'] == jenis) & (df_train_wo['hour_dept'] == jam)]['arrival_airport_city'])
    if kota_tujuan in list_dum_dest:
        list_air_inp.append(name)
df_rec = df_dum[df_dum.airline_name.isin(list_air_inp)].reset_index(drop = True)
# if this errors, there is no departure for the given date, hour, destination city, and flight type
# +
# display the recommendation
df_rec
# -
# In the example above, if we want to fly from Kuala Lumpur to Jakarta on 6 October 2018 at 6 a.m., the airlines recommended by delay time are Etihad Airways first, then Garuda Indonesia, and finally Qatar Airways.
# ## Building the Prediction Models
#
# The models used are linear regression, logistic regression, and decision trees; the metrics are RMSE and adjusted R-squared for regression and accuracy for classification.
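# For reference, the two regression metrics can be written out directly; a sketch in plain NumPy (`p` is the number of predictors):

```python
import numpy as np

def rmse(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def adjusted_r2(y, yhat, p):
    # R-squared penalised for the number of predictors p (n observations)
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    n = len(y)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)
```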
# ### Preparing the Data
# +
# the observations above show that the gate variables are unrelated to delay
df_train_wo.drop(['dept_gate_alpha','dept_gate_num'], axis = 1, inplace = True)
df_test.drop(['dept_gate_alpha','dept_gate_num'], axis = 1, inplace = True)
# date_dept is also better left out: it is a date with too many distinct values and makes little sense as a predictor
del df_train_wo['date_dept']
del df_test['date_dept']
# create a flag for delays longer than 60 minutes
def more(delay):
    if delay > 60:
        value = 1
    else:
        value = 0
    return value
df_train_wo['del_more'] = df_train_wo['delay'].apply(more)
df_test['del_more'] = df_test['delay'].apply(more)
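# The threshold flag above can also be computed vectorised, without `apply`; a sketch:

```python
import pandas as pd

delay = pd.Series([10, 61, 75, 60])
del_more = (delay > 60).astype(int)  # strict threshold, matching more()
```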
# convert day_dept, day_arr, distances_scaled, and duration_scaled to categorical (string) columns
df_train_wo['day_dept'] = df_train_wo['day_dept'].astype(str)
df_train_wo['distances_scaled'] = df_train_wo['distances_scaled'].astype(str)
df_train_wo['duration_scaled'] = df_train_wo['duration_scaled'].astype(str)
df_train_wo['day_arr'] = df_train_wo['day_arr'].astype(str)
df_test['day_dept'] = df_test['day_dept'].astype(str)
df_test['distances_scaled'] = df_test['distances_scaled'].astype(str)
df_test['duration_scaled'] = df_test['duration_scaled'].astype(str)
df_test['day_arr'] = df_test['day_arr'].astype(str)
# scale the delay variable to [0, 1]
max_delay = max(df_train_wo['delay'])
min_delay = min(df_train_wo['delay'])
df_train_wo['delay'] = (df_train_wo['delay'] - min_delay) / (max_delay - min_delay)
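# The min-max scaling used here has to be inverted later to read predictions in minutes again; a sketch of the transform pair:

```python
import numpy as np

def minmax_scale(x, lo, hi):
    return (np.asarray(x, float) - lo) / (hi - lo)

def minmax_inverse(x_scaled, lo, hi):
    return np.asarray(x_scaled, float) * (hi - lo) + lo

delay = np.array([0.0, 30.0, 120.0])
scaled = minmax_scale(delay, delay.min(), delay.max())
restored = minmax_inverse(scaled, delay.min(), delay.max())
```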
# +
# impute the speed values that are still NaN
df_train_wo.loc[df_train_wo['speed'].isnull(), 'speed'] = df_train_wo[(df_train_wo['airline_name'] == 'Malaysia Airlines') & (df_train_wo['type'] == 'regular')]['speed'].mean()
df_test.loc[df_test['speed'].isnull(), 'speed'] = df_test[(df_test['airline_name'] == 'Malaysia Airlines') & (df_test['type'] == 'regular')]['speed'].mean()
# +
# rows where hour_arr is still NaN will not be imputed; drop them instead
df_train_wo = df_train_wo.dropna(subset = ['hour_arr'])
df_test = df_test.dropna(subset = ['hour_arr'])
# +
# create dummy variables for object-typed columns
df_train_wo = pd.get_dummies(df_train_wo, drop_first=True)
df_test = pd.get_dummies(df_test, drop_first = True)
# +
# drop columns that are not present in both train and test, since the model cannot use them
missing = [x for x in df_train_wo.columns.tolist() if x not in df_test.columns.tolist()]
col = [x for x in df_train_wo.columns if x not in missing]
df_train_wo = df_train_wo[col]
missing2 = [x for x in df_test.columns.tolist() if x not in df_train_wo.columns.tolist()]
col2 = [x for x in df_test.columns if x not in missing2]
df_test = df_test[col2]
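# The two column-filtering passes above can also be done in one call with `DataFrame.align`; a sketch on toy frames:

```python
import pandas as pd

train = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'only_train': [5, 6]})
test = pd.DataFrame({'a': [7], 'b': [8], 'only_test': [9]})

# keep only the intersection of columns, in matching order
train_aligned, test_aligned = train.align(test, join='inner', axis=1)
```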
# +
# split train and test into predictors and responses
y_train_delay = df_train_wo['delay']
y_train_more = df_train_wo['del_more']
y_test_delay = df_test['delay']
y_test_more = df_test['del_more']
X_train = df_train_wo.drop(['delay','del_more'], axis = 1)
X_test = df_test.drop(['delay','del_more'], axis = 1)
# -
# ### Building the Models
# +
# regression model for the delay variable
lr = LinearRegression()
model_delay = lr.fit(X_train, y_train_delay)
y_pred_delay = model_delay.predict(X_test)
y_pred_delay = y_pred_delay * (max_delay - min_delay) + min_delay
np.sqrt(metrics.mean_squared_error(y_pred_delay,y_test_delay))
# -
# The baseline linear regression model for delay yields an RMSE of 28.79 minutes.
base_train = pd.concat([X_train, y_train_delay], axis = 1)
base_train.columns = base_train.columns.str.strip()
base_train.columns = base_train.columns.str.replace(' ', '_')
base_train.columns = base_train.columns.str.replace('.0', '', regex=False)
base_train.columns = base_train.columns.str.replace('-', '_', regex=False)
base_train.columns = base_train.columns.str.replace('(', '', regex=False)
base_train.columns = base_train.columns.str.replace(')', '', regex=False)
features = " + ".join(base_train.drop("delay", axis=1).columns)
res = smf.ols(formula = f"delay ~ {features}", data = base_train).fit()
res.summary()
# The adjusted R-squared of the linear regression on the original data is 0.218. Several features are not significant by p-value (above 0.05). This is likely an effect of multicollinearity, so we will try removing highly correlated features.
# +
# remove highly correlated predictors
def correlation(dataset, threshold):
    dataset_no_high_corr = dataset.copy()
    col_corr = set()  # set of the names of deleted columns
    corr_matrix = dataset_no_high_corr.corr()
    for i in range(len(corr_matrix.columns)):
        for j in range(i):
            if (corr_matrix.iloc[i, j] >= threshold) and (corr_matrix.columns[j] not in col_corr):
                colname = corr_matrix.columns[i]  # name of the column to drop
                col_corr.add(colname)
                if colname in dataset_no_high_corr.columns:
                    del dataset_no_high_corr[colname]  # delete the column from the dataset
    return dataset_no_high_corr
# -
X_train = correlation(X_train, 0.4)
# +
# align X_test with the columns kept after the correlation filtering
dropped = [x for x in X_test.columns.tolist() if x not in X_train.columns.tolist()]
col = [x for x in X_test.columns if x not in dropped]
X_test = X_test[col]
df_dum1 = pd.concat([X_train,y_train_delay], axis = 1)
# +
# check the strongest correlations with the delay variable
# plot the standardized strongest positive and negative correlations against delay
corr1 = df_dum1[df_dum1.columns[1:]].corr()['delay'][:].reset_index()
corr1 = corr1.sort_values(by = ['delay'], ascending = False).reset_index(drop = True)[1:].head(5)
corr2 = df_dum1[df_dum1.columns[1:]].corr()['delay'][:].reset_index()
corr2 = corr2.sort_values(by = ['delay'], ascending = True).reset_index(drop = True)[1:].head(5)
corr = pd.concat([corr1,corr2], axis = 0).reset_index(drop = True)
corr = corr.rename(columns = {'index':'variable'})
x = corr.loc[:, ['delay']]
corr['delay_z'] = (x - x.mean())/x.std()
corr['colors'] = ['red' if x < 0 else 'darkgreen' for x in corr['delay_z']]
corr.sort_values('delay_z', inplace=True)
corr.reset_index(inplace=True)
plt.figure(figsize=(14,16), dpi= 80)
plt.scatter(corr.delay_z, corr.index, s=450, alpha=.6, color=corr.colors)
for x, y, tex in zip(corr.delay_z, corr.index, corr.delay_z):
    t = plt.text(x, y, round(tex, 1), horizontalalignment='center',
                 verticalalignment='center', fontdict={'color':'white'})
plt.gca().spines["top"].set_alpha(.3)
plt.gca().spines["bottom"].set_alpha(.3)
plt.gca().spines["right"].set_alpha(.3)
plt.gca().spines["left"].set_alpha(.3)
plt.yticks(corr.index, corr.variable)
plt.title('Strongest Correlations with Delay', fontdict={'size':20})
plt.xlabel('$Correlation$')
plt.grid(linestyle='--', alpha=0.5)
plt.xlim(-1, 1)
plt.show()
# +
# regression model for delay with the highly correlated columns dropped
lr = LinearRegression()
model_delay = lr.fit(X_train, y_train_delay)
y_pred_delay_reg = model_delay.predict(X_test)
y_pred_delay_reg = y_pred_delay_reg * (max_delay - min_delay) + min_delay
np.sqrt(metrics.mean_squared_error(y_pred_delay_reg,y_test_delay))
# -
# When the columns with correlation above 0.4 are dropped, the RMSE actually increases, so the regression model on the original data remains the best.
# +
# build a decision tree regression model
decision_tree = DecisionTreeRegressor()
decision_tree.fit(X_train, y_train_delay)
y_pred = decision_tree.predict(X_test)
y_pred_tree = y_pred * (max_delay - min_delay) + min_delay
np.sqrt(metrics.mean_squared_error(y_pred_tree,y_test_delay))
# -
# Plain linear regression on the original data turns out to beat the decision tree regression, so the recommendation algorithm will use the predictions from the linear regression on the original data.
# +
# logistic regression model to classify whether delay > 60 minutes
logreg = LogisticRegression()
logreg.fit(X_train, y_train_more)
y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, y_train_more) * 100, 2)
display(acc_log)
display(roc_auc_score(y_test_more, y_pred))
# -
# The accuracy of the logistic model on the data with highly correlated variables removed is 98.5%, but its AUC is still poor at 0.5.
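# High accuracy together with an AUC near 0.5 is the signature of class imbalance: always predicting the majority class already scores the majority rate. A sketch with made-up labels:

```python
import numpy as np

y_true = np.array([0] * 97 + [1] * 3)   # made-up labels: ~3% long delays
y_pred = np.zeros_like(y_true)          # classifier that always predicts "no long delay"

accuracy = (y_true == y_pred).mean()    # high, despite the model learning nothing
# the ROC AUC of such a constant predictor is 0.5: it cannot rank positives above negatives
```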
coeff_df = pd.DataFrame(X_train.columns)
coeff_df.columns = ['Prediktor']
coeff_df["Koefisien"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Koefisien', ascending=False)
# +
# decision tree model to classify whether delay > 60 minutes
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, y_train_more)
y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, y_train_more) * 100, 2)
acc_decision_tree
# -
roc_auc_score(y_test_more, y_pred)
# Training accuracy reaches 100% with the decision tree, and the AUC improves slightly. The model clearly overfits the training data, likely because the training distribution differs from the testing distribution.
# ## Delay Recommendations Based on the Predictions for December
df_test2 = df_comb[df_comb['status'] == 'test'].reset_index(drop = True)
df_test2 = pd.concat([df_test2,pd.DataFrame(y_pred_delay)], axis = 1).rename(columns = {0 : 'delay'})
# +
# input
berangkat = '2018-12-02' # departure date in year-month-day format
jam = 6 # departure hour
kota_tujuan = 'kota bharu' # destination city
jenis = 'regular' # flight type ('regular' or 'premium')
# +
# recommend airlines for the input above:
# airlines with the smallest mean predicted delay, grouped by the time category matching the input
df_dum = df_test2[(df_test2['date_dept'] == berangkat) & (df_test2['hour_dept'] == jam) & (df_test2['arrival_airport_city'] == kota_tujuan) & (df_test2['type'] == jenis)]
cat_jam = df_dum['cat_time_dept'].values[0]
reg_spes = df_dum['arrival_specific_region'].values[0]
df_dum = df_test2[(df_test2['date_dept'] == berangkat) & (df_test2['cat_time_dept'] == cat_jam) & \
(df_test2['type'] == jenis)].groupby('airline_name')['delay'].agg('mean').reset_index().sort_values(by = ['delay'], ascending = True).reset_index(drop = True)
# keep only airlines from the shortest-delay list that actually have a flight matching the input
list_air = df_dum['airline_name'].tolist()
list_air_inp = []
for i in range(len(list_air)):
    name = list_air[i]
    list_dum_dest = df_test2[(df_test2['airline_name'] == name) & (df_test2['date_dept'] == berangkat) & (df_test2['type'] == jenis) & (df_test2['hour_dept'] == jam)]['arrival_airport_city'].tolist()
    if kota_tujuan in list_dum_dest:
        list_air_inp.append(name)
df_rec = df_dum[df_dum.airline_name.isin(list_air_inp)].reset_index(drop = True).groupby('airline_name')['delay'].agg('mean').reset_index().sort_values(by = 'delay', ascending = True).reset_index(drop = True)
# -
df_rec
# So, for a flight from Kuala Lumpur to Kota Bharu on 2 December at 6 a.m., the recommendation is Royal Brunei Airlines, then Etihad Airways, then Malaysia Airlines.
| Exercise November, 4th 2019.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Activity 02: Indexing, Slicing, and Iterating
# Our client wants to prove that our dataset is nicely distributed around the mean value of 100.
# They asked us to run some tests on several subsections of it to make sure they won't get a non-descriptive section of our data.
#
# Look at the mean value of each subtask.
# #### Loading the dataset
# importing the necessary dependencies
import numpy as np
# loading the Dataset
dataset = np.genfromtxt('./data/normal_distribution.csv', delimiter=',')
# ---
# #### Indexing
# Since we need several rows of our dataset to complete the given task, we have to use indexing to get the right rows.
# To recap, we need:
# - the second row
# - the last row
# - the first value of the first row
# - the last value of the second to the last row
# +
# indexing the second row of the dataset (2nd row)
second_row = dataset[1]
np.mean(second_row)
# +
# indexing the last row of the dataset
last_row = dataset[-1]
np.mean(last_row)
# +
# indexing the first value of the first row (1st row, 1st value)
first_val_first_row = dataset[0][0]
np.mean(first_val_first_row)
# +
# indexing the last value of the second to last row (we want to use the combined access syntax here)
last_val_second_last_row = dataset[-2, -1]
np.mean(last_val_second_last_row)
# -
# ---
# #### Slicing
# Other than the single rows and values we also need to get some subsets of the dataset.
# Here we want slices:
# - a 2x2 slice starting from the second row and second element to the 4th element in the 4th row
# - every other element of the 5th row
# - the content of the last row in reversed order
# +
# slicing a 2x2 subsection starting at the second row and second column
subsection_2x2 = dataset[1:3, 1:3]
np.mean(subsection_2x2)
# -
# ##### Why is it not a problem if such a small subsection deviates more from the mean of 100?
# Several smaller (or larger) values can cluster in a small subsection, pulling its mean far from 100.
# If we make our subsection larger, we have a better chance of getting a representative view of our data.
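# This can be checked directly: means of small random subsets scatter far more than means of large ones. A sketch with a seeded generator (the data here are simulated, not the CSV):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(100, 5, size=(24, 8))  # simulated stand-in for the dataset

# means of many random 4-element vs 64-element subsets
small = [rng.choice(data.ravel(), 4).mean() for _ in range(1000)]
large = [rng.choice(data.ravel(), 64).mean() for _ in range(1000)]
# np.std(small) is clearly larger than np.std(large): small subsets scatter more
```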
# +
# selecting every second element of the fifth row
every_other_elem = dataset[4, ::2]
np.mean(every_other_elem)
# +
# reversing the element order of the last row
reversed_last_row = dataset[-1, ::-1]
np.mean(reversed_last_row)
# -
# ---
# #### Splitting
# Our client's team only wants to use a small subset of the given dataset.
# Therefore we need to first split it into 3 equal pieces and then give them the first half of the first split.
# They sent us this drawing to show us what they need:
# ```
# 1, 2, 3, 4, 5, 6 1, 2 3, 4 5, 6 1, 2
# 3, 2, 1, 5, 4, 6 => 3, 2 1, 5 4, 6 => 3, 2 => 1, 2
# 5, 3, 1, 2, 4, 3 5, 3 1, 2 4, 3 3, 2
# 1, 2, 2, 4, 1, 5 1, 2 2, 4 1, 5 5, 3
# 1, 2
# ```
#
# > **Note:**
# We are using a very small dataset here but imagine you have a huge amount of data and only want to look at a small subset of it to tweak your visualizations
# splitting up our dataset horizontally on indices one third and two thirds
hor_splits = np.hsplit(dataset,(3))
# splitting up our dataset vertically on index 2
ver_splits = np.vsplit(hor_splits[0],(2))
# requested subsection of our dataset which has only half the amount of rows and only a third of the columns
print("Dataset", dataset.shape)
print("Subset", ver_splits[0].shape)
# ---
# #### Iterating
# Once you send over the dataset, they tell you that they also need a way to iterate over the whole dataset element by element, as if it were a one-dimensional list.
# However, they also want to know the position in the dataset itself.
#
# They send you this piece of code and tell you that it's not working as mentioned.
# Come up with the right solution for their needs.
# iterating over whole dataset (each value in each row)
curr_index = 0
for x in np.nditer(dataset):
print(x, curr_index)
curr_index += 1
# iterating over whole dataset with indices matching the position in the dataset
for index, value in np.ndenumerate(dataset):
print(index, value)
| Lesson01/Activity02/activity02-solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R [conda env:py36]
# language: R
# name: conda-env-py36-r
# ---
#Read Data
dataset=read.csv('cow.csv')
# inspect the dataset's column names
names(dataset)
# fit a linear regression (the response comes first, then the predictor)
regressor=lm(dataset$weight~dataset$time)
# summary of the fitted linear regression model
summary(regressor)
#finding intercept and slope
coef(regressor)
# scatter plot of weight against time
plot(dataset$time, dataset$weight, main = 'Scatter Plot', xlab = 'Time', ylab = 'Weight' )
#plotting regression line
plot(dataset$time, dataset$weight, main = 'Scatter Plot', xlab = 'Time', ylab = 'Weight' )
abline(regressor, col='blue', lwd=2)
| Statistical Modelling/3. Regression/R Codes/SM Cow Dataset Linear Regression R.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object Detection using Tensorflow
#
# ### Authors : <NAME>, <NAME>
# ## Objective
# Our objective is to identify and classify objects spotted in images and in real-time video using TensorFlow, and to determine the accuracy of each identification. We consider three models, namely SSD with MobileNet, SSD Inception V2, and Faster RCNN Inception, and compare their accuracy and size.
# The principal difference between the models is that Faster RCNN Inception V2 is optimized for accuracy, while the MobileNets are optimized to be small and efficient, at the cost of some accuracy. The SSD with MobileNets detects objects in a single shot with just two components in its architecture, namely a Feature Extractor and a Detection Generator, while Faster R-CNN consists of three components: Feature Extraction, Proposal Generation, and a Box Classifier.
#
# In the Faster R-CNN Inception model, a region proposal network is used to generate regions of interest and then either fully-connected layers or position-sensitive convolutional layers to classify those regions. SSD does the two in a “single shot,” simultaneously predicting the bounding box and the class as it processes the image.
#
# ## Understanding the terms
# Let us first understand few terms before we jump to the process of object detection and comparing the models.
# ### Convolutional Neural Networks (CNN):
#
# It is a class of deep, feed-forward artificial neural networks that has been applied to analyzing visual imagery.
# These are a special class of Multilayer perceptron which are well suited for pattern classification.
# It is specifically designed to recognize 2D shapes with a high level of invariance, skewing and scaling.
# They are made up of neurons that have learnable weights and biases. Each neuron receives some input, performs a dot product and optionally follows it with a non-linearity.
# The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. A simple ConvNet is a sequence of layers, and every layer of a ConvNet transforms one volume of activations to another through a differentiable function.
# There are three main types of layers to build ConvNet architectures: Convolutional Layer, Pooling Layer, and Fully-Connected Layer
# 
# ### SSD with MobileNet:
# Out of the many detection models, we chose to work with the combination of Single Shot Detectors(SSDs) and MobileNets architecture as they are fast, efficient and do not require huge computational capability to fulfill the Object Detection task. The SSD approach is based on a feed-forward convolutional neural network which produces a fixed-size collection of bounding boxes and scores for the object class instances present in those boxes.
#
# The main difference between a “traditional” CNN’s and the MobileNet architecture is instead of a single 3x3 convolution layer followed by batch norm and ReLU, MobileNets split the convolution into a 3x3 depthwise conv and a 1x1 pointwise conv.
#
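# The saving from replacing a standard 3x3 convolution with a depthwise 3x3 plus pointwise 1x1 pair is easy to quantify; a sketch of the parameter counts (biases ignored):

```python
def standard_conv_params(k, c_in, c_out):
    # a k x k filter over c_in channels, for each of c_out output channels
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in          # one k x k filter per input channel
    pointwise = 1 * 1 * c_in * c_out  # 1 x 1 conv mixing the channels
    return depthwise + pointwise
```

# For a typical MobileNet-sized layer (3x3 kernel, 32 input and 64 output channels) this gives 18432 vs 2336 parameters, roughly an 8x saving.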
# ### SSD Inception V2 Model:
#
# Given an input image and a set of ground truth labels, SSD does the following:
#
# • It passes the image through a series of convolutional layers, providing several sets of feature maps at different scales.
# • For each location in each of these feature maps, a 3x3 convolutional filter is used to evaluate a small set of default bounding boxes.
# • For each box, it simultaneously predicts the bounding box offset and the probabilities of each class.
# • During training, it matches the ground truth box with these predicted boxes based on IoU(Intersection over Union). The best predicted box is labeled a “positive” along with all the other boxes having an IoU with the truth greater than 0.5.
#
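# The IoU used in the matching step above can be sketched as a small function over `[x1, y1, x2, y2]` boxes:

```python
def iou(box_a, box_b):
    # Intersection over Union of two [x1, y1, x2, y2] boxes
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```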
# ### Faster RCNN Inception Model:
#
# The main insight of Faster R-CNN was to replace the slow selective search algorithm that was used in the R-CNN (Region-based Convolutional Neural Network), with a fast neural net.
# Faster R-CNN is similar to the original R-CNN but is improved on its detection speed through two augmentations:
#
# • It performs feature extraction over the image even before proposing regions, thus running only one CNN over the entire image instead of running 2000 CNN’s across 2000 overlapping regions
# • It replaces the SVM with a softmax layer, thus extending the network for predictions instead of creating a new model
#
# ### TensorFlow:
#
# TensorFlow is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices
#
# ### OpenCV:
#
# OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. The C++ API provides a class ‘videocapture’ for capturing video from cameras or for reading video files and image sequences. It is basically used to access the Webcam of our computer to capture real-time videos
#
# ### Pre-Trained dataset:
# COCO Dataset is a large-scale object detection, segmentation and captioning dataset. It is downloaded from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md and it has 200K labelled images categorized into 90 classes.
#
# Choose any dataset you want from the following list
# 
#
# ### COCO mAP:
# The higher the mAP (mean average precision), the better the model. Based on our observations, SSD with MobileNets gave much better results in terms of speed, while the Faster RCNN Inception model gave higher accuracy with some compromise on speed.
#
# ## Code With Documentation:
# ### Installation
# ### Install tensorflow in anaconda using the below command :
# pip install tensorflow
#
# ### Install OpenCV using following command:
# conda install -c https://conda.binstar.org/menpo opencv
#
# ### Imports:
#
# +
# importing the necessary libraries
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from utils import label_map_util
from utils import visualization_utils as vis_util
# in order to display the images in line
# %matplotlib inline
# -
# ### Model URL
# The model will be downloaded from the URL below
#
# download_url = 'http://download.tensorflow.org/models/object_detection/'
# ### To change the model name according of your choice, change the following code
# "model = 'ssd_mobilenet_v1_coco_11_06_2017'" to
# "model = 'faster_rcnn_inception_v2_coco_2018_01_28"
# 
# 
#
# ### Protobuf file:
# Protocol Buffers is a method of serializing structured data. The label map is a text-format protobuf (.pbtxt) listing all the classes that are considered for detection.
# You will find a protobuf file in the <folder> in the given github link
# <githublink>
# A snapshot of the protobuf file (mscoco_label_map.pbtxt)
# 
#
# ### Opening the tar and downloading the model you have chosen
# opens the tar file and downloads the model to our system
# (these variables tie together the choices described above; adjust `model` as needed)
model = 'ssd_mobilenet_v1_coco_11_06_2017'
model_tar = model + '.tar.gz'
download_url = 'http://download.tensorflow.org/models/object_detection/'
path = model + '/frozen_inference_graph.pb'  # frozen graph inside the extracted folder
label_path = os.path.join('data', 'mscoco_label_map.pbtxt')  # protobuf label map
classes_num = 90  # number of classes in the COCO label map
opener = urllib.request.URLopener()
opener.retrieve(download_url + model_tar, model_tar)
file = tarfile.open(model_tar)
print(file)
for each_file in file.getmembers():
    each_file_name = os.path.basename(each_file.name)
    if 'frozen_inference_graph.pb' in each_file_name:
        file.extract(each_file, os.getcwd())
# ### Loading frozen TF model in the memory by creating a graph
# loading a frozen tensorflow model into memory
graph_detection = tf.Graph()
with graph_detection.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, 'rb') as fid:
        graph_serialized = fid.read()
        graph_def.ParseFromString(graph_serialized)
        tf.import_graph_def(graph_def, name='')
# ### Loading the labels from the frozen model
# loading labels and their mappings
label_map = label_map_util.load_labelmap(label_path)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=classes_num, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
# ### Dimensions of an image:
# We are loading the dimensions of each image into a numpy array. The dimensions of the image are the height, the width and the RGB intensity at various points in the image.
#
#
# load images into a numpy array which consists of the dimensions of each image
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)
# ### Giving path for images to detect objects
# +
# path where the test images are stored
test_images_dir = 'test_images'
test_image_path = [ os.path.join(test_images_dir, 'ILSVRC2017_test_00000013.jpeg')]
# Size of the output images in inches
image_size = (8, 5)
# -
# ### Detecting objects in an image or set of images
# The below code will be used for detecting objects in an image or a set of images
#
with graph_detection.as_default():
    with tf.Session(graph=graph_detection) as sess:
        for image_path in test_image_path:
            # opening images from the path
            image = Image.open(image_path)
            # array representation of the image, used later to prepare the result image with bounding boxes and labels on it
            image_np = load_image_into_numpy_array(image)
            # expanding the dimensions of the image as the model expects images to have the shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = graph_detection.get_tensor_by_name('image_tensor:0')
            # each box represents a part of the image where a particular object was detected
            boxes = graph_detection.get_tensor_by_name('detection_boxes:0')
            # each score represents the level of confidence for each of the objects;
            # this score is shown on the result image along with the class label
            scores = graph_detection.get_tensor_by_name('detection_scores:0')
            classes = graph_detection.get_tensor_by_name('detection_classes:0')
            num_detections = graph_detection.get_tensor_by_name('num_detections:0')
            # Actual detection
            (boxes, scores, classes, num_detections) = sess.run([boxes, scores, classes, num_detections],
                                                                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results for an identified object
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)
            plt.figure(figsize=image_size)
            plt.imshow(image_np)
# ### Detecting objects through webcam
# The below code will be used to detect objects through the webcam.
# To capture video, we need to create a VideoCapture object. Its argument can be either the device index or the name of a video file. The device index is just a number that specifies which camera to use. Normally only one camera is connected (as in our case), so we simply pass 0 (or -1). You can select a second camera by passing 1, and so on. After that, you can capture frame-by-frame.
import cv2
cap=cv2.VideoCapture(0)
ret = True
with graph_detection.as_default():
with tf.Session(graph=graph_detection) as sess:
while(ret):
ret,image_np=cap.read()
image_np_expanded = np.expand_dims(image_np, axis=0)
image_tensor = graph_detection.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
boxes = graph_detection.get_tensor_by_name('detection_boxes:0')
            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
scores = graph_detection.get_tensor_by_name('detection_scores:0')
classes = graph_detection.get_tensor_by_name('detection_classes:0')
num_detections = graph_detection.get_tensor_by_name('num_detections:0')
# Actual detection.
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
cv2.imshow('image',cv2.resize(image_np,(1280,960)))
if cv2.waitKey(25) & 0xFF==ord('q'):
break
cv2.destroyAllWindows()
cap.release()
# ## Results:
# ### Below are the results that we got.
#
# For **SSD with MobileNet**, the accuracy of object detection in the image is 83% for the person and 81% for the laptop. This model ran fastest but had the lowest accuracy.
#
# 
#
# For **SSD Inception V2 model**, the accuracy of the objects detected in the images is 90% for the person and 95% for the laptop.
#
# 
#
# For **Faster RCNN Inception Model**, the accuracy of object detection in the image is 99% for the person and 99% for the laptop.
#
# 
#
# In addition, we were also successful in accessing the webcam of our system using OpenCV to detect real-time objects. **The model used here is SSD with MobileNets as it produces much faster results as compared to the other two models.**
# 
# ## Conclusion:
# **SSD with MobileNet model accuracy:**
# Person 83% and laptop 81%
#
# **SSD Inception V2 model accuracy:**
# Person 90% and laptop 95%
#
# **Faster RCNN Inception Model accuracy:**
# Person 99% and laptop 99%
#
# As we can see above, the Faster RCNN Inception Model gives the highest accuracy and SSD with MobileNet the lowest.
# MobileNets are optimized to be small and efficient, at the cost of some accuracy.
#
# ### References:
#
# https://github.com/tensorflow/models/tree/master/research/object_detection
# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
| NEU_ADS_Student_Project_Portfolio_Examples/Object Detection using TensorFlow/Project/Portfolio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Text Preprocessing
# :label:`sec_text_preprocessing`
#
# For sequence data problems, we evaluated in :numref:`sec_sequence`
# the statistical tools needed and the challenges faced when making predictions.
# Such data can take many forms; text is one of the most common examples.
# For instance, an article can be viewed simply as a sequence of words, or even a sequence of characters.
# In this section, we walk through common preprocessing steps for text.
# These steps usually include:
#
# 1. Loading text as strings into memory.
# 1. Splitting strings into tokens (e.g., words or characters).
# 1. Building a vocabulary to map the split tokens to numerical indices.
# 1. Converting text into sequences of numerical indices that models can easily manipulate.
#
# + origin_pos=2 tab=["pytorch"]
import collections
import re
from d2l import torch as d2l
# + [markdown] origin_pos=4
# ## Reading the Dataset
#
# First, we load text from H. G. Wells' [The Time Machine](https://www.gutenberg.org/ebooks/35).
# This is a fairly small corpus of just over 30,000 words, enough for a first try,
# while real-world document collections may contain billions of words.
# The following function (**reads the dataset into a list of text lines**), where each line is a string.
# For simplicity, we ignore punctuation and capitalization here.
#
# + origin_pos=5 tab=["pytorch"]
#@save
d2l.DATA_HUB['time_machine'] = (d2l.DATA_URL + 'timemachine.txt',
'090b5e7e70c295757f55df93cb0a180b9691891a')
def read_time_machine(): #@save
    """Load the time machine dataset into a list of text lines"""
with open(d2l.download('time_machine'), 'r') as f:
lines = f.readlines()
return [re.sub('[^A-Za-z]+', ' ', line).strip().lower() for line in lines]
lines = read_time_machine()
print(f'# number of text lines: {len(lines)}')
print(lines[0])
print(lines[10])
# + [markdown] origin_pos=6
# ## Tokenization
#
# The following `tokenize` function takes a list of text lines (`lines`) as input,
# where each element is a text sequence (e.g., a text line).
# [**Each text sequence is split into a list of tokens**]; a *token* is the basic unit of text.
# Finally, it returns a list of token lists, where each token is a string.
#
# + origin_pos=7 tab=["pytorch"]
def tokenize(lines, token='word'): #@save
    """Split text lines into word or character tokens"""
if token == 'word':
return [line.split() for line in lines]
elif token == 'char':
return [list(line) for line in lines]
else:
        print('ERROR: unknown token type: ' + token)
tokens = tokenize(lines)
for i in range(11):
print(tokens[i])
# + [markdown] origin_pos=8
# ## Vocabulary
#
# Tokens are strings, while models expect numerical inputs, so string tokens are inconvenient for models to use.
# Now let us [**build a dictionary, often called a *vocabulary*,
# that maps string tokens to numerical indices starting from $0$**].
# We first merge all documents in the training set and count their unique tokens;
# the resulting statistic is called the *corpus*.
# Each unique token is then assigned a numerical index according to its frequency.
# Rarely appearing tokens are often removed to reduce complexity.
# Any token that does not exist in the corpus, or has been removed, is mapped to a special unknown token "<unk>".
# We can optionally add a list of reserved tokens, such as
# the padding token ("<pad>"),
# the beginning-of-sequence token ("<bos>"),
# and the end-of-sequence token ("<eos>").
#
# + origin_pos=9 tab=["pytorch"]
class Vocab: #@save
    """Vocabulary for text"""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
        # Sort tokens by frequency
counter = count_corpus(tokens)
self._token_freqs = sorted(counter.items(), key=lambda x: x[1],
reverse=True)
        # The index of the unknown token is 0
self.idx_to_token = ['<unk>'] + reserved_tokens
self.token_to_idx = {token: idx
for idx, token in enumerate(self.idx_to_token)}
for token, freq in self._token_freqs:
if freq < min_freq:
break
if token not in self.token_to_idx:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
@property
    def unk(self):  # The index of the unknown token is 0
return 0
@property
def token_freqs(self):
return self._token_freqs
def count_corpus(tokens): #@save
    """Count token frequencies"""
    # Here tokens is a 1D or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
        # Flatten the nested token lists into a single list
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
# + [markdown] origin_pos=10
# We first use the time machine dataset as the corpus to [**build the vocabulary**], then print the first few high-frequency tokens along with their indices.
#
# + origin_pos=11 tab=["pytorch"]
vocab = Vocab(tokens)
print(list(vocab.token_to_idx.items())[:10])
# + [markdown] origin_pos=12
# Now we can (**convert each text line into a list of numerical indices**).
#
# + origin_pos=13 tab=["pytorch"]
for i in [0, 10]:
    print('words:', tokens[i])
    print('indices:', vocab[tokens[i]])
# + [markdown] origin_pos=14
# ## Putting It All Together
#
# Using the functions above, we [**package everything into the `load_corpus_time_machine` function**],
# which returns `corpus` (a list of token indices) and `vocab` (the vocabulary of the time machine corpus).
# The changes we make here are:
#
# 1. To simplify training in later chapters, we tokenize text into characters rather than words;
# 1. Each text line in the time machine dataset is not necessarily a sentence or a paragraph, so the returned `corpus` is a single flat list rather than a list of token lists.
#
# + origin_pos=15 tab=["pytorch"]
def load_corpus_time_machine(max_tokens=-1): #@save
    """Return the token index list and the vocabulary of the time machine dataset"""
lines = read_time_machine()
tokens = tokenize(lines, 'char')
vocab = Vocab(tokens)
    # Since each text line in the time machine dataset is not necessarily
    # a sentence or a paragraph, flatten all text lines into a single list
corpus = [vocab[token] for line in tokens for token in line]
if max_tokens > 0:
corpus = corpus[:max_tokens]
return corpus, vocab
corpus, vocab = load_corpus_time_machine()
len(corpus), len(vocab)
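# The token-to-index round trip performed by `Vocab` can be illustrated with a minimal standalone sketch (toy corpus; this sketch does not use the class above):

```python
import collections

tokens = ['the', 'time', 'machine', 'the', 'time']
counter = collections.Counter(tokens)

# Index 0 is reserved for the unknown token '<unk>';
# the rest are assigned by descending frequency.
idx_to_token = ['<unk>'] + [t for t, _ in counter.most_common()]
token_to_idx = {t: i for i, t in enumerate(idx_to_token)}

indices = [token_to_idx.get(t, 0) for t in tokens]
restored = [idx_to_token[i] for i in indices]
print(indices)   # [1, 2, 3, 1, 2]
```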
# + [markdown] origin_pos=16
# ## Summary
#
# * Text is one of the most common forms of sequence data.
# * To preprocess text, we usually split it into tokens, build a vocabulary that maps token strings to numerical indices, and convert text data into token indices for models to manipulate.
#
# ## Exercises
#
# 1. Tokenization is a key preprocessing step that varies across languages. Try to find three other commonly used methods of tokenizing text.
# 1. In this section's experiment, tokenize the text into words and vary the `min_freq` argument of the `Vocab` instance. How does this affect the vocabulary size?
#
# + [markdown] origin_pos=18 tab=["pytorch"]
# [Discussions](https://discuss.d2l.ai/t/2094)
#
| pytorch/chapter_recurrent-neural-networks/text-preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Lesson 1: NumPy Part 1
#
# This notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:
#
# > NumPy is the fundamental package for scientific computing with Python. It contains among other things:
# > - a powerful N-dimensional array object
# > - sophisticated (broadcasting) functions
# > - tools for integrating C/C++ and Fortran code
# > - useful linear algebra, Fourier transform, and random number capabilities
# >
# > Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.
#
# ## Instructions
# This tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L01-Numpy_Part1-Practice.ipynb](./L01-Numpy_Part1-Practice.ipynb).
#
# Throughout this tutorial sections labeled as "Tasks" are interspersed and indicated with the icon: . You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook.
# ---
# ## 1. Getting Started
# First, we must import the NumPy library. All packages are imported at the top of the notebook. Execute the code in the following cell to get started with this notebook (type Ctrl+Enter in the cell below)
# Import numpy
import numpy as np
# The code above imports numpy as a variable named `np`. We can use this variable to access the functionality of NumPy. The above is what we will use for the rest of this class.
#
# You may be wondering why we didn't import numpy like this:
# ```python
# import numpy
# ```
# We could, but the first form is far more commonly seen, and allows us to use the `np` variable to access the functions and variables of the NumPy package. This makes the code more readable because it is not a mystery where the functions we are using come from.
# ### Task 1a: Setup
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, import the following packages:
# + `numpy` as `np`
# ## 2. The NumPy Array
#
# What is an array? An array is a data structure that stores one or more objects of the same type (e.g. integers, strings, etc.) and can be multi-dimensional (e.g. 2D matrices). In Python, the list data type provides this kind of functionality; however, it lacks important operations that make it useful for scientific computing. Therefore, NumPy is a Python package that defines N-dimensional arrays and provides support for linear algebra and other functions useful to scientific computing.
#
# From the Numpy QuickStart Tutorial:
# > NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called axes.
#
# _Note: a "tuple" is an immutable sequence of values. For example, the pair of numbers surrounded by parentheses, (2,4), is a tuple containing two numbers._
#
# NumPy arrays can be visualized in the following way:
#
# <img src="http://community.datacamp.com.s3.amazonaws.com/community/production/ckeditor_assets/pictures/332/content_arrays-axes.png">
#
# (image source: https://www.datacamp.com/community/tutorials/python-numpy-tutorial)
# Using built-in Python lists, arrays are created in the following way:
#
# ```python
# # A 1-dimensional list of numbers.
# my_array = [1,2,3]
#
# # A 2-dimensional list of numbers.
# my_2d_array = [[1,2,3],[4,5,6]]
#
# # A 3-dimensional list of numbers.
# my_3d_array = [[[1,2,3], [4,5,6]], [[7,8,9], [10,11,12]]]
#
# # Two lists of boolean values
# a = [True, True, False, False]
# b = [False, False, True, True]
#
# ```
# Using NumPy, arrays are created using the `np.array()` function. For example, arrays with the same contents as above are created in the following way:
#
# ```python
# # A 1-dimensional list of numbers.
# my_array = np.array([1,2,3,4])
#
# # A 2-dimensional list of numbers.
# my_2d_array = np.array([[1,2,3,4], [5,6,7,8]])
#
# # A 3-dimensional list of numbers.
# my_3d_array = np.array([[[1,2,3,4], [5,6,7,8]], [[1,2,3,4], [9,10,11,12]]])
#
# # Two lists of boolean values
# a = np.array([True,True,False,False])
# b = np.array([False,False,True,True])
# ```
#
# In NumPy, these arrays are an object of type `ndarray`. You can learn more about the `ndarray` class on the [NumPy ndarray introduction page](https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html). However, this tutorial will walk you through some of the most important attributes, functions and uses of NumPy.
# ### Task 2a: Creating Arrays
#
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, perform the following.
# - Create a 1-dimensional numpy array and print it.
# - Create a 2-dimensional numpy array and print it.
# - Create a 3-dimensional numpy array and print it.
# ## 3. Accessing Array Attributes
# For this section we will retrieve information about the arrays. Once an array is created you can access information about the array such as the number of dimensions, its shape, its size, the data type that it stores, and the number of bytes it is consuming. There are a variety of attributes you can use such as:
# + `ndim`
# + `shape`
# + `size`
# + `dtype`
# + `itemsize`
# + `data`
# + `nbytes`
#
# For example, to get the number of dimensions for an array:
# ```Python
# # Print the number of dimensions for the array:
# print(my_3d_array.ndim)
# ```
#
# You can learn more about these attributes, and others from the [NumPy ndarray reference page](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) if you need help understanding the attributes.
#
# Notice that we use dot notation to access these attributes, yet we do not provide the parentheses `()` like we would for a function call. This is because we are accessing attributes (i.e. member variables) of the NumPy object; we are not calling a function.
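# The attributes listed above can be demonstrated on a small example array (the values here are illustrative):

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6]])

print(arr.ndim)    # 2: number of dimensions (axes)
print(arr.shape)   # (2, 3): size along each dimension
print(arr.size)    # 6: total number of elements
print(arr.dtype)   # data type of the stored elements
print(arr.nbytes == arr.itemsize * arr.size)   # True: total bytes consumed
```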
# ### Task 3a: Accessing Array Attributes
#
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, perform the following.
#
# - Create a NumPy array.
# - Write code that prints these attributes (one per line): `ndim`, `shape`, `size`, `dtype`, `itemsize`, `data`, `nbytes`.
# - Add a comment line, before each line describing what value the attribute returns.
#
# + [markdown] tags=[]
# ## 4. Creating Initialized Arrays
#
# Here we will learn to create initialized arrays. These arrays are pre-initialized with default values. NumPy provides a variety of easy-to-use functions for creating and initializing an array. Some of these include:
#
# + [np.ones()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ones.html#numpy.ones): Returns a new array of given shape and type, filled with ones.
# + [np.zeros()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html#numpy.zeros): Returns a new array of given shape and type, filled with zeros.
# + [np.empty()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.empty.html#numpy.empty): Returns a new array of given shape and type, without initializing entries.
# + [np.full()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.full.html#numpy.full): Returns a new array of given shape and type, filled with a given fill value.
# + [np.arange()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html#numpy.arange): Returns a new array of evenly spaced values within a given interval.
# + [np.linspace()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html#numpy.linspace): Returns a new array of evenly spaced numbers over a specified interval.
# + [np.random.random](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.random.random.html): Can be used to return a single random value or an array of random values between 0 and 1.
#
# Take a moment to learn more about the functions listed above by clicking on each function name, which links to the NumPy documentation. Pay attention to the arguments that each receives and the type of output (i.e. array) it generates.
#
# NumPy has a large list of array creation functions; you can learn more about these functions on the [array creation routines page](https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html) of the NumPy documentation.
#
# To demonstrate the use of these functions, the following code will create a two-dimensional array with 3 rows and 4 columns (i.e. 3 *x* 4) filled with 0's.
#
# ```Python
# zeros = np.zeros((3, 4))
# ```
#
# The following creates a 1D array of values between 3 and 7
#
# ```Python
# np.arange(3, 7)
# ```
# The result is: `array([3, 4, 5, 6])`
#
# The following creates a 1D array of values between 0 and 10 spaced every 2 integers:
#
# ```Python
# np.arange(0, 10, 2)
# ```
# The result is: `array([0, 2, 4, 6, 8])`
#
# Notice that, just like with Python list slicing, the range includes values up to, but not including, the "stop" value of the range.
#
# -
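# The contrast between `arange` (exclusive stop, step size) and `linspace` (inclusive endpoints, sample count) can be seen side by side:

```python
import numpy as np

# arange excludes the "stop" value, just like Python slicing...
stepped = np.arange(0, 10, 2)

# ...while linspace includes both endpoints by default and takes a
# *count* of samples rather than a step size.
evenly = np.linspace(0, 10, 6)

print(stepped)   # [0 2 4 6 8]
print(evenly)    # [ 0.  2.  4.  6.  8. 10.]
```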
# ### Task 4a: Initializing Arrays
#
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, perform the following.
#
# + Create an initialized array by using these functions: `ones`, `zeros`, `empty`, `full`, `arange`, `linspace` and `random.random`. Be sure to follow each array creation with a call to `print()` to display your newly created arrays.
# + Add a comment above each function call describing what is being done.
# ## 5. Performing Math and Broadcasting
#
# At times you may want to apply mathematical operations between arrays. For example, suppose you wanted to add, multiply or divide the contents of two arrays. If the two arrays are the same size this is straightforward. However, if the arrays are not the same size then it is more challenging. This is where broadcasting comes into play:
#
# > The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. (https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
#
# ### 5.1 Arrays of the same size
# To demonstrate math with arrays of the same size, the following cell contains code that creates two arrays of the exact same size: _3 x 4_. Execute the cell to create those arrays:
# +
# Define demo arrays:
demo_a = np.ones((3,4))
demo_b = np.random.random((3,4))
# Print the shapes of each array.
print(f"demo_a shape: {demo_a.shape}")
print(f"demo_b Shape: {demo_b.shape}")
# -
# Let's print the arrays to see what they contain:
print(demo_a)
print(demo_b)
# Because these arrays are the same size we can perform basic math by using common arithmetic symbols. Execute the following cell to see the results of adding the two demo arrays:
# These arrays have the same shape, so element-wise addition works directly.
demo_a + demo_b
# The addition resulted in the corresponding positions in each matrix being added together, creating a new matrix. If you need clarification on how two matrices can be added or subtracted, see the [Purple Math](https://www.purplemath.com/modules/mtrxadd.htm) site for examples.
# ### 5.2 Broadcasting for Arrays of Different Sizes
# When arrays are not the same size, you cannot perform simple math. For this, NumPy provides a service known as "broadcasting". To broadcast, NumPy virtually stretches the smaller array along its dimensions of size 1 so that the shapes match, without actually copying the data.
#
# To broadcast, NumPy begins at the right-most dimensions of the arrays and compares them, then moves left and compares the next set. As long as each pair of dimensions meets the following criteria, broadcasting can be performed:
#
# + The dimensions are equal or
# + One of the dimensions is 1.
#
# Consider two arrays of the following dimensions:
#
# + 4D array 1: 10 x 1 x 3 x 1
# + 3D array 2: 2 x 1 x 9
#
# These arrays are not the same size, but they are compatible with broadcasting because at each dimension (from right to left) the dimension criteria are met. When performing math, the value in each dimension of size 1 is broadcast to fill that dimension (an example is provided below). If the above arrays are added, the resulting array will be broadcast to a size of _10 x 2 x 3 x 9_.
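# The shape arithmetic above can be verified directly (a small sketch using zero-filled arrays):

```python
import numpy as np

# Shapes from the example above: (10, 1, 3, 1) and (2, 1, 9).
a = np.zeros((10, 1, 3, 1))
b = np.zeros((2, 1, 9))

# Comparing right to left: 1 vs 9, 3 vs 1, 1 vs 2 -- each pair is
# equal or contains a 1, so the arrays broadcast together.
result = a + b
print(result.shape)   # (10, 2, 3, 9)
```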
# To demonstrate math with arrays of different sizes, the following cell contains code that creates two arrays: one of size _3 x 4_ and another of size _4_. Execute the cell to create those arrays:
# +
# Create the arrays.
demo_c = np.ones((3,4))
demo_d = np.arange(4)
# Print the array shapes.
print(f"demo_c shape: {demo_c.shape}")
print(f"demo_d Shape: {demo_d.shape}")
# -
# Let's print the arrays to see what they contain:
print(demo_c)
print(demo_d)
# Because these arrays meet our broadcasting requirements, we can perform basic math by using common arithmetic symbols. Execute the following cell to see the results of adding the two demo arrays:
demo_c + demo_d
# The addition resulted in the value in each dimension of size 1 being "broadcast" or "stretched" throughout that dimension and then used in the operation.
# ### 5.3 Broadcasting With Higher Dimensions
# Consider the following arrays of 2 and 3 dimensions.
demo_e = np.ones((3, 4))
demo_f = np.random.random((5, 1, 4))
print(f"demo_e shape: {demo_e.shape}")
print(f"demo_f shape: {demo_f.shape}")
# +
new_arr = np.array([[0,0,0,0],[1,1,1,1],[2,2,2,2]])
print(new_arr)
demo_e = demo_e + new_arr
# -
# Print the arrays to see what they contain:
print(demo_e)
print(demo_f)
# These two arrays meet the rules for broadcasting because they both have a 4 in their last dimension and there is a 1 in the second dimension of `demo_f`.
#
# Perform the math by executing the following cell:
result = demo_e + demo_f
print(result)
# The resulting array has dimensions of _5 x 3 x 4_. For this math to work, the values from `demo_f` had to be "stretched" (i.e. copied and then added) in the second dimension.
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ### Task 5a: Broadcasting Arrays
#
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, perform the following.
#
# + Create two arrays of differing sizes but compatible with broadcasting.
# + Perform addition, multiplication and subtraction.
# + Create two additional arrays of differing size that do not meet the rules for broadcasting and try a mathematical operation.
# -
# ## 6. NumPy Aggregate Functions
# NumPy also provides a variety of functions that "aggregate" data. Examples of aggregating data include calculating the sum of every element in the array, calculating the mean, standard deviation, etc. Below are a few examples of aggregation functions provided by NumPy.
#
# **Mathematics Functions**:
# + [np.sum()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html): sums the array elements over a given axis
# + [np.minimum()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.minimum.html#numpy.minimum): compares two arrays and returns a new array of the minimum at each position (i.e. element-wise)
# + [np.maximum()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.maximum.html#numpy.maximum): compares two arrays and returns a new array of the maximum at each position (i.e. element-wise).
# + [np.cumsum()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html#numpy.cumsum): returns the cumulative sum of the elements along a given axis.
#
# You can find more about mathematical functions for arrays at the [Numpy mathematical functions page](https://docs.scipy.org/doc/numpy/reference/routines.math.html).
#
# **Statistics**:
# + [np.mean()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html): compute the arithmetic mean along the specified axis.
# + [np.median()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.median.html#numpy.median): compute the median along the specified axis.
# + [np.corrcoef()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html#numpy.corrcoef): return Pearson product-moment correlation coefficients between two 1D arrays or one 2D array.
# + [np.std()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html#numpy.std): compute the standard deviation along the specified axis.
# + [np.var()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.var.html#numpy.var): compute the variance along the specified axis.
#
# You can find more about statistical functions for arrays at the [Numpy statistical functions page](https://docs.scipy.org/doc/numpy/reference/routines.statistics.html).
#
#
# Take a moment to learn more about the functions listed above by clicking on each function name, which links to the NumPy documentation. Pay attention to the arguments that each receives and the type of output it generates.
#
# For example:
# ```Python
# # Calculate the sum of our demo data from above
# np.sum(demo_e)
# ```
#
# ### Task 6a: Math/Stats Aggregate Functions
#
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, perform the following.
#
# + Create three to five arrays
# + Experiment with each of the aggregation functions: `sum`, `minimum`, `maximum`, `cumsum`, `mean`, `np.corrcoef`, `np.std`, `np.var`.
# + For each function call, add a comment line above it that describes what it does.
# ### 6.1 Logical Aggregate Functions
# When arrays contain boolean values there are additional logical aggregation functions you can use:
#
# + [logical_and()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html#numpy.logical_and): computes the element-wise truth value of two arrays using AND.
# + [logical_or()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_or.html#numpy.logical_or): computes the element-wise truth value of two arrays using OR.
# + [logical_not()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_not.html#numpy.logical_not): computes the element-wise truth value of two arrays using NOT.
#
#
# You can find more about logical functions for arrays at the [Numpy Logic functions page](https://docs.scipy.org/doc/numpy/reference/routines.logic.html).
#
# Take a moment to learn more about the functions listed above by clicking on each function name, which links to the NumPy documentation. Pay attention to the arguments that each receives and the type of output it generates.
#
# To demonstrate usage of the logical functions, please execute the following cells and examine the results produced.
# +
# Two lists of boolean values
a = [True, True, False, False]
b = [False, False, True, True]
# Perform a logical "or":
np.logical_or(a, b)
# -
# Perform a logical "and":
np.logical_and(a, b)
# ### Task 6b: Logical Aggregate Functions
#
# <span style="float:right; margin-left:10px; clear:both;">
# </span>
#
# In the practice notebook, perform the following.
#
# + Create two arrays containing boolean values.
# + Experiment with each of the aggregation functions: `logical_and`, `logical_or`, `logical_not`.
# + For each function call, add a comment line above it that describes what it does.
| .ipynb_checkpoints/L01-NumPy_Part1-Lesson-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Test pandas_cub manually
#
# In this notebook, we will test pandas_cub manually.
import sys
sys.executable
# %load_ext autoreload
# %autoreload 2
import numpy as np
import pandas_cub as pdc
import pandas_cub_final as pdcf
import pandas as pd
# +
name = np.array(['Penelope', 'Niko', 'Eleni'])
state = np.array(['Texas', 'California', 'Texas'])
height = np.array([3.6, 3.5, 5.2])
school = np.array([True, False, True])
weight = np.array([45, 40, 130])
data = {'name': name, 'state': state, 'height': height,
'school': school, 'weight': weight}
df = pdc.DataFrame(data)
df_final = pdcf.DataFrame(data)
df_pandas = pd.DataFrame(data)
# -
df
df_final
df_pandas
df[df['state'] == 'Texas']
| Test Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kleite/Imersao_dados/blob/main/Imersao_dados.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="9Zm8s2RUzvut"
# # 0.0 Imports
# + id="G2zURtwXz8Kz"
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
# + [markdown] id="g7XcZaMqzyts"
# # 1.0 Loading the data
# + id="4m8TeLHQzqo2" outputId="c1a41366-734f-4c5b-da33-b12fc3ca8bfc" colab={"base_uri": "https://localhost:8080/", "height": 245}
df_enem = pd.read_csv('https://raw.githubusercontent.com/kleite/Imersao_dados/main/data/enem_2019_sample_43278')
df_enem.head()
# + [markdown] id="l92-Z4HS0Jj0"
# # 2.0 Data description
# + [markdown] id="WBZhk6LmmjMr"
# ## 2.1 Data dimensions
# + id="SgYvo_bQjo00" outputId="302f2d1f-0e86-41e0-beb9-4adfd185d1eb" colab={"base_uri": "https://localhost:8080/", "height": 50}
print('Number of rows: {}'.format(df_enem.shape[0]))
print('Number of columns: {}'.format(df_enem.shape[1]))
# + [markdown] id="bvU1UL7smnpb"
# ## 2.2 Data types
# + id="_qx7362oj_Xs" outputId="90106c09-bf0d-4b5c-ed57-9bfc61330144" colab={"base_uri": "https://localhost:8080/", "height": 50}
num_attributes = df_enem.select_dtypes( include=['int64', 'float64'] )
cat_attributes = df_enem.select_dtypes( exclude=['int64','float64'] )
print('Number of numeric variables: {}'.format(num_attributes.columns.value_counts().sum()))
print('Number of categorical variables: {}'.format(cat_attributes.columns.value_counts().sum()))
# + [markdown] id="3e9h72lyo7zd"
# ## 2.3 Duplicate rows
# + id="_fq8TWoZpDk5"
df_enem = df_enem.drop_duplicates()
# + [markdown] id="bSK09Pctmsw9"
# ## 2.4 Missing values
# + [markdown] id="IvA-FUj2qL7I"
# 2.4.1 Counting missing values
# + id="_3hB2AvYptxm" outputId="76907007-3260-4b2a-b83d-ee525273762b" colab={"base_uri": "https://localhost:8080/", "height": 217}
# Store the fraction of missing values per column in a separate variable
# so the original DataFrame is not overwritten.
null_frac = df_enem.isnull().sum() / df_enem.shape[0]
null_frac[null_frac != 0].sort_values(ascending=False)
# + [markdown] id="F827XoyJqQ7H"
# 2.4.2 Filling missing values
# + [markdown] id="Z9YR42f4qh3e"
# 2.5 Descriptive analysis
# + id="DPJ0HtOrrpYa"
skew = num_attributes.skew()
skew = pd.DataFrame(skew, columns = ['skew']).T
kurt = num_attributes.kurt()
kurt = pd.DataFrame(kurt, columns = ['kurt']).T
# + id="VJc2biTKsXZg" outputId="040b9244-56f9-459d-daea-24b96ede73e2" colab={"base_uri": "https://localhost:8080/", "height": 394}
pd.concat([num_attributes.describe(), skew, kurt])
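# For reference, pandas' ``skew()`` is (to my understanding) the adjusted Fisher-Pearson coefficient; a small pure-Python version of that formula, as a sketch only:

```python
import math

def sample_skew(xs):
    # adjusted Fisher-Pearson skewness: sqrt(n(n-1))/(n-2) * m3/m2**1.5
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return math.sqrt(n * (n - 1)) / (n - 2) * m3 / m2 ** 1.5

print(sample_skew([1, 2, 3, 4, 5]))   # symmetric data → 0.0
print(sample_skew([1, 1, 1, 10]))     # long right tail → positive skew
```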
# + [markdown] id="xpWU9ht20RWE"
# # 3.0 Feature engineering
# + id="PNaBlL_B0Q25"
| Imersao_dados.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
DATA_DIR = 'tmp/data'
NUM_STEPS = 1000
MINIBATCH_SIZE = 100
LEARNING_RATE = 0.5
DIR = "model"
# Layers sizes
L1 = 200
L2 = 100
L3 = 60
L4 = 30
L5 = 10
data = input_data.read_data_sets(DATA_DIR , one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
l1 = tf.layers.dense(x,L1,activation=tf.nn.relu,use_bias=True)
l2 = tf.layers.dense(l1,L2,activation=tf.nn.relu,use_bias=True)
l3 = tf.layers.dense(l2,L3,activation=tf.nn.relu,use_bias=True)
l4 = tf.layers.dense(l3,L4,activation=tf.nn.relu,use_bias=True)
y_pred = tf.layers.dense(l4,L5,use_bias=True) # logits: no activation before softmax cross-entropy
y_true = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( logits=y_pred, labels=y_true))
gd_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cross_entropy)
correct_mask = tf.equal(tf.argmax(y_pred , 1), tf.argmax(y_true , 1))
accuracy = tf.reduce_mean(tf.cast(correct_mask , tf.float32))
saver = tf.train.Saver(max_to_keep=7, keep_checkpoint_every_n_hours=1)
with tf.Session() as sess:
    # Train the model
sess.run(tf.global_variables_initializer())
for step in range(1,NUM_STEPS+1):
batch_x , batch_y = data.train.next_batch(MINIBATCH_SIZE)
sess.run(gd_step, feed_dict={x:batch_x,y_true:batch_y})
        # Save a checkpoint every 50 steps
if step % 50 == 0:
saver.save(sess, os.path.join(DIR, "model_ckpt"), global_step=step)
ans = sess.run(accuracy, feed_dict={x: data.test.images, y_true: data.test.labels})
print("Accuracy: {:.4}%".format(ans*100))
# -
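# For intuition, ``softmax_cross_entropy_with_logits`` above fuses a softmax with the cross-entropy loss. A minimal NumPy sketch of the same computation (illustrative values, not the MNIST data):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # subtract the row max for numerical stability before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-(labels * log_softmax).sum(axis=1).mean())

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot, as in the MNIST pipeline
loss = softmax_cross_entropy(logits, labels)
print(loss)  # ≈ 0.417
```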
| Term 7/Methods of decision support/Lab 3/Task 3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Need for Speed
# You will learn how to time your code and locate its bottlenecks. You will learn how to alleviate such bottlenecks using techniques such as **comprehensions**, **generators**, **vectorization** and **parallelization**. You will be introduced to the **Numba** library for speeding up your code. You will hear about the fundamental computational costs of mathematical operations and memory management (caching).
# +
import time
import numpy as np
import pandas as pd
from scipy import optimize
# %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# %load_ext autoreload
# %autoreload 2
# performance libraries
import numba as nb
import joblib # conda install joblib
import dask # conda install dask
import dask.dataframe as dd
# magics
# conda install line_profiler
# conda install memory_profiler
# %load_ext line_profiler
# %load_ext memory_profiler
# local module
import needforspeed
# -
import psutil
CPUs = psutil.cpu_count()
CPUs_list = set(np.sort([1,2,4,*np.arange(8,CPUs+1,4)]))
print(f'this computer has {CPUs} CPUs')
# # Computers
#
# We can represent a **computer** in a simplified diagram as:
#
# <img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/11/computer.gif" alt="computer" width=60% />
#
# **Performance goals:**
#
# 1. Minimize the number of logical and algebraic operations ([details](https://streamhpc.com/blog/2012-07-16/how-expensive-is-an-operation-on-a-cpu/))
# 2. Minimize the number of times new memory needs to be allocated (and the amount)
# 3. Minimize the number of read and write memory (and especially storage) operations
# Optimizing your code for **optimal performance is a very very complicated task**. When using Python a lot of stuff is happening *under the hood*, which you don't control.
#
# * Python is an **interpreted** language; each line of Python code is converted into machine code at runtime when the line is reached. Error checks and memory management are performed automatically.
# * Faster languages (C/C++, Fortran) are **compiled** to machine code before the program is run $\rightarrow$ faster, but you are required to specify e.g. types of variables beforehand. Error checks and memory management must be performed manually.
# **Often overlooked:** today's CPUs are so fast that feeding them data quickly enough can be a serious bottleneck.
# **Modern CPUs** can do a lot of smart, complicated, stuff.
#
# > **Single instruction, multiple data (SIMD):** The computational cost of multiplying one float by another is the same as multiplying e.g. vectors of 4 doubles at once (or 8 doubles if you have AVX-512).
#
# > **Out-of-order execution:** If you tell the computer to
# >
# > 1. read data ``X``
# > 2. run ``f(X)``
# > 3. read data ``Y``
# > 4. run ``g(Y)``
# >
# > then it might try to do step 2 and step 3 simultaneously because they use different parts of the CPU.
# > **Caching:** Let ``x`` be a one-dimensional numpy array, and assume you read the value in ``x[i]`` and then read the value in ``x[j]``. If ``j`` is "close" to ``i`` then the value of ``x[j]`` will already be in the *cache* and the second read operation will be faster (almost instantaneous).
# **Parallelization:** Modern computers have multiple CPUs (or even other computing units such as GPUs). This is to some degree used implicitly by e.g. built-in Numpy and Scipy functions, but we also discuss how to do this manually later in this lecture. The clock speed of each CPU has stopped increasing for technical reasons, but the number of transistors on each chip continues to increase exponentially (**Moore's Law**), now largely in the form of more CPU cores.
# <img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/11/moores_law.png" alt="moores_law" width=80% />
# **Memory:** We have many different kinds of memory
#
# 1. Cache
# 2. RAM (Random Access Memory)
# 3. Hard drive
# We control what is in the **RAM** and on the **hard drive**; the latter is a lot slower than the former. The cache is used by the computer under the hood.
# <img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/11/memory.gif" alt="memory" width=40% />
# **Three important principles:**
#
# 1. **Use built-in features** of Python, Numpy, Scipy etc. whenever possible (often use fast compiled code).
# 2. **Ordered operations** are better than random operations.
# 3. **"Premature optimization is the root of all evil"** (Donald Knuth).
# There is a **trade-off** between **human time** (the time it takes to write the code) and **computer time** (the time it takes to run the code).
# # Timing and precomputations
# Consider the following function doing some simple algebraic operations:
def myfun(x,i):
y = 0
for j in range(100):
y += x**j
return y + i
# And another function calling the former function in a loop:
def myfun_loop(n):
mysum = 0
for i in range(n):
mysum += myfun(5,i)
return mysum
# **How long does it take to run ``myfun_loop``:**
# **A.** Manual timing
t0 = time.time()
mysum = myfun_loop(1000)
t1 = time.time()
print(f'{t1-t0:.8} seconds')
# **B.** Use the ``%time`` magic (works on a single line)
# %time mysum = myfun_loop(1000)
# %time mysum = myfun_loop(1000)
# > **ms** $\equiv$ milliseconds, $10^{-3}$ of a second.<br>
# > **$\mu$s** $\equiv$ microseconds, $10^{-6}$ of a second.<br>
# > **ns** $\equiv$ nanoseconds, $10^{-9}$ of a second.
# **C.** Use the ``%timeit`` magic to also see variability (works on a single line)
# %timeit myfun_loop(1000)
# %timeit -r 5 -n 20 myfun_loop(1000)
# > ``%timeit`` reports the best of ``r`` runs, each calling the code ``n`` times in a loop
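# Outside notebooks, the standard-library ``timeit`` module gives the same kind of measurement (the repeat/number values below are arbitrary choices):

```python
import timeit

def myfun(x, i):
    y = 0
    for j in range(100):
        y += x**j
    return y + i

def myfun_loop(n):
    mysum = 0
    for i in range(n):
        mysum += myfun(5, i)
    return mysum

# best of 3 repeats, 5 calls each — the script analogue of %timeit -r 3 -n 5
times = timeit.repeat(lambda: myfun_loop(100), repeat=3, number=5)
print(f'best: {min(times)/5:.6f} seconds per call')
```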
# **D.** Use the ``%%time`` magic (works on a whole cell)
# %%time
n = 1000
myfun_loop(n);
# **E.** Use the ``%%timeit`` magic to also see variability (works on a whole cell)
# %%timeit
n = 1000
myfun_loop(n)
# **Question:** How can we speed up the computation using **precomputation**?
# +
def myfun_loop_fast(n):
myfunx = myfun(5,0)
mysum = 0
for i in range(n):
        mysum += myfunx + i
return mysum
# remember
def myfun_loop(n):
mysum = 0
for i in range(n):
mysum += myfun(5,i)
return mysum
def myfun(x,i):
y = 0
for j in range(100):
y += x**j
return y + i
# -
# **Answer:**
# + jupyter={"source_hidden": true}
def myfun_loop_fast(n):
myfunx = myfun(5,0) # precomputation
mysum = 0
for i in range(n):
mysum += myfunx + i
return mysum
# -
t0 = time.time()
mysum_fast = myfun_loop_fast(1000)
t1 = time.time()
print(f'{t1-t0:.8f} seconds')
# Too fast to be measured with ``time.time()``. The ``%timeit`` magic still works:
# %timeit myfun_loop(1000)
# %timeit myfun_loop_fast(1000)
# $\rightarrow$ **orders of magnitude faster!**
#
# Check the **results are the same**:
assert mysum == mysum_fast
# ## Premature optimization is the root of all evil
# **Important:** Before deciding whether to do a precomputation (which often makes the code harder to read) we should investigate whether it alleviates a bottleneck.
#
# * **A.** Insert multiple ``time.time()`` to time different parts of the code.
# * **B.** Use the ``line_profiler`` with syntax (also works with methods for classes)
#
# ``%lprun -f FUNCTION_TO_PROFILE -f FUNCTION_TO_PROFILE FUNCTION_TO_RUN``
# **Baseline method:**
# %lprun -f myfun -f myfun_loop myfun_loop(1000)
# **Observation:** Most of the time is spent in ``myfun()``, more specifically the computation of the power in line 4. The precomputation solves this problem.
# **Compare with the fast method:**
# %lprun -f myfun_loop_fast myfun_loop_fast(1000)
# # List comprehensions are your friend
# We can find the first $n$ squares using a **loop**:
def squares(n):
result = []
for i in range(n):
result.append(i*i)
return result
# Or in a **list comprehension**:
def squares_comprehension(n):
return [i*i for i in range(n)]
# They give the **same result**:
n = 1000
mylist = squares(n)
mylist_fast = squares_comprehension(n)
assert mylist == mylist_fast
# But the **list comprehension is faster**:
# %timeit mylist = squares(n)
# %timeit mylist_fast = squares_comprehension(n)
# **Question:** Why is this slower?
# %timeit [i**2 for i in range(1,n+1)]
# ## Generators
# Assume you are only interested in the **sum of the squares**. Can be calculated as follows:
squares_list = [i*i for i in range(n)]
mysum = 0
for square in squares_list:
mysum += square
# **Problem:** In line 1 we create the full list even though we only need one element at a time<br>
# $\rightarrow $ *we allocate memory we need not allocate.*
#
# **Solution:** Can be avoided with a **generator**.
# +
squares_generator = (i*i for i in range(n)) # notice: parentheses instead of brackets
mysum_gen = 0
for square in squares_generator:
mysum_gen += square
assert mysum == mysum_gen
# -
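# The memory claim can be checked directly with ``sys.getsizeof``: the list stores all ``n`` results, while the generator only stores its current state.

```python
import sys

n = 1000
squares_list = [i*i for i in range(n)]   # allocates all n results up front
squares_gen = (i*i for i in range(n))    # allocates only the generator object
print(sys.getsizeof(squares_list), sys.getsizeof(squares_gen))
```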
# The **memory footprint** can be investigated with the **memory_profiler** with syntax
#
# ``%mprun -f FUNCTION_TO_PROFILE -f FUNCTION_TO_PROFILE FUNCTION_TO_RUN``
#
# **Caveat:** Needs to be a function in an external module.
# %mprun -f needforspeed.test_memory needforspeed.test_memory(10**6)
# > **MiB** 1 MiB = 1.048576 MB
# >
# > **Numpy:** Note how you can save memory by specifying the data type for the numpy array.
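# The dtype point can be illustrated directly: halving the element size halves the array's memory footprint (the array size here is illustrative).

```python
import numpy as np

big = np.ones(10**6)                      # float64: 8 bytes per element
small = np.ones(10**6, dtype=np.float32)  # float32: 4 bytes per element
print(big.nbytes, small.nbytes)  # → 8000000 4000000
```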
# **Alternative:** Generators can also be created as functions with a ``yield`` instead of a ``return``
# +
def f_func(n):
for i in range(n):
yield i*i
squares_generator = f_func(n)
mysum_gen = 0
for square in squares_generator:
mysum_gen += square
assert mysum == mysum_gen
# -
# ## Details on generators (+)
# As everything else in Python **a generator is just a special kind of class**:
# +
class f_class():
def __init__(self,n):
self.i = 0
self.n = n
def __iter__(self):
# print('calling __iter__')
return self
def __next__(self):
# print('calling __iter__')
if self.i < self.n:
cur = self.i*self.i
self.i += 1
return cur
else:
raise StopIteration()
squares_generator = f_class(n)
mysum_gen = 0
for square in squares_generator:
mysum_gen += square
assert mysum == mysum_gen
# -
# > **Note:** ``for x in vec`` first calls ``iter`` on vec and then ``next`` repeatly.
squares_generator = iter(f_class(n))
print(next(squares_generator))
print(next(squares_generator))
print(next(squares_generator))
print(next(squares_generator))
# **Illustrative example:**
# +
def g():
print('first run')
yield 1
print('running again')
yield 9
print('running again again')
yield 4
mygen = iter(g())
print(next(mygen))
print(next(mygen))
print(next(mygen))
try:
print(next(mygen))
except:
print('no more values to yield')
# -
for x in g():
print(x)
# # Optimizing Numpy
# ## Tip 1: Always use vectorized operations when available
#
# **Simple comparison:**
# +
x = np.random.uniform(size=500000)
def python_add(x):
y = []
for xi in x:
y.append(xi+1)
return y
def numpy_add(x):
y = np.empty(x.size)
for i in range(x.size):
y[i] = x[i]+1
return y
def numpy_add_vec(x):
return x+1
assert np.allclose(python_add(x),numpy_add(x))
assert np.allclose(python_add(x),numpy_add_vec(x))
# %timeit python_add(x)
# %timeit numpy_add(x)
# %timeit numpy_add_vec(x)
# -
# Even **stronger** when the **computation is more complicated:**
# +
def python_exp(x):
y = []
for xi in x:
y.append(np.exp(xi))
return y
def numpy_exp(x):
y = np.empty(x.size)
for i in range(x.size):
y[i] = np.exp(x[i])
return y
def numpy_exp_vec(x):
return np.exp(x)
assert np.allclose(python_exp(x),numpy_exp(x))
assert np.allclose(python_exp(x),numpy_exp_vec(x))
# %timeit python_exp(x)
# %timeit numpy_exp(x)
# %timeit numpy_exp_vec(x)
# -
# Also works for a **conditional sum**:
# +
def python_exp_cond(x):
return [np.exp(xi) for xi in x if xi < 0.5]
def numpy_exp_vec_cond(x):
y = np.exp(x[x < 0.5])
return y
def numpy_exp_vec_cond_alt(x):
y = np.exp(x)[x < 0.5]
return y
assert np.allclose(python_exp_cond(x),numpy_exp_vec_cond(x))
assert np.allclose(python_exp_cond(x),numpy_exp_vec_cond_alt(x))
# %timeit python_exp_cond(x)
# %timeit numpy_exp_vec_cond(x)
# %timeit numpy_exp_vec_cond_alt(x)
# -
# **Question:** Why do you think the speed-up is less pronounced in this case?
# ## Tip 2: Operations are faster on rows than on columns
#
# Generally, operate on the **outermost index**.
# +
n = 1000
x = np.random.uniform(size=(n,n))
def add_rowsums(x):
mysum = 0
for i in range(x.shape[0]):
mysum += np.sum(np.exp(x[i,:]))
return mysum
def add_colsums(x):
mysum = 0
for j in range(x.shape[1]):
mysum += np.sum(np.exp(x[:,j]))
return mysum
assert np.allclose(add_rowsums(x),add_colsums(x))
# %timeit add_rowsums(x)
# %timeit add_colsums(x)
# -
# <img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/11/numpy_memory_layout.png" alt="numpy_memory_layout" width=60% />
# The **memory structure can be changed manually** so that working on columns (innermost index) is better than working on rows (outermost index):
y = np.array(x,order='F') # the default is order='C'
# %timeit add_rowsums(y)
# %timeit add_colsums(y)
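# The layout difference can be seen in the arrays' strides — the number of bytes to step to the next element along each axis:

```python
import numpy as np

x = np.zeros((1000, 1000))   # C order: elements within a row are contiguous
y = np.array(x, order='F')   # Fortran order: elements within a column are contiguous
print(x.strides)  # → (8000, 8)
print(y.strides)  # → (8, 8000)
```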
# ## Tip 3: Also use vectorized operations when it is a bit cumbersome
# Consider the task of calculating the following **expected value**:
#
# $$
# \begin{aligned}
# W(a)&=\mathbb{E}\left[\sqrt{\frac{a}{\psi}+\xi}\right]\\
# \psi,\xi&\in \begin{cases}
# 0.25 & \text{with prob. }0.25\\
# 0.5 & \text{with prob. }0.25\\
# 1.5 & \text{with prob. }0.25\\
# 1.75 & \text{with prob. }0.25
# \end{cases}\end{aligned}
# $$
#
# for a vector of $a$-values.
# **Setup:**
# +
N = 5000
a_vec = np.linspace(0,10,N)
xi_vec = np.array([0.25,0.5,1.5,1.75])
psi_vec = np.array([0.25,0.5,1.5,1.75])
xi_w_vec = np.ones(4)/4
psi_w_vec = np.ones(4)/4
# -
# **Loop based solution:**
# +
def loop(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec):
w_vec = np.zeros(a_vec.size)
for i,a in enumerate(a_vec):
for xi,xi_w in zip(xi_vec,xi_w_vec):
for psi,psi_w in zip(psi_vec,psi_w_vec):
m_plus = a/psi + xi
v_plus = np.sqrt(m_plus)
w_vec[i] += xi_w*psi_w*v_plus
return w_vec
loop_result = loop(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec)
# %timeit loop(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec)
# -
# **Prepare vectorized solution:**
# +
def prep_vec(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec):
# a. make a (1,N) instead of (N,)
a = a_vec.reshape((1,N))
# b. make xi and psi to be (xi.size*psi.size,1) vectors
xi,psi = np.meshgrid(xi_vec,psi_vec)
xi = xi.reshape((xi.size,1))
psi = psi.reshape((psi.size,1))
    # c. make xi_w and psi_w to be (xi_w.size*psi_w.size,1) vectors
xi_w,psi_w = np.meshgrid(xi_w_vec,psi_w_vec)
xi_w = xi_w.reshape((xi_w.size,1))
psi_w = psi_w.reshape((psi_w.size,1))
return a,xi,psi,xi_w,psi_w
a,xi,psi,xi_w,psi_w = prep_vec(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec)
# %timeit prep_vec(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec)
# -
# **Apply vectorized solution:**
# +
def vec(a,xi,psi,xi_w,psi_w):
m_plus_vec = a/psi + xi # use broadcasting, m_plus_vec.shape = (xi.size*psi.size,N)
    v_plus_vec = np.sqrt(m_plus_vec) # vectorized function call
w_mat = xi_w*psi_w*v_plus_vec
w_vec = np.sum(w_mat,axis=0) # sum over rows
return w_vec
vec_result = vec(a,xi,psi,xi_w,psi_w)
assert np.allclose(loop_result,vec_result)
# %timeit vec(a,xi,psi,xi_w,psi_w)
# -
# **Conclusion:** Much much faster.
# **Apply vectorized solution without preparation:**
# +
def vec(a,xi,psi,xi_w,psi_w):
m_plus_vec = a[:,np.newaxis,np.newaxis]/psi[np.newaxis,:,np.newaxis] + xi[np.newaxis,np.newaxis,:]
v_plus_vec = np.sqrt(m_plus_vec)
w_mat = xi_w[np.newaxis,np.newaxis,:]*psi_w[np.newaxis,:,np.newaxis]*v_plus_vec
w_vec = np.sum(w_mat,axis=(1,2))
return w_vec
vec_result_noprep = vec(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec)
assert np.allclose(loop_result,vec_result_noprep)
# %timeit vec(a_vec,xi_vec,psi_vec,xi_w_vec,psi_w_vec)
# -
# # Numba
# Writing **vectorized code can be cumbersome**, and in some cases it is impossible. Instead we can use the **numba** module.
#
# Adding the decorator `nb.njit` on top of a function tells numba to compile this function **to machine code just-in-time**. This takes some time when the function is called the first time, but subsequent calls are then a lot faster. *The input types can, however, not change between calls because numba infers them on the first call.*
# +
def myfun_numpy_vec(x1,x2):
y = np.empty((1,x1.size))
I = x1 < 0.5
y[I] = np.sum(np.exp(x2*x1[I]),axis=0)
y[~I] = np.sum(np.log(x2*x1[~I]),axis=0)
return y
# setup
x1 = np.random.uniform(size=10**6)
x2 = np.random.uniform(size=int(100*CPUs/8)) # adjust the size of the problem
x1_np = x1.reshape((1,x1.size))
x2_np = x2.reshape((x2.size,1))
# timing
# %timeit myfun_numpy_vec(x1_np,x2_np)
# -
# **Numba:** The first call is slower, but the result is the same, and the subsequent calls are faster:
# +
@nb.njit
def myfun_numba(x1,x2):
y = np.empty(x1.size)
for i in range(x1.size):
if x1[i] < 0.5:
y[i] = np.sum(np.exp(x2*x1[i]))
else:
y[i] = np.sum(np.log(x2*x1[i]))
return y
# call to just-in-time compile
# %time myfun_numba(x1,x2)
# actual measurement
# %timeit myfun_numba(x1,x2)
assert np.allclose(myfun_numpy_vec(x1_np,x2_np),myfun_numba(x1,x2))
# -
# **Further speed up:** Use
#
# 1. parallelization (with ``prange``), and
# 2. faster but less precise math (with ``fastmath``)
# +
@nb.njit(parallel=True)
def myfun_numba_par(x1,x2):
y = np.empty(x1.size)
for i in nb.prange(x1.size): # in parallel across threads
if x1[i] < 0.5:
y[i] = np.sum(np.exp(x2*x1[i]))
else:
y[i] = np.sum(np.log(x2*x1[i]))
return y
assert np.allclose(myfun_numpy_vec(x1_np,x2_np),myfun_numba_par(x1,x2))
# %timeit myfun_numba_par(x1,x2)
# +
@nb.njit(parallel=True,fastmath=True)
def myfun_numba_par_fast(x1,x2):
y = np.empty(x1.size)
for i in nb.prange(x1.size): # in parallel across threads
if x1[i] < 0.5:
y[i] = np.sum(np.exp(x2*x1[i]))
else:
y[i] = np.sum(np.log(x2*x1[i]))
return y
assert np.allclose(myfun_numpy_vec(x1_np,x2_np),myfun_numba_par_fast(x1,x2))
# %timeit myfun_numba_par_fast(x1,x2)
# -
# **Caveats:** Only a limited number of Python and Numpy features are supported inside just-in-time compiled functions.
#
# - [Supported Python features](https://numba.pydata.org/numba-doc/dev/reference/pysupported.html)
# - [Supported Numpy features](https://numba.pydata.org/numba-doc/dev/reference/numpysupported.html)
#
# **Parallelization** cannot always be used. Some problems are inherently sequential. If the result from a previous iteration of the loop is required in a later iteration, the iterations cannot be executed separately in parallel (except in some special cases such as summing). The larger the proportion of the code that can be run in parallel, the larger the potential speed-up. This is called **Amdahl's Law**.
#
# <img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/11/amdahls_law.png" alt="amdahls_law" width=40% />
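# Amdahl's Law can be written as a one-line function: with a fraction ``p`` of the runtime parallelized over ``n`` workers, the theoretical speedup is ``1/((1-p) + p/n)`` (a sketch for intuition):

```python
def amdahl_speedup(p, n):
    """Theoretical speedup when a fraction p of the runtime runs on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.95, 8))       # ≈ 5.93 on 8 workers
print(amdahl_speedup(0.95, 10**9))   # the 5% serial part caps the speedup below 20
```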
# # Parallelization without Numba
# ## Serial problem
# Assume we need to **solve the following optimization problem**
def solver(alpha,beta,gamma):
return optimize.minimize(lambda x: (x[0]-alpha)**2 +
(x[1]-beta)**2 +
(x[2]-gamma)**2,[0,0,0],method='nelder-mead')
# $n$ times:
# +
n = 100*CPUs
alphas = np.random.uniform(size=n)
betas = np.random.uniform(size=n)
gammas = np.random.uniform(size=n)
def serial_solver(alphas,betas,gammas):
results = [solver(alpha,beta,gamma) for (alpha,beta,gamma) in zip(alphas,betas,gammas)]
return [result.x for result in results]
# %time xopts = serial_solver(alphas,betas,gammas)
# -
# **Numba:** Numba can *not* be used for parallelization here because we rely on the non-Numba function ``scipy.optimize.minimize``.
# ## joblib
# **Joblib** can be used to run python code in **parallel**.
#
# 1. ``joblib.delayed(FUNC)(ARGS)`` creates a task to call ``FUNC`` with ``ARGS``.
# 2. ``joblib.Parallel(n_jobs=K)(TASKS)`` executes the tasks in ``TASKS`` in ``K`` parallel processes.
#
# +
def parallel_solver_joblib(alphas,betas,gammas,n_jobs=1):
tasks = (joblib.delayed(solver)(alpha,beta,gamma) for (alpha,beta,gamma) in zip(alphas,betas,gammas))
results = joblib.Parallel(n_jobs=n_jobs)(tasks)
return [result.x for result in results]
for n_jobs in CPUs_list:
if n_jobs > 36: break
print(f'n_jobs = {n_jobs}')
# %time xopts = parallel_solver_joblib(alphas,betas,gammas,n_jobs=n_jobs)
    print('')
# -
# **Drawback:** The inputs to the functions are serialized and copied to each parallel process.
#
# [More on Joblib](https://joblib.readthedocs.io/en/latest/index.html) ([examples](https://joblib.readthedocs.io/en/latest/parallel.html))
# **Question:** What happens if you remove the ``method='nelder-mead'`` in the ``solver()`` function? Why?
# ## dask (+)
# dask can also be used to run python code in **parallel**.
#
# 1. ``dask.delayed(FUNC)(ARGS)`` creates a task to call ``FUNC`` with ``ARGS``.
# 2. ``dask.compute(TASKS,scheduler='processes',num_workers=K)`` executes the tasks in ``TASKS`` in ``K`` parallel processes.
# +
def parallel_solver_dask(alphas,betas,gammas,num_workers=2):
tasks = (dask.delayed(solver)(alpha,beta,gamma) for (alpha,beta,gamma) in zip(alphas,betas,gammas))
results = dask.compute(tasks,scheduler='processes',num_workers=num_workers)
return [result.x for result in results[0]]
for num_workers in CPUs_list:
if num_workers > 36:
break
print(f'num_workers = {num_workers}')
    # %time xopts = parallel_solver_dask(alphas,betas,gammas,num_workers=num_workers)
print('')
# -
# **Overhead:** dask does not work optimally in our situation (too large overhead), but it has other interesting features where it can be used on a cluster or to solve more complex problems (see below).
#
# [More on dask](http://docs.dask.org/en/latest/) ([examples](http://docs.dask.org/en/latest/delayed.html), [youtube tutorial](https://youtu.be/mqdglv9GnM8))
# ### Some details on dask
# Dask can also handle algorithms where only some parts can be done in parallel while others must be done sequentially.
# +
def inc(x):
return x + 1
def double(x):
    return 2 * x
def add(x, y):
return x + y
data = [1, 2, 3, 4, 5]
output = []
for x in data:
a = inc(x)
b = double(x)
c = add(a, b)
output.append(c)
total = sum(output)
print(total)
# +
output = []
for x in data:
a = dask.delayed(inc)(x)
b = dask.delayed(double)(x)
c = dask.delayed(add)(a, b)
output.append(c)
total = dask.delayed(sum)(output)
print(total.compute())
# -
# # Pandas
# Create a test dataset of $N$ units in $K$ groups.
# +
def create_test_data(K,N):
np.random.seed(1986)
groups = np.random.randint(low=0,high=K,size=N)
values = np.random.uniform(size=N)
df = pd.DataFrame({'group':groups,'value':values})
return df
K = 10
N = 10**5
df = create_test_data(K,N)
df.head()
# -
df.info()
# ## Example 1: Capping values
# **A. Loops:**
#
# Use a **raw loop**:
# +
def loop(df):
result = df.value.copy()
for i in range(len(df)):
if df.loc[i,'value'] < 0.1:
result[i] = 0.1
elif df.loc[i,'value'] > 0.9:
result[i] = 0.9
return result
# %time loop(df)
loop(df).head()
# -
# Use **apply row-by-row**:
# +
def cap(value):
if value < 0.1:
return 0.1
elif value > 0.9:
return 0.9
else:
return value
# slower:
# # %time df.apply(lambda x: cap(x['value']),axis=1)
# %timeit df.value.apply(lambda x: cap(x))
df.value.apply(lambda x: cap(x)).head()
# -
# **B. Vectorization**: Avoid loop over rows.
#
# Use the **transform method**:
# +
def cap_col(col):
result = col.copy()
I = result < 0.1
result[I] = 0.1
I = result > 0.9
result[I] = 0.9
return result
# slower:
# # %timeit df.transform({'value':cap_col})
# %timeit df.value.transform(cap_col)
df.value.transform(cap_col).head()
# -
# Do it **manually**:
# %timeit cap_col(df.value)
cap_col(df.value).head()
# Do it **manually with a numpy array**
# +
def cap_col_np(col):
result = col.copy()
I = result < 0.1
result[I] = 0.1
I = result > 0.9
result[I] = 0.9
return result
# %timeit result = pd.Series(cap_col_np(df.value.values))
pd.Series(cap_col_np(df.value.values)).head()
# -
# **Observation:** The manual call of a numpy function is the fastest option.
#
# **Note:** The ``cap_col_np`` function could be sped up by numba just like any other function taking numpy inputs.
# +
# write your code here
@nb.njit
def cap_col_np_nb(col):
result = col.copy()
I = result < 0.1
result[I] = 0.1
I = result > 0.9
result[I] = 0.9
return result
# -
# **Answer:**
# + jupyter={"source_hidden": true}
@nb.njit
def cap_col_np_nb(col):
result = col.copy()
I = result < 0.1
result[I] = 0.1
I = result > 0.9
result[I] = 0.9
return result
pd.Series(cap_col_np_nb(df.value.values)).head()
# -
# %timeit result = pd.Series(cap_col_np_nb(df.value.values))
# ## Example 2: Demean within group
# Do it **manually:**
# +
def manually(df):
result = df.value.copy()
for group in range(K):
I = df.group == group
group_mean = df[I].value.mean()
result[I] = result[I]-group_mean
return result
# %timeit result = manually(df)
manually(df).head()
# -
# Use **groupby.agg** and **merge**:
# +
def demean_agg_merge(df):
means = df.groupby('group').agg({'value':'mean'}).reset_index()
means = means.rename(columns={'value':'mean'})
df_new = pd.merge(df,means,on='group',how='left')
return df_new['value'] - df_new['mean']
# %timeit demean_agg_merge(df)
demean_agg_merge(df).head()
# -
# Use **groupby.value.apply**:
# +
def demean_apply(df):
return df.groupby('group').value.apply(lambda x: x-x.mean())
# %timeit demean_apply(df)
demean_apply(df).head()
# -
# Use **groupby.value.transform:**
# +
def demean_transform(df):
return df.groupby('group').value.transform(lambda x: x-x.mean())
# %timeit demean_transform(df)
demean_transform(df).head()
# -
# Use **groupby.value.transform** with **built-in mean**:
# +
def demean_transform_fast(df):
means = df.groupby('group').value.transform('mean')
result = df.value - means
return result
# %timeit demean_transform_fast(df)
demean_transform_fast(df).head()
# -
# **Observation:** ``demean_transform_fast`` is the winner so far.
# ### Parallelization with dask and numba (+)
# Create a **bigger dataset** and set the index to group and sort by it.
K = 10
N = 5*10**7
df = create_test_data(K,N)
df = df.set_index('group')
df = df.sort_index()
df.head()
df.info()
# **Standard pandas:**
# %time df.groupby('group').value.max()
# %time df.groupby('group').value.mean()
# %time df.groupby('group').value.sum()
# %time demean_apply(df)
print('')
# %time demean_transform_fast(df)
demean_transform_fast(df).head()
# **Dask dataframe:**
#
# We can work with dask dataframes instead, which allows some computations to be done in parallel.
#
# The syntax is very similar to pandas, but far from all features are implemented (e.g. not ``transform``). There are two central differences between dask dataframes and pandas dataframes:
#
# 1. Dask dataframes are divided into **partitions**, where each partition is a subset of the index in the dataset. Computations can be done in parallel across partitions.
# 2. Dask dataframes use **lazy evaluation**. Nothing is actually done before the ``.compute()`` method is called.
#
# The ``.compute()`` method returns a pandas series or dataframe.
#
# > **More info:** [documentation for dask.dataframe](http://docs.dask.org/en/latest/dataframe.html)
# Note how **dask creates partitions based on the index**:
for k in [2,5,10]:
ddf = dd.from_pandas(df, npartitions=k)
print(f'k = {k}:',ddf.divisions)
# The time gains are, however, very modest, if there are any at all:
# +
def demean_apply_dd(dff):
result = dff.groupby('group').value.apply(lambda x: x-x.mean(), meta=('value','float64'))
return result.compute()
for k in [1,K]:
print(f'number of partitions = {k}')
# %time ddf = dd.from_pandas(df, npartitions=k)
print('')
# %time ddf.groupby('group').value.max().compute()
# %time ddf.groupby('group').value.mean().compute()
# %time ddf.groupby('group').value.sum().compute()
# %time demean_apply_dd(ddf)
print('')
demean_apply_dd(ddf).head()
# -
# **Observations:** Some computations are faster after the partitioning (not on all computers though). The missing speed-up is most likely explained by memory fetching being the bottleneck rather than performing the calculations. Generally, the size and complexity of the problem and how many CPUs you have determine how large the benefit is.
# **Numba:** Handwritten numba functions for each task can also provide a speed-up in some cases.
# +
@nb.njit(parallel=True)
def groupby_demean_numba(group,value,K):
result = np.zeros(value.size)
for i in nb.prange(K):
I = group == i
mean = np.mean(value[I])
result[I] = value[I]-mean
return result
for _ in range(3):
# %time result = pd.Series(groupby_demean_numba(df.index.values,df.value.values,K))
pd.Series(groupby_demean_numba(df.index.values,df.value.values,K)).head()
# -
# **Observation:** The speed-up is modest. Again the size and complexity of the problem and how many CPUs you have will determine how large the benefit is.
# # Summary
# **This lecture:** You learned that optimizing performance is a difficult task. The recommendation is to follow this 6-step procedure:
#
# 1. Choose the **right algorithm**
# 2. Implement **simple and robust code** for the algorithm
# 3. Profile the code to **find bottlenecks**
# 4. Use **precomputations**, **comprehensions** and **vectorization** to speed up the code
# 5. Still need more speed? Consider **numba**, **joblib** or **dask**
# 6. Still not enough? **C++** is the next level
# **Next lecture:** Perspectives on other programming languages
| 14/The_need_for_speed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook was prepared by [<NAME>](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# # Challenge Notebook
# ## Problem: Determine if a string s1 is a rotation of another string s2, by calling (only once) a function is_substring.
#
# * [Constraints](#Constraints)
# * [Test Cases](#Test-Cases)
# * [Algorithm](#Algorithm)
# * [Code](#Code)
# * [Unit Test](#Unit-Test)
# * [Solution Notebook](#Solution-Notebook)
# ## Constraints
#
# * Can we assume the string is ASCII?
# * Yes
# * Note: Unicode strings could require special handling depending on your language
# * Is this case sensitive?
# * Yes
# * Can we use additional data structures?
# * Yes
# * Can we assume this fits in memory?
# * Yes
# ## Test Cases
#
# * Any strings that differ in size -> False
# * None, 'foo' -> False (any None results in False)
# * ' ', 'foo' -> False
# * ' ', ' ' -> True
# * 'foobarbaz', 'barbazfoo' -> True
# ## Algorithm
#
# Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/rotation/rotation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
# ## Code
class Rotation(object):

    def is_substring(self, s1, s2):
        return s1 in s2 + s2

    def is_rotation(self, s1, s2):
        # call is_substring only once
        if s1 is None or s2 is None:
            return False
        if len(s1) != len(s2):
            return False
        return self.is_substring(s1, s2)
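# A quick standalone check of the doubling trick that `is_substring` relies on — a rotation of a string always appears in that string concatenated with itself (toy strings taken from the test cases):

```python
s1, s2 = 'foobarbaz', 'barbazfoo'
# lengths must match, and s1 must occur somewhere in s2 doubled
print(len(s1) == len(s2) and s1 in s2 + s2)  # True
```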
# ## Unit Test
#
#
# **The following unit test is expected to fail until you solve the challenge.**
# +
# # %load test_rotation.py
from nose.tools import assert_equal
class TestRotation(object):
def test_rotation(self):
rotation = Rotation()
assert_equal(rotation.is_rotation('o', 'oo'), False)
assert_equal(rotation.is_rotation(None, 'foo'), False)
assert_equal(rotation.is_rotation('', 'foo'), False)
assert_equal(rotation.is_rotation('', ''), True)
assert_equal(rotation.is_rotation('foobarbaz', 'barbazfoo'), True)
print('Success: test_rotation')
def main():
test = TestRotation()
test.test_rotation()
if __name__ == '__main__':
main()
# -
# ## Solution Notebook
#
# Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/rotation/rotation_solution.ipynb) for a discussion on algorithms and code solutions.
| arrays_strings/rotation/rotation_challenge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise
#
# Author: <NAME>
#
# We'll use Python to explore the content of the MNE-sample-data folder
# + tags=[]
from mne.datasets import sample
data_path = sample.data_path()
print(data_path)
# + tags=[]
import glob
fif_files = glob.glob(data_path + '/MEG/sample/' + '*.fif')
print(fif_files)
# -
subjects_dir = data_path + '/subjects'
subject = 'sample'
surf_dir = subjects_dir + "/" + subject + '/surf'
print(surf_dir)
# # Questions
#
# - What is the type of the variable `fif_files`?
# - Sort the content of `fif_files` in place
# - What is the length of `fif_files`?
# - How many files are there in the folder `surf_dir`?
# - How many files have a name that starts with `lh`?
# - How many files are raw files (i.e. end with `raw.fif`)?
# - Create a new list restricted to the raw files
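# A generic sketch of the `glob`/list operations these questions exercise, run on a throwaway directory rather than the MNE data (the file names here are made up):

```python
import glob
import os
import tempfile

# build a temporary directory with a few dummy files
d = tempfile.mkdtemp()
for name in ['a_raw.fif', 'b_raw.fif', 'lh.white', 'notes.txt']:
    open(os.path.join(d, name), 'w').close()

files = glob.glob(os.path.join(d, '*'))  # glob returns a plain Python list
files.sort()                             # in-place sort
n_lh = len([f for f in files if os.path.basename(f).startswith('lh')])
raw_files = [f for f in files if f.endswith('raw.fif')]
print(len(files), n_lh, len(raw_files))  # 4 1 2
```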
| 0d-Exercises_Python_MNE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn import model_selection
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn import svm
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import PCA
url="https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/parkinsons.data"
parkinson=pd.read_csv(url)
df=parkinson.copy()
df.shape
df
df["result"]=df["status"]
df.drop(["name","status"],axis=1,inplace=True)#dropping cols
df.shape
df.columns=[i for i in range(23)]
print(df.shape)
print(df.head())
df.describe() #No missing values
# df.columns
data=df.values
X=data[:,:22]
Y=data[:,22]
X_train,X_test,Y_train,Y_test=model_selection.train_test_split(X,Y,test_size=0.3,random_state=0)
scaler=StandardScaler()
X_train_scaled=scaler.fit_transform(X_train)
X_test_scaled=scaler.transform(X_test)
num_feature=len(df.columns)-1
len(set(Y))
pca = PCA()
pca.fit_transform(X_train_scaled)
#Calculating optimal k
k = 0
total = sum(pca.explained_variance_)
current_sum = 0
while(current_sum / total < 0.99):
current_sum += pca.explained_variance_[k]
k += 1
print(k)
num_feature=k
pca = PCA(n_components=k, whiten=True)
X_train_scaled = pca.fit_transform(X_train_scaled)
X_test_scaled = pca.transform(X_test_scaled)
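# The while-loop above that finds the smallest k reaching the variance target can also be written without a loop; a sketch with stand-in ratios (binary-exact fractions, not the real `explained_variance_ratio_`):

```python
import numpy as np

ratios = np.array([0.5, 0.25, 0.125, 0.0625, 0.0625])  # stand-in data
target = 0.9
# first index where the cumulative sum reaches the target, plus one
k = int(np.argmax(np.cumsum(ratios) >= target)) + 1
print(k)  # 4
```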
log=LogisticRegression()
log.fit(X_train_scaled,Y_train)
print(log.score(X_train_scaled,Y_train))
log.score(X_test_scaled,Y_test)
rnd=RandomForestClassifier(n_estimators=100,max_depth=10)
rnd.fit(X_train_scaled,Y_train)
print(rnd.score(X_train_scaled,Y_train))
rnd.score(X_test_scaled,Y_test)
svc=svm.SVC(kernel='linear')
svc.fit(X_train_scaled,Y_train)
print(svc.score(X_train_scaled,Y_train))
svc.score(X_test_scaled,Y_test)
svc = svm.SVC()
grid={'C':[1e2,1e3,5e3,1e4,5e4,1e5],'gamma':[1e-3,1e-4,5e-4,5e-3]}
abc=GridSearchCV(svc,grid)
abc.fit(X_train_scaled,Y_train)
abc.best_estimator_
svc=svm.SVC(C=100,gamma=0.0005)
svc.fit(X_train_scaled, Y_train)
Y_pred_svm = svc.predict(X_test_scaled)
svc_score = accuracy_score(Y_test, Y_pred_svm)
svc_score
# +
# #one-hot encoding for NN
# Y_train_encoded = list()
# Y_test_encoded = list()
# Y_train=Y_train.astype(int)
# Y_test=Y_test.astype(int)
# for value in Y_train:
# letter = [0 for i in range(16)]
# letter[value-1] = 1
# Y_train_encoded.append(letter)
# for value in Y_test:
# letter = [0 for i in range(16)]
# letter[value-1] = 1
# Y_test_encoded.append(letter)
# Y_test_encoded=np.array(Y_test_encoded)
# Y_train_encoded=np.array(Y_train_encoded)
# num_feature
# -
#Neural network
import keras
from keras.models import Sequential
from keras.layers import Dense,Dropout
model=Sequential()
model.add(Dense(units=64,activation='sigmoid',input_dim=num_feature))
model.add(Dense(units=32,activation='sigmoid'))
model.add(Dropout(rate=0.2))
model.add(Dense(units=1,activation='sigmoid'))
opti=keras.optimizers.Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
# opti=keras.optimizers.RMSprop(lr=0.01, rho=0.9, epsilon=None, decay=0.0)
# opti=keras.optimizers.Adagrad(lr=0.01, epsilon=None, decay=0.0)
model.compile(optimizer=opti,loss='binary_crossentropy',metrics=['accuracy'])
model.fit(X_train_scaled,Y_train,epochs=75,batch_size=50,validation_data=(X_test_scaled,Y_test))
score=model.evaluate(X_test_scaled,Y_test)
score
| parkinson.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pylab as plt
import numpy as np
plt.plot([1,2,3,4])
x= np.arange(0,10,.1)
y=np.sin(x)
plt.plot(x,y)
x= np.arange(0,10,.1)
y=np.exp(x)
plt.plot(x,y,'red')
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib nbagg
# +
fig = plt.figure()
ax = fig.gca(projection='3d')
x=np.arange(-10,10,.1)
y = np.sin(x)
z = np.cos(x)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
ax.plot(x,y,z,c='red')
ax.view_init(20,70)
# +
fig = plt.figure()
ax = fig.gca(projection='3d')
x = np.arange(-10,10,.1)
y = np.arange(-10,10,.1)
xv,yv = np.meshgrid(x,y)
zv = np.sin(xv)
ax.plot_wireframe(xv,yv,zv)
# +
y= lambda x: x**3 -3*x -1
yd= lambda x:3*(x**2) -3
x=0
for i in range(10000):
x2=x-(y(x)/yd(x))
x=x2;
print(x)
# -
np.roots([1,0,-3,-1])
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
from sympy import *
init_printing()
# -
| clases y trabajos/3 class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Part1
# +
from __future__ import unicode_literals
import pandas as pd
# %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import font_manager
from matplotlib.font_manager import FontProperties
font = FontProperties(fname=r"/root/anaconda2/envs/python3/lib/python3.6/site-packages/matplotlib/mpl-data/fonts/ttf/msyh.ttf")
import numpy as np
from sksurv.nonparametric import kaplan_meier_estimator
from sksurv.preprocessing import OneHotEncoder
from sksurv.linear_model import CoxnetSurvivalAnalysis#CoxPHSurvivalAnalysis
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.metrics import concordance_index_ipcw
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
# +
data1 = pd.read_csv("398908-3.csv", encoding = "GB2312")
#data1 = data1[data1["部件装上使用小时数"]!="00:00"]
data1["部件本次装机使用小时"] = data1["部件本次装机使用小时"].str.split(':').str[0].astype(int)
data1 = data1[data1["部件本次装机使用小时"]>0]
data1["IsPlanned"] = data1["非计划"]=="X"
print(data1["IsPlanned"].value_counts())
data_y = data1[["IsPlanned", "部件本次装机使用小时"]]
data_y["部件本次装机使用小时"].hist(bins=12, range=(0,60000))
data1["IsPlaneNew"] = data1["部件装上飞行小时数"]=="00:00"
data1["IsPartNew"] = data1["部件装上使用小时数"]=="00:00"
def CheckNew(p1,p2):
if p1 and p2:
return "PlaneNew-PartNew"
elif p1 and not p2:
return "PlaneNew-PartOld"
elif not p1 and p2:
return "PlaneOld-PartNew"
elif not p1 and not p2:
return "PlaneOld-PartOld"
#print([CheckNew(row["IsPlaneNew"], row["IsPartNew"]) for idx, row in data1.iterrows()])
data1["PlanePartType"] = [CheckNew(row["IsPlaneNew"], row["IsPartNew"]) for idx, row in data1.iterrows()]
data1["安装日期"] = pd.to_datetime(data1["安装日期"])
data1["安装年度"] = data1["安装日期"].dt.year
di = {"霍尼韦尔": "HONEYWELL"}
data1.replace({"最近送修公司": di}, inplace=True)
data1["最近送修公司"].fillna("Unknown", inplace=True)
data1["FH TSN"].fillna("00:00", inplace=True)
data1["部件装上飞行小时数"] = data1["部件装上飞行小时数"].str.split(':').str[0].astype(int)
data1["部件装上使用小时数"] = data1["部件装上使用小时数"].str.split(':').str[0].astype(int)
# "FH TSN" is in the same hh:mm string format; convert it too so pd.cut on it below works
data1["FH TSN"] = data1["FH TSN"].str.split(':').str[0].astype(int)
data1["部件装上飞行小时数-Range"] = pd.cut(data1['部件装上飞行小时数'], 8)
#data1["部件装上飞行循环数-Range"] = pd.cut(data1['部件装上飞行循环数'], 8)
data1["部件装上使用小时数-Range"] = pd.cut(data1['部件装上使用小时数'], 8)
#data1["部件装上使用循环数-Range"] = pd.cut(data1['部件装上使用循环数'], 8)
data1["CY TSN-Range"] = pd.cut(data1['CY TSN'], 8)
data1["FH TSN-Range"] = pd.cut(data1['FH TSN'], 8)
#data_x = data1[["机型","制造序列号","机号","参考类型","指令类型","序号","拆换原因","部件装上飞行循环数","部件装上使用循环数",
# "部件拆下飞行循环数","部件拆下使用循环数","装上序号","最近送修公司","CY TSN","FH TSN"]]
#data_x = data1[["机型","参考类型","指令类型","拆换原因","部件装上飞行循环数","部件装上使用循环数",
# "部件拆下飞行循环数","部件拆下使用循环数","CY TSN","FH TSN"]]
data_x = data1[["机型","安装年度","部件装上飞行小时数-Range","部件装上使用小时数-Range","FH TSN-Range", "最近送修公司","PlanePartType"]]
# -
time, survival_prob = kaplan_meier_estimator(data_y["IsPlanned"], data_y["部件本次装机使用小时"])
plt.step(time, survival_prob, where="post")
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
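# Behind `kaplan_meier_estimator` is the product-limit formula S(t) = prod over event times t_i <= t of (1 - d_i / n_i); a minimal hand-rolled sketch on toy data (not the sksurv implementation):

```python
import numpy as np

def km_survival(times, observed):
    # product-limit estimate at each distinct observed event time
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    event_times = np.unique(times[observed])
    s, surv = 1.0, []
    for t in event_times:
        n_at_risk = np.sum(times >= t)        # still under observation at t
        d = np.sum((times == t) & observed)   # events exactly at t
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return event_times, np.array(surv)

t, s = km_survival([1, 2, 3], [True, True, True])
print(s)  # roughly [0.667, 0.333, 0.0]
```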
# +
# "机型","拆换年度","部件装上飞行小时数-Range","部件装上飞行循环数-Range","部件装上使用小时数-Range","部件装上使用循环数-Range","CY TSN-Range","FH TSN-Range", "最近送修公司"
#col = "机型"
#col = "参考类型"
col = "PlanePartType"
#col = "安装年度"
#col = "机型"
#print((data_x["最近送修公司"]!="上海航新") & (data_x["最近送修公司"]!="PP"))
y = data_y
x = data_x
for value in x[col].unique():
mask = x[col] == value
time_cell, survival_prob_cell = kaplan_meier_estimator(y["IsPlanned"][mask],
y["部件本次装机使用小时"][mask])
plt.step(time_cell, survival_prob_cell, where="post", label="%s (n = %d)" % (value, mask.sum()))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="upper right", prop=font)
# +
# "机型","拆换年度","部件装上飞行小时数-Range","部件装上飞行循环数-Range","部件装上使用小时数-Range","部件装上使用循环数-Range","CY TSN-Range","FH TSN-Range", "最近送修公司"
#col = "机型"
#col = "参考类型"
col = "最近送修公司"
#col = "安装年度"
#col = "机型"
#print((data_x["最近送修公司"]!="上海航新") & (data_x["最近送修公司"]!="PP"))
filter1 = (data_x["最近送修公司"]!="上海航新") & (data_x["最近送修公司"]!="PP") & (data_x["最近送修公司"]!="海航技术")
y = data_y[filter1]
x = data_x[filter1]
for value in x[col].unique():
mask = x[col] == value
time_cell, survival_prob_cell = kaplan_meier_estimator(y["IsPlanned"][mask],
y["部件本次装机使用小时"][mask])
plt.step(time_cell, survival_prob_cell, where="post", label="%s (n = %d)" % (value, mask.sum()))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="upper right", prop=font)
# -
#data_x.select_dtypes(exclude=['int','int64' 'float']).columns
data_x.describe()
# +
#"部件装上飞行小时数-Range","部件装上飞行循环数-Range","部件装上使用小时数-Range","部件装上使用循环数-Range","CY TSN-Range","FH TSN-Range",
#
x = data_x.copy()
cat_features = ["机型", "安装年度","部件装上飞行小时数-Range","部件装上使用小时数-Range","FH TSN-Range", "最近送修公司","PlanePartType"]
for col in cat_features:
x[col] = x[col].astype('category')
data_x_numeric = OneHotEncoder().fit_transform(x[cat_features])
data_x_numeric.head()
# -
null_columns=data1.columns[data1.isnull().any()]
data1[null_columns].isnull().sum()
#data_y = data_y.as_matrix()
y = data_y.to_records(index=False)
estimator = CoxPHSurvivalAnalysis() #CoxnetSurvivalAnalysis()
estimator.fit(data_x_numeric, y)
# +
#pd.Series(estimator.coef_, index=data_x_numeric.columns)
# -
prediction = estimator.predict(data_x_numeric)
result = concordance_index_censored(y["IsPlanned"], y["部件本次装机使用小时"], prediction)
print(result[0])
result = concordance_index_ipcw(y, y, prediction)
print(result[0])
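# The concordance index reported above is, conceptually, the fraction of comparable pairs that the risk score ranks correctly; for fully observed (uncensored) data it reduces to this sketch (censoring handling omitted):

```python
def c_index_uncensored(times, risk_scores):
    # count pairs where the higher risk score goes with the shorter survival
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied times are not comparable here
            comparable += 1
            shorter, longer = (i, j) if times[i] < times[j] else (j, i)
            if risk_scores[shorter] > risk_scores[longer]:
                concordant += 1.0
            elif risk_scores[shorter] == risk_scores[longer]:
                concordant += 0.5  # tied scores count half
    return concordant / comparable

print(c_index_uncensored([1, 2, 3], [3.0, 2.0, 1.0]))  # 1.0
```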
# +
def fit_and_score_features(X, y):
n_features = X.shape[1]
scores = np.empty(n_features)
m = CoxnetSurvivalAnalysis()
for j in range(n_features):
Xj = X[:, j:j+1]
m.fit(Xj, y)
scores[j] = m.score(Xj, y)
return scores
scores = fit_and_score_features(data_x_numeric.values, y)
pd.Series(scores, index=data_x_numeric.columns).sort_values(ascending=False)
# -
x_new = data_x_numeric.loc[[46,77,200,593]]
#print(x_new)
data_x.loc[[46,77,200,593]]
y[[46,77,200,593]]
pred_surv = estimator.predict_survival_function(x_new)
for i, c in enumerate(pred_surv):
plt.step(c.x, c.y, where="post", label="Sample %d" % (i + 1))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
pipe = Pipeline([('encode', OneHotEncoder()),
('select', SelectKBest(fit_and_score_features, k=3)),
('model', CoxPHSurvivalAnalysis())])
# +
param_grid = {'select__k': np.arange(1, data_x_numeric.shape[1] -3)}
gcv = GridSearchCV(pipe, param_grid=param_grid, return_train_score=True, cv=3, iid=True)
gcv.fit(x, y)
pd.DataFrame(gcv.cv_results_).sort_values(by='mean_test_score', ascending=False)
# +
pipe.set_params(**gcv.best_params_)
pipe.fit(x, y)
encoder, transformer, final_estimator = [s[1] for s in pipe.steps]
pd.Series(final_estimator.coef_, index=encoder.encoded_columns_[transformer.get_support()])
# -
# # Part2
from sklearn.model_selection import train_test_split
from sksurv.metrics import (concordance_index_censored,
concordance_index_ipcw,
cumulative_dynamic_auc)
data_x = data1[["安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN"]]
def df_to_sarray(df):
"""
Convert a pandas DataFrame object to a numpy structured array.
This is functionally equivalent to but more efficient than
np.array(df.to_records())
:param df: the data frame to convert
:return: a numpy structured array representation of df
"""
v = df.values
cols = df.columns
if False: # python 2 needs .encode() but 3 does not
types = [(cols[i].encode(), df[k].dtype.type) for (i, k) in enumerate(cols)]
else:
types = [(cols[i], df[k].dtype.type) for (i, k) in enumerate(cols)]
dtype = np.dtype(types)
z = np.zeros(v.shape, dtype)
for (i, k) in enumerate(z.dtype.names):
z[:,i] = v[:, i]
return z
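# The structured-array shape that `df_to_sarray` (and `DataFrame.to_records`) produce can be illustrated directly in NumPy — named, individually typed columns in a single array (toy field names, not the notebook's columns):

```python
import numpy as np

z = np.zeros(3, dtype=[('IsPlanned', '?'), ('hours', '<i8')])
z['IsPlanned'] = [True, False, True]   # boolean column
z['hours'] = [100, 250, 30]            # integer column
print(z['hours'].max())  # 250
```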
y = data_y.to_records(index=False)
x_train, x_test, y_train, y_test = train_test_split(data_x, y, test_size=0.2)#, random_state=1)
x_train = x_train.values
x_test = x_test.values
# +
y_events_train = y_train[y_train['IsPlanned']==False]
train_min, train_max = y_events_train["部件本次装机使用小时"].min(), y_events_train["部件本次装机使用小时"].max()
y_events_test = y_test[y_test['IsPlanned']==False]
test_min, test_max = y_events_test["部件本次装机使用小时"].min(), y_events_test["部件本次装机使用小时"].max()
assert train_min <= test_min < test_max < train_max, \
    "time range of test data is not within time range of training data."
# -
times = np.percentile(data_y["部件本次装机使用小时"], np.linspace(5, 95, 15))
print(times)
import matplotlib
matplotlib.matplotlib_fname()
# +
num_columns = ["安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN"]
def plot_cumulative_dynamic_auc(risk_score, label, color=None):
auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_score, times)
plt.plot(times, auc, marker="o", color=color, label=label)
plt.legend(prop = font)
plt.xlabel("time时间",fontproperties=font)
plt.ylabel("time-dependent AUC")
plt.axhline(mean_auc, color=color, linestyle="--")
for i, col in enumerate(num_columns):
plot_cumulative_dynamic_auc(x_test[:, i], col, color="C{}".format(i))
ret = concordance_index_ipcw(y_train, y_test, x_test[:, i], tau=times[-1])
# -
# # Part3
# +
data_x = data1[["机型","安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN", "最近送修公司","PlanePartType"]]
cat_features = ["机型", "安装年度", "最近送修公司","PlanePartType"]
for col in cat_features:
data_x[col] =data_x[col].astype('category')
# -
times = np.percentile(data_y["部件本次装机使用小时"], np.linspace(5, 95, 15))
print(times)
estimator = CoxPHSurvivalAnalysis() #CoxnetSurvivalAnalysis()
estimator.fit(data_x_numeric, y)
# +
from sklearn.pipeline import make_pipeline
y = data_y.to_records(index=False)
x_train, x_test, y_train, y_test = train_test_split(data_x, y, test_size=0.2)#, random_state=1)
cph = make_pipeline(OneHotEncoder(), CoxPHSurvivalAnalysis())
cph.fit(x_train, y_train)
result = concordance_index_censored(y_test["IsPlanned"], y_test["部件本次装机使用小时"], cph.predict(x_test))
print(result[0])
# evaluate the time-dependent AUC of the fitted model on the held-out test data
va_auc, va_mean_auc = cumulative_dynamic_auc(y_train, y_test, cph.predict(x_test), times)
plt.plot(times, va_auc, marker="o")
plt.axhline(va_mean_auc, linestyle="--")
plt.xlabel("time from enrollment")
plt.ylabel("time-dependent AUC")
plt.grid(True)
print(y_test["部件本次装机使用小时"])
print(cph.predict_survival_function(x_test))
print(y_test["部件本次装机使用小时"] - cph.predict(x_test))
# -
# # Part4
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas
import seaborn as sns
from sklearn.model_selection import ShuffleSplit, GridSearchCV
from sksurv.datasets import load_veterans_lung_cancer
from sksurv.column import encode_categorical
from sksurv.metrics import concordance_index_censored
from sksurv.svm import FastSurvivalSVM
sns.set_style("whitegrid")
# +
data_x = data1[["机型","安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN", "最近送修公司","PlanePartType"]]
cat_features = ["机型", "安装年度", "最近送修公司","PlanePartType"]
for col in cat_features:
data_x[col] = data_x[col].astype('category')
# -
x = OneHotEncoder().fit_transform(data_x)#encode_categorical(data_x)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)#, random_state=1)
estimator = FastSurvivalSVM(optimizer="rbtree",rank_ratio=0.0, max_iter=1000, tol=1e-6, random_state=0, alpha=2.**-6)
estimator.fit(x_train, y_train)
prediction = estimator.predict(x_test)
result = concordance_index_censored(y_test["IsPlanned"], y_test["部件本次装机使用小时"], prediction)
print(result[0])
estimator.predict(x_train)
estimator = FastSurvivalSVM(optimizer="rbtree", max_iter=1000, tol=1e-6, random_state=0)
def score_survival_model(model, X, y):
prediction = model.predict(X)
result = concordance_index_censored(y['IsPlanned'], y['部件本次装机使用小时'], prediction)
return result[0]
param_grid = {'alpha': 2. ** np.arange(-12, 13, 2)}
cv = ShuffleSplit(n_splits=20, test_size=0.4, random_state=0)
gcv = GridSearchCV(estimator, param_grid, scoring=score_survival_model,
n_jobs=12, iid=False, refit=False,
cv=cv)
param_grid
import warnings
y = data_y.to_records(index=False)
warnings.filterwarnings("ignore", category=UserWarning)
gcv = gcv.fit(x, y)
gcv.best_score_, gcv.best_params_
def plot_performance(gcv):
n_splits = gcv.cv.n_splits
cv_scores = {"alpha": [], "test_score": [], "split": []}
order = []
for i, params in enumerate(gcv.cv_results_["params"]):
name = "%.5f" % params["alpha"]
order.append(name)
for j in range(n_splits):
vs = gcv.cv_results_["split%d_test_score" % j][i]
cv_scores["alpha"].append(name)
cv_scores["test_score"].append(vs)
cv_scores["split"].append(j)
df = pandas.DataFrame.from_dict(cv_scores)
_, ax = plt.subplots(figsize=(11, 6))
sns.boxplot(x="alpha", y="test_score", data=df, order=order, ax=ax)
_, xtext = plt.xticks()
for t in xtext:
t.set_rotation("vertical")
plot_performance(gcv)
from sksurv.svm import FastKernelSurvivalSVM
from sksurv.kernels import clinical_kernel
x_train, x_test, y_train, y_test = train_test_split(data_x, y, test_size=0.5)#, random_state=1)
kernel_matrix = clinical_kernel(x_train)
kssvm = FastKernelSurvivalSVM(optimizer="rbtree", kernel="precomputed", random_state=0, alpha=2.**-6)
kssvm.fit(kernel_matrix, y_train)
x_test.shape
kernel_matrix = clinical_kernel(x_test[0:552])
prediction = kssvm.predict(kernel_matrix)
result = concordance_index_censored(y_test[0:552]["IsPlanned"], y_test[0:552]["部件本次装机使用小时"], prediction)
print(result[0])
kernel_matrix = clinical_kernel(data_x)
kssvm = FastKernelSurvivalSVM(optimizer="rbtree", kernel="precomputed", random_state=0, alpha=2.**-12)
kgcv = GridSearchCV(kssvm, param_grid, score_survival_model,
n_jobs=12, iid=False, refit=False,
cv=cv)
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
kgcv = kgcv.fit(kernel_matrix, y)
kgcv.best_score_, kgcv.best_params_
plot_performance(kgcv)
| Aircondition_MRO/HNA_Survival_Analysis/10-HNA-Stephen-398908-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ascent Bridge
# language: python
# name: ascent_jupyter_bridge
# ---
# # Histogram Demo
# ## Setup the Jupyter Extract
# Before attempting to use this notebook see Ascent's documentation and follow the Jupyter extract demo.
# ## Connect
# Use the `%connect` magic to find your Ascent backend and connect to it. Once you select the correct backend from the dropdown menu click the Connect button.
# %connect
# ## Run the MPI Histogram from Demo 4
# You can run code in a jupyter cell just as you would in a python extract.
# +
import numpy as np
from mpi4py import MPI
# obtain a mpi4py mpi comm object
comm = MPI.Comm.f2py(ascent_mpi_comm_id())
# get this MPI task's published blueprint data
mesh_data = ascent_data().child(0)
# fetch the numpy array for the energy field values
e_vals = mesh_data["fields/energy/values"]
# find the data extents of the energy field using mpi
# first get local extents as 1-element arrays (buffer objects, as mpi4py's Allreduce expects)
e_min = np.array([e_vals.min()])
e_max = np.array([e_vals.max()])
# declare vars for reduce results
e_min_all = np.zeros(1)
e_max_all = np.zeros(1)
# reduce to get global extents
comm.Allreduce(e_min, e_min_all, op=MPI.MIN)
comm.Allreduce(e_max, e_max_all, op=MPI.MAX)
# compute bins on global extents (scalar endpoints keep the bins array 1-D for np.histogram)
bins = np.linspace(e_min_all[0], e_max_all[0])
# get histogram counts for local data
hist, bin_edges = np.histogram(e_vals, bins = bins)
# declare var for reduce results
hist_all = np.zeros_like(hist)
# sum histogram counts with MPI to get final histogram
comm.Allreduce(hist, hist_all, op=MPI.SUM)
# print result on mpi task 0
if comm.Get_rank() == 0:
print("\nEnergy extents: {} {}\n".format(e_min_all[0], e_max_all[0]))
print("Histogram of Energy:\n")
print("Counts:")
print(hist_all)
print("\nBin Edges:")
print(bin_edges)
print("")
# -
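# The Allreduce pattern above amounts to summing per-rank histograms computed on shared global bins; a serial NumPy sketch of the same reduction, no MPI required (the two chunks stand in for two ranks):

```python
import numpy as np

chunks = [np.array([0.1, 0.4, 0.9]), np.array([0.2, 0.6, 0.8])]  # stand-in per-rank data
lo = min(c.min() for c in chunks)
hi = max(c.max() for c in chunks)
bins = np.linspace(lo, hi, 11)           # shared global bin edges
total = np.zeros(len(bins) - 1, dtype=np.int64)
for c in chunks:
    counts, _ = np.histogram(c, bins=bins)
    total += counts                      # the serial analogue of Allreduce(SUM)
print(total.sum())  # 6
```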
# ## Display the Histogram using MatPlotLib
# You can use Python's scientific computing libraries (SciPy, Matplotlib, NumPy, etc.) to analyze your data and see the output in Jupyter.
# +
from matplotlib import pyplot as plt
plt.hist(bin_edges[2:-1], weights=hist_all[2:], bins=bin_edges[2:])
plt.title("Histogram of Energy")
plt.show()
# -
# ## Disconnect
# Disconnecting lets the simulation continue and eventually stop at a future timestep or terminate completely. Once you disconnect using the `%disconnect` magic you can wait for the simulation to stop again, then go back to the top of the notebook and use `%connect` to access the data for the new timestep in the simulation.
# %disconnect
| src/examples/tutorial/jupyter_extract/notebooks/ascent_jupyter_histogram_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] heading_collapsed=true
# # Imports
# + hidden=true
# Basic imports
import pandas as pd
import numpy as np
# Warnings
import warnings
warnings.simplefilter("ignore")
# + hidden=true
# Plot
from IPython import display
import seaborn as sns
import matplotlib
import matplotlib.pylab as plt
from jupyterthemes import jtplot
jtplot.style('gruvboxd')
matplotlib.use('nbagg')
# + [markdown] heading_collapsed=true
# # Data Reading
# + hidden=true
from catboost.datasets import titanic
# Data Reading
df_train, df_test = titanic()
df_train.set_index('PassengerId', inplace=True)
df_test.set_index('PassengerId', inplace=True)
# Split X_train, y_train
target = 'Survived'
features = df_test.columns
y_train = df_train[target]
df_train = df_train[features]
df_train.head()
# + [markdown] heading_collapsed=true
# # Preprocessing
# + hidden=true
from robusta.preprocessing.category import *
from robusta.preprocessing.numeric import *
from robusta.preprocessing.base import *
from robusta.pipeline import *
nums = ['Age', 'Fare', 'SibSp', 'Parch']
cats = ['Pclass', 'Sex', 'Embarked']
data_prep = FeatureUnion([
("numeric", make_pipeline(
ColumnSelector(nums),
Imputer(strategy="median"),
GaussRank(),
#ColumnRenamer(prefix='gr_'),
)),
("category", make_pipeline(
ColumnSelector(cats),
Imputer(strategy="most_frequent"),
LabelEncoder(),
#ColumnRenamer(prefix='le_'),
)),
])
X_train = data_prep.fit_transform(df_train)
X_test = data_prep.transform(df_test)
X_train.head()
# -
# # Fold Preparation
# +
from sklearn.model_selection import KFold
from robusta.resampler import *
encoder = FeatureUnion([
('category', make_pipeline(
ColumnSelector(cats),
TypeConverter('object'),
TargetEncoderCV(cv=4).set_params(encoder__smoothing=200.0),
)),
('numeric', make_pipeline(
ColumnSelector(nums),
Identity(),
)),
])
resampler = SMOTE(random_state=50, k_neighbors=30)
fold_pipe = make_pipeline(resampler, encoder)
F_train = fold_pipe.fit_transform(X_train, y_train)
F_train.sample(5, random_state=555)
# -
# # Evaluation
# +
from sklearn.model_selection import RepeatedStratifiedKFold
from robusta.crossval import *
cv = 5
# -
# # Model
# +
# %%time
from robusta.model import get_model
model = get_model('LGB', 'classifier', n_estimators=100)
model.fit(X_train, y_train)
model = make_pipeline(model)
# -
# # Submit
# %%time
y_oof, y_sub = crossval_predict(model, cv, X_train, y_train, None, X_test, verbose=1, n_jobs=1,
scoring=['roc_auc', 'accuracy', 'neg_log_loss']
#scoring='neg_log_loss'
#scoring=None
)
# %%time
y_oof, y_sub = crossval_predict(model, cv, X_train, y_train, None, X_test, verbose=2, n_jobs=1,
scoring=['roc_auc', 'accuracy', 'neg_log_loss']
#scoring='neg_log_loss'
#scoring=None
)
# %%time
y_oof, y_sub = crossval_predict(model, cv, X_train, y_train, None, X_test, verbose=2, n_jobs=1,
#scoring=['roc_auc', 'accuracy', 'neg_log_loss']
#scoring='neg_log_loss'
#scoring=None
)
# `0.8652 ± 0.0323`
| examples/titanic/0 baseline-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import HbEZ library
from jnpr.healthbot import HealthBotClient
from pprint import pprint
hb = HealthBotClient('10.209.7.33', 'regress', 'MaRtInI')
# # Use Case: Device
pprint(hb.device.get_ids())
# ### Getting help for any given function
help(hb.device.get_ids)
# ### Get config related to given device-id
# Get config related to given device-id
obj = hb.device.get('demo')
obj.host
pprint(obj)
# ### get facts for the given device id
# Get device facts of a given device id
pprint(hb.device.get_facts('demo'))
# ### Add a new device
# +
from jnpr.healthbot import DeviceSchema
ds = DeviceSchema(device_id='demo', host='10.221.136.140',
authentication={"password": {"password": "<PASSWORD>", "username": "regress"}})
print(hb.device.add(schema=ds))
# -
help(DeviceSchema)
dev = hb.device.get('demo')
pprint(dev)
# ### By default, get() returns uncommitted data (from the candidate DB)
pprint(hb.device.get('demo', uncommitted=False))
# ### Why we chose to go with Schema: we can easily access and edit any attribute
# Existing system_id
print (dev.system_id)
# Editing system_id
dev.system_id = "Demo:HbEZ"
print(hb.device.update(dev))
# ### HealthBot API provide commit and rollback config
hb.commit()
dev = hb.device.get('demo')
pprint(dev)
print(dev.system_id)
# +
# To delete a device
hb.device.delete('demo')
# if a device is part of device group, to make sure we delete it first from device group
hb.device.delete('demo', force=True)
# -
# # Use Case: Devices
# ### Get details of all devices in System
# Get config details of all the device
obj = hb.device.get()
obj
# ### Get device facts for all the devices in HB
pprint(hb.device.get_facts())
# ### Add device group using DevicegroupSchema and APIs provided
from jnpr.healthbot import DeviceGroupSchema
dgs = DeviceGroupSchema(device_group_name="edge", devices=['demo'])
# ### We can also set any param/attribute after creating schema object
dgs.description="All devices on the edge"
# ### Now add device group using provided API
print(hb.device_group.add(dgs))
# ### Let's see the advantages of using Schema
#
# 1. Helps with any missing parameters
# 2. Checks for any rule associated with given params
# +
# Error for missing mandatory parameters
dgs = DeviceGroupSchema()
# +
# Error for not following rule for give parameter
dgs = DeviceGroupSchema(device_group_name="edge group")
# +
# Now we are in compliance with rules
dgs = DeviceGroupSchema(device_group_name="edge")
# -
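# The validation behaviour shown above can be mimicked generically; a hypothetical dataclass sketch (not the jnpr.healthbot implementation — its actual rule set may differ):

```python
import re
from dataclasses import dataclass, field

@dataclass
class GroupSchema:
    device_group_name: str
    devices: list = field(default_factory=list)

    def __post_init__(self):
        # mimic the "no spaces in the group name" rule observed above (assumed rule)
        if not re.fullmatch(r"[A-Za-z0-9_-]+", self.device_group_name):
            raise ValueError("device_group_name must not contain spaces")
```

`GroupSchema('edge')` constructs fine, while `GroupSchema('edge group')` raises, mirroring the two error cells above.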
dgs.devices = ['demo']
# ### We can also pass all Schema params to the add APIs; internally they will be used to create the Schema
print(hb.device_group.add(device_group_name="edge", description="All devices on the edge", devices=['demo']))
hb.commit()
# Get details for a given device group
hb.device_group.get('real')
hb.device_group.delete('edge', force=True)
# ### Add an existing device to existing group
hb.device_group.add_device_in_group('vmx', 'edge')
obj = hb.device_group.get('edge')
print (obj)
obj.devices
# ### Get details of all the device groups
print(hb.device_group.get())
# ### We can also update any given device group
print(hb.device_group.get('edge'))
dgs = hb.device_group.get('edge')
dgs.devices.append('vmx')
hb.device_group.update(dgs)
# Check for devices list
dgs = hb.device_group.get('edge')
print(dgs)
dgs.devices = ['avro']
hb.device_group.update(dgs)
dgs = hb.device_group.get('edge')
print(dgs)
# can also update by passing Device Group Schema kwargs
dgs = hb.device_group.get('edge')
pprint(dgs)
from jnpr.healthbot import DevicegroupSchemaLogging
logSchema = DevicegroupSchemaLogging('warn')
hb.device_group.update(device_group_name='edge', logging=logSchema)
# +
### Delete device and device group
# -
hb.device_group.delete('edge')
hb.device.delete('demo')
# Lets commit all the changes
hb.commit()
# ### Add Network Group
hb.network_group.add(network_group_name="HbEZ")
print(hb.network_group.get(network_group_name="HbEZ"))
hb.network_group.delete(network_group_name="HbEZ")
# ### Add Network Group using Schema
from jnpr.healthbot import NetworkGroupSchema
ngs = NetworkGroupSchema(network_group_name="HbEZ")
hb.network_group.add(ngs)
hb.network_group.get(network_group_name="HbEZ")
# # Use Case: Rules
hb.rule.get('linecard.ospf', 'check-ddos-statistics')
# ### Add new rule
from jnpr.healthbot.modules.rules import RuleSchema
rs = RuleSchema(rule_name="hbez-fpc-heap-utilization")
# ### setting rule schema params
rs.description = "HealthBot EZ example"
rs.synopsis = "Using python client for demo"
rs.sensor = [{'description': 'Monitors FPC buffer, heap and cpu utilization',
'iAgent': {'file': 'fpc-utilization.yml',
'frequency': '30s',
'table': 'FPCCPUHEAPutilizationTable'},
'sensor_name': 'fpccpuheaputilization'}]
rs.field = [{'constant': {'value': '{{fpc-buffer-usage-threshold}}'},
'description': 'This field is for buffer usage threshold',
'field_name': 'linecard-buffer-usage-threshold'},
{'constant': {'value': '{{fpc-cpu-usage-threshold}}'},
'description': 'This field is for linecard cpu usage threshold',
'field_name': 'linecard-cpu-usage-threshold'},
{'constant': {'value': '{{fpc-heap-usage-threshold}}'},
'description': 'This field is for linecard heap usage threshold',
'field_name': 'linecard-heap-usage-threshold'}]
rs.keys = ['slot']
rs.variable = [{'description': 'Linecard Buffer Memory usage threshold value',
'name': 'fpc-buffer-usage-threshold',
'type': 'int',
'value': '80'},
{'description': 'Linecard CPU usage threshold value',
'name': 'fpc-cpu-usage-threshold',
'type': 'int',
'value': '80'},
{'description': 'Linecard Heap Memory usage threshold value',
'name': 'fpc-heap-usage-threshold',
'type': 'int',
'value': '80'}]
rs.trigger = [{'description': 'Sets health based on linecard buffer memory',
'frequency': '60s',
'synopsis': 'Linecard buffer memory kpi',
'term': [{'term_name': 'is-buffer-memory-utilization-greater-than-threshold',
'then': {'status': {'color': 'red',
'message': 'FPC buffer memory '
'utilization '
'($memory-buffer-utilization) '
'is over threshold '
'($linecard-buffer-usage-threshold)'}},
'when': {'greater_than': [{'left_operand': '$memory-buffer-utilization',
'right_operand': '$linecard-buffer-usage-threshold'}]}},
{'term_name': 'buffer-utilization-less-than-threshold',
'then': {'status': {'color': 'green'}}}],
'trigger_name': 'fpc-buffer-memory-utilization'},
{'description': 'Sets health based on linecard cpu utilization',
'frequency': '60s',
'synopsis': 'Linecard cpu utilization kpi',
'term': [{'term_name': 'is-cpu-utilization-greater-than-80',
'then': {'status': {'color': 'red',
'message': 'FPC CPU utilization '
'($cpu-total) is over '
'threshold '
'($linecard-cpu-usage-threshold)'}},
'when': {'greater_than': [{'left_operand': '$cpu-total',
'right_operand': '$linecard-cpu-usage-threshold',
'time_range': '180s'}]}},
{'term_name': 'cpu-utilization-less-than-threshold',
'then': {'status': {'color': 'green'}}}],
'trigger_name': 'fpc-cpu-utilization'},
{'description': 'Sets health based on linecard heap memory '
'utilization',
'frequency': '60s',
'synopsis': 'Linecard heap memory kpi',
'term': [{'term_name': 'is-heap-memory-utilization-greater-than-threshold',
'then': {'status': {'color': 'red',
'message': 'FPC heap memory '
'utilization '
'($memory-heap-utilization) '
'is over threshold '
'($linecard-heap-usage-threshold)'}},
'when': {'greater_than': [{'left_operand': '$memory-heap-utilization',
'right_operand': '$linecard-heap-usage-threshold'}]}},
{'term_name': 'heap-memory-utilization-less-than-threshold',
'then': {'status': {'color': 'green'}}}],
'trigger_name': 'fpc-heap-memory-utilization'}]
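Stripped of the schema plumbing, each trigger term above encodes a simple threshold comparison: red when utilization exceeds the threshold, green otherwise. A plain-Python sketch of that evaluation (the function name is illustrative, not part of the HealthBot API):

```python
def evaluate_trigger(value, threshold):
    # Mirrors the greater_than / color logic in the trigger terms above.
    return 'red' if value > threshold else 'green'

assert evaluate_trigger(85, 80) == 'red'    # over threshold -> unhealthy
assert evaluate_trigger(40, 80) == 'green'  # at or under threshold -> healthy
```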
# ### If the topic name is not present, the given topic will be created first
hb.rule.add('hbez', schema=rs)
# +
# hb.rules.delete_rule(topic_name='external', rule_name="hbez-fpc-heap-utilization")
# -
# # Use Case: Playbooks
pprint(hb.playbook.get('linecard-kpis-playbook'))
from jnpr.healthbot.modules.playbooks import PlaybookSchema
pbs = PlaybookSchema(playbook_name="HbEZ-example")
pbs.description = "HbEZ Demo Examples"
pbs.synopsis = 'fpc status'
pbs.rules = ['hbez/hbez-fpc-heap-utilization']
hb.playbook.add(pbs)
hb.playbook.delete(playbook_name="HbEZ-example")
# # Use Case: Health
pprint(hb.health.get_device_health('avro'))
pprint(hb.health.get_device_group_health('real'))
# # Use Case: Database
# We might need to remove this
print (hb.tsdb.query("show databases"))
obj = hb.tsdb.query('select * from "protocol-eventd-host/check-host-traffic/packet-loss" limit 10', database='Core:vmx')
pprint(obj.raw)
print(hb.version)
# +
### Load any helper file
# -
hb.upload_helper_file('/Users/nitinkr/xxx/xyz.rule')
# # Use Case: Notification
# +
from jnpr.healthbot import NotificationSchema
from jnpr.healthbot import NotificationSchemaSlack
ns = NotificationSchema(notification_name='HbEZ-notification')
ns.description = "example of adding notification via API"
nss = NotificationSchemaSlack(channel="HbEZ", url='http://testing')
ns.slack = nss
hb.settings.notification.add(ns)
# -
print(hb.settings.notification.get())
pprint(hb.settings.notification.delete('HbEZ-notification'))
# # Use Case: Settings
# +
from jnpr.healthbot import RetentionPolicySchema
rps = RetentionPolicySchema(retention_policy_name='HbEZ-testing')
hb.settings.retention_policy.add(rps)
# -
print(hb.settings.retention_policy.get())
# +
from jnpr.healthbot import SchedulerSchema
sc = SchedulerSchema(name='HbEZ-schedule', repeat={'every': 'week'}, start_time="2019-07-22T05:32:23Z")
hb.settings.scheduler.add(sc)
from jnpr.healthbot import DestinationSchema
ds = DestinationSchema(name='HbEZ-destination', email={'id': '<EMAIL>'})
hb.settings.destination.add(ds)
from jnpr.healthbot import ReportSchema
rs = ReportSchema(name="HbEZ-report", destination=['HbEZ-destination'], format="html", schedule=["HbEZ-schedule"])
hb.settings.report.add(rs)
# -
# # Use Case: PlayBook Instance
# +
from jnpr.healthbot.modules.playbooks import PlayBookInstanceBuilder
# Case where we don't need to set any variables
pbb = PlayBookInstanceBuilder(hb, 'forwarding-table-summary', 'HbEZ-instance', 'Core')
pbb.apply()
#hb.commit()
# -
# ### When we need to set a rule variable for a given device/group in the playbook instance
# +
from jnpr.healthbot.modules.playbooks import PlayBookInstanceBuilder
pbb = PlayBookInstanceBuilder(hb, 'forwarding-table-summary', 'HbEZ-instance', 'Core')
variable = pbb.rule_variables["protocol.routesummary/check-fib-summary"]
variable.route_address_family = 'pqr'
variable.route_count_threshold = 100
# Apply variable to given device(s)
pbb.apply(device_ids=['vmx'])
# Clear all the variables if you want to set something else for a group or other device(s)
pbb.clear()
variable = pbb.rule_variables["protocol.routesummary/check-fib-summary"]
variable.route_address_family = 'abc'
variable.route_count_threshold = 200
pbb.apply()
#hb.commit()
# -
# ## Thanks
| lib/.ipynb_checkpoints/HbEZ-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dzLKpmZICaWN" colab={"base_uri": "https://localhost:8080/"} outputId="08b20f2b-0892-4b09-ddb4-a0a1cb04ae09"
# TensorFlow and tf.keras
import tensorflow as tf
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.datasets import mnist
print(tf.__version__)
# + [markdown] id="yR0EdgrLCaWR"
# ## Import the Fashion MNIST dataset
# + id="7MqDQO0KCaWS"
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Shuffle the training data
shuffle_index = np.random.permutation(60000)
train_images, train_labels = train_images[shuffle_index], train_labels[shuffle_index]
# + id="IjnLH5S2CaWx"
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# + id="zW5k_xz1CaWX" colab={"base_uri": "https://localhost:8080/"} outputId="610ff8d7-8f68-4f27-9086-b00b5589706e"
train_images.shape
# + [markdown] id="cIAcvQqMCaWf"
# Likewise, there are 60,000 labels in the training set:
# + id="TRFYHB2mCaWb" colab={"base_uri": "https://localhost:8080/"} outputId="e1de1ba3-0d96-4b97-a3b3-5025d3847ee7"
len(train_labels)
# + [markdown] id="YSlYxFuRCaWk"
# Each label is an integer between 0 and 9:
# + id="XKnCTHz4CaWg" colab={"base_uri": "https://localhost:8080/"} outputId="bdcb07f1-e3c2-4f42-8894-e2dc1002627a"
train_labels
# + [markdown] id="TMPI88iZpO2T"
# There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
# + id="2KFnYlcwCaWl" colab={"base_uri": "https://localhost:8080/"} outputId="29ddb648-d262-4612-bb59-81809d9febe9"
test_images.shape
# + [markdown] id="rd0A0Iu0CaWq"
# And the test set contains 10,000 image labels:
# + id="iJmPr5-ACaWn" colab={"base_uri": "https://localhost:8080/"} outputId="e6e04545-2f8b-48db-a53c-66968624c69f"
len(test_labels)
# + [markdown] id="ES6uQoLKCaWr"
# ## Preprocess the data
#
#
# + id="m4VEw8Ud9Quh" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="39aa38fc-826b-4293-f33b-79bca2f7fad4"
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
# + [markdown] id="Wz7l27Lz9S1P"
# Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the *training set* and the *testing set* be preprocessed in the same way:
# + id="bW5WzIPlCaWv"
train_images = train_images / 255.0
test_images = test_images / 255.0
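As a library-free sanity check of this scaling step, dividing 8-bit pixel values by 255.0 maps them into [0, 1] while preserving their ordering (the values here are illustrative):

```python
# Scale raw 8-bit pixel values (0-255) into the [0, 1] range, as done
# above for train_images / test_images, but with plain Python lists.
raw_pixels = [0, 17, 128, 255]
scaled = [p / 255.0 for p in raw_pixels]

# Every scaled value now lies in [0, 1], and the relative ordering is preserved.
print(scaled)  # [0.0, 0.0666..., 0.5019..., 1.0]
assert min(scaled) >= 0.0 and max(scaled) <= 1.0
assert scaled == sorted(scaled)
```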
# + [markdown] id="Ee638AlnCaWz"
# To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the *training set* and display the class name below each image.
# + id="oZTImqg_CaW1" colab={"base_uri": "https://localhost:8080/", "height": 589} outputId="15a689dd-c0fe-4abb-e09f-668bf24a20e5"
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
# + [markdown] id="59veuiEZCaW4"
# ## Build the model
#
# Building the neural network requires configuring the layers of the model, then compiling the model.
# + id="9ODch-OFCaW4"
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(10)
])
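The layer sizes above determine the trainable parameter count. A quick stdlib check, using the rule that a Dense layer has `inputs * units` weights plus `units` biases:

```python
# Parameter count for the Flatten -> Dense(128) -> Dense(100) -> Dense(10) stack.
layers = [28 * 28, 128, 100, 10]
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
print(params)  # 114390 trainable parameters
```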
# + id="Lhan11blCaW7"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + [markdown] id="qKF6uW-BCaW-"
# ## Train the model
#
# Training the neural network model requires the following steps:
#
# 1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.
# 2. The model learns to associate images and labels.
# 3. You ask the model to make predictions about a test set—in this example, the `test_images` array.
# 4. Verify that the predictions match the labels from the `test_labels` array.
#
# + [markdown] id="Z4P4zIV7E28Z"
# ### Feed the model
#
# To start training, call the [`model.fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit) method—so called because it "fits" the model to the training data:
# + id="xvwvpA64CaW_" colab={"base_uri": "https://localhost:8080/"} outputId="c92205b3-6619-4d91-c51d-35d993fe3501"
model.fit(train_images, train_labels, epochs=10)
# + [markdown] id="W3ZVOhugCaXA"
# As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
# + [markdown] id="wCpr6DGyE28h"
# ### Evaluate accuracy
#
# Next, compare how the model performs on the test dataset:
# + id="VflXLEeECaXC" colab={"base_uri": "https://localhost:8080/"} outputId="4760b168-90e7-4782-e767-fbf3fd38a39d"
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
# + [markdown] id="v-PyD1SYE28q"
# ### Make predictions
#
# With the model trained, you can use it to make predictions about some images.
# The model outputs linear values, or [logits](https://developers.google.com/machine-learning/glossary#logits). Attach a softmax layer to convert the logits to probabilities, which are easier to interpret.
# + id="DnfNA0CrQLSD"
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
# + id="Gl91RPhdCaXI"
predictions = probability_model.predict(test_images)
# + [markdown] id="x9Kk1voUCaXJ"
# Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
# + id="3DmJEUinCaXK" colab={"base_uri": "https://localhost:8080/"} outputId="7f523095-7f8a-40a0-9ce2-18a20e8cc9b3"
predictions[0]
# + [markdown] id="-hw1hgeSCaXN"
# A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value:
# + id="qsqenuPnCaXO" colab={"base_uri": "https://localhost:8080/"} outputId="777c1def-3e73-4a2e-e6f0-075f230d2a5f"
np.argmax(predictions[0])
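The softmax-then-argmax step can be reproduced with the standard library alone. This sketch uses made-up logits for 4 classes (not real model output) to show how logits become probabilities and how the predicted class index is chosen:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then exponentiate and normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.0, 2.0, 0.5, 3.0]   # hypothetical logits for 4 classes
probs = softmax(logits)

assert abs(sum(probs) - 1.0) < 1e-9   # probabilities sum to 1
predicted = max(range(len(probs)), key=probs.__getitem__)  # argmax
print(predicted)  # 3
```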
# + [markdown] id="E51yS7iCCaXO"
# So, the model is most confident that this image is an ankle boot, or `class_names[9]`. Examining the test label shows that this classification is correct:
# + id="Sd7Pgsu6CaXP" colab={"base_uri": "https://localhost:8080/"} outputId="8b89e85a-96f4-414d-cbd2-c1592c14789f"
test_labels[0]
# + [markdown] id="ygh2yYC972ne"
# Graph this to look at the full set of 10 class predictions.
# + id="DvYmmrpIy6Y1"
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
# + [markdown] id="Zh9yABaME29S"
# ### Verify predictions
#
# With the model trained, you can use it to make predictions about some images.
# + [markdown] id="d4Ov9OFDMmOD"
# Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
# + id="HV5jw-5HwSmO" colab={"base_uri": "https://localhost:8080/", "height": 211} outputId="1735b269-1bf3-48e9-b7b3-622a942790b4"
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
# + id="Ko-uzOufSCSe" colab={"base_uri": "https://localhost:8080/", "height": 211} outputId="d3346516-2631-4162-bc58-60cf3f46a0fa"
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
# + [markdown] id="kgdvGD52CaXR"
# Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
# + id="hQlnbqaw2Qu_" colab={"base_uri": "https://localhost:8080/", "height": 729} outputId="2063f2f6-ead2-48ea-c32d-d1d37b301750"
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
# + [markdown] id="R32zteKHCaXT"
# ## Use the trained model
#
# Finally, use the trained model to make a prediction about a single image.
# + id="yRJ7JU7JCaXT" colab={"base_uri": "https://localhost:8080/"} outputId="aad37b38-a5bd-45ee-c935-e7ee12418feb"
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
# + [markdown] id="vz3bVp21CaXV"
# `tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list:
# + id="lDFh5yF_CaXW" colab={"base_uri": "https://localhost:8080/"} outputId="6898f06c-d78c-47dc-da1e-f6f30be5af11"
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
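The batching step above can be mimicked with nested lists, no NumPy required: wrapping a single 28x28 "image" in an outer list is the list analogue of `np.expand_dims(img, 0)`, adding the leading batch axis (a shape sketch only, with dummy data):

```python
# A single 28x28 "image" as nested lists.
img = [[0] * 28 for _ in range(28)]

# Wrapping it in one more list adds the leading batch axis:
# the shape goes from (28, 28) to (1, 28, 28).
batch = [img]

print(len(img), len(img[0]))                        # 28 28
print(len(batch), len(batch[0]), len(batch[0][0]))  # 1 28 28
```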
# + [markdown] id="EQ5wLTkcCaXY"
# Now predict the correct label for this image:
# + id="o_rzNSdrCaXY" colab={"base_uri": "https://localhost:8080/"} outputId="89a2067b-1688-480f-869f-6fedc6cbdd4b"
predictions_single = probability_model.predict(img)
print(predictions_single)
# + id="6Ai-cpLjO-3A" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="39fac012-fad6-4114-acbe-bfb0c473749d"
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
# + [markdown] id="cU1Y2OAMCaXb"
# `tf.keras.Model.predict` returns a list of lists—one list for each image in the batch of data. Grab the predictions for our (only) image in the batch:
# + id="2tRmdq_8CaXb" colab={"base_uri": "https://localhost:8080/"} outputId="98cc5286-6b05-4cb8-d92a-040aafa97c01"
np.argmax(predictions_single[0])
# + [markdown] id="YFc2HbEVCaXd"
# And the model predicts a label as expected.
| 09_2/Lab09_l181139.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
from torchvision.models import resnet
# define the model
model = resnet.resnet34(pretrained=True)
model.eval()
# trace model with a dummy input
traced_model = torch.jit.trace(model, torch.randn(1,3,224,224))
traced_model.save('resnet34.pt')
# !touch main.py
# !touch unzip_torch.py
# !touch deploy.sh
# +
import io
import os
import time
import boto3
import requests
import torch
from PIL import Image
from torchvision import transforms
s3_resource = boto3.resource('s3')
img_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])
])
def download_image(url):
    try:
        r = requests.get(url)
        if r.status_code == 200:
            f = io.BytesIO(r.content)
            img = Image.open(f)
            return img
        else:
            return None
    except Exception:
        return None
# +
def download_model(bucket='', key=''):
    location = f'/tmp/{os.path.basename(key)}'
    if not os.path.exists(location):
        s3_resource.Object(bucket, key).download_file(location)
    return location
def classify_image(model_path, img):
    model = torch.jit.load(model_path)
    img = img_transforms(img).unsqueeze(0)
    cl = model(img).argmax().item()
    return cl
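The download-if-missing pattern in `download_model` matters on Lambda because files in `/tmp` can survive between invocations of a warm execution environment, so the model is fetched from S3 only on a cold start. Here is a stdlib-only sketch of the same caching idea (the `cached_fetch` helper and fake fetch function are illustrative, not the real S3 call):

```python
import os
import tempfile

def cached_fetch(key, fetch, cache_dir=None):
    """Download `key` via `fetch` only if it is not already cached on disk."""
    cache_dir = cache_dir or tempfile.gettempdir()
    location = os.path.join(cache_dir, os.path.basename(key))
    if not os.path.exists(location):      # cold start: actually fetch
        with open(location, 'wb') as f:
            f.write(fetch(key))
    return location                        # warm start: reuse the cached file

calls = []
fake_fetch = lambda key: calls.append(key) or b'model-bytes'

with tempfile.TemporaryDirectory() as d:
    p1 = cached_fetch('resnet34.pt', fake_fetch, cache_dir=d)
    p2 = cached_fetch('resnet34.pt', fake_fetch, cache_dir=d)
    assert p1 == p2 and len(calls) == 1   # the second call hit the cache
```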
def lambda_handler(event, context):
# download model
model_path = download_model(
bucket='simplelambdatestbucket', key='resnet34.pt')
# download image
img = download_image(event['url'])
# classify image
if img:
cl = classify_image(model_path, img)
return {
'statusCode': 200,
'class': cl
}
else:
return {
'statusCode': 404,
'class': None
}
# -
url = 'https://d1ra4hr810e003.cloudfront.net/media/27FB7F0C-9885-42A6-9E0C19C35242B5AC/0/D968A2D0-35B8-41C6-A94A0C5C5FCA0725/F0E9E3EC-8F99-4ED8-A40DADEAF7A011A5/dbe669e9-40be-51c9-a9a0-001b0e022be7/thul-IMG_2100.jpg'
img = download_image(url)
img
if img:
#cl = classify_image(model_path, img)
print( {
'statusCode': 200,
'class': 1
})
else:
print( {
'statusCode': 404,
'class': None
})
| aws_lambda/aws lambda/model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from tqdm import tqdm
from database.strategy import Strategy
from database.market import Market
from transformer.column_transformer import ColumnTransformer
from transformer.date_transformer import DateTransformer
import warnings
warnings.simplefilter(action='ignore', category=Warning)
import matplotlib.pyplot as plt
from datetime import datetime, timedelta, timezone
import math
import numpy as np
strat_db = Strategy("unity")
market = Market()
market.connect()
sp5 = market.retrieve_data("sp500")
market.close()
start = datetime(2018,1,1)
end = datetime(2021,1,1)
reload = False
datasets = [
"pdr",
# "tiingo",
# "finnhub"
]
dataset = "pdr"
sim = []
if reload:
for ticker in tqdm(sp5["Symbol"]):
try:
market.connect()
prices = market.retrieve_price_data("{}_prices".format(dataset),ticker)
market.close()
if dataset == "pdr":
prices = ColumnTransformer.rename_columns(prices," ")
else:
prices = ColumnTransformer.rename_columns(prices,"")
prices = DateTransformer.convert_to_date(dataset,prices,"date")
prices["year"] = [x.year for x in prices["date"]]
prices["week"] = [x.week for x in prices["date"]]
prices["quarter"] = [x.quarter for x in prices["date"]]
prices["rolling100"] = prices["adjclose"].rolling(window=100).mean()
prices["rolling7"] = prices["adjclose"].rolling(window=7).mean()
current_quarterly_max = []
for row in prices.iterrows():
current_date = row[1]["date"]
current_quarter = row[1]["quarter"]
current_year = row[1]["year"]
relevant = prices[(prices["year"] == current_year) & (prices["quarter"]==current_quarter) & (prices["date"] <= current_date)]
current_quarterly_max.append(relevant["adjclose"].max())
prices["cqm"] = current_quarterly_max
prices["d1"] = [1 if x > 0 else 0 for x in prices["adjclose"].diff()]
prices["d2"] = [1 if x > 0 else 0 for x in prices["d1"].diff()]
prices["100d"] = (prices["rolling100"] - prices["adjclose"]) / prices["adjclose"]
prices["7d"] = (prices["rolling7"] - prices["adjclose"]) / prices["adjclose"]
prices["qd"] = (prices["cqm"] - prices["adjclose"]) / prices["adjclose"]
sim.append(prices)
except Exception as e:
print(ticker, str(e))
continue
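The rolling-mean features built above (`rolling100`, `rolling7`, and the percent offsets) can be illustrated without pandas. This stdlib sketch computes a trailing-window mean and the same `(mean - price) / price` offset, using toy prices and a window of 3 instead of 100:

```python
def rolling_mean(values, window):
    # Trailing-window mean; None where the window is not yet full,
    # mirroring the NaNs that pandas' rolling().mean() produces.
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            chunk = values[i + 1 - window: i + 1]
            out.append(sum(chunk) / window)
    return out

prices = [10.0, 11.0, 12.0, 11.0, 10.0]
r3 = rolling_mean(prices, 3)
print(r3)  # [None, None, 11.0, 11.333..., 11.0]

# Percent offset of the rolling mean from the current price,
# analogous to the "100d"/"7d" columns above.
offsets = [None if m is None else (m - p) / p for m, p in zip(r3, prices)]
assert abs(offsets[2] - (11.0 - 12.0) / 12.0) < 1e-9
```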
strat_db.connect()
sim = strat_db.retrieve_data("headless_sim")
strat_db.close()
for col in ["100d","7d","qd"]:
sim[col] = sim[col] * -1
todays_sim = sim[(sim["100d"]>0) & (sim["7d"]<0)]
sim[sim["7d"] >= 0.03]
def backtest(sim,industries,delta_column,value,hpr,delta_req):
trades = []
blacklist = pd.DataFrame([{"ticker":"ZZZZZ","start":datetime(2016,4,1),"end":datetime(2016,4,14)}])
if not value:
print("not value")
for col in ["100d","7d","qd"]:
sim[col] = sim[col] * -1
for industry in tqdm(industries):
date = start
industry_tickers = list(sp5[sp5["GICS Sector"] == industry]["Symbol"])
while date <= end:
if date >= end:
break
if date.weekday() > 4 \
or date == datetime(2020,2,20):
date = date + timedelta(days=7-date.weekday())
blacklist_tickers = blacklist[(blacklist["start"] <= date) & (blacklist["end"] >= date)]["ticker"]
if delta_column == "7d":
delta_req = float(delta_req/3)
todays_sim = sim[(~sim["ticker"].isin(blacklist_tickers)) & (sim["ticker"].isin(industry_tickers)) &
(sim["date"] == date) & (sim["100d"]>0) & (sim["7d"]>0)]
if todays_sim.index.size >= 1:
offerings = todays_sim.sort_values(delta_column,ascending=False)
for offering in range(offerings.index.size):
try:
trade_ticker = offerings.iloc[offering]["ticker"]
try:
trade = sim[(sim["ticker"] == trade_ticker) & (sim["date"] > date)].iloc[0]
sell_date = trade["date"] + timedelta(days=1)
sell_trades = sim[(sim["date"] >= sell_date) & (sim["ticker"] == trade["ticker"])]
## if there aren't any listed prices left for the ticker
if sell_trades.index.size < 1:
## if there aren't any more tickers left to vet
if offering == offerings.index.size - 1:
date = sell_date + timedelta(days=1)
print(date,"no more stock vets")
break
else:
print(date,"stock had no more listed prices")
continue
else:
if date > datetime(2020,10,1):
max_hpr = int((end - date).days) - 1
else:
max_hpr = hpr
sell_trades["gain"] = (sell_trades["adjclose"] - trade["adjclose"]) / trade["adjclose"]
sell_trades["hpr"] = [(x - trade["date"]).days for x in sell_trades["date"]]
hpr_sell_trades = sell_trades[sell_trades["hpr"] < max_hpr]
hpr_sell_trades["hit"] = hpr_sell_trades["gain"] >= delta_req
delta_hit = hpr_sell_trades[(hpr_sell_trades["hit"] == True)]
## if we never hit the mark
if delta_hit.index.size < 1:
sell_trade = sell_trades[(sell_trades["hpr"] >= max_hpr)].sort_values("hpr",ascending=True).iloc[0]
else:
sell_trade = delta_hit.iloc[0]
trade["sell_price"] = sell_trade["adjclose"]
trade["delta_req"] = delta_req
trade["sell_date"] = sell_trade["date"]
trade["sell_delta"] = (sell_trade["adjclose"] - trade["adjclose"]) / trade["adjclose"]
trade["hpr"] = sell_trade["hpr"]
trade["industry"] = industry
blacklist = blacklist.append([{"ticker":trade["ticker"],"start":date,"end":trade["sell_date"]}])
trades.append(trade)
date = trade["sell_date"] + timedelta(days=1)
break
except Exception as e:
print(date,str(e))
date = date + timedelta(days=1)
continue
except Exception as e:
print(date,str(e))
date = date + timedelta(days=1)
continue
else:
date = date + timedelta(days=1)
continue
return trades
epoch = 0
seats = list(sp5["GICS Sector"].unique())
strat_db.connect()
strat_db.drop_table("headless_epochs")
delta_range = range(3,12,3)
delta_columns = ["100d","7d"]
value_settings = [True,False]
hpr_settings = [7,14]
iterations = len(hpr_settings) * len(value_settings) * len(delta_columns) * len(delta_range)
print(iterations)
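The four nested tqdm loops below enumerate the Cartesian product of these settings; `itertools.product` expresses the same enumeration as one flat iterable (a sketch using the same ranges as above):

```python
import itertools

value_settings = [True, False]
hpr_settings = [7, 14]
delta_columns = ['100d', '7d']
delta_range = range(3, 12, 3)  # 3, 6, 9

configs = list(itertools.product(value_settings, hpr_settings,
                                 delta_columns, delta_range))
print(len(configs))  # 24, matching the `iterations` computed above
assert len(configs) == 24
```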
for i in range(iterations):
strat_db.drop_table("headless_{}".format(i))
for value in tqdm(value_settings):
for hpr in tqdm(hpr_settings):
for delta_column in tqdm(delta_columns):
for delta in tqdm(delta_range):
dr = delta/100
epoch_dict = {"epoch":epoch
,"dataset":dataset
,"dr":dr
,"value_setting":value
,"hpr":hpr
,"delta_column":delta_column
}
ts = backtest(sim.copy(),seats,delta_column,value,hpr,dr)
if len(ts) > 0:
strat_db.store_data("headless_epochs",pd.DataFrame([epoch_dict]))
strat_db.store_data("headless_{}".format(epoch),pd.DataFrame(ts))
epoch += 1
strat_db.close()
| boiler/unity_headless_backtest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Calibration of General Simulation Model
# -
# This notebook implements a Bayesian approach to finding model parameters that seem reasonable.
# Model specific variables are imported in the file gsm_metadata.pkl. This file is a Python dictionary that was created when the model was created.
# +
# %matplotlib notebook
import os
import datetime as dt
import pickle, joblib
# Standard data science libraries
import pandas as pd
import numpy as np
import scipy.stats as ss
import scipy.optimize as so
import scipy.interpolate as si
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-notebook')
# Options for pandas
pd.options.display.max_columns = 20
pd.options.display.max_rows = 200
# Display all cell outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
from IPython.display import Image
from IPython.display import Math
# + slideshow={"slide_type": "fragment"}
import shutil
from matplotlib import colors
import flopy as fp
from bayes_opt import BayesianOptimization, UtilityFunction # for Bayesian optimization
import itertools #efficient creation of iterable sets of model parameters
from tqdm.notebook import tqdm # for the progress bars
import statsmodels.api as sm # for lowess smoothing of plots
from scipy.spatial import ConvexHull # to find the Pareto front
import shapely # to operate on the parameter space
from scipy.spatial.distance import cdist # to operate on parameter space
import json
import RTD_util6 as rtd_ut # custom module with utilities
import warnings
import Genmod_Utilities as gmu
# -
# This calibration uses a Bayesian strategy with Gaussian processes to sample parameter space. There is a good explanation (and the source of the Python package) at
#
# https://github.com/fmfn/BayesianOptimization
#
# The basic idea is that you set up parameter bounds to be sampled, run the model a number of times ("initial probing"), and use Gaussian processes to construct an error hypersurface in parameter space ("hyper" because there can be more than two dimensions; the number of dimensions equals the number of parameters). The algorithm then samples this hypersurface, runs the model, and updates the hypersurface. The updates are based on the error measure (sum of squared error in this case): the lower the error measure, the more weight that parameter set is given when resampling for the next iteration.
#
# There is a trade-off in the algorithm between exploring new areas of the hypersurface and homing in on promising areas, controlled by a hyperparameter. Sampling is done by the "acquisition function", of which there are several the user can select; each acquisition function controls the trade-off in a slightly different way. The options are in the code, and the user can comment out functions that are not being used. The EI function is the default. The Gaussian process algorithm also has a hyperparameter (alpha) that controls how much the error hypersurface is smoothed between data points.
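The probe-then-refine loop described above can be caricatured with the standard library alone. This is not a Gaussian process or the real `bayes_opt` package, just a seeded sketch of the explore/exploit idea on a 1-D toy objective, with the exploitation neighborhood shrinking as evidence accumulates (the objective and all names are illustrative):

```python
import random

def toy_objective(x):
    # Stand-in for "run the model, return negative sum of squared error".
    return -(x - 3.0) ** 2

rng = random.Random(0)
lo, hi = 0.0, 6.0

# "Initial probing": random parameter sets across the whole bound.
samples = [(x, toy_objective(x)) for x in (rng.uniform(lo, hi) for _ in range(30))]

# Refinement caricature: usually sample near the current best with a
# shrinking neighborhood (exploit), occasionally draw uniformly (explore).
for i in range(50):
    best_x, _ = max(samples, key=lambda s: s[1])
    if rng.random() < 0.2:                       # explore
        x = rng.uniform(lo, hi)
    else:                                        # exploit near the best
        width = (hi - lo) / (2 + i)
        x = min(hi, max(lo, rng.gauss(best_x, width)))
    samples.append((x, toy_objective(x)))

best_x, best_val = max(samples, key=lambda s: s[1])
assert abs(best_x - 3.0) < 1.0   # the optimum of the toy objective is x = 3
```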
with open('GenMod_metadata.txt') as json_file:
metadata = json.load(json_file)
# Read metadata dictionary that was created when the model was created.
# +
src = os.path.join('model_ws', 'gsm_metadata.json')
with open(src, 'r') as f:
gsm_metadata = json.load(f)
from argparse import Namespace
meta = Namespace(**gsm_metadata)
# -
# Copy the GSM that was created over to the scratch directory. It will be replaced many times during the exploration of parameter space.
# +
if os.path.exists('model_ws/calibration_runs'):
shutil.rmtree('model_ws/calibration_runs')
shutil.copytree('model_ws', 'model_ws/calibration_runs')
if os.path.exists('optimal_model'):
shutil.rmtree('optimal_model')
# -
# Load the model and extract a few variables.
# +
sim = fp.mf6.MFSimulation.load(sim_name='mfsim.nam', version='mf6', exe_name=metadata['modflow_path'],
sim_ws='model_ws/calibration_runs', strict=True, verbosity_level=0,
load_only=None, verify_data=False)
model = sim.get_model()
dis = model.get_package('dis')
top_ar = dis.top.array
top = top_ar.ravel()
nlay, nrow, ncol = dis.nlay.array, dis.nrow.array, dis.ncol.array
delc = dis.delc.array
delr = dis.delr.array
npf = model.get_package('npf')
k = npf.k.array
k33 = npf.k33.array
tmp = np.load(os.path.join('bedrock_flag_array.npz'))
bedrock_index = tmp['bedrock_index']
print (' ... done')
# -
# Load the model_grid.csv file to get the observation cell types
# +
model_file = os.path.join(metadata['gis_dir'], 'model_grid.csv')
model_grid = pd.read_csv(model_file)
model_grid.fillna(0, inplace=True)
model_grid.loc[model_grid[meta.K_bdrk] == 0, meta.ibound] = 0
model_grid.loc[model_grid[meta.K_surf] == 0, meta.ibound] = 0
model_grid.loc[model_grid.ibound == 0, 'obs_type'] = np.nan
topo_cells = model_grid.obs_type == 'topo'
hydro_cells = model_grid.obs_type == 'hydro'
num_topo = model_grid.obs_type.value_counts()['topo']
num_hydro = model_grid.obs_type.value_counts()['hydro']
# -
# Set some optimizer parameters.
#
# * **pbounds**: the lower and upper limit for each parameter within which to search
# * **acq**: the acquisition function for updating Bayes estimates. Either Upper Confidence Bound (ucb) or Expected Improvement (ei)
# * **kappa, xi**: metaparameters for ucb and ei, respectively. Lower values favor exploiting local maxima; higher values favor a broader exploration of parameter space. The ranges given are only suggestions based on the source web site.
# * **alpha**: a parameter that can be passed to the underlying Gaussian Process
# * **dif_wt**: factor used to weight the difference between hydro and topo errors
# * **hyd_wt**: factor used to weight hydro errors
# * **num_init**: the number of randomly sampled parameter sets used to initialize the Bayesian sampling
# * **num_bayes**: the number of Bayesian updates to try
#
# Ranges of all hyperparameters can be supplied to automate tuning.
# +
# parameter bounds in native units
pbounds = {'k_surf_mult': (-1., 2.), 'k_bdrk_mult': (-1., 2.), 'stream_mult': (
-1., 1.), 'k_bottom_fraction': (-2., 1.)}
num_pars = len(pbounds)
# select level of feedback from optimizer
verbosity = 1
# select acquisition function and its parameter
acq = 'ei' # or 'ucb'
# xi used with ei (0, 0.1)
xi = 0.1
# kappa used with ucb (1, 10):
kappa = 0
# select metaparameters for the Gaussian process
# higher alpha means more tolerance for noise, i.e. more flexibility
# alpha = 1.E-10
# alpha = 0.001
# alpha = 0.5
alpha = 3/2
# alpha = 5/2
# select weights for objective function components
# dif_wt based on average topo and hydro errors in Starn et al. 2021 is 24.
dif_wt = 1.
hyd_wt = 1.
# select number of initial samples and Bayesian samples
num_init = 50
num_Bayes = 200
# calculate arrays of initial values to probe
parameter_sets = np.empty((num_init, num_pars))
parameter_sets[:, 0] = np.random.uniform(*pbounds['k_surf_mult'], num_init)
parameter_sets[:, 1] = np.random.uniform(*pbounds['k_bdrk_mult'], num_init)
parameter_sets[:, 2] = np.random.uniform(*pbounds['stream_mult'], num_init)
parameter_sets[:, 3] = np.random.uniform(*pbounds['k_bottom_fraction'], num_init)
# select discrete hyperparameter values for model tuning
hp_list = list()
# alpha_range = (0.001, 3/2)
alpha_range = 0.001
try:
ar = len(alpha_range)
except:
ar = 1
hp_list.append(alpha_range)
hyd_wt_range = (1)
try:
ah = len(hyd_wt_range)
except:
ah = 1
hp_list.append(hyd_wt_range)
xi_range = (0, 0.1)
xi_range = (0)
try:
ax = len(xi_range)
except:
ax = 1
hp_list.append(xi_range)
num_hyper = ar * ah * ax  # total number of hyperparameter combinations
# -
# Define a function to update parameter values, run the model, and calculate hydro and topo errors. The parameters of the model are multipliers of the original values. Parameter multipliers are sampled in log space, so a multiplier of 1 means that the parameter value is 10 times the original value.
#
# $K$ for each application of the following function is calculated from the $k$ (designated lower case $k$ in the code, but it refers to hydraulic conductivity, not intrinsic permeability) that was read in from the base model. There are 3 $K$ multipliers that will be optimized and which apply to two hydrogeologic materials--consolidated and unconsolidated. One multiplier (`k_surf_mult`) multiplies $K$ in the unconsolidated (surficial) material. Two multipliers apply to the consolidated (bedrock) material. One of these multipliers (`k_bdrk_mult`) multiplies the uppermost bedrock layer $K$; the other multiplier (`k_bottom_fraction`) multiplies the lowermost bedrock layer. Any bedrock layers in between these two layers are calculated using an exponential relationship between the top and bottom bedrock $K$ values.
#
# In the previous notebook, layers were created parallel to the simulated water table. By doing this, some cells in a layer may be composed of bedrock while other cells in the same layer may be composed of surficial material. The array created in the previous notebook called `bedrock_index` contains flags that indicate which $K$ should be applied to each cell, surficial $K$ or bedrock $K$.
#
# To summarize, a 3D array of bedrock $K$ is calculated from $\mathbf{k}$ using multipliers and exponential interpolation (decay with depth). Another array is created (same shape as the bedrock array) of the surficial $K$, even though there is no variation with depth. The final $K$ array is made by choosing one of these arrays for each cell based on `bedrock_index`.
#
# $K_{top\_of\_ bedrock} = \mathbf{k} * {k\_bdrk\_mult}$
#
# $K_{layer\_n} = c e^{a z}$
#
# $K_{bottom\_of\_ bedrock} = \mathbf{k} * {k\_bottom\_fraction}$
#
# where the coefficients $a$ and $c$ are determined in the code from the top and bottom layer elevations and $K$ values.
#
# Streambed $K$ is set as a fraction of the cell $K$. This parameter is `stream_mult`.
#
#
# **Note that there should be at least 2 bedrock layers for this interpolation to work**
# Define a function to apply the multipliers and run the model. The effect of the new parameters on streambed permeability is calculated.
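# As a minimal sketch of the exponential interpolation described above (the endpoint $K$ values and elevations here are hypothetical, not taken from the model):

```python
import numpy as np

# hypothetical endpoints for illustration only
k_top, k_bot = 1.0, 0.01      # K at the top and bottom bedrock layers (m/d)
z_top, z_bot = 100.0, 20.0    # corresponding layer-midpoint elevations (m)

# solve K(z) = c * exp(a * z) so that it honors both endpoints
a = np.log(k_bot / k_top) / (z_bot - z_top)
c = k_top * np.exp(-a * z_top)

# interpolate at the layer midpoints; intermediate layers decay smoothly
z = np.array([100.0, 60.0, 20.0])
k_interp = c * np.exp(a * z)   # [1.0, 0.1, 0.01]
```

# The midpoint value is the geometric mean of the endpoints, as expected for exponential decay.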
# +
def run_model(k_surf_mult, k_bdrk_mult, stream_mult, k_bottom_fraction, sim_ws='model_ws/calibration_runs'):
# transform the log multipliers to real multipliers
k_surf_mult = 10 ** k_surf_mult
k_bdrk_mult = 10 ** k_bdrk_mult
stream_mult = 10 ** stream_mult
k_bottom_fraction = 10 ** k_bottom_fraction
# use flopy to read in the model
sim = fp.mf6.MFSimulation.load(sim_name='mfsim.nam', version='mf6',
exe_name=metadata['modflow_path'],
sim_ws=sim_ws, strict=True, verbosity_level=0,
load_only=None, verify_data=False)
model = sim.get_model()
dis = model.get_package('dis')
npf = model.get_package('npf')
# set K in each layer
k_top_of_bedrock = k[-gsm_metadata['num_bdrk_layers']] * k_bdrk_mult
k_bottom_of_bedrock = k[-1, ...] * k_bottom_fraction
grid = np.empty((nlay+1, nrow, ncol))
grid[0, ...] = dis.top.array
grid[1:, ...] = dis.botm.array
z = (grid[0:-1, ...] + grid[1:, ...] ) / 2
a = np.log(k_bottom_of_bedrock / k_top_of_bedrock) / (z[-1 , ...] - z[-gsm_metadata['num_bdrk_layers']])
c = k_top_of_bedrock * np.exp(-a * z[-gsm_metadata['num_bdrk_layers']])
k_exp = c * np.exp(a * z)
new_k = np.where(bedrock_index, k_exp, k_surf_mult * k)
npf.k = new_k
model_grid[meta.K_surf] = new_k[0, ...].ravel()
# set drain data in each drain cell
drn_data = model_grid[(model_grid.order != 0) &
(model_grid[meta.ibound] == 1)].copy()
# adjust streambed K based on cell K and stream_mult
drn_data['dcond'] = drn_data[meta.K_surf] * stream_mult * \
drn_data.reach_len * drn_data.width / meta.stream_bed_thk
drn_data['iface'] = 6
drn_data = drn_data.reindex(
['lay', 'row', 'col', 'stage', 'dcond', 'iface'], axis=1)
drn_data.rename(columns={'lay': 'k', 'row': 'i',
'col': 'j', 'stage': 'stage'}, inplace=True)
drn_data = drn_data[drn_data.dcond > 0]
cellid = list(zip(drn_data.k, drn_data.i, drn_data.j))
drn_data6 = pd.DataFrame({'cellid': cellid, 'stage': drn_data.stage,
'dcond': drn_data.dcond, 'iface': drn_data.iface})
drn_recarray6 = drn_data6.to_records(index=False)
drn_dict6 = {0: drn_recarray6}
drn = model.get_package('drn')
drn.stress_period_data = drn_dict6
# run the model
sim.write_simulation()
sim.run_simulation(silent=True)
# calculate the errors
rtd = rtd_ut.RTD_util(sim, 'flow', 'rt')
rtd.get_watertable()
water_table = rtd.water_table
t_crit = (model_grid.obs_type =='topo') & (model_grid[meta.ibound] != 0)
topo_cells = t_crit.values.reshape(nrow, ncol)
h_crit = (model_grid.obs_type =='hydro') & (model_grid[meta.ibound] != 0)
hydro_cells = h_crit.values.reshape(nrow, ncol)
num_topo = np.count_nonzero(topo_cells)
num_hydro = np.count_nonzero(hydro_cells)
topo = (top_ar + meta.err_tol) < water_table
hydro = (top_ar - meta.err_tol) > water_table
topo_error = topo & topo_cells
hydro_error = hydro & hydro_cells
t = np.count_nonzero(topo_error)
h = np.count_nonzero(hydro_error)
topo_rate = t / num_topo
hydro_rate = h / num_hydro
return topo_rate, hydro_rate
# -
# Loops to first run an initial random search to seed the Bayesian sampling, then loop for the Bayesian updates. The first commented-out loop can be used to automate hyperparameter tuning. The model may not run for some combinations of parameters; these will be printed out at the bottom of the cell.
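# The scalar target that the optimizer maximizes combines the two error rates: a weighted absolute difference (to balance hydro and topo errors) plus a weighted sum (to keep both low), negated because the optimizer maximizes. A small self-contained sketch:

```python
def objective(topo_rate, hydro_rate, dif_wt=1.0, hyd_wt=1.0):
    """Combine topographic and hydrologic error rates into a single
    target for the Bayesian optimizer (negated because it maximizes)."""
    edif = dif_wt * abs(topo_rate - hydro_rate)  # penalize imbalance
    esum = topo_rate + hyd_wt * hydro_rate       # penalize total error
    return -(edif + esum)

# balanced low errors beat unbalanced errors with the same total
print(objective(0.2, 0.2))  # -0.4
print(objective(0.3, 0.1))  # worse (more negative) than the balanced case
```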
# +
results_dict = dict()
# hyper_parameter_set = itertools.product(*hp_list)
# for alpha, xi in tqdm(hyper_parameter_set, total=num_hyper, desc='hyperparameter loop'):
# for alpha, xi in tqdm(hp_list, total=num_hyper, desc='hyperparameter loop'):
topo_error_list = list()
hydro_error_list = list()
dif_list = list()
sum_list = list()
alpha, hyd_wt, xi = hp_list
def fxn():
warnings.warn("future warning", FutureWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
for i in range(1):
dict_key = 'alpha={}_dif_wt={}_xi={}'.format(alpha, hyd_wt, xi)
utility = UtilityFunction(kind=acq, xi=xi, kappa=kappa)
optimizer = BayesianOptimization(
run_model, pbounds=pbounds, verbose=verbosity)
optimizer.set_gp_params(**{'alpha': alpha})
for i in tqdm(parameter_sets, total=num_init, desc='initial probing'):
next_point_to_probe = dict(
(zip(('k_surf_mult', 'k_bdrk_mult', 'stream_mult', 'k_bottom_fraction'), i)))
try:
topo_rate, hydro_rate = run_model(**next_point_to_probe)
edif = dif_wt * np.abs(topo_rate - hydro_rate)
esum = topo_rate + hyd_wt * hydro_rate
target = -(edif + esum)
optimizer.register(
params=next_point_to_probe,
target=target)
topo_error_list.append(topo_rate)
hydro_error_list.append(hydro_rate)
dif_list.append(edif)
sum_list.append(esum)
except OSError:
print('model did not run for {}'.format(next_point_to_probe))
for n in tqdm(range(num_Bayes), desc='Bayesian sampling'):
next_point = optimizer.suggest(utility)
try:
topo_rate, hydro_rate = run_model(**next_point)
edif = dif_wt * np.abs(topo_rate - hydro_rate)
esum = topo_rate + hyd_wt * hydro_rate
target = -(edif + esum)
optimizer.register(params=next_point, target=target)
topo_error_list.append(topo_rate)
hydro_error_list.append(hydro_rate)
dif_list.append(edif)
sum_list.append(esum)
except OSError:
print('model did not run for {}'.format(next_point))
df = pd.DataFrame(optimizer.res)
df = pd.concat((df, df.params.apply(pd.Series)),
axis=1).drop('params', axis='columns')
df['topo_error'] = topo_error_list
df['hydro_error'] = hydro_error_list
df['dif_error'] = dif_list
df['sum_error'] = sum_list
results_dict[dict_key] = df
# -
# Find one set of the optimal parameters by considering where the Pareto (tradeoff) front between hydro and topo errors intersects the line of hydro error = topo error.
# +
# find the convex hull of the points in error space
ch = ConvexHull(df[['topo_error', 'hydro_error']])
# make a polygon of the convex hull
hull = df.iloc[ch.vertices]
shapely_poly = shapely.geometry.Polygon(hull[['topo_error', 'hydro_error']].values)
# make a line of hydro error = topo error
line = [(0, 0), (1, 1)]
shapely_line = shapely.geometry.LineString(line)
# intersect the polygon and the line
intersection_line = list(shapely_poly.intersection(shapely_line).coords)
# the intersection will occur at two points; use the minimum
a = intersection_line[np.array(intersection_line)[:, 0].argmin()]
a = np.array(a).reshape(1, 2)
b = ch.points
# find the distance between all the parameter sets and the point of intersection
df['cdist'] = cdist(a, b)[0, :]
# find the closest (least distance) parameter set to the intersection
crit = df.cdist.idxmin()
# extract the optimal parameters and save
o = df.iloc[crit]
# -
# Run the model using the optimal parameters
# +
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
topo, hydro = run_model(o.k_surf_mult, o.k_bdrk_mult,
o.stream_mult, o.k_bottom_fraction, sim_ws='model_ws/calibration_runs')
model = sim.get_model()
dis = model.get_package('dis')
top = dis.top.array
botm = dis.botm.array
nlay, nrow, ncol = dis.nlay.array, dis.nrow.array, dis.ncol.array
delc = dis.delc.array
delr = dis.delr.array
ibound = dis.idomain.array
npf = model.get_package('npf')
k = npf.k.array
k33 = npf.k33.array
shutil.copytree('model_ws/calibration_runs', 'optimal_model')
dst = os.path.join('optimal_model', 'final_k.npz')
np.savez(dst, k=k)
dst = os.path.join('optimal_model', 'results_df.csv')
df.to_csv(dst)
# +
final_df = pd.DataFrame({'log multiplier': df[['k_bdrk_mult', 'k_bottom_fraction',
'k_surf_mult', 'stream_mult', 'topo_error', 'hydro_error', 'dif_error',
'sum_error', 'cdist']].iloc[crit]})
final_df['transformed'] = np.nan
final_df.loc[['k_bdrk_mult', 'k_bottom_fraction',
'k_surf_mult', 'stream_mult'], 'transformed'] = 10 ** final_df.loc[['k_bdrk_mult', 'k_bottom_fraction',
'k_surf_mult', 'stream_mult'], 'log multiplier']
final_df.loc['stream_mult', 'transformed'] = final_df.loc['stream_mult', 'transformed'] * gsm_metadata['stream_bed_kadjust']
dst = os.path.join('optimal_model', 'best_pars.csv')
final_df.to_csv(dst)
# -
# ### To evaluate uncertainty
#
# Find the Pareto front where there is a tradeoff between hydro and topo errors. To do this, we must separate the two halves of the convex hull polygon; we only want the lower (minimum-error) half. Do this by creating a vertical line at each point along the front (which will be at a convex hull node) and taking the minimum. Assemble the minima into a line shape.
# Sample the Pareto front, creating points at equal distances along the front and finding the parameter sets that are closest to them.
# +
test = list()
for x in shapely_poly.exterior.xy[0]:
line = [(x, 0), (x, 1)]
points = np.array(shapely.geometry.LineString(line).intersection(shapely_poly).coords)
ok = points.argmin(axis=0)
test.append(tuple(points[ok[1], :]))
test = np.unique(test, axis=0)
front = shapely.geometry.LineString(test)
# -
# The next cell can be used (all uncommented) to find parameter sets that lie on the Pareto front. The model can be run for each of these sets to evaluate uncertainty.
# +
# the fractional points to sample
g = np.linspace(0, 1, 11)
b = ch.points
pareto_sets = list()
# for each point
for i in g:
# interpolate its position along the front
x, y = front.interpolate(i, normalized=True).xy
# put that point in an array
a = np.array([[x[0], y[0]]])
# find the closest parameter set to that point and add its index to the list
pareto_sets.append(cdist(a, b).argmin())
dst = os.path.join('optimal_model', 'pareto_sets.csv')
pareto_df = pd.DataFrame({'df_index': pareto_sets})
pareto_df.to_csv(dst)
# -
# If an outer loop was used for hyperparameter tuning, save or load the results.
# dst = os.path.join('optimal_model', 'results_dict_v2.joblib')
# with open(dst, 'wb') as f:
# joblib.dump(results_dict, f)
# src = os.path.join('optimal_model', 'results_dict_v2.joblib')
# with open(src, 'rb') as f:
# results_dict = joblib.load(f)
# Plot all points in parameter space and the Pareto front
# +
fig, axs = plt.subplots(2, 2, figsize=(11, 8.5), sharey=True,
gridspec_kw={'hspace': 0.0, 'wspace': 0})
axs = axs.ravel()
li = ['k_surf_mult', 'k_bdrk_mult', 'stream_mult', 'k_bottom_fraction']
letter = ['A.', 'B.', 'C.', 'D.']
for num in range(4):
plot = axs[num]
var = li[num]
im = plot.hexbin(df.topo_error, df.hydro_error, df[var], 30, reduce_C_function=np.mean,
cmap=plt.cm.nipy_spectral, alpha=0.8, edgecolors='None')
pos = plot.get_position()
cbaxes = fig.add_axes([pos.x0+0.05, pos.y0+0.35, pos.width - 0.1, 0.02])
cb = plt.colorbar(im, ax=plot, cax=cbaxes, orientation='horizontal')
dum = fig.text(0.02, 0.5, 'topographic error', rotation='vertical', ha='left', va='center', fontsize=12)
dum = fig.text(0.50, 0.02, 'hydrologic error', rotation='horizontal', ha='center', va='bottom', fontsize=12)
dum = fig.text(pos.x0+0.20, pos.y0+0.28, var, rotation='horizontal', ha='center', va='bottom', fontsize=12)
dum = fig.text(pos.x0+0.02, pos.y0+0.35, letter[num], rotation='horizontal', ha='center', va='bottom', fontsize=12)
dum = plot.plot((0, 1), (0, 1), linestyle='dashed', color='black', linewidth=0.7)
dum = plot.plot(*front.xy, linestyle='dashed', color='black', linewidth=1.5)
dum = plot.grid(False)
dum = fig.suptitle('pareto front')
dst = os.path.join('optimal_model', 'pareto_plot.png')
plt.savefig(dst)
Image(dst)
# +
l, r, c = np.indices((nlay, nrow, ncol))
hin = np.argmax(np.isfinite(bedrock_index), axis=0)
bedrock_top = np.squeeze(botm[hin, r[0,:,:], c[0,:,:]])
NROW = nrow
NCOL = ncol
def ma2(data2D):
return np.ma.MaskedArray(data2D, mask=(ibound[0, ...] == 0))
def ma3(data3D):
return np.ma.MaskedArray(data3D, mask=(ibound == 0))
def interpolate_travel_times(points, values, xi):
return si.griddata(points, values, xi, method='linear')
def plot_travel_times(ax, x, y, tt, shp):
with np.errstate(invalid='ignore'):
return ax.contourf(x.reshape(shp), y.reshape(shp), tt[:].reshape(shp),
colors=colors, alpha=1.0, levels=levels, antialiased=True)
row_to_plot = np.int32(NROW / 2)
# row_to_plot = 65
xplot = np.linspace(delc[0] / 2, NCOL * delc[0] - delc[0] / 2, NCOL)
mKh = ma3(k)
mtop = ma2(top.reshape(nrow, ncol))
mbed = ma2(bedrock_top)
mbot = ma3(botm)
rtd = rtd_ut.RTD_util(sim, 'flow', 'rt')
rtd.get_watertable()
water_table = rtd.water_table
# make a color map of fixed colors
cmap = plt.cm.coolwarm
bounds = [0, 5, 10]
norm = colors.BoundaryNorm(bounds, cmap.N)
fig = plt.figure(figsize=(11, 8.5))
ax1 = plt.subplot2grid((4, 1), (0, 0), rowspan=3)
dum = ax1.plot(xplot, mtop[row_to_plot, ],
label='land surface', color='black', lw=0.5)
dum = ax1.plot(xplot, rtd.water_table[row_to_plot, ],
label='water table', color='blue', lw=1.)
dum = ax1.fill_between(xplot, mtop[row_to_plot, ], mbot[0, row_to_plot, :], alpha=0.25,
color='blue', lw=0.75)
for lay in range(nlay-1):
label = 'layer {}'.format(lay+2)
dum = ax1.fill_between(xplot, mbot[lay, row_to_plot, :], mbot[lay+1, row_to_plot, :],
color=cmap(lay / nlay), alpha=0.50, lw=0.75)
dum = ax1.plot(xplot, mbed[row_to_plot, :], label='bedrock',
color='red', linestyle='dotted', lw=1.5)
dum = ax1.plot(xplot, mbot[-1, row_to_plot, :], color='black',
linestyle='dashed', lw=0.5, label='model bottom')
dum = ax1.legend(loc=0, frameon=False, fontsize=10, ncol=1)
dum = ax1.set_ylabel('Altitude, in meters')
dum = ax1.set_title('Section along row {}'.format(row_to_plot))
ax2 = plt.subplot2grid((4, 1), (3, 0))
dum = ax2.fill_between(xplot, 0, mKh[0, row_to_plot, :], alpha=0.25, color='blue',
label='layer 1', lw=0.75, step='mid')
dum = ax1.set_xlabel('Distance in meters')
dum = ax2.set_yscale('log')
dum = ax2.set_ylabel('Hydraulic conductivity\n in layer 1, in meters / day')
line = 'optimal_{}_xs.png'.format(metadata['HUC8_name'])
fig_name = os.path.join('optimal_model', line)
plt.savefig(fig_name)
Image(fig_name)
# +
grid = os.path.join(metadata['gis_dir'], 'ibound.tif')
mtg = gmu.SourceProcessing(np.nan)
mtg.read_raster(grid)
fig, ax = plt.subplots(1, 1, figsize=(11, 8.5))
t_crit = (model_grid.obs_type =='topo') & (ibound[0, ...].ravel() != 0)
topo_cells = t_crit.values.reshape(NROW, NCOL)
h_crit = (model_grid.obs_type =='hydro') & (ibound[0, ...].ravel() != 0)
hydro_cells = h_crit.values.reshape(NROW, NCOL)
num_topo = np.count_nonzero(topo_cells)
num_hydro = np.count_nonzero(hydro_cells)
topo = (top + meta.err_tol) < water_table
hydro = (top - meta.err_tol) > water_table
topo_error = topo & topo_cells
hydro_error = hydro & hydro_cells
mask = (ibound[0] == 0) | ~topo_cells
mt = np.ma.MaskedArray(topo_cells, mask)
cmap = colors.ListedColormap(['green'])
im = ax.pcolormesh(mtg.x_edge, mtg.y_edge, mt, cmap=cmap, alpha=0.2, edgecolors=None)
mask = (ibound[0] == 0) | ~topo_error
mte = np.ma.MaskedArray(topo_error, mask)
cmap = colors.ListedColormap(['green'])
im = ax.pcolormesh(mtg.x_edge, mtg.y_edge, mte, cmap=cmap, alpha=0.4, edgecolors=None)
mask = (ibound[0] == 0) | ~hydro_cells
mh = np.ma.MaskedArray(hydro_cells, mask)
cmap = colors.ListedColormap(['blue'])
im = ax.pcolormesh(mtg.x_edge, mtg.y_edge, mh, cmap=cmap, alpha=0.2, edgecolors=None)
mask = (ibound[0] == 0) | ~hydro_error
mhe = np.ma.MaskedArray(hydro_error, mask)
cmap = colors.ListedColormap(['blue'])
im = ax.pcolormesh(mtg.x_edge, mtg.y_edge, mhe, cmap=cmap, alpha=0.6, edgecolors=None)
ax.set_aspect(1)
# recompute the error rates for the optimal run (the loop variables above are stale)
topo_rate = np.count_nonzero(topo_error) / num_topo
hydro_rate = np.count_nonzero(hydro_error) / num_hydro
dum = fig.suptitle('Optimal model errors\n{} model\nFraction dry drains (blue) {:0.2f}\n \
Fraction flooded cells (green) {:0.2f}'.format( \
    metadata['HUC8_name'], hydro_rate, topo_rate))
fig.set_tight_layout(True)
line = 'optimal_{}_error_map.png'.format(metadata['HUC8_name']) #csc
fig_name = os.path.join('optimal_model', line)
plt.savefig(fig_name)
Image(fig_name)
mtg.old_raster = topo_error
line = os.path.join('optimal_model', 'topo_error.tif')
mtg.write_raster(line)
mtg.old_raster = hydro_error
line = os.path.join('optimal_model', 'hydro_error.tif')
mtg.write_raster(line)
# +
k[ibound == 0] = np.nan
for layer in range(nlay):
fig, ax = plt.subplots(1, 1)
im = ax.imshow(k[layer, ...])
ax.set_title('K in layer {}'.format(layer))
fig.colorbar(im)
mtg.old_raster = k[layer, ...]
line = os.path.join('optimal_model', 'k_layer_{}.tif'.format(layer))
mtg.write_raster(line)
# +
rtd = rtd_ut.RTD_util(sim, 'flow', 'rt')
rtd.get_watertable()
water_table = rtd.water_table
water_table[water_table > (2 * model_grid.ned.max())] = np.nan
mtg.new_array = water_table
fig, ax = mtg.plot_raster(which_raster='new', sk={'figsize': (11, 8.5)})
fig.set_tight_layout(True)
dst = 'postcal-heads.png'
plt.savefig(dst)
i = Image(filename='postcal-heads.png')
i
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Support Vector Regression
# ## CS/DSA 5970
# +
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error
from sklearn.metrics import confusion_matrix
from sklearn.metrics import log_loss
from sklearn.metrics import roc_curve, auc
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC, LinearSVR, SVR
import pickle as pkl
from mpl_toolkits.mplot3d import Axes3D
##################
# Default parameters
FIGURESIZE=(10,6)
FONTSIZE=18
plt.rcParams['figure.figsize'] = FIGURESIZE
plt.rcParams['font.size'] = FONTSIZE
plt.rcParams['xtick.labelsize'] = 18
plt.rcParams['ytick.labelsize'] = 18
# -
def plot_3d(ins, pred):
'''
Plot pred as a function of the 2 dimensions of ins
'''
fig = plt.figure(figsize=FIGURESIZE)
    ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
ax.plot_trisurf(ins[:,0], ins[:,1], pred, cmap=plt.cm.jet)
# ## Load data
fname = '../ml_practices/imports/datasets/misc/svr_data.pkl'
fp = open(fname, 'rb')
ins = pkl.load(fp).T
outs = pkl.load(fp).flatten()
fp.close()
# ## SVR: Linear
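# A minimal, self-contained linear SVR example on synthetic data (the pickled dataset above is a local file, so a synthetic linear target stands in here; the hyperparameter values are illustrative):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.05, size=200)

# scale inputs first: SVR solutions are sensitive to feature scale
model = Pipeline([('scale', StandardScaler()),
                  ('svr', LinearSVR(C=1.0, epsilon=0.01, max_iter=10000))])
model.fit(X, y)
print(model.score(X, y))  # R^2 close to 1 for this nearly linear target
```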
# ## Polynomial kernel
# ## Gaussian (RBF) Kernel
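# A minimal RBF-kernel SVR sketch on a synthetic nonlinear target (values are illustrative; the real exercise would use the `ins`/`outs` loaded above):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, size=300)

# the RBF kernel lets SVR capture the nonlinear sine shape
rbf = SVR(kernel='rbf', C=10.0, gamma=0.5, epsilon=0.05)
rbf.fit(X, y)
print(rbf.score(X, y))  # R^2 well above what a linear fit could reach
```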
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc="true"
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#LightGBM" data-toc-modified-id="LightGBM-1"><span class="toc-item-num">1 </span>LightGBM</a></span><ul class="toc-item"><li><span><a href="#Benchmarking" data-toc-modified-id="Benchmarking-1.1"><span class="toc-item-num">1.1 </span>Benchmarking</a></span></li><li><span><a href="#Categorical-Variables-in-Tree-based-Models" data-toc-modified-id="Categorical-Variables-in-Tree-based-Models-1.2"><span class="toc-item-num">1.2 </span>Categorical Variables in Tree-based Models</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
# +
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css', plot_style = False)
# +
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
# %matplotlib inline
# %load_ext watermark
# %load_ext autoreload
# %autoreload 2
# %config InlineBackend.figure_format = 'retina'
import os
import re
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from time import time
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from lightgbm import plot_importance
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from mlutils.transformers import Preprocessor
# %watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib,xgboost,lightgbm
# -
# # LightGBM
# [Gradient boosting](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/trees/gbm/gbm.ipynb) is a machine learning technique that produces a prediction model in the form of an ensemble of weak classifiers, optimizing for a differentiable loss function. One of the most popular types of gradient boosting is gradient boosted trees, which are internally made up of an ensemble of weak [decision trees](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/trees/decision_tree.ipynb). There are two different ways to compute the trees: level-wise and leaf-wise as illustrated by the diagram below:
#
# <img src="img/levelwise.png" width="50%" height="50%">
#
# <img src="img/leafwise.png" width="60%" height="60%">
#
# > The level-wise strategy adds complexity extending the depth of the tree level by level. As a contrary, the leaf-wise strategy generates branches by optimizing a loss.
#
# The level-wise strategy grows the tree level by level. In this strategy, each node splits the data prioritizing the nodes closer to the tree root. The leaf-wise strategy grows the tree by splitting the data at the nodes with the highest loss change. Level-wise growth is usually better for smaller datasets whereas leaf-wise tends to overfit. Leaf-wise growth tends to [excel in larger datasets](http://researchcommons.waikato.ac.nz/handle/10289/2317) where it is considerably faster than level-wise growth.
#
# A key challenge in training boosted decision trees is the [computational cost of finding the best split](https://arxiv.org/abs/1706.08359) for each leaf. Conventional techniques find the [exact split](https://arxiv.org/abs/1603.02754) for each leaf, and require scanning through all the data in each iteration. A different approach [approximates the split](https://arxiv.org/abs/1611.01276) by building histograms of the features. That way, the algorithm doesn’t need to evaluate every single value of the features to compute the split, but only the bins of the histogram, which are bounded. This approach turns out to be much more efficient for large datasets, without adversely affecting accuracy.
#
# ---
#
# With all of that being said LightGBM is a fast, distributed, high performance gradient boosting that was open-source by Microsoft around August 2016. The main advantages of LightGBM includes:
#
# - Faster training speed and higher efficiency: LightGBM uses a histogram-based algorithm, i.e. it buckets continuous feature values into discrete bins, which speeds up the training procedure.
# - Lower memory usage: replacing continuous values with discrete bins results in a smaller memory footprint.
# - Better accuracy: it produces much more complex trees by following a leaf-wise split approach rather than a level-wise approach, which is the main factor in achieving higher accuracy. However, this can sometimes lead to overfitting, which can be mitigated by setting the `max_depth` parameter.
# - Compatibility with large datasets: it performs equally well on large datasets, with a significant reduction in training time compared to XGBoost.
# - Parallel learning supported.
#
# The significant speed advantage of LightGBM translates into the ability to do more iterations and/or quicker hyperparameter search, which can be very useful if we have a limited time budget for optimizing your model or want to experiment with different feature engineering ideas.
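# The histogram idea above can be sketched in a few lines: rather than evaluating a split at every unique feature value, values are bucketed into a fixed number of bins and only bin boundaries are considered as candidate splits (a numpy illustration, not LightGBM's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
feature = rng.normal(size=10_000)  # one continuous feature

# bucket into at most 255 bins (LightGBM's default max_bin) via quantile edges
n_bins = 255
edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1))
binned = np.clip(np.searchsorted(edges, feature, side='right') - 1, 0, n_bins - 1)

# candidate split points shrink from ~10,000 unique values to at most 255 bins
print(len(np.unique(feature)), len(np.unique(binned)))
```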
# ## Benchmarking
# This notebook compares LightGBM with [XGBoost](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/trees/xgboost.ipynb), another extremely popular gradient boosting framework, by applying both algorithms to a dataset and then comparing the models' performance and execution time. Here we will be using the [Adult dataset](http://archive.ics.uci.edu/ml/datasets/Adult), which consists of 32561 observations and 14 features describing individuals from various countries. Our target is to predict whether a person makes <=50k or >50k annually on the basis of the other information available.
#
# For installing LightGBM on mac:
#
# ```bash
# # install cmake and gcc first from brew
# # note that these installation can
# # take up to 30 minutes if we're doing
# # it for the first time, be patient;
# brew tap homebrew/versions
# brew install cmake
# brew install gcc
#
# # note that if you installed gcc a long time,
# # you will need to update it; it requires 7.1 up
# # we can check via:
# # g++-8 -v
# # https://github.com/Microsoft/LightGBM/issues/118
# brew update && brew upgrade gcc
#
# # compile lightgbm from source
# git clone --recursive https://github.com/Microsoft/LightGBM ; cd LightGBM
# export CXX=g++-8 CC=gcc-8
# # mkdir build ; cd build
# cmake ..
# make -j4
#
# # install the python package
# # cd ../python-package
# python setup.py install --precompile
# ```
# +
def get_data():
file_path = 'adult.csv'
if not os.path.isfile(file_path):
def chunks(input_list, n_chunk):
"""take a list and break it up into n-size chunks"""
for i in range(0, len(input_list), n_chunk):
yield input_list[i:i + n_chunk]
columns = [
'age', 'workclass', 'fnlwgt', 'education',
'education_num', 'marital_status', 'occupation',
'relationship', 'race', 'sex', 'capital_gain',
'capital_loss', 'hours_per_week', 'native_country', 'income']
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
r = requests.get(url)
raw_text = r.text.replace('\n', ',')
splitted_text = re.split(r',\s*', raw_text)
data = list(chunks(splitted_text, n_chunk = len(columns)))
data = pd.DataFrame(data, columns = columns).dropna(axis = 0, how = 'any')
data.to_csv(file_path, index = False)
data = pd.read_csv(file_path)
return data
data = get_data()
print('dimensions:', data.shape)
data.head()
# +
label_col = 'income'
cat_cols = [
'workclass', 'education', 'marital_status',
'occupation', 'relationship', 'race',
'sex', 'native_country']
num_cols = [
'age', 'fnlwgt', 'education_num',
'capital_gain', 'capital_loss',
'hours_per_week']
label_encode = LabelEncoder()
data[label_col] = label_encode.fit_transform(data[label_col])
y = data[label_col].values
data = data.drop(label_col, axis = 1)
print('labels distribution:', np.bincount(y) / y.size)
# +
test_size = 0.1
split_random_state = 1234
df_train, df_test, y_train, y_test = train_test_split(
data, y, test_size = test_size,
random_state = split_random_state, stratify = y)
print('dimensions:', df_train.shape)
df_train.head()
# -
# We'll perform very little feature engineering as that's not our main focus here. The following code chunk utilizes a customized Transformer from the [mlutils](https://github.com/ethen8181/machine-learning/tree/master/projects/mlutils) package to standardize the numerical features and one-hot encode the categorical features.
# ideally, we can include this step into a sklearn
# Pipeline with the model, but here we will
# separate it out, so we can isolate the timing for
# benchmarking the algorithm
preprocess = Preprocessor(num_cols, cat_cols)
X_train = preprocess.fit_transform(df_train)
X_test = preprocess.transform(df_test)
# The next section compares the xgboost and lightgbm implementations in terms of both execution time and model performance. There are a bunch of other hyperparameters that we as the end-user can specify, but here we explicitly specify arguably the most important ones.
# +
lgb = LGBMClassifier(
n_jobs = -1,
max_depth = 6,
subsample = 1,
n_estimators = 100,
learning_rate = 0.1,
colsample_bytree = 1,
objective = 'binary',
boosting_type = 'gbdt')
start = time()
lgb.fit(X_train, y_train)
lgb_elapse = time() - start
print('elapse: ', lgb_elapse)
# +
# raw xgboost
xgb = XGBClassifier(
n_jobs = -1,
max_depth = 6,
subsample = 1,
n_estimators = 100,
learning_rate = 0.1,
colsample_bytree = 1,
objective = 'binary:logistic',
    booster = 'gbtree')
start = time()
xgb.fit(X_train, y_train)
xgb_elapse = time() - start
print('elapse: ', xgb_elapse)
# +
# xgboost includes a tree_method = 'hist'
# option that buckets continuous variables
# into bins to speed up training, as for
# the tree growing policy, we set it to
# 'lossguide' to favor splitting at nodes
# with highest loss change, which mimicks lightgbm
xgb_hist = XGBClassifier(
n_jobs = -1,
max_depth = 6,
subsample = 1,
n_estimators = 100,
learning_rate = 0.1,
colsample_bytree = 1,
objective = 'binary:logistic',
    booster = 'gbtree',
tree_method = 'hist',
grow_policy = 'lossguide')
start = time()
xgb_hist.fit(X_train, y_train)
xgb_hist_elapse = time() - start
print('elapse: ', xgb_hist_elapse)
# +
# evaluate performance
y_pred = lgb.predict_proba(X_test)[:, 1]
lgb_auc = roc_auc_score(y_test, y_pred)
print('auc score: ', lgb_auc)
y_pred = xgb.predict_proba(X_test)[:, 1]
xgb_auc = roc_auc_score(y_test, y_pred)
print('auc score: ', xgb_auc)
y_pred = xgb_hist.predict_proba(X_test)[:, 1]
xgb_hist_auc = roc_auc_score(y_test, y_pred)
print('auc score: ', xgb_hist_auc)
# -
# comparison table
results = pd.DataFrame({
'elapse_time': [lgb_elapse, xgb_hist_elapse, xgb_elapse],
'auc_score': [lgb_auc, xgb_hist_auc, xgb_auc]})
results.index = ['LightGBM', 'XGBoostHist', 'XGBoost']
results
# From the resulting table, we can see that there isn't a noticeable difference in auc score between LightGBM and XGBoost; in this case, XGBoost happens to be slightly better. On the other hand, there is a significant difference in the execution time of the training procedure. This is a huge advantage and makes LightGBM a much better choice when dealing with large datasets.
#
# For those interested, the people at Microsoft have a blog post with an even more thorough benchmark on various datasets. The link is included below, along with a summary of their results:
#
# > [Blog: Lessons Learned From Benchmarking Fast Machine Learning Algorithms](https://blogs.technet.microsoft.com/machinelearning/2017/07/25/lessons-learned-benchmarking-fast-machine-learning-algorithms/)
# >
# > Our results, based on tests on six datasets, are summarized as follows:
#
# > - XGBoost and LightGBM achieve similar accuracy metrics.
# > - LightGBM has lower training time than XGBoost and its histogram-based variant, XGBoost hist, for all test datasets, on both CPU and GPU implementations. The training time difference between the two libraries depends on the dataset, and can be as big as 25 times.
# > - XGBoost GPU implementation does not scale well to large datasets and ran out of memory in half of the tests.
# > - XGBoost hist may be significantly slower than the original XGBoost when feature dimensionality is high.
# ## Categorical Variables in Tree-based Models
# Unlike XGBoost, LightGBM has built-in support for categorical variables, so there is no need to pre-process the data by one-hot encoding every categorical feature. This section is devoted to discussing why this is a highly desirable feature.
#
# Many real-world datasets include a mix of continuous and categorical variables. The defining property of the latter is that they do not permit a total ordering. A major advantage of decision tree models and their ensemble counterparts, such as random forests, extra trees and gradient boosted trees, is that they are able to operate on both continuous and categorical variables directly (popular implementations of tree-based models differ as to whether they honor this fact). In contrast, most other popular models (e.g., generalized linear models, neural networks) must instead transform categorical variables into some numerical analog, usually by one-hot encoding them to create a new dummy variable for each level of the original variable. e.g.
#
# <img src="img/onehot_encoding.png" width="80%" height="80%">
#
# One drawback of one-hot encoding is that it can lead to a huge increase in the dimensionality of the feature representation. For example, one-hot encoding U.S. states adds 49 dimensions to our feature representation.
#
# To understand why we don't need to perform one hot encoding for tree-based models, we need to refer back to the logic of tree-based algorithms. At the heart of the tree-based algorithm is a sub-algorithm that splits the samples into two bins by selecting a feature and a value. This splitting algorithm considers each of the features in turn, and for each feature selects the value of that feature that minimizes the impurity of the bins.
#
# This means tree-based models are essentially looking for places to split the data; they are not multiplying our inputs by weights. In contrast, most other popular models (e.g., generalized linear models, neural networks) would interpret an encoding such as red=1, blue=2 as meaning blue is twice the amount of red, which is obviously not what we want.
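# To make the splitting sub-algorithm concrete, here is a minimal sketch (not
# LightGBM's actual implementation) that, for a single feature, scans candidate
# thresholds and keeps the one minimizing the weighted Gini impurity of the two bins:

```python
import numpy as np

def gini(labels):
    # Gini impurity of a label array: 1 - sum_k p_k^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    # try a split between each pair of adjacent sorted feature values
    order = np.argsort(feature)
    feature, labels = feature[order], labels[order]
    best_thresh, best_impurity = None, np.inf
    for i in range(1, len(feature)):
        if feature[i] == feature[i - 1]:
            continue
        thresh = (feature[i] + feature[i - 1]) / 2
        left, right = labels[:i], labels[i:]
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if impurity < best_impurity:
            best_thresh, best_impurity = thresh, impurity
    return best_thresh, best_impurity

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
thresh, impurity = best_split(x, y)
print(thresh, impurity)  # 6.5 0.0 -- a perfect split between 3.0 and 10.0
```

# Note that the scan only needs some ordering of the feature values to enumerate
# candidate splits, which is why an integer-coded categorical column can be
# handled directly by tree learners.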
# +
# to use the built-in support for categorical features,
# we can output a pandas DataFrame and LightGBM will
# detect columns with categorical dtype and treat them as categorical features
# http://lightgbm.readthedocs.io/en/latest/Python-API.html#lightgbm.LGBMModel.fit
preprocess = Preprocessor(
num_cols, cat_cols, output_pandas = True, use_onehot = False)
X_train = preprocess.fit_transform(df_train)
X_test = preprocess.transform(df_test)
lgb = LGBMClassifier(
n_jobs = -1,
max_depth = 6,
subsample = 1,
n_estimators = 100,
learning_rate = 0.1,
colsample_bytree = 1,
objective = 'binary',
boosting_type = 'gbdt')
start = time()
lgb.fit(X_train, y_train)
lgb_elapse = time() - start
print('elapse: ', lgb_elapse)
y_pred = lgb.predict_proba(X_test)[:, 1]
lgb_auc = roc_auc_score(y_test, y_pred)
print('auc score: ', lgb_auc)
# -
# From the result above, we can see that it requires even less training time without sacrificing any performance. What's more, we no longer need to perform one-hot encoding on our categorical features. The code chunk below shows this is highly advantageous from a memory-usage perspective when we have many categorical features.
# +
preprocess = Preprocessor(
num_cols, cat_cols, output_pandas = True, use_onehot = False)
X_train = preprocess.fit_transform(df_train)
print('number of columns:', X_train.shape[1])
print('no one hot encoding memory usage: ', X_train.memory_usage(deep = True).sum())
preprocess = Preprocessor(
num_cols, cat_cols, output_pandas = True, use_onehot = True)
X_train = preprocess.fit_transform(df_train)
print('number of columns:', X_train.shape[1])
print('one hot encoding memory usage: ', X_train.memory_usage(deep = True).sum())
# +
# putting it together into a pipeline
preprocess = Preprocessor(
num_cols, cat_cols, output_pandas = True, use_onehot = False)
lgb = LGBMClassifier(
n_jobs = -1,
max_depth = 6,
subsample = 1,
n_estimators = 100,
learning_rate = 0.1,
colsample_bytree = 1,
objective = 'binary',
boosting_type = 'gbdt')
start = time()
lgb_pipeline = Pipeline([
('preprocess', preprocess),
('lgb', lgb)
]).fit(df_train, y_train)
lgb_elapse = time() - start
print('elapse: ', lgb_elapse)
y_pred = lgb_pipeline.predict_proba(df_test)[:, 1]
lgb_auc = roc_auc_score(y_test, y_pred)
print('auc score: ', lgb_auc)
# +
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
# like other tree-based models, it can also output the
# feature importance plot
plot_importance(lgb, importance_type = 'gain')
plt.show()
# -
# For tuning LightGBM's hyperparameter, the documentation page has some pretty good suggestions. [LightGBM Documentation: Parameters Tuning](http://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html)
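# As a hedged illustration of that advice, the sketch below draws random candidate
# configurations over the parameters the tuning guide emphasizes. The value grids
# here are made up for illustration, not recommendations; each sampled dict would
# then be passed to `LGBMClassifier(**params)` inside a cross-validation loop.

```python
from sklearn.model_selection import ParameterSampler

lgb_param_grid = {
    'num_leaves': [15, 31, 63, 127],        # main complexity control in leaf-wise growth
    'min_child_samples': [5, 20, 50, 100],  # guards against overfitting on tiny leaves
    'learning_rate': [0.01, 0.05, 0.1],
    'n_estimators': [100, 300, 500],
}
candidates = list(ParameterSampler(lgb_param_grid, n_iter=5, random_state=1234))
for params in candidates:
    print(params)
```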
# # Reference
# - [LightGBM Documentation: Parameters Tuning](http://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html)
# - [Blog: xgboost’s New Fast Histogram (tree_method = hist)](https://medium.com/data-design/xgboosts-new-fast-histogram-tree-method-hist-a3c08f36234c)
# - [Blog: Which algorithm takes the crown: Light GBM vs XGBOOST?](https://www.analyticsvidhya.com/blog/2017/06/which-algorithm-takes-the-crown-light-gbm-vs-xgboost/)
# - [Blog: Are categorical variables getting lost in your random forests?](http://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)
# - [Blog: Lessons Learned From Benchmarking Fast Machine Learning Algorithms](https://blogs.technet.microsoft.com/machinelearning/2017/07/25/lessons-learned-benchmarking-fast-machine-learning-algorithms/)
# - [Stackoverflow: Why tree-based model do not need one-hot encoding for nominal data?
# ](https://stackoverflow.com/questions/45139834/why-tree-based-model-do-not-need-one-hot-encoding-for-nominal-data)
| trees/lightgbm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Basic imports
# +
import numpy as np
import matplotlib.pyplot as plt
from keras import models
from keras import layers
from keras import callbacks
# this function allows the one hot encoding of labels
from keras.utils import to_categorical
# this module has the imdb data set
from keras.datasets import imdb
# this allows the preprocessing of text into a list of binary inputs
from keras.preprocessing.text import Tokenizer
# -
# #### Load and visualize the data
# +
# Load the train and test data
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=5000)
print(train_data.shape)
print(len(train_data[0]))
# The data set contains a review, and the sentiment of the review: either positive, 1, or negative, 0.
print(train_data[0])
print(train_labels[0])
# +
# Using a word dictionary, we can decode the integer message
word_index = imdb.get_word_index()
# The dictionary's keys are words and the values are their frequency rankings.
# For decoding we swap them using a dict comprehension
reverse_index = dict([(value+3, key) for (key, value) in word_index.items()])
# The first indices are reserved for the following special tokens, which is why the word indices are offset by 3
reverse_index[0] = "<PAD>"
reverse_index[1] = "<START>"
reverse_index[2] = "<UNKNOWN>" # unknown
reverse_index[3] = "<UNUSED>"
decoded_review = ' '.join([reverse_index.get(i,'?') for i in train_data[0]])
print(decoded_review)
# -
# #### Standardize and one hot encode the data
# +
# Turning the output into vector mode, each of length 5000
tokenizer = Tokenizer(num_words=5000)
train_data_token = tokenizer.sequences_to_matrix(train_data, mode='binary')
test_data_token = tokenizer.sequences_to_matrix(test_data, mode='binary')
print(train_data_token.shape)
print(test_data_token.shape)
# One-hot encoding the output
num_classes = 2
one_hot_train_labels = to_categorical(train_labels, num_classes)
one_hot_test_labels = to_categorical(test_labels, num_classes)
print(one_hot_train_labels.shape)
print(one_hot_test_labels.shape)
# Creating a validation set with the first 10000 reviews
validation_data = train_data_token[:10000]
validation_labels = one_hot_train_labels[:10000]
# Creating the input set
x_data = train_data_token[10000:]
y_data = one_hot_train_labels[10000:]
print(x_data.shape)
print(y_data.shape)
# -
# #### Build and train the model
# +
# Building the model architecture
model = models.Sequential()
model.add(layers.Dense(128, activation='relu', input_dim=5000))
# Added dropout between the input and first hidden layer
model.add(layers.Dropout(0.3))
model.add(layers.Dense(64, activation='relu'))
# Added dropout between the first hidden layer and the second one
model.add(layers.Dropout(0.3))
model.add(layers.Dense(num_classes, activation='softmax'))
model.summary()
# included the early stopping which monitors the validation loss
early_stop = callbacks.EarlyStopping(monitor='val_loss', patience=5)
# with a 2-way softmax output and one-hot labels, categorical_crossentropy is
# the matching loss (binary_crossentropy here would also make 'accuracy'
# resolve to the misleading binary accuracy metric)
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# +
# You can manually create and add the validation set, or...
'''history = model.fit(x_data, y_data,
batch_size=512,
epochs=40,
validation_data=(validation_data, validation_labels),
verbose=2)
'''
# You can let keras create it for you by telling it what percentage of the
# training data should be used for validation, via the validation_split argument.
# Here, it takes 20% of the train data to use for validation.
history = model.fit(train_data_token, one_hot_train_labels,
batch_size=512,
epochs=40,
validation_split=0.2,
callbacks=[early_stop],
verbose=2)
# -
# #### Evaluate the model, and plot the validation and accuracy
# +
# Evaluating the model with the test data
results = model.evaluate(test_data_token, one_hot_test_labels)
print(results)
# -
# This dictionary stores the validation and accuracy of the model throughout the epochs
history_dict = history.history
print(history_dict.keys())
# +
# The history values are split in different lists for ease of plotting
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# +
# Plot of the validation and training loss
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# +
# Plot of the validation and train accuracy
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
| Notebooks/DeepLearning/4. DL. Keras Imdb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import required libraries
import numpy as np
from scipy import linalg
# Formulate the system of linear equations based on the given scenario
numArr = np.array([[1,2,4,5],[9,8,4,3],[7,8,3,5],[7,0,1,2]])
numVal = np.array([10,45,26,65])
# Apply a suitable method to solve the system of linear equations
print(linalg.solve(numArr,numVal))
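# As a sanity check (using numpy's equivalent solver here), the returned
# solution x should satisfy numArr @ x == numVal up to floating-point error:

```python
import numpy as np

numArr = np.array([[1, 2, 4, 5], [9, 8, 4, 3], [7, 8, 3, 5], [7, 0, 1, 2]])
numVal = np.array([10, 45, 26, 65])
x = np.linalg.solve(numArr, numVal)
print(np.allclose(numArr @ x, numVal))  # True
```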
| Assignment_03 Solve a Linear Algebra Problem --Scipy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
from music21 import *
from collections import Counter, defaultdict
from ea.individual import Measure, Individual, Note
from ea.duration import Duration
import ea.initialisation as initialisation
import ea.modelTrainer as modelTrainer
import ea.musicPlayer as musicPlayer
import ea.simulation as simulation
import ea.fitness as fitness
import ea.mutation as mutation
import collections
import importlib
import music21
import numpy as np
import pandas as pd
import ea.individual as individual
import ea.duration as duration
import ea.constants as constants
import ea.util as util
import random
import nltk
import ea.fitness as fitness
import re
import ea.modelUpdater as modelUpdater
import ea.metrics as metrics
import copy
# -
importlib.reload(initialisation)
importlib.reload(modelTrainer)
importlib.reload(musicPlayer)
importlib.reload(simulation)
importlib.reload(mutation)
importlib.reload(fitness)
importlib.reload(constants)
importlib.reload(util)
importlib.reload(individual)
importlib.reload(fitness)
importlib.reload(modelUpdater)
importlib.reload(metrics)
# +
testIndividual = Individual(
[
Measure(
[
Note('C5', Duration('quarter', 0.25)),
Note('C5', Duration('quarter', 0.25)),
Note('C5', Duration('quarter', 0.25)),
Note('E5', Duration('quarter', 0.25))
], 0, ['C3', 'E3', 'G3']
),
Measure(
[
Note('C5', Duration('quarter', 0.25)),
Note('C5', Duration('quarter', 0.25)),
Note('D5', Duration('half', 0.5)),
], 0, ['C3', 'E3', 'G3'])
], 0
)
testIndividual2 = Individual(
[
Measure(
[
Note('C5', Duration('quarter', 0.25)),
Note('C5', Duration('quarter', 0.25)),
Note('C5', Duration('quarter', 0.25)),
Note('E5', Duration('quarter', 0.25)),
Note('E5', Duration('quarter', 0.25))
], 0, ['C3', 'E3', 'G3']
),
Measure(
[
Note('C5', Duration('quarter', 0.25)),
Note('D5', Duration('quarter', 0.25)),
Note('C5', Duration('quarter', 0.25)),
Note('E5', Duration('quarter', 0.25)),
Note('E5', Duration('quarter', 0.25))
], 0, ['C3', 'E3', 'G3'])
], 0
)
population = [testIndividual, testIndividual2]
# -
fitness.cadence(testIndividual2)
# +
individuals = list(map(lambda x: x.get_notes_per_measure(), population))
highest_fitness_i = None
counter = 0
for i in range(len(individuals)):
ind = individuals[i]
curr_individual = []
for m in ind:
for n in m:
curr_individual.append(n.pitch)
curr_individual = tuple(curr_individual)
if i == 0:
highest_fitness_i = curr_individual
if curr_individual == highest_fitness_i:
counter += 1
counter / len(population)
# -
metrics.proportion_equal_to_highest_fitness(population)
# +
a = testIndividual
b = testIndividual
a_notes_copy = copy.deepcopy(a.measures[0].notes)
b.measures[0].notes = a_notes_copy
a.measures[0].notes is b.measures[0].notes
# -
a.fitnesses - b.fitnesses
from random import Random
rng = Random()
def elitist_mutation(individual: Individual, elitist_population: [Individual]):
e_individual: Individual = rng.choice(elitist_population)
measure = rng.choice(range(len(e_individual.measures)))
e_individual_copy = copy.deepcopy(e_individual.measures[measure].notes)
individual.measures[measure].notes = e_individual_copy
print(individual)
print(elitist_population)
if individual.measures[measure].notes is e_individual.measures[measure].notes:
print('Mutated individual has reference to elitist individual')
| EvoMusicCompanion/jupyter/operator-tests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="muMs8kQLUDOz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 80} outputId="26ac8a33-0ebf-49ac-bca6-40c282359adb"
import numpy
import pandas
import tensorflow.compat.v1
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
# + id="Nv1c7vdx3UKG" colab_type="code" outputId="ba0775b0-3be5-4ef6-8616-6139be5a100a" colab={"base_uri": "https://localhost:8080/", "height": 142}
data = pandas.read_csv("https://raw.githubusercontent.com/matheuspolillo/K-Network/master/data/iris.csv")
data_to_view = [
[data.loc[0, 'sepal_length'], data.loc[0, 'sepal_width'], data.loc[0, 'petal_length'], data.loc[0, 'petal_width'], data.loc[0, 'species']],
[data.loc[50, 'sepal_length'], data.loc[50, 'sepal_width'], data.loc[50, 'petal_length'], data.loc[50, 'petal_width'], data.loc[50, 'species']],
[data.loc[100, 'sepal_length'], data.loc[100, 'sepal_width'], data.loc[100, 'petal_length'], data.loc[100, 'petal_width'], data.loc[100, 'species']]
]
data_to_view = pandas.DataFrame(data_to_view, columns=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
data_to_view
# + id="j6FEpBQM3673" colab_type="code" outputId="5c0f457a-bab0-4532-ef8f-e66964a4c169" colab={"base_uri": "https://localhost:8080/", "height": 142}
data.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Target']
data['Target'] = LabelEncoder().fit_transform(data['Target'].values)
data_to_view = [
[data.loc[0, 'Sepal_Length'], data.loc[0, 'Sepal_Width'], data.loc[0, 'Petal_Length'], data.loc[0, 'Petal_Width'], data.loc[0, 'Target']],
[data.loc[50, 'Sepal_Length'], data.loc[50, 'Sepal_Width'], data.loc[50, 'Petal_Length'], data.loc[50, 'Petal_Width'], data.loc[50, 'Target']],
[data.loc[100, 'Sepal_Length'], data.loc[100, 'Sepal_Width'], data.loc[100, 'Petal_Length'], data.loc[100, 'Petal_Width'], data.loc[100, 'Target']]
]
data_to_view = pandas.DataFrame(data_to_view, columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Target'])
data_to_view
# + id="Cn-SqzikGxh1" colab_type="code" outputId="8dccfbb2-be9a-4a11-841c-bd3ce371c691" colab={"base_uri": "https://localhost:8080/", "height": 261}
target_categorical = np_utils.to_categorical(data['Target'].values)
train, test = train_test_split(data, test_size=0.2, random_state=0)
print(
    '\n' + 'Categorical values' + '\n--------------------\n' + str(target_categorical[0]) + ' => Setosa\n' + str(target_categorical[50]) +
' => Versicolor\n' + str(target_categorical[100]) + ' => Virginica\n\n'
)
data_to_view = [
[data.loc[0, 'Sepal_Length'], data.loc[0, 'Sepal_Width'], data.loc[0, 'Petal_Length'], data.loc[0, 'Petal_Width'], '1 0 0'],
[data.loc[50, 'Sepal_Length'], data.loc[50, 'Sepal_Width'], data.loc[50, 'Petal_Length'], data.loc[50, 'Petal_Width'], '0 1 0'],
[data.loc[100, 'Sepal_Length'], data.loc[100, 'Sepal_Width'], data.loc[100, 'Petal_Length'], data.loc[100, 'Petal_Width'], '0 0 1']
]
data_to_view = pandas.DataFrame(data_to_view, columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Target'])
data_to_view
# + id="xsVd-XnRZyGu" colab_type="code" colab={}
y_train = np_utils.to_categorical(train['Target'].values)
y_test = np_utils.to_categorical(test['Target'].values)
# + id="sJWvACp2KkEp" colab_type="code" outputId="ba20709d-a08e-44dc-8add-3d24210ceaf7" colab={"base_uri": "https://localhost:8080/", "height": 34}
model = Sequential()
model.add(Dense(8, input_dim=4, activation='sigmoid'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train.loc[:, ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']], y_train, epochs=100, batch_size=1, verbose=0)
# + id="RdkHlxZ-YBg6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="b8c8e3fb-be3e-41e1-b269-5a9591d628a9"
loss, acc = model.evaluate(test.loc[:, ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']], y_test, verbose=0)
print('Loss:', loss, '\nSuccess/Accuracy:', acc)
| notebooks/IA/MLP_Iris.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Efficient Frontier
import pandas as pd
import numpy as np
import ashmodule as ash
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
ind = ash.get_idx_returns()
ind.head()
ind.columns
ind.shape
ind.head()
ash.drawdown(ind["Food"])["Drawdown"].plot(figsize = (12,7));
ash.var_gaussian(ind, modified = True).sort_values().plot.bar(figsize = (12,7));
(ash.sharpe_ratio(ind, 0.03,12)*100).sort_values()
ash.sharpe_ratio(ind, 0.03,12).sort_values().plot.bar(title = "Sharpe Ratio per Industry From 1926 to 2018", figsize = (12, 7),color = "Red");
ash.sharpe_ratio(ind["2000":], 0.03,12).sort_values().plot.bar(title = "Sharpe Ratio per Industry From 2000 to 2018", figsize = (12, 7),color = "Orange");
(ash.sharpe_ratio(ind["2000":], 0.03,12)*100).sort_values().plot.bar(title = "Sharpe Ratio per Industry From 2000 to 2018", figsize = (12, 7),color = "Orange");
er = ((ash.annualize_rets(ind["1995":"2000"],12))*100).sort_values()
er.plot.bar(figsize = (12,7), color = "Green", title = "Annualized Return on Industries over period 1995:2000");
cov = ind["1995":"2000"].cov()
cov
cov.shape
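# With the expected returns `er` and the covariance matrix `cov` in hand, the
# next building block of the efficient frontier is portfolio return and
# volatility for a weight vector w: R_p = w'er and sigma_p = sqrt(w' cov w).
# A minimal sketch with made-up two-asset numbers (ashmodule presumably wraps
# the same formulas):

```python
import numpy as np

toy_er = np.array([0.10, 0.06])          # hypothetical annualized returns
toy_cov = np.array([[0.040, 0.006],
                    [0.006, 0.010]])     # hypothetical covariance matrix
w = np.array([0.5, 0.5])                 # equal-weight portfolio

port_return = w @ toy_er                 # w' er
port_vol = np.sqrt(w @ toy_cov @ w)      # sqrt(w' cov w)
print(round(port_return, 4), round(port_vol, 4))  # 0.08 0.1245
```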
| Introduction to Portfolio Construction and Analysis with Python/W2/Efficient Frontier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3 (built-in)
# language: python
# name: python3
# ---
# Back to the main [Index](../index.ipynb)
# ### Model loading from BioModels
#
# Models can be easily retrieved from BioModels using the `loadSBMLModel` function in Tellurium. This example uses a download URL to directly load `BIOMD0000000010`.
# +
from __future__ import print_function
import tellurium as te
te.setDefaultPlottingEngine("matplotlib")
# %matplotlib inline
# Load model from biomodels (may not work with https).
r = te.loadSBMLModel("https://www.ebi.ac.uk/biomodels-main/download?mid=BIOMD0000000010")
result = r.simulate(0, 3000, 5000)
r.plot(result)
# + outputHidden=false inputHidden=false
| examples/notebooks/core/model_modelFromBioModels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.0 64-bit (''3dcv'': conda)'
# metadata:
# interpreter:
# hash: 5bbec2c175cdc645bdcaa7c23b1994df206e8fec5e7a0d34dbfaad9347bbf153
# name: python3
# ---
# # Creating RGB-D Dataset Directory
# Creates an appropriately structured dataset directory of cropped images.
#
# Methodology:
# - crop middle 200x200 pixels of rgb, depth, and mask images (out of 480x640)
# - for each category/class reserve one object instance for the test dataset, and the rest for training
# +
import os
import pathlib
import utils
from tqdm import tqdm
import cv2 as cv
import matplotlib.pyplot as plt
# -
rgbd_data_path = 'C:/Users/awnya/Documents/Projects/RGBD Object Classification/full_data/extracted/rgbd-dataset'
rgbd_out_path = 'C:/Users/awnya/Documents/Projects/RGBD Object Classification/RGBD_dataset'
non_rgb = ['mask', 'depth']
def is_rgb_im(file):
is_img = file.suffix == '.png'
is_rgb = not any([x in file.name for x in non_rgb])
return is_img and is_rgb
def get_files_in_dir(path):
'''gets list of all files in a directory given by a pathlib Path. (recursive)'''
files = []
for entry in path.iterdir():
if entry.is_file():
files.append(entry)
elif entry.is_dir():
files += get_files_in_dir(entry)
return files
label_path = {dir_.name: dir_ for dir_ in pathlib.Path(rgbd_data_path).iterdir() if dir_.is_dir()}
label_path
# +
crop_size_y, crop_size_x = 200, 200
full_size_y, full_size_x, = (480, 640)
x = (full_size_x - crop_size_x)//2
y = (full_size_y - crop_size_y)//2
def process_image(image):
# crop
processed_img = image[y:y+crop_size_y, x:x+crop_size_x]
return processed_img
# +
def load_and_process(rgb_img_path):
rgb_path = str(rgb_img_path)
depth_path = rgb_path[:-4] + '_depth.png'
mask_path = rgb_path[:-4] + '_mask.png'
bgr_image = cv.imread(rgb_path, cv.IMREAD_UNCHANGED)
processed_bgr = process_image(bgr_image)
depth_image = cv.imread(depth_path, cv.IMREAD_UNCHANGED)
processed_depth = process_image(depth_image)
mask_image = cv.imread(mask_path, cv.IMREAD_UNCHANGED)
processed_mask = process_image(mask_image)
return processed_bgr, processed_depth, processed_mask
def write_bgr_depth_mask(bgr_img, depth_img, mask_img, out_path, train_test, label, name_stem):
cv.imwrite(f'{out_path}/{train_test}/{label}/{name_stem}.png', bgr_img)
cv.imwrite(f'{out_path}/{train_test}/{label}/{name_stem}_depth.png', depth_img)
cv.imwrite(f'{out_path}/{train_test}/{label}/{name_stem}_mask.png', mask_img)
# +
# # some images don't have a corresponding depth or mask image. delete them
# def depth_and_mask_exist(rgb_img_path):
# rgb_path = str(rgb_img_path)
# depth_path = rgb_path[:-4] + '_depth.png'
# mask_path = rgb_path[:-4] + '_mask.png'
# depth_exists = os.path.exists(depth_path)
# mask_exists = os.path.exists(mask_path)
# return depth_exists and mask_exists
# incomplete_imgs = [rgb_path for rgb_path in tqdm(rgb_imgs) if not depth_and_mask_exist(rgb_path)]
# print(f'there are {len(incomplete_imgs)} incomplete images to be deleted')
# for incomplete_img in incomplete_imgs:
# os.remove(str(incomplete_img))
# rgb_imgs = [file for file in get_files_in_dir(pathlib.Path(rgbd_data_path)) if is_rgb_im(file)]
# incomplete_imgs = [rgb_path for rgb_path in rgb_imgs if not depth_and_mask_exist(rgb_path)]
# print(f'there are {len(incomplete_imgs)} incomplete images left')
# -
for label, path in tqdm(label_path.items()):
# partition one object for testing, and rest for training
subdirs = [obj for obj in path.iterdir() if obj.is_dir()]
train_dirs = subdirs[:-1]
test_dir = subdirs[-1]
# make train directory for label if one doesn't exist
if not os.path.isdir(f'{rgbd_out_path}/train/{label}'):
os.mkdir(f'{rgbd_out_path}/train/{label}')
# get training rgb image files for this label in the RGB-D dataset directory
rgb_imgs_train = []
for sub_dir in train_dirs:
rgb_imgs_train += [file for file in get_files_in_dir(sub_dir) if is_rgb_im(file)]
# process rgb image files and write to RGB dataset train directory under their label subdirectory
for rgb_img_path in rgb_imgs_train:
        processed_bgr, processed_depth, processed_mask = load_and_process(rgb_img_path)
        write_bgr_depth_mask(processed_bgr, processed_depth, processed_mask, rgbd_out_path, 'train', label, rgb_img_path.stem)
# make test directory for label if one doesn't exist
if not os.path.isdir(f'{rgbd_out_path}/test/{label}'):
os.mkdir(f'{rgbd_out_path}/test/{label}')
# get test rgb image files for this label in the RGB-D dataset directory
rgb_imgs_test = [file for file in get_files_in_dir(test_dir) if is_rgb_im(file)]
# process rgb image files and write to RGB dataset test directory under their label subdirectory
for rgb_img_path in rgb_imgs_test:
        processed_bgr, processed_depth, processed_mask = load_and_process(rgb_img_path)
        write_bgr_depth_mask(processed_bgr, processed_depth, processed_mask, rgbd_out_path, 'test', label, rgb_img_path.stem)
| preprocessing/create_RGBD_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv38twttr
# language: python
# name: venv38twttr
# ---
# +
import pandas as pd
import numpy as np
import torch
from tqdm.notebook import tqdm
from torch.utils.data import TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from matplotlib import pyplot as plt
import seaborn as sns
import sklearn
from sklearn.metrics import classification_report, confusion_matrix
# -
# # Get data
df = pd.read_csv('./../../../labeledTweets/allLabeledTweets.csv')
df = df[['id', 'message', 'label']]
df = df.drop_duplicates()
print(df.shape[0])
df.head()
df['label'].value_counts()
# +
newLine ="\\n|\\r"
urls = '(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]+\.[^\s]{2,}|www\.[a-zA-Z0-9]+\.[^\s]{2,})'
numbers = '\d+((\.|\-)\d+)?'
mentions = '\B\@([\w\-]+)'
hashtag = '#'
whitespaces = '\s+'
leadTrailWhitespace = '^\s+|\s+?$'
df['clean_message'] = df['message']
df['clean_message'] = df['clean_message'].str.replace(newLine,' ',regex=True)
df['clean_message'] = df['clean_message'].str.replace(urls,' URL ',regex=True)
df['clean_message'] = df['clean_message'].str.replace(mentions,' MENTION ',regex=True)
df['clean_message'] = df['clean_message'].str.replace(numbers,' NMBR ',regex=True)
df['clean_message'] = df['clean_message'].str.replace(hashtag,' ',regex=True)
df['clean_message'] = df['clean_message'].str.replace(whitespaces,' ',regex=True)
df['clean_message'] = df['clean_message'].str.replace(leadTrailWhitespace,'',regex=True)
df.head()
# -
# # Train, validate split (balanced)
# +
df_0 = df[df['label']==0]
df_1 = df[df['label']==1]
df_2 = df[df['label']==2]
trainLabelSize = round(df_1.shape[0]*0.85)
trainLabelSize
# +
df_0 = df_0.sample(trainLabelSize, random_state=42)
df_1 = df_1.sample(trainLabelSize, random_state=42)
df_2 = df_2.sample(trainLabelSize, random_state=42)
df_train = pd.concat([df_0, df_1, df_2])
# Shuffle rows
df_train = sklearn.utils.shuffle(df_train, random_state=42)
df_train['label'].value_counts()
# -
df_val = df.merge(df_train, on=['id', 'message', 'label', 'clean_message'], how='left', indicator=True)
df_val = df_val[df_val['_merge']=='left_only']
df_val = df_val[['id', 'message', 'label', 'clean_message']]
df_val['label'].value_counts()
# # Tokenizer "sentence-transformers/LaBSE"
tokenizer = AutoTokenizer.from_pretrained("./../labse_bert_model", do_lower_case=False)
# ## Find popular UNK tokens
# +
unk_tokens = []
for message in df.clean_message.values:
list_of_space_separated_pieces = message.strip().split()
ids = [tokenizer(piece, add_special_tokens=False)["input_ids"] for piece in list_of_space_separated_pieces]
unk_indices = [i for i, encoded in enumerate(ids) if tokenizer.unk_token_id in encoded]
unknown_strings = [piece for i, piece in enumerate(list_of_space_separated_pieces) if i in unk_indices]
for unk_str in unknown_strings:
unk_tokens.append(unk_str)
import collections
counter=collections.Counter(unk_tokens)
print(counter.most_common(100))
most_common_values= [word for word, word_count in counter.most_common(100)]
# -
tokenizer.add_tokens(most_common_values, special_tokens=True)
# ### Find max length for tokenizer
# +
token_lens = []
for txt in list(df.clean_message.values):
tokens = tokenizer.encode(txt, max_length=512, truncation=True)
token_lens.append(len(tokens))
max_length = max(token_lens)
max_length
# -
# ### Encode messages
# +
encoded_data_train = tokenizer.batch_encode_plus(
df_train["clean_message"].values.tolist(),
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
truncation=True,
max_length=max_length,
return_tensors='pt'
)
encoded_data_val = tokenizer.batch_encode_plus(
df_val["clean_message"].values.tolist(),
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
truncation=True,
max_length=max_length,
return_tensors='pt'
)
input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(df_train.label.values)
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(df_val.label.values)
dataset_train = TensorDataset(input_ids_train, attention_masks_train, labels_train)
dataset_val = TensorDataset(input_ids_val, attention_masks_val, labels_val)
len(dataset_train), len(dataset_val)
# -
# # Model "LaBSE" pytorch
model = AutoModelForSequenceClassification.from_pretrained("./../labse_bert_model",
num_labels=3,
output_attentions=False,
output_hidden_states=False)
model.resize_token_embeddings(len(tokenizer))
# +
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
batch_size = 32
dataloader_train = DataLoader(dataset_train, sampler=RandomSampler(dataset_train), batch_size=batch_size)
dataloader_validation = DataLoader(dataset_val, sampler=SequentialSampler(dataset_val), batch_size=batch_size)
# +
from transformers import get_linear_schedule_with_warmup
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, eps=1e-8)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
# -
epochs = 5
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=len(dataloader_train)*epochs)
# +
# Function to measure weighted F1
from sklearn.metrics import f1_score
def f1_score_func(preds, labels):
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return f1_score(labels_flat, preds_flat, average='weighted')
# +
import random
seed_val = 17
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device = torch.device('cpu')
model.to(device)
print(device)
# -
# Function to evaluate model. Returns average validation loss, predictions, true values
def evaluate(dataloader_val):
model.eval()
loss_val_total = 0
predictions, true_vals = [], []
progress_bar = tqdm(dataloader_val, desc='Validating:', leave=False, disable=False)
for batch in progress_bar:
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0], 'attention_mask': batch[1], 'labels': batch[2]}
with torch.no_grad():
outputs = model(**inputs)
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item()
logits = logits.detach().cpu().numpy()
label_ids = inputs['labels'].cpu().numpy()
predictions.append(logits)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader_val)
predictions = np.concatenate(predictions, axis=0)
true_vals = np.concatenate(true_vals, axis=0)
return loss_val_avg, predictions, true_vals
# # Evaluate untrained model
# +
_, predictions, true_vals = evaluate(dataloader_validation)
from sklearn.metrics import classification_report, confusion_matrix
preds_flat = np.argmax(predictions, axis=1).flatten()
print(classification_report(true_vals, preds_flat))
print(f1_score_func(predictions, true_vals))
pd.DataFrame(confusion_matrix(true_vals, preds_flat),
index = [['actual', 'actual', 'actual'], ['neutral', 'positive', 'negative']],
columns = [['predicted', 'predicted', 'predicted'], ['neutral', 'positive', 'negative']])
# -
# # Train
for epoch in tqdm(range(1, epochs+1)):
model.train()
loss_train_total = 0
progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
for batch in progress_bar:
model.zero_grad()
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0], 'attention_mask': batch[1], 'labels': batch[2]}
outputs = model(**inputs)
loss = outputs[0]
loss_train_total += loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
        progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item())})
torch.save(model.state_dict(), f'modelsUNK/finetuned_LaBSE_epoch_{epoch}.model')
tqdm.write(f'\nEpoch {epoch}')
loss_train_avg = loss_train_total/len(dataloader_train)
tqdm.write(f'Training loss: {loss_train_avg}')
val_loss, predictions, true_vals = evaluate(dataloader_validation)
val_f1 = f1_score_func(predictions, true_vals)
tqdm.write(f'Validation loss: {val_loss}')
tqdm.write(f'F1 Score (Weighted): {val_f1}')
preds_flat = np.argmax(predictions, axis=1).flatten()
print('Classification report:')
print(classification_report(true_vals, preds_flat))
print('Confusion matrix:')
print(pd.DataFrame(confusion_matrix(true_vals, preds_flat),
index = [['actual', 'actual', 'actual'], ['neutral', 'positive', 'negative']],
columns = [['predicted', 'predicted', 'predicted'], ['neutral', 'positive', 'negative']]))
# # Evaluate best model
# +
# Replace X below with the number of the best-performing epoch before loading
model.load_state_dict(torch.load('modelsBase/finetuned_BERT_epoch_X.model', map_location=torch.device('cpu')))
_, predictions, true_vals = evaluate(dataloader_validation)
preds_flat = np.argmax(predictions, axis=1).flatten()
# -
print(f1_score_func(predictions, true_vals))
print(classification_report(true_vals, preds_flat))
pd.DataFrame(confusion_matrix(true_vals, preds_flat),
index = [['actual', 'actual', 'actual'], ['neutral', 'positive', 'negative']],
columns = [['predicted', 'predicted', 'predicted'], ['neutral', 'positive', 'negative']])
| LaBSE/addUNKtokens/addUNKtokens.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install --upgrade pip
# !pip install --upgrade tensorflow
# !pip install -q --upgrade tensorflow-datasets
# !pip install -q tensorflow-recommenders
# +
import os
import tempfile
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
tf.__version__
# -
import warnings
warnings.filterwarnings(action='once')
# +
ratings = tfds.load("movielens/100k-ratings", split="train")
#movies = tfds.load("movielens/100k-movies", split="train")
#ratings = ratings.filter(lambda x: x['user_rating'] !=3)
ratings = ratings.map(lambda x: {
"movie_id": x["movie_id"],
"user_id": x["user_id"],
"user_rating": x["user_rating"],
"user_gender": int(x["user_gender"]),
"user_zip_code": x["user_zip_code"],
"user_occupation_text": x["user_occupation_text"],
"bucketized_user_age": int(x["bucketized_user_age"]),
})
# +
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
# +
feature_names = ["movie_id", "user_id", "user_gender", "user_zip_code",
"user_occupation_text", "bucketized_user_age"]
vocabularies = {}
for feature_name in feature_names:
vocab = ratings.batch(1_000_000).map(lambda x: x[feature_name])
vocabularies[feature_name] = np.unique(np.concatenate(list(vocab)))
# -
class DCN(tfrs.Model):
def __init__(self, use_cross_layer, deep_layer_sizes, projection_dim=None):
super().__init__()
self.embedding_dimension = 32
str_features = ["movie_id", "user_id" ,"user_zip_code", "user_occupation_text" ]
int_features = ["user_gender", "bucketized_user_age"]
self._all_features = str_features + int_features
self._embeddings = {}
# Compute embeddings for string features.
for feature_name in str_features:
vocabulary = vocabularies[feature_name]
self._embeddings[feature_name] = tf.keras.Sequential(
[tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=vocabulary, mask_token=None),
tf.keras.layers.Embedding(len(vocabulary) + 1,
self.embedding_dimension)
])
# Compute embeddings for int features.
for feature_name in int_features:
vocabulary = vocabularies[feature_name]
self._embeddings[feature_name] = tf.keras.Sequential(
[tf.keras.layers.experimental.preprocessing.IntegerLookup(
vocabulary=vocabulary, mask_value=None),
tf.keras.layers.Embedding(len(vocabulary) + 1,
self.embedding_dimension)
])
if use_cross_layer:
self._cross_layer = tfrs.layers.dcn.Cross(
projection_dim=projection_dim,
kernel_initializer="glorot_uniform")
else:
self._cross_layer = None
self._deep_layers = [tf.keras.layers.Dense(layer_size, activation="relu")
for layer_size in deep_layer_sizes]
self._logit_layer = tf.keras.layers.Dense(1)
self.task = tfrs.tasks.Ranking(
loss=tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError("RMSE")]
)
def call(self, features):
# Concatenate embeddings
embeddings = []
for feature_name in self._all_features:
embedding_fn = self._embeddings[feature_name]
embeddings.append(embedding_fn(features[feature_name]))
x = tf.concat(embeddings, axis=1)
# Build Cross Network
if self._cross_layer is not None:
x = self._cross_layer(x)
# Build Deep Network
for deep_layer in self._deep_layers:
x = deep_layer(x)
return self._logit_layer(x)
def compute_loss(self, features, training=False):
labels = features.pop("user_rating")
scores = self(features)
return self.task(
labels=labels,
predictions=scores,
)
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
def run_models(use_cross_layer, deep_layer_sizes, projection_dim=None, num_runs=5):
models = []
rmses = []
for i in range(num_runs):
model = DCN(use_cross_layer=use_cross_layer,
deep_layer_sizes=deep_layer_sizes,
projection_dim=projection_dim)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate))
models.append(model)
model.fit(cached_train, epochs=epochs, verbose=False)
metrics = model.evaluate(cached_test, return_dict=True)
rmses.append(metrics["RMSE"])
mean, stdv = np.average(rmses), np.std(rmses)
return {"model": models, "mean": mean, "stdv": stdv}
epochs = 8
learning_rate = 0.01
np.seterr(divide = 'ignore')
dcn_result = run_models(use_cross_layer=True,
deep_layer_sizes=[192, 192])
dcn_lr_result = run_models(use_cross_layer=True,
projection_dim=20,
deep_layer_sizes=[192, 192])
dnn_result = run_models(use_cross_layer=False,
deep_layer_sizes=[192, 192, 192])
print("DCN RMSE mean: {:.4f}, stdv: {:.4f}".format(
dcn_result["mean"], dcn_result["stdv"]))
print("DCN (low-rank) RMSE mean: {:.4f}, stdv: {:.4f}".format(
dcn_lr_result["mean"], dcn_lr_result["stdv"]))
print("DNN RMSE mean: {:.4f}, stdv: {:.4f}".format(
dnn_result["mean"], dnn_result["stdv"]))
# +
model = dcn_result["model"][4]
mat = model._cross_layer._dense.kernel
features = model._all_features
block_norm = np.ones([len(features), len(features)])
dim = model.embedding_dimension
# Compute the norms of the blocks.
for i in range(len(features)):
for j in range(len(features)):
block = mat[i * dim:(i + 1) * dim,
j * dim:(j + 1) * dim]
block_norm[i,j] = np.linalg.norm(block, ord="fro")
plt.figure(figsize=(9,9))
im = plt.matshow(block_norm, cmap=plt.cm.Blues)
ax = plt.gca()
#divider = make_axes_locatable(plt.gca())
#cax = divider.append_axes("right", size="5%", pad=0.05)
#plt.colorbar(im, cax=cax)
#cax.tick_params(labelsize=10)
_ = ax.set_xticklabels([""] + features, rotation=45, ha="left", fontsize=10)
_ = ax.set_yticklabels([""] + features, fontsize=10)
# -
| recomendation-systems/pre-processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="ZiWwxVXn9H1A"
# !pip install -q git+https://github.com/moritztng/prism.git
from IPython.display import Image
# + id="yEMoMEad9OEY"
# change "content.jpg" and "style.jpg" to the names of the files to use (the order matters)
# the --preserve_color parameter can be either none or style
# !style-transfer "content.jpg" "style.jpg" --iter 500 --lr 0.25 --content_weight 1 --style_weight 1 --preserve_color none --avg_pool --use_amp --area 500
Image('artwork.png')
| test_colab/Style_transfer-micro-astro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
path = os.path.abspath(os.path.join('..','..'))
import sys
sys.path.append(path)
from reservoirpy.pvtpy import black_oil as bl
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# # Create a chromatography composition
# +
composition = {
'mole_fraction':[0.02,0.01,0.85,0.04,0.03,0.03,0.02]
}
x=bl.chromatography(composition, compound=['carbon-dioxide','nitrogen','methane','ethane','propane','isobutane','n-butane'])
# -
x
# ## Estimate some properties
# ### Apparent Molecular Weight
#
# The apparent molecular weight (ma) is calculated by summing the product of the mole fraction and the molecular weight of each component in the chromatography
x.ma
# ### Gas specific gravity
#
# The gas specific gravity is calculated by dividing the **ma** by the molecular weight of air (≈28.96)
x.gas_sg
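# As a sanity check, both formulas can be reproduced by hand from the composition above. This is a minimal sketch using standard approximate molecular weights, not the library's internal values:

```python
# Hedged sketch of the two formulas above, computed by hand.
# Molecular weights (lb/lb-mol) are standard approximate values;
# the order matches the chromatography defined earlier.
mole_fraction = [0.02, 0.01, 0.85, 0.04, 0.03, 0.03, 0.02]
mw = [44.01, 28.01, 16.04, 30.07, 44.10, 58.12, 58.12]  # CO2, N2, C1, C2, C3, iC4, nC4
ma = sum(y * m for y, m in zip(mole_fraction, mw))      # apparent molecular weight
gas_sg = ma / 28.96                                     # divide by the molecular weight of air
```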
# ### Pseudo critical properties
#
# The pseudo-critical properties are calculated by summing the product of the mole fraction and the critical properties (pressure and temperature). By default, the properties are corrected for non-hydrocarbon components with the **wichert-aziz** correlation.
x.get_pseudo_critical_properties()
x.get_pseudo_critical_properties(correct=False)
x.get_pseudo_critical_properties(correct_method='carr_kobayashi_burrows')
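# The uncorrected mixing rule (the `correct=False` case above) can be sketched by hand. The critical pressures (psia) and temperatures (°R) below are approximate textbook values, not the library's internal table:

```python
# Hedged sketch of Kay's mixing rule: pseudo-critical properties are
# mole-fraction-weighted sums of the component critical properties.
mole_fraction = [0.02, 0.01, 0.85, 0.04, 0.03, 0.03, 0.02]
pc = [1071.0, 493.0, 666.4, 706.5, 616.0, 527.9, 550.6]  # psia: CO2, N2, C1, C2, C3, iC4, nC4
tc = [547.6, 227.3, 343.0, 549.6, 665.7, 734.1, 765.3]   # degR, same order
ppc = sum(y * p for y, p in zip(mole_fraction, pc))      # pseudo-critical pressure
tpc = sum(y * t for y, t in zip(mole_fraction, tc))      # pseudo-critical temperature
```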
# ### Get the compressibility factor of gas
#
# Estimate the compressibility factor by estimating the critical properties and applying the default correlation method **papay**
x.get_z(p=3000, t=180)
p_range = np.linspace(1000,5000,10)
x.get_z(p=p_range, t=180)
# ### Get the gas density in lb/ft3
#
# Estimate the gas density by estimating the **ma** and the **z** factor, and finally applying the equation of state for **real gases**
x.get_rhog(p=3000,t=180)
x.get_rhog(p=3000,t=180, rhog_method='ideal_gas')
x.get_rhog(p=np.linspace(1000,5000,10),t=180,rhog_method='real_gas')
# ### Estimate the specific volume of gas
#
# Get the specific volume by taking the inverse of the density
x.get_sv(p=3000,t=180, rhog_method='ideal_gas')
x.get_sv(p=3000,t=180, rhog_method='real_gas')
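# The real-gas formulas can be cross-checked by hand: rho = p*ma/(z*R*T) with R = 10.73 psia·ft3/(lb-mol·°R) and T in °R, and the specific volume is its inverse. The ma and z values below are assumed illustrative numbers, not outputs of the library:

```python
# Hedged sketch of the real-gas density and specific-volume formulas.
p = 3000.0         # pressure, psia
t_f = 180.0        # temperature, deg F
ma = 20.2          # apparent molecular weight, lb/lb-mol (assumed)
z = 0.9            # compressibility factor (assumed, not computed here)
R = 10.73          # gas constant, psia*ft3/(lb-mol*degR)
t_r = t_f + 460.0  # temperature converted to deg Rankine
rho = p * ma / (z * R * t_r)  # gas density, lb/ft3
sv = 1.0 / rho                # specific volume, ft3/lb
```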
# # Create the Gas object
fm = 'formation_1'
t= 210
chrom = x
g = bl.gas(formation=fm, temp=t, chromatography=chrom)
g.pseudo_critical_properties()
g.pvt_from_correlations()
g.pvt
# +
## Gas without chromatography
# -
gas_t = bl.gas(formation=fm, temp=t, sg=0.68)
gas_t.pvt_from_correlations()
gas_t.pvt
gas_t.pvt.interpolate(4000)
gas_t.pvt.interpolate(np.linspace(40,3000,10))
from reservoirpy.wellproductivitypy import pi
# +
dt = gas_t.pvt.copy()
dt['ps'] = (2*dt.index) / (dt['z']*dt['mug'])
dt.plot(y='ps')
# +
df,aof = pi.gas_inflow_curve(2000,6.4e-4,gas_t.pvt)
plt.plot(df['q'],df['p'])
# +
# Example taken
# +
pvt_data = np.array([
[0, 0.01270, 1.000],
[400, 0.01286, 0.937],
[1200, 0.01530, 0.832],
[1600, 0.01680, 0.794],
[2000, 0.01840,0.770],
[3200, 0.02340, 0.797],
[3600, 0.02500,0.827],
[4000, 0.02660, 0.860]
])
pvt_pi=bl.pvt(pvt_data, columns=['pressure','mug','z'])
pvt_pi
# +
j_gas = pi.gas_j(h=6,k=100,re=1000,rw=0.75,temp=122,s=0)
print(f"J Gas: {j_gas}")
df,aof = pi.gas_inflow_curve(1000,3e-5,pvt_pi,n=10)
plt.plot(df['q'],df['p'])
plt.grid()
# -
df
| docs/examples/old_examples/gas_pvt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Programming Exercise 8:
# # Anomaly Detection and Recommender Systems
#
#
# ## Introduction
#
# In this exercise, you will implement the anomaly detection algorithm and
# apply it to detect failing servers on a network. In the second part, you will
# use collaborative filtering to build a recommender system for movies. Before
# starting on the programming exercise, we strongly recommend watching the
# video lectures and completing the review questions for the associated topics.
#
# All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
#
# Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
# +
# used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Plotting library
from matplotlib import pyplot
import matplotlib as mpl
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
# %matplotlib inline
# -
# ## Submission and Grading
#
#
# After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.
#
#
# | Section | Part | Submitted Function | Points |
# | :- |:- |:- | :-: |
# | 1 | [Estimate Gaussian Parameters](#section1) | [`estimateGaussian`](#estimateGaussian) | 15 |
# | 2 | [Select Threshold](#section2) | [`selectThreshold`](#selectThreshold) | 15 |
# | 3 | [Collaborative Filtering Cost](#section3) | [`cofiCostFunc`](#cofiCostFunc) | 20 |
# | 4 | [Collaborative Filtering Gradient](#section4) | [`cofiCostFunc`](#cofiCostFunc) | 30 |
# | 5 | [Regularized Cost](#section5) | [`cofiCostFunc`](#cofiCostFunc) | 10 |
# | 6 | [Gradient with regularization](#section6) | [`cofiCostFunc`](#cofiCostFunc) | 10 |
# | | Total Points | |100 |
#
#
#
# You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
#
# <div class="alert alert-block alert-warning">
# At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
# </div>
# ## 1 Anomaly Detection
#
# In this exercise, you will implement an anomaly detection algorithm to detect anomalous behavior in server computers. The features measure the throughput (mb/s) and latency (ms) of response of each server. While your servers were operating, you collected $m = 307$ examples of how they were behaving, and thus have an unlabeled dataset $\{x^{(1)}, \dots, x^{(m)}\}$. You suspect that the vast majority of these examples are “normal” (non-anomalous) examples of the servers operating normally, but there might also be some examples of servers acting anomalously within this dataset.
#
# You will use a Gaussian model to detect anomalous examples in your dataset. You will first start on a 2D dataset that will allow you to visualize what the algorithm is doing. On that dataset you will fit a Gaussian distribution and then find values that have very low probability and hence can be considered anomalies. After that, you will apply the anomaly detection algorithm to a larger dataset with many dimensions.
#
# We start this exercise by using a small dataset that is easy to visualize. Our example case consists of 2 network server statistics across several machines: the latency and throughput of each machine.
# +
# The following command loads the dataset.
data = loadmat(os.path.join('Data', 'ex8data1.mat'))
X, Xval, yval = data['X'], data['Xval'], data['yval'][:, 0]
# Visualize the example dataset
pyplot.plot(X[:, 0], X[:, 1], 'bx', mew=2, mec='k', ms=6)
pyplot.axis([0, 30, 0, 30])
pyplot.xlabel('Latency (ms)')
pyplot.ylabel('Throughput (mb/s)')
pass
# -
# ### 1.1 Gaussian distribution
#
# To perform anomaly detection, you will first need to fit a model to the data's distribution. Given a training set $\{x^{(1)}, \dots, x^{(m)} \}$ (where $x^{(i)} \in \mathbb{R}^n$ ), you want to estimate the Gaussian distribution for each of the features $x_i$ . For each feature $i = 1 \dots n$, you need to find parameters $\mu_i$ and $\sigma_i^2$ that fit the data in the $i^{th}$ dimension $\{ x_i^{(1)}, \dots, x_i^{(m)} \}$ (the $i^{th}$ dimension of each example).
#
# The Gaussian distribution is given by
#
# $$ p\left( x; \mu, \sigma^2 \right) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}},$$
# where $\mu$ is the mean and $\sigma^2$ is the variance.
#
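# Under the per-feature independence assumption used in this exercise, the density of an example is the product of univariate Gaussian densities across its features. A minimal numpy sketch (distinct from the provided `utils.multivariateGaussian`) could look like:

```python
import numpy as np

def gaussian_pdf(X, mu, sigma2):
    """Product of per-feature univariate Gaussian densities for each row of X."""
    coef = 1.0 / np.sqrt(2 * np.pi * sigma2)
    p = coef * np.exp(-((X - mu) ** 2) / (2 * sigma2))
    return p.prod(axis=1)
```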
# <a id="section1"></a>
# ### 1.2 Estimating parameters for a Gaussian
#
# You can estimate the parameters $\left( \mu_i, \sigma_i^2 \right)$, of the $i^{th}$ feature by using the following equations. To estimate the mean, you will use:
#
# $$ \mu_i = \frac{1}{m} \sum_{j=1}^m x_i^{(j)},$$
#
# and for the variance you will use:
#
# $$ \sigma_i^2 = \frac{1}{m} \sum_{j=1}^m \left( x_i^{(j)} - \mu_i \right)^2.$$
#
# Your task is to complete the code in the function `estimateGaussian`. This function takes as input the data matrix `X` and should output an n-dimension vector `mu` that holds the mean for each of the $n$ features and another n-dimension vector `sigma2` that holds the variances of each of the features. You can implement this
# using a for-loop over every feature and every training example (though a vectorized implementation might be more efficient; feel free to use a vectorized implementation if you prefer).
# <a id="estimateGaussian"></a>
def estimateGaussian(X):
"""
This function estimates the parameters of a Gaussian distribution
using a provided dataset.
Parameters
----------
    X : array_like
        The dataset of shape (m x n) with each n-dimensional
        data point in one row, for a total of m data points.
Returns
-------
mu : array_like
A vector of shape (n,) containing the means of each dimension.
sigma2 : array_like
A vector of shape (n,) containing the computed
variances of each dimension.
Instructions
------------
Compute the mean of the data and the variances
In particular, mu[i] should contain the mean of
the data for the i-th feature and sigma2[i]
should contain variance of the i-th feature.
"""
# Useful variables
m, n = X.shape
# You should return these values correctly
mu = np.zeros(n)
sigma2 = np.zeros(n)
# ====================== YOUR CODE HERE ======================
mu = X.mean(axis=0)
sigma2 = X.var(axis=0)
# =============================================================
return mu, sigma2
# Once you have completed the code in `estimateGaussian`, the next cell will visualize the contours of the fitted Gaussian distribution. You should get a plot similar to the figure below.
#
# 
#
# From your plot, you can see that most of the examples are in the region with the highest probability, while
# the anomalous examples are in the regions with lower probabilities.
#
# To do the visualization of the Gaussian fit, we first estimate the parameters of our assumed Gaussian distribution, then compute the probabilities for each of the points and then visualize both the overall distribution and where each of the points falls in terms of that distribution.
# +
# Estimate mu and sigma2
mu, sigma2 = estimateGaussian(X)
# Returns the density of the multivariate normal at each data point (row)
# of X
p = utils.multivariateGaussian(X, mu, sigma2)
# Visualize the fit
utils.visualizeFit(X, mu, sigma2)
pyplot.xlabel('Latency (ms)')
pyplot.ylabel('Throughput (mb/s)')
pyplot.tight_layout()
# -
# *You should now submit your solutions.*
grader[1] = estimateGaussian
grader.grade()
# <a id="section2"></a>
# ### 1.3 Selecting the threshold, $\varepsilon$
#
# Now that you have estimated the Gaussian parameters, you can investigate which examples have a very high probability given this distribution and which examples have a very low probability. The low probability examples are more likely to be the anomalies in our dataset. One way to determine which examples are anomalies is to select a threshold based on a cross validation set. In this part of the exercise, you will implement an algorithm to select the threshold $\varepsilon$ using the $F_1$ score on a cross validation set.
#
#
# You should now complete the code for the function `selectThreshold`. For this, we will use a cross validation set $\{ (x_{cv}^{(1)}, y_{cv}^{(1)}), \dots, (x_{cv}^{(m_{cv})}, y_{cv}^{(m_{cv})})\}$, where the label $y = 1$ corresponds to an anomalous example, and $y = 0$ corresponds to a normal example. For each cross validation example, we will compute $p\left( x_{cv}^{(i)}\right)$. The vector of all of these probabilities $p\left( x_{cv}^{(1)}\right), \dots, p\left( x_{cv}^{(m_{cv})}\right)$ is passed to `selectThreshold` in the vector `pval`. The corresponding labels $y_{cv}^{(1)} , \dots , y_{cv}^{(m_{cv})}$ are passed to the same function in the vector `yval`.
#
# The function `selectThreshold` should return two values; the first is the selected threshold $\varepsilon$. If an example $x$ has a low probability $p(x) < \varepsilon$, then it is considered to be an anomaly. The function should also return the $F_1$ score, which tells you how well you are doing on finding the ground truth
# anomalies given a certain threshold. For many different values of $\varepsilon$, you will compute the resulting $F_1$ score by computing how many examples the current threshold classifies correctly and incorrectly.
#
# The $F_1$ score is computed using precision ($prec$) and recall ($rec$):
#
# $$ F_1 = \frac{2 \cdot prec \cdot rec}{prec + rec}, $$
#
# You compute precision and recall by:
#
# $$ prec = \frac{tp}{tp + fp} $$
#
# $$ rec = \frac{tp}{tp + fn} $$
#
# where:
#
# - $tp$ is the number of true positives: the ground truth label says it’s an anomaly and our algorithm correctly classified it as an anomaly.
#
# - $fp$ is the number of false positives: the ground truth label says it’s not an anomaly, but our algorithm incorrectly classified it as an anomaly.
# - $fn$ is the number of false negatives: the ground truth label says it’s an anomaly, but our algorithm incorrectly classified it as not being anomalous.
#
# In the provided code `selectThreshold`, there is already a loop that will try many different values of $\varepsilon$ and select the best $\varepsilon$ based on the $F_1$ score. You should now complete the code in `selectThreshold`. You can implement the computation of the $F_1$ score using a for-loop over all the cross
# validation examples (to compute the values $tp$, $fp$, $fn$). You should see a value for `epsilon` of about 8.99e-05.
#
# <div class="alert alert-block alert-warning">
# **Implementation Note:** In order to compute $tp$, $fp$ and $fn$, you may be able to use a vectorized implementation rather than loop over all the examples. This can be implemented by numpy's equality test
# between a vector and a single number. If you have several binary values in an n-dimensional binary vector $v \in \{0, 1\}^n$, you can find out how many values in this vector are 0 by using: np.sum(v == 0). You can also
# apply a logical and operator to such binary vectors. For instance, let `cvPredictions` be a binary vector of size equal to the number of cross validation set, where the $i^{th}$ element is 1 if your algorithm considers
# $x_{cv}^{(i)}$ an anomaly, and 0 otherwise. You can then, for example, compute the number of false positives using: `fp = np.sum((cvPredictions == 1) & (yval == 0))`.
# </div>
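# To make the note above concrete, here is a tiny worked example of the vectorized counts on made-up toy labels (illustration only; the variable names are chosen to avoid clobbering the notebook's `yval`):

```python
import numpy as np

# Toy cross-validation labels and predictions (illustrative values only).
yval_toy = np.array([1, 0, 1, 0, 1])
preds_toy = np.array([1, 1, 0, 0, 1])

tp = np.sum((preds_toy == 1) & (yval_toy == 1))  # true positives
fp = np.sum((preds_toy == 1) & (yval_toy == 0))  # false positives
fn = np.sum((preds_toy == 0) & (yval_toy == 1))  # false negatives

prec = tp / (tp + fp)
rec = tp / (tp + fn)
F1 = 2 * prec * rec / (prec + rec)
```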
# <a id="selectThreshold"></a>
def selectThreshold(yval, pval):
"""
Find the best threshold (epsilon) to use for selecting outliers based
on the results from a validation set and the ground truth.
Parameters
----------
yval : array_like
The ground truth labels of shape (m, ).
pval : array_like
        The precomputed vector of probabilities based on the mu and sigma2 parameters. Its shape is also (m, ).
Returns
-------
    bestEpsilon : float
        The selected threshold value.
bestF1 : float
The value for the best F1 score.
Instructions
------------
Compute the F1 score of choosing epsilon as the threshold and place the
value in F1. The code at the end of the loop will compare the
F1 score for this choice of epsilon and set it to be the best epsilon if
it is better than the current choice of epsilon.
Notes
-----
You can use predictions = (pval < epsilon) to get a binary vector
of 0's and 1's of the outlier predictions
"""
bestEpsilon = 0
bestF1 = 0
F1 = 0
for epsilon in np.linspace(1.01*min(pval), max(pval), 1000):
# ====================== YOUR CODE HERE =======================
preds = pval < epsilon
tp = np.sum(np.logical_and(preds == 1, yval == 1)).astype(float)
fp = np.sum(np.logical_and(preds == 1, yval == 0)).astype(float)
fn = np.sum(np.logical_and(preds == 0, yval == 1)).astype(float)
        # Skip thresholds where precision or recall is undefined
        if tp + fp == 0 or tp + fn == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        if precision + recall == 0:
            continue
        f1 = (2 * precision * recall) / (precision + recall)
if f1 > bestF1:
bestF1 = f1
bestEpsilon = epsilon
# =============================================================
if F1 > bestF1:
bestF1 = F1
bestEpsilon = epsilon
return bestEpsilon, bestF1
# Once you have completed the code in `selectThreshold`, the next cell will run your anomaly detection code and circle the anomalies in the plot.
# +
pval = utils.multivariateGaussian(Xval, mu, sigma2)
epsilon, F1 = selectThreshold(yval, pval)
print('Best epsilon found using cross-validation: %.2e' % epsilon)
print('Best F1 on Cross Validation Set: %f' % F1)
print(' (you should see a value epsilon of about 8.99e-05)')
print(' (you should see a Best F1 value of 0.875000)')
# Find the outliers in the training set and plot them
outliers = p < epsilon
# Visualize the fit
utils.visualizeFit(X, mu, sigma2)
pyplot.xlabel('Latency (ms)')
pyplot.ylabel('Throughput (mb/s)')
pyplot.tight_layout()
# Draw a red circle around those outliers
pyplot.plot(X[outliers, 0], X[outliers, 1], 'ro', ms=10, mfc='None', mew=2)
pass
# -
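# The threshold sweep inside `selectThreshold` can be sanity-checked on a tiny synthetic example (all values below are invented for illustration); on perfectly separable data, the best F1 should be 1.0:

```python
import numpy as np

# Synthetic probabilities and labels: the two anomalies (y = 1)
# have much lower density than the three normal points.
pval = np.array([0.90, 0.80, 0.70, 0.01, 0.02])
yval = np.array([0, 0, 0, 1, 1])

bestEpsilon, bestF1 = 0, 0
for epsilon in np.linspace(1.01 * pval.min(), pval.max(), 1000):
    preds = pval < epsilon
    tp = np.sum((preds == 1) & (yval == 1))
    fp = np.sum((preds == 1) & (yval == 0))
    fn = np.sum((preds == 0) & (yval == 1))
    if tp + fp == 0 or tp + fn == 0:
        continue
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    F1 = 2 * precision * recall / (precision + recall)
    if F1 > bestF1:
        bestF1, bestEpsilon = F1, epsilon

# A perfectly separating threshold exists between 0.02 and 0.70
print(bestEpsilon, bestF1)
```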
# *You should now submit your solutions.*
grader[2] = selectThreshold
grader.grade()
# ### 1.4 High dimensional dataset
#
# The next cell will run the anomaly detection algorithm you implemented on a more realistic and much harder dataset. In this dataset, each example is described by 11 features, capturing many more properties of your compute servers, but only some features indicate whether a point is an outlier. The script will use your code to estimate the Gaussian parameters ($\mu_i$ and $\sigma_i^2$), evaluate the probabilities for the training data `X` from which you estimated the Gaussian parameters, and do the same for the cross-validation set `Xval`. Finally, it will use `selectThreshold` to find the best threshold $\varepsilon$. You should see a value epsilon of about 1.38e-18, and 117 anomalies found.
# +
# Loads the second dataset. You should now have the
# variables X, Xval, yval in your environment
data = loadmat(os.path.join('Data', 'ex8data2.mat'))
X, Xval, yval = data['X'], data['Xval'], data['yval'][:, 0]
# Apply the same steps to the larger dataset
mu, sigma2 = estimateGaussian(X)
# Training set
p = utils.multivariateGaussian(X, mu, sigma2)
# Cross-validation set
pval = utils.multivariateGaussian(Xval, mu, sigma2)
# Find the best threshold
epsilon, F1 = selectThreshold(yval, pval)
print('Best epsilon found using cross-validation: %.2e' % epsilon)
print('Best F1 on Cross Validation Set : %f\n' % F1)
print(' (you should see a value epsilon of about 1.38e-18)')
print(' (you should see a Best F1 value of 0.615385)')
print('\n# Outliers found: %d' % np.sum(p < epsilon))
# -
# ## 2 Recommender Systems
#
# In this part of the exercise, you will implement the collaborative filtering learning algorithm and apply it to a dataset of movie ratings ([MovieLens 100k Dataset](https://grouplens.org/datasets/movielens/) from GroupLens Research). This dataset consists of ratings on a scale of 1 to 5. The dataset has $n_u = 943$ users, and $n_m = 1682$ movies.
#
# In the next parts of this exercise, you will implement the function `cofiCostFunc` that computes the collaborative filtering objective function and gradient. After implementing the cost function and gradient, you will use `scipy.optimize.minimize` to learn the parameters for collaborative filtering.
#
# ### 2.1 Movie ratings dataset
#
# The next cell will load the dataset `ex8_movies.mat`, providing the variables `Y` and `R`.
# The matrix `Y` (a `num_movies` $\times$ `num_users` matrix) stores the ratings $y^{(i,j)}$ (from 1 to 5). The matrix `R` is a binary-valued indicator matrix, where $R(i, j) = 1$ if user $j$ gave a rating to movie $i$, and $R(i, j) = 0$ otherwise. The objective of collaborative filtering is to predict movie ratings for the movies that users have not yet rated, that is, the entries with $R(i, j) = 0$. This will allow us to recommend the movies with the highest predicted ratings to the user.
#
# To help you understand the matrix `Y`, the following cell will compute and print the average rating for the first movie (Toy Story).
# +
# Load data
data = loadmat(os.path.join('Data', 'ex8_movies.mat'))
Y, R = data['Y'], data['R']
# Y is a 1682x943 matrix, containing ratings (1-5) of
# 1682 movies by 943 users
# R is a 1682x943 matrix, where R(i,j) = 1
# if and only if user j gave a rating to movie i
# From the matrix, we can compute statistics like average rating.
print('Average rating for movie 1 (Toy Story): %f / 5' %
np.mean(Y[0, R[0, :] == 1]))
# We can "visualize" the ratings matrix by plotting it with imshow
pyplot.figure(figsize=(8, 8))
pyplot.imshow(Y)
pyplot.ylabel('Movies')
pyplot.xlabel('Users')
pyplot.grid(False)
# -
# Throughout this part of the exercise, you will also be working with the matrices, `X` and `Theta`:
#
# $$ \text{X} =
# \begin{bmatrix}
# - \left(x^{(1)}\right)^T - \\
# - \left(x^{(2)}\right)^T - \\
# \vdots \\
# - \left(x^{(n_m)}\right)^T - \\
# \end{bmatrix}, \quad
# \text{Theta} =
# \begin{bmatrix}
# - \left(\theta^{(1)}\right)^T - \\
# - \left(\theta^{(2)}\right)^T - \\
# \vdots \\
# - \left(\theta^{(n_u)}\right)^T - \\
# \end{bmatrix}.
# $$
#
# The $i^{th}$ row of `X` corresponds to the feature vector $x^{(i)}$ for the $i^{th}$ movie, and the $j^{th}$ row of `Theta` corresponds to one parameter vector $\theta^{(j)}$, for the $j^{th}$ user. Both $x^{(i)}$ and $\theta^{(j)}$ are n-dimensional vectors. For the purposes of this exercise, you will use $n = 100$, and therefore, $x^{(i)} \in \mathbb{R}^{100}$ and $\theta^{(j)} \in \mathbb{R}^{100}$. Correspondingly, `X` is a $n_m \times 100$ matrix and `Theta` is a $n_u \times 100$ matrix.
#
# <a id="section3"></a>
# ### 2.2 Collaborative filtering learning algorithm
#
# Now, you will start implementing the collaborative filtering learning algorithm. You will start by implementing the cost function (without regularization).
#
# The collaborative filtering algorithm in the setting of movie recommendations considers a set of n-dimensional parameter vectors $x^{(1)}, \dots, x^{(n_m)}$ and $\theta^{(1)} , \dots, \theta^{(n_u)}$, where the model predicts the rating for movie $i$ by user $j$ as $y^{(i,j)} = \left( \theta^{(j)} \right)^T x^{(i)}$. Given a dataset that consists of a set of ratings produced by some users on some movies, you wish to learn the parameter vectors $x^{(1)}, \dots, x^{(n_m)}, \theta^{(1)}, \dots, \theta^{(n_u)}$ that produce the best fit (minimizes the squared error).
#
# You will complete the code in `cofiCostFunc` to compute the cost function and gradient for collaborative filtering. Note that the parameters to the function (i.e., the values that you are trying to learn) are `X` and `Theta`. In order to use an off-the-shelf minimizer such as `scipy`'s `minimize` function, the cost function has been set up to unroll the parameters into a single vector called `params`. You had previously used the same vector unrolling method in the neural networks programming exercise.
#
# #### 2.2.1 Collaborative filtering cost function
#
# The collaborative filtering cost function (without regularization) is given by
#
# $$
# J(x^{(1)}, \dots, x^{(n_m)}, \theta^{(1)}, \dots,\theta^{(n_u)}) = \frac{1}{2} \sum_{(i,j):r(i,j)=1} \left( \left(\theta^{(j)}\right)^T x^{(i)} - y^{(i,j)} \right)^2
# $$
#
# You should now modify the function `cofiCostFunc` to return this cost in the variable `J`. Note that you should be accumulating the cost for user $j$ and movie $i$ only if `R[i,j] = 1`.
#
# <div class="alert alert-block alert-warning">
# **Implementation Note**: We strongly encourage you to use a vectorized implementation to compute $J$, since it will later be called many times by `scipy`'s optimization package. As usual, it might be easiest to first write a non-vectorized implementation (to make sure you have the right answer), and then modify it to become a vectorized implementation (checking that the vectorization steps do not change your algorithm's output). To come up with a vectorized implementation, the following tip might be helpful: You can use the $R$ matrix to set selected entries to 0. For example, `R * M` will do an element-wise multiplication between `M`
# and `R`; since `R` only has elements with values either 0 or 1, this has the effect of setting the elements of `M` to 0 only when the corresponding value in `R` is 0. Hence, `np.sum(R * M)` is the sum of all the elements of `M` for which the corresponding element in `R` equals 1.
# </div>
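# As a quick illustration of the `R * M` trick (the toy matrices below are invented, not the exercise data), the vectorized cost agrees with an explicit loop over rated entries:

```python
import numpy as np

# Toy data: 3 movies, 2 users, 2 features
X = np.array([[1.0, 0.5], [0.2, 1.0], [0.8, 0.3]])
Theta = np.array([[0.4, 0.9], [1.1, 0.2]])
Y = np.array([[5, 0], [0, 3], [4, 2]])
R = np.array([[1, 0], [0, 1], [1, 1]])

# Vectorized cost: R zeroes out the error for unrated entries
J_vec = 0.5 * np.sum(((X.dot(Theta.T) - Y) * R) ** 2)

# Equivalent explicit double loop over rated (i, j) pairs
J_loop = 0.0
for i in range(Y.shape[0]):
    for j in range(Y.shape[1]):
        if R[i, j] == 1:
            J_loop += 0.5 * (Theta[j].dot(X[i]) - Y[i, j]) ** 2

print(J_vec, J_loop)
```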
#
# <a id="cofiCostFunc"></a>
def cofiCostFunc(params, Y, R, num_users, num_movies,
num_features, lambda_=0.0):
"""
Collaborative filtering cost function.
Parameters
----------
    params : array_like
        The parameters which will be optimized. This is a one
        dimensional vector of shape (num_movies x num_features + num_users x num_features, ).
        It is the concatenation of the raveled feature matrix X and parameter matrix Theta.
Y : array_like
A matrix of shape (num_movies x num_users) of user ratings of movies.
R : array_like
A (num_movies x num_users) matrix, where R[i, j] = 1 if the
i-th movie was rated by the j-th user.
num_users : int
Total number of users.
num_movies : int
Total number of movies.
num_features : int
Number of features to learn.
lambda_ : float, optional
The regularization coefficient.
Returns
-------
J : float
The value of the cost function at the given params.
    grad : array_like
        The gradient vector of the cost function at the given params.
        grad has shape (num_movies x num_features + num_users x num_features, )
Instructions
------------
    Compute the cost function and gradient for collaborative filtering.
    Concretely, you should first implement the cost function (without
    regularization) and make sure it matches our costs. After that, you
    should implement the gradient and use the checkCostFunction routine
    to check that the gradient is correct. Finally, you should implement
    regularization.
Notes
-----
- The input params will be unraveled into the two matrices:
X : (num_movies x num_features) matrix of movie features
Theta : (num_users x num_features) matrix of user features
- You should set the following variables correctly:
X_grad : (num_movies x num_features) matrix, containing the
partial derivatives w.r.t. to each element of X
Theta_grad : (num_users x num_features) matrix, containing the
partial derivatives w.r.t. to each element of Theta
- The returned gradient will be the concatenation of the raveled
gradients X_grad and Theta_grad.
"""
# Unfold the U and W matrices from params
X = params[:num_movies*num_features].reshape(num_movies, num_features)
Theta = params[num_movies*num_features:].reshape(num_users, num_features)
# You need to return the following values correctly
J = 0
X_grad = np.zeros(X.shape)
Theta_grad = np.zeros(Theta.shape)
# ====================== YOUR CODE HERE ======================
J = (1 / 2) * np.sum(np.square((X.dot(Theta.T) - Y) * R)) + (lambda_ / 2) * np.sum(np.square(X)) + \
(lambda_ / 2) * np.sum(np.square(Theta))
for i in range(R.shape[0]):
ind = np.where(R[i, :] == 1)[0]
Theta_temp = Theta[ind, :]
Y_temp = Y[i, ind]
X_grad[i, :] = np.dot(np.dot(X[i, :], Theta_temp.T) - Y_temp, Theta_temp) + lambda_ * X[i, :]
for j in range(R.shape[1]):
ind = np.where(R[:, j] == 1)[0]
X_temp = X[ind, :]
Y_temp = Y[ind, j]
Theta_grad[j, :] = np.dot(np.dot(X_temp, Theta[j, :]) - Y_temp, X_temp) + lambda_ * Theta[j, :]
# =============================================================
grad = np.concatenate([X_grad.ravel(), Theta_grad.ravel()])
return J, grad
# After you have completed the function, the next cell will run your cost function. To help you debug it, we have included a set of pre-trained weights. You should expect to see an output of 22.22.
# +
# Load pre-trained weights (X, Theta, num_users, num_movies, num_features)
data = loadmat(os.path.join('Data', 'ex8_movieParams.mat'))
X, Theta, num_users, num_movies, num_features = data['X'],\
data['Theta'], data['num_users'], data['num_movies'], data['num_features']
# Reduce the data set size so that this runs faster
num_users = 4
num_movies = 5
num_features = 3
X = X[:num_movies, :num_features]
Theta = Theta[:num_users, :num_features]
Y = Y[:num_movies, 0:num_users]
R = R[:num_movies, 0:num_users]
# Evaluate cost function
J, _ = cofiCostFunc(np.concatenate([X.ravel(), Theta.ravel()]),
Y, R, num_users, num_movies, num_features)
print('Cost at loaded parameters: %.2f \n(this value should be about 22.22)' % J)
# -
# *You should now submit your solutions.*
grader[3] = cofiCostFunc
grader.grade()
# <a id="section4"></a>
# #### 2.2.2 Collaborative filtering gradient
#
# Now you should implement the gradient (without regularization). Specifically, you should complete the code in `cofiCostFunc` to return the variables `X_grad` and `Theta_grad`. Note that `X_grad` should be a matrix of the same size as `X` and similarly, `Theta_grad` is a matrix of the same size as
# `Theta`. The gradients of the cost function are given by:
#
# $$ \frac{\partial J}{\partial x_k^{(i)}} = \sum_{j:r(i,j)=1} \left( \left(\theta^{(j)}\right)^T x^{(i)} - y^{(i,j)} \right) \theta_k^{(j)} $$
#
# $$ \frac{\partial J}{\partial \theta_k^{(j)}} = \sum_{i:r(i,j)=1} \left( \left(\theta^{(j)}\right)^T x^{(i)}- y^{(i,j)} \right) x_k^{(i)} $$
#
# Note that the function returns the gradient for both sets of variables by unrolling them into a single vector. After you have completed the code to compute the gradients, the next cell will run a gradient check
# (available in `utils.checkCostFunction`) to numerically check the implementation of your gradients (this is similar to the numerical check that you used in the neural networks exercise). If your implementation is correct, you should find that the analytical and numerical gradients match up closely.
#
# <div class="alert alert-block alert-warning">
# **Implementation Note:** You can get full credit for this assignment without using a vectorized implementation, but your code will run much more slowly (a small number of hours), and so we recommend that you try to vectorize your implementation. To get started, you can implement the gradient with a for-loop over movies
# (for computing $\frac{\partial J}{\partial x^{(i)}_k}$) and a for-loop over users (for computing $\frac{\partial J}{\partial \theta_k^{(j)}}$). When you first implement the gradient, you might start with an unvectorized version, by implementing another inner for-loop that computes each element in the summation. After you have completed the gradient computation this way, you should try to vectorize your implementation (vectorize the inner for-loops), so that you are left with only two for-loops (one for looping over movies to compute $\frac{\partial J}{\partial x_k^{(i)}}$ for each movie, and one for looping over users to compute $\frac{\partial J}{\partial \theta_k^{(j)}}$ for each user).
# </div>
#
# <div class="alert alert-block alert-warning">
# **Implementation Tip:** To perform the vectorization, you might find this helpful: You should come up with a way to compute all the derivatives associated with $x_1^{(i)} , x_2^{(i)}, \dots , x_n^{(i)}$ (i.e., the derivative terms associated with the feature vector $x^{(i)}$) at the same time. Let us define the derivatives for the feature vector of the $i^{th}$ movie as:
#
# $$ \left(X_{\text{grad}} \left(i, :\right)\right)^T =
# \begin{bmatrix}
# \frac{\partial J}{\partial x_1^{(i)}} \\
# \frac{\partial J}{\partial x_2^{(i)}} \\
# \vdots \\
# \frac{\partial J}{\partial x_n^{(i)}}
# \end{bmatrix} = \quad
# \sum_{j:r(i,j)=1} \left( \left( \theta^{(j)} \right)^T x^{(i)} - y^{(i,j)} \right) \theta^{(j)}
# $$
#
# To vectorize the above expression, you can start by indexing into `Theta` and `Y` to select only the elements of interests (that is, those with `r[i, j] = 1`). Intuitively, when you consider the features for the $i^{th}$ movie, you only need to be concerned about the users who had given ratings to the movie, and this allows you to remove all the other users from `Theta` and `Y`. <br/><br/>
#
#
# Concretely, you can set `idx = np.where(R[i, :] == 1)[0]` to be a list of all the users that have rated movie $i$. This will allow you to create the temporary matrices `Theta_temp = Theta[idx, :]` and `Y_temp = Y[i, idx]` that index into `Theta` and `Y` to give you only the set of users which have rated the $i^{th}$ movie. This will allow you to write the derivatives as: <br>
#
# `X_grad[i, :] = np.dot(np.dot(X[i, :], Theta_temp.T) - Y_temp, Theta_temp)`
#
# <br><br>
# Note that the vectorized computation above returns a row-vector instead. After you have vectorized the computations of the derivatives with respect to $x^{(i)}$, you should use a similar method to vectorize the derivatives with respect to $θ^{(j)}$ as well.
# </div>
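# The `np.where` indexing described above can be tried on invented toy data for a single movie, comparing the vectorized row computation against an explicit sum over the users who rated it:

```python
import numpy as np

# Toy data: 4 users, 2 features; compute the gradient row for one movie
X_i = np.array([0.7, -0.2])                       # features of movie i
Theta = np.array([[0.5, 1.0], [0.3, 0.2],
                  [1.2, -0.4], [0.0, 0.9]])
Y_i = np.array([4.0, 0.0, 5.0, 3.0])              # row i of Y
R_i = np.array([1, 0, 1, 1])                      # row i of R

# Vectorized: keep only the users who rated movie i
idx = np.where(R_i == 1)[0]
Theta_temp = Theta[idx, :]
Y_temp = Y_i[idx]
grad_vec = np.dot(np.dot(X_i, Theta_temp.T) - Y_temp, Theta_temp)

# Unvectorized reference: sum over users with r(i, j) = 1
grad_ref = np.zeros_like(X_i)
for j in idx:
    grad_ref += (Theta[j].dot(X_i) - Y_i[j]) * Theta[j]

print(grad_vec, grad_ref)
```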
#
# [Click here to go back to the function `cofiCostFunc` to update it](#cofiCostFunc).
#
# <font color="red"> Do not forget to re-execute the cell containing the function `cofiCostFunc` so that it is updated with your implementation of the gradient computation.</font>
# Check gradients by running checkCostFunction
utils.checkCostFunction(cofiCostFunc)
# *You should now submit your solutions*
grader[4] = cofiCostFunc
grader.grade()
# <a id="section5"></a>
# #### 2.2.3 Regularized cost function
#
# The cost function for collaborative filtering with regularization is given by
#
# $$ J(x^{(1)}, \dots, x^{(n_m)}, \theta^{(1)}, \dots, \theta^{(n_u)}) = \frac{1}{2} \sum_{(i,j):r(i,j)=1} \left( \left( \theta^{(j)} \right)^T x^{(i)} - y^{(i,j)} \right)^2 + \left( \frac{\lambda}{2} \sum_{j=1}^{n_u} \sum_{k=1}^{n} \left( \theta_k^{(j)} \right)^2 \right) + \left( \frac{\lambda}{2} \sum_{i=1}^{n_m} \sum_{k=1}^n \left(x_k^{(i)} \right)^2 \right) $$
#
# You should now add regularization to your original computations of the cost function, $J$. After you are done, the next cell will run your regularized cost function, and you should expect to see a cost of about 31.34.
#
# [Click here to go back to the function `cofiCostFunc` to update it](#cofiCostFunc)
# <font color="red"> Do not forget to re-execute the cell containing the function `cofiCostFunc` so that it is updated with your implementation of regularized cost function.</font>
# +
# Evaluate cost function
J, _ = cofiCostFunc(np.concatenate([X.ravel(), Theta.ravel()]),
Y, R, num_users, num_movies, num_features, 1.5)
print('Cost at loaded parameters (lambda = 1.5): %.2f' % J)
print(' (this value should be about 31.34)')
# -
# *You should now submit your solutions.*
grader[5] = cofiCostFunc
grader.grade()
# <a id="section6"></a>
# #### 2.2.4 Regularized gradient
#
# Now that you have implemented the regularized cost function, you should proceed to implement regularization for the gradient. You should add to your implementation in `cofiCostFunc` to return the regularized gradient
# by adding the contributions from the regularization terms. Note that the gradients for the regularized cost function is given by:
#
# $$ \frac{\partial J}{\partial x_k^{(i)}} = \sum_{j:r(i,j)=1} \left( \left(\theta^{(j)}\right)^T x^{(i)} - y^{(i,j)} \right) \theta_k^{(j)} + \lambda x_k^{(i)} $$
#
# $$ \frac{\partial J}{\partial \theta_k^{(j)}} = \sum_{i:r(i,j)=1} \left( \left(\theta^{(j)}\right)^T x^{(i)}- y^{(i,j)} \right) x_k^{(i)} + \lambda \theta_k^{(j)} $$
#
# This means that you just need to add $\lambda x^{(i)}$ to the `X_grad[i,:]` variable described earlier, and add $\lambda \theta^{(j)}$ to the `Theta_grad[j, :]` variable described earlier.
#
# [Click here to go back to the function `cofiCostFunc` to update it](#cofiCostFunc)
# <font color="red"> Do not forget to re-execute the cell containing the function `cofiCostFunc` so that it is updated with your implementation of the gradient for the regularized cost function.</font>
#
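# Before running the full gradient check below, the regularization term's contribution can be verified in isolation with a central finite difference (an illustrative sketch, not part of the exercise): the gradient of $\frac{\lambda}{2}\sum_k x_k^2$ is exactly $\lambda x$.

```python
import numpy as np

# Numerically differentiate the regularization term (lambda_/2) * sum(x**2)
lambda_ = 1.5
x = np.array([0.3, -1.2, 0.8])
eps = 1e-6

reg = lambda v: 0.5 * lambda_ * np.sum(v ** 2)
numgrad = np.zeros_like(x)
for k in range(x.size):
    e = np.zeros_like(x)
    e[k] = eps
    numgrad[k] = (reg(x + e) - reg(x - e)) / (2 * eps)

# Should match the analytical gradient lambda_ * x
print(numgrad, lambda_ * x)
```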
# After you have completed the code to compute the gradients, the following cell will run another gradient check (`utils.checkCostFunction`) to numerically check the implementation of your gradients.
# Check gradients by running checkCostFunction
utils.checkCostFunction(cofiCostFunc, 1.5)
# *You should now submit your solutions.*
grader[6] = cofiCostFunc
grader.grade()
# ### 2.3 Learning movie recommendations
#
# After you have finished implementing the collaborative filtering cost function and gradient, you can now start training your algorithm to make movie recommendations for yourself. In the next cell, you can enter your own movie preferences, so that later when the algorithm runs, you can get your own movie recommendations! We have filled out some values according to our own preferences, but you should change this according to your own tastes. The list of all movies and their number in the dataset can be found in the file `Data/movie_idx.txt`.
# +
# Before we will train the collaborative filtering model, we will first
# add ratings that correspond to a new user that we just observed. This
# part of the code will also allow you to put in your own ratings for the
# movies in our dataset!
movieList = utils.loadMovieList()
n_m = len(movieList)
# Initialize my ratings
my_ratings = np.zeros(n_m)
# Check the file movie_idx.txt for id of each movie in our dataset
# For example, Toy Story (1995) has ID 1, so to rate it "4", you can set
# Note that the index here is ID-1, since we start index from 0.
my_ratings[0] = 4
# Or suppose you did not enjoy Silence of the Lambs (1991), you can set
my_ratings[97] = 2
# We have selected a few movies we liked / did not like and the ratings we
# gave are as follows:
my_ratings[6] = 3
my_ratings[11] = 5
my_ratings[53] = 4
my_ratings[63] = 5
my_ratings[65] = 3
my_ratings[68] = 5
my_ratings[182] = 4
my_ratings[225] = 5
my_ratings[354] = 5
print('New user ratings:')
print('-----------------')
for i in range(len(my_ratings)):
if my_ratings[i] > 0:
print('Rated %d stars: %s' % (my_ratings[i], movieList[i]))
# -
# #### 2.3.1 Recommendations
#
# After the additional ratings have been added to the dataset, the script
# will proceed to train the collaborative filtering model. This will learn
# the parameters `X` and `Theta`. To predict the rating of movie $i$ for
# user $j$, you need to compute $\left(\theta^{(j)}\right)^T x^{(i)}$. The
# next part of the script computes the ratings for all the movies and users
# and displays the movies that it recommends (Figure 4), according to the
# ratings that were entered earlier in the script. Note that you might
# obtain a different set of predictions due to different random
# initializations.
# +
# Now, you will train the collaborative filtering model on a movie rating
# dataset of 1682 movies and 943 users
# Load data
data = loadmat(os.path.join('Data', 'ex8_movies.mat'))
Y, R = data['Y'], data['R']
# Y is a 1682x943 matrix, containing ratings (1-5) of 1682 movies by
# 943 users
# R is a 1682x943 matrix, where R(i,j) = 1 if and only if user j gave a
# rating to movie i
# Add our own ratings to the data matrix
Y = np.hstack([my_ratings[:, None], Y])
R = np.hstack([(my_ratings > 0)[:, None], R])
# Normalize Ratings
Ynorm, Ymean = utils.normalizeRatings(Y, R)
# Useful Values
num_movies, num_users = Y.shape
num_features = 10
# Set Initial Parameters (Theta, X)
X = np.random.randn(num_movies, num_features)
Theta = np.random.randn(num_users, num_features)
initial_parameters = np.concatenate([X.ravel(), Theta.ravel()])
# Set options for scipy.optimize.minimize
options = {'maxiter': 100}
# Set Regularization
lambda_ = 10
res = optimize.minimize(lambda x: cofiCostFunc(x, Ynorm, R, num_users,
num_movies, num_features, lambda_),
initial_parameters,
method='TNC',
jac=True,
options=options)
theta = res.x
# Unfold the returned theta back into U and W
X = theta[:num_movies*num_features].reshape(num_movies, num_features)
Theta = theta[num_movies*num_features:].reshape(num_users, num_features)
print('Recommender system learning completed.')
# -
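# `utils.normalizeRatings` ships with the course code; the sketch below is an assumption of what such mean-normalization typically does (the function name and details here are illustrative, not the course implementation). Subtracting each movie's mean over its rated entries lets the model predict a sensible default — the movie's mean — for users with no ratings.

```python
import numpy as np

def normalize_ratings(Y, R):
    """Subtract each movie's mean rating, computed over rated entries only."""
    m = Y.shape[0]
    Ymean = np.zeros(m)
    Ynorm = np.zeros(Y.shape)
    for i in range(m):
        idx = R[i, :] == 1
        if np.any(idx):
            Ymean[i] = np.mean(Y[i, idx])
            Ynorm[i, idx] = Y[i, idx] - Ymean[i]
    return Ynorm, Ymean

# Toy check: row means are 4.5 and 2.0; unrated entries stay 0
Y = np.array([[5.0, 4.0, 0.0], [0.0, 2.0, 2.0]])
R = np.array([[1, 1, 0], [0, 1, 1]])
Ynorm, Ymean = normalize_ratings(Y, R)
print(Ymean)
```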
# After training the model, you can now make recommendations by computing the predictions matrix.
# +
p = np.dot(X, Theta.T)
my_predictions = p[:, 0] + Ymean
movieList = utils.loadMovieList()
ix = np.argsort(my_predictions)[::-1]
print('Top recommendations for you:')
print('----------------------------')
for i in range(10):
j = ix[i]
print('Predicting rating %.1f for movie %s' % (my_predictions[j], movieList[j]))
print('\nOriginal ratings provided:')
print('--------------------------')
for i in range(len(my_ratings)):
if my_ratings[i] > 0:
print('Rated %d for %s' % (my_ratings[i], movieList[i]))
# -
# Notebook: RecommenderSys+Anomaly/exercise8.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# # Web Scraping HondaWeb to Obtain Member Skills
# + deletable=true editable=true jupyter={"outputs_hidden": true} run_control={"frozen": false, "read_only": false}
import requests, lxml.html
from getpass import getpass
s = requests.session()
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# The default login page requires that a user enters their user name and password. But, there may be some additional data that we may need to send with our request in addition to the user name and password. Most often, they are defined as **hidden inputs** in the html's ```form``` tag.
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# **We can programmatically obtain hidden input fields in the log-in page:**
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
login_url = 'https://hondasites.com/auth/default.html'
login = s.get(login_url)
login_html = lxml.html.fromstring(login.text)
hidden_inputs = login_html.xpath(r'//form//input[@type="hidden"]')
# Create Python dictionary containing key-value pairs of hidden inputs
form = {x.attrib["name"]: x.attrib["value"] for x in hidden_inputs}
print(form)
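# The same hidden-input extraction can also be sketched offline with the standard library's `html.parser` on an invented form snippet (no live session or lxml required; the HTML below is made up for illustration):

```python
from html.parser import HTMLParser

SAMPLE = '''
<form action="/auth/default.html" method="post">
  <input type="hidden" name="login_referrer" value="">
  <input type="hidden" name="login" value="Y">
  <input type="text" name="username">
</form>
'''

class HiddenInputParser(HTMLParser):
    """Collect name/value pairs of hidden <input> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.form = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'input' and a.get('type') == 'hidden':
            self.form[a['name']] = a.get('value') or ''

parser = HiddenInputParser()
parser.feed(SAMPLE)
print(parser.form)
```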
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# From above, we see that there are 2 hidden inputs: ```login_referrer``` and ```login```.
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# **Alternatively, we can inspect the log-in page source page to also find those 2 hidden inputs.**
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# **Using a browser's inspector tools' Network scanner, I was able to determine that HondaWeb uses 3 stages of authentication. Below, the first URL is the default log-in page and URLs 2 through 4 are the 3 stages of authentication. The last URL (url5) is just a test URL of an actual person's profile page. In order to be fully authenticated, we must be able to request the first 4 URLs below:**
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
s = requests.session()
login_url = 'https://hondasites.com/auth/default.aspx'
login_url2 = 'https://myhondda.hondasites.com/_layouts/15/Authenticate.aspx?Source=/'
login_url3 = 'https://myhondda.hondasites.com/_layouts/accessmanagersignin.aspx?ReturnUrl=/_layouts/15/Authenticate.aspx?Source=%2F&Source=/'
login_url4 = 'https://myhondda.hondasites.com/_layouts/15/Authenticate.aspx?Source=/'
login_url5 = 'https://myhondda.hondasites.com/Person.aspx?accountname=i:0%23.f|AccessManagerMembershipProvider|17151'
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# **To log into the default login page, we have all the pieces of information we need: user name, password, login_referrer, and login.**
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# We will create a Python dictionary that will contain our credentials.
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
username = getpass('User Name:')
password = getpass('Password:')
credentials = {
'username': username,
'password': password,
'login_referrer': '',
'login': 'Y'
}
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# ### To test things, we will attempt to request those 5 URLs that we defined earlier above:
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
request1 = s.post(login_url, data=credentials)
print('request1:', request1.status_code)
request2 = s.get(login_url2)
print('request2:', request2.status_code)
request3 = s.get(login_url3)
print('request3:', request3.status_code)
request4 = s.get(login_url4)
print('request4:', request4.status_code)
request5 = s.get(login_url5)
print('request5:', request5.status_code)
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# ### Now that we know we were able to request all 5 pages, let's look at the first 500 characters of a user's profile page (request5):
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
request5.content[:500]
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# **From above, we can see it appears we have the data we want.**
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# **Now we can proceed with actually web scraping the profile page using lxml with XPath:**
# + deletable=true editable=true jupyter={"outputs_hidden": true} run_control={"frozen": false, "read_only": false}
profile_html = lxml.html.fromstring(request5.content)
# Get div tag with id="ctl00_blah_blah" and span tag with class="ms-tableCell ms-profile-detailsValue", then text()
skills_div = profile_html.xpath(r'//div[@id="ctl00_SPWebPartManager1_g_402dacf0_24c9_49f7_b128_9a852fc0ae8a_ProfileViewer_SPS-Skills"]/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
if skills_div:
print('User Skills:', skills_div[0])
else:
print('User did not enter skills.')
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# ### Web Scraping Multiple Profiles:
# + [markdown] deletable=true editable=true run_control={"frozen": false, "read_only": false}
# Given a list of 2 or more members, we can web scrape them all using a FOR loop:
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
base_profile_url = 'https://myhondda.hondasites.com/Person.aspx?accountname=i:0%23.f|AccessManagerMembershipProvider|'
members = ['17151', '38623', '10770']
for member in members:
member_url = base_profile_url + member
request = s.get(member_url)
profile_html = lxml.html.fromstring(request.content)
skills_div = profile_html.xpath(r'//div[@id="blah_blah_ProfileViewer_SPS-Skills"]/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
if skills_div:
print('User(', member, ') Skills:', skills_div[0])
else:
print('User(', member, ') did not enter skills.')
# + deletable=true editable=true jupyter={"outputs_hidden": false} run_control={"frozen": false, "read_only": false}
import requests
import lxml.html
from getpass import getpass
s = requests.session()
login_url = 'https://hondasites.com/auth/default.html'
login_url2 = 'https://myhondda.hondasites.com/_layouts/15/Authenticate.aspx?Source=/'
login_url3 = 'https://myhondda.hondasites.com/_layouts/accessmanagersignin.aspx?ReturnUrl=/_layouts/15/Authenticate.aspx?Source=%2F&Source=/'
login_url4 = 'https://myhondda.hondasites.com/_layouts/15/Authenticate.aspx?Source=/'
base_profile_url = 'https://myhondda.hondasites.com/Person.aspx?accountname=i:0%23.f|AccessManagerMembershipProvider|'
username = getpass('User Name:')
password = getpass('Password:')
credentials = {
'username': username,
'password': password,
'login_referrer': '',
'login': 'Y'
}
request1 = s.post(login_url, data=credentials)
print('Submitted login')
request2 = s.get(login_url2)
print('Passed authentication #1')
request3 = s.get(login_url3)
print('Passed authentication #2')
request4 = s.get(login_url4)
print('Passed authentication #3')
members = ['17151', '38623', '10770']
for member in members:
member_url = base_profile_url + member
request = s.get(member_url)
profile_html = lxml.html.fromstring(request.content)
skills_div = profile_html.xpath(r'//div[@id="blah_blah_ProfileViewer_SPS-Skills"]/span[@class="ms-tableCell ms-profile-detailsValue"]/text()')
if skills_div:
print('User(', member, ') Skills:', skills_div[0])
else:
print('User(', member, ') did not enter skills.')
| web_scraping/Web_Scraping_HondaWeb_lxml.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numpy - multidimensional data arrays
# # ... with some Matplotlib
#
# `
# <NAME>
# `
#
# based on the work of <NAME> (<EMAIL>) http://dml.riken.jp/~rob/
# what is this line all about?!? Answer later
# %matplotlib inline
# ## Introduction
# The `numpy` package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran, so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
#
# To use `numpy` first import the module:
import numpy as np # np is the most common import name for numpy
# In the `numpy` package the terminology used for vectors, matrices and higher-dimensional data sets is *array*.
#
#
# ## Creating `numpy` arrays
# There are a number of ways to initialize new numpy arrays, for example from
#
# * a Python list or tuples
# * using functions that are dedicated to generating numpy arrays, such as `arange`, `linspace`, etc.
# * reading data from files
# ### From lists
# For example, to create new vector and matrix arrays from Python lists we can use the `numpy.array` function.
# a vector: the argument to the array function is a Python list
v = np.array([1, 2, 3, 4])
print(v)
print(type(v))
# a matrix (or better a 2d array): the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
print(M)
mm = [[1, 2], [3, 4]]
mm[0]
# The `v` and `M` objects are both of the type `ndarray` that the `numpy` module provides.
type(v), type(M)
# The difference between the `v` and `M` arrays is only their shapes. We can get information about the shape of an array by using the `ndarray.shape` property.
v.shape
type(v.shape)
M.shape
a = (4,)
print(a)
# The number of elements in the array is available through the `ndarray.size` property:
M.size # careful: MATLAB's size corresponds to Numpy's shape, not size
# So far the `numpy.ndarray` looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
#
# There are several reasons:
#
# * Python lists are very general. They can contain any kind of object. They are dynamically typed. They do not support mathematical functions such as matrix and dot multiplications, etc. Implementing such functions for Python lists would not be very efficient because of the dynamic typing.
# * Numpy arrays are **statically typed** and **homogeneous**. The type of the elements is determined when array is created.
# * Numpy arrays are memory efficient.
# * Because of the static typing, fast implementation of mathematical functions such as multiplication and addition of `numpy` arrays can be implemented in a compiled language (C and Fortran is used).
#
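# As a rough illustration of the performance argument above (a minimal sketch; exact timings vary by machine), the same sum of squares can be computed with a pure-Python loop and with a vectorized Numpy expression, giving identical answers:

```python
import time
import numpy as np

n = 1_000_000
data_list = list(range(n))
data_arr = np.arange(n, dtype=np.int64)

# pure-Python loop: every element access and multiply is dynamically typed
t0 = time.perf_counter()
loop_result = sum(x * x for x in data_list)
t_loop = time.perf_counter() - t0

# vectorized Numpy: the loop runs in compiled C code
t0 = time.perf_counter()
vec_result = int(np.sum(data_arr * data_arr))
t_vec = time.perf_counter() - t0

assert loop_result == vec_result  # identical answers
print(f"loop: {t_loop:.4f}s  numpy: {t_vec:.4f}s")
```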
# Using the `dtype` (data type) property of an `ndarray`, we can see what type the data of an array has:
M.dtype
# We get an error if we try to assign a value of the wrong type to an element in a numpy array:
M[0, 0] = "hello"
M[0, 0] = 1.5  # a float, however, is silently truncated to fit the integer dtype
print(M)
# If we want, we can explicitly define the type of the array data when we create it, using the `dtype` keyword argument:
M = np.array([[1, 2], [3, 4]], dtype=complex)
M
# Common type that can be used with `dtype` are: `int`, `float`, `complex`, `bool`, `object`, etc.
#
# We can also explicitly define the bit size of the data types, for example: `int64`, `int16`, `float128`, `complex128`.
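# A small sketch of what the bit size implies: `itemsize` reports the bytes per element, and Numpy's fixed-width integers wrap around (two's complement) on overflow instead of growing like Python ints:

```python
import numpy as np

a16 = np.array([32767], dtype=np.int16)  # largest value an int16 can hold
a64 = np.array([32767], dtype=np.int64)

print(a16.itemsize, a64.itemsize)  # 2 bytes vs 8 bytes per element

# adding 1 overflows the 16-bit type and wraps to the minimum value
print(a16 + np.int16(1))  # [-32768]
print(a64 + 1)            # [32768]
```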
# ### Using array-generating functions
# For larger arrays it is impractical to initialize the data manually, using explicit Python lists. Instead we can use one of the many functions in `numpy` that generate arrays of different forms. Some of the more common are:
# #### arange
# +
# create a range
x = np.arange(0, 10, 1) # arguments: start, stop, step
x
# -
x = np.arange(-1, 1, 0.1)
x
# #### linspace and logspace
# using linspace, both end points ARE included
np.linspace(0, 10, 25)
np.logspace(0, 10, 10, base=np.e)
# ### First demo of Matplotlib
# First import
import matplotlib.pyplot as plt
x = np.arange(10)
y = np.linspace(0, 1, 10)
plt.plot(x, y, 'r-')
plt.show()
# #### mgrid
x, y = np.mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
# #### random data
from numpy import random
# uniform random numbers in [0,1]
random.rand(5, 5)
# standard normal distributed random numbers
random.randn(5, 5)
# More on matplotlib
x = random.randn(100000)
plt.hist(x, 20)
plt.show()
x = random.randn(10, 10)
# plt.matshow(x, cmap=plt.cm.Spectral)
plt.imshow(x, cmap=plt.cm.Spectral, interpolation='nearest')
plt.colorbar()
plt.show()
# #### diag
# a diagonal matrix
np.diag([1, 2, 3])
# diagonal with offset from the main diagonal
np.diag([1, 2, 3], k=1)
# #### zeros and ones
np.zeros((3, 3), dtype=int)  # note: the np.int alias was removed in Numpy 1.24; use the builtin int
np.ones((3, 3))
# ## File I/O
# Using `numpy.savetxt` we can store a Numpy array to a file in CSV format:
M = random.rand(3,3)
M
np.savetxt("random-matrix.txt", M)
# !cat random-matrix.txt
# +
np.savetxt("random-matrix.csv", M, fmt='%.5f', delimiter=', ') # fmt specifies the format
# !cat random-matrix.csv
# -
np.loadtxt("random-matrix.csv", delimiter=', ')
# ### Numpy's native file format
# Useful when storing and reading back numpy array data. Use the functions `numpy.save` and `numpy.load`:
# +
np.save("random-matrix.npy", M)
# !file random-matrix.npy
# -
np.load("random-matrix.npy")
# ## More properties of the numpy arrays
print(M.dtype)
M.itemsize # bytes per element
print(M.nbytes) # number of bytes
print(M.size * M.itemsize)
M.ndim # number of dimensions
# ## Manipulating arrays
# ### Indexing
# We can index elements in an array using the square bracket and indices:
# v is a vector, and has only one dimension, taking one index
v[0]
# M is a matrix, or a 2 dimensional array, taking two indices
M[1, 1]
# If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)
M
M[1]
# The same thing can be achieved with using `:` instead of an index:
M[1, :] # row 1
M[:, 1] # column 1
# We can assign new values to elements in an array using indexing:
M[0, 0] = 1
M
# also works for rows and columns
M[1, :] = 0
M[:, 2] = -1
M
# ### Index slicing
# Index slicing is the technical name for the syntax `M[lower:upper:step]` to extract part of an array:
A = np.array([1, 2, 3, 4, 5])
A
print(A.shape)
A[1:3]
# Array slices are *mutable*: if they are assigned a new value the original array from which the slice was extracted is modified:
A[1:3] = [-2, -3]
A
# We can omit any of the three parameters in `M[lower:upper:step]`:
A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
B = np.zeros((4, 4))
B[1] = 1
print(B)
print(B[:, ::2])
# Negative indices count from the end of the array (positive indices from the beginning):
A = np.array([1, 2, 3, 4, 5])
A[-2] # the last element in the array
A[-3:] # the last three elements
# Index slicing works exactly the same way for multidimensional arrays:
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A
# a block from the original array
A[1:4, 1:4]
# strides
A[::2, ::2]
# ### Fancy indexing
# Fancy indexing is the name for when an array or list is used in place of an index:
row_indices = [1, 2, 3]
A[row_indices]
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
print(A)
# print(A[np.ix_([1, 2, 3], [1, 3, 4])])
B = A.copy()
B[np.ix_([1, 2, 3], [1, 3, 4])] = 0
print(B)
B = A.copy()
B[[1, 2, 3]][:, [1, 3, 4]] = 0  # note: chained fancy indexing assigns into a copy, so B is left unchanged
print(B)
# We can also use index masks: if the mask is a Numpy array with data type `bool`, then an element is selected (`True`) or not (`False`) depending on the value of the mask at each position:
B = np.arange(5)
B
even_idx = (B % 2) == 0
print(even_idx)
print(B[even_idx])
print(B[(B % 2) == 0])
row_mask = np.array([True, False, True, False, False])
print(row_mask)
print(B[row_mask])
# same thing
row_mask = np.array([1, 0, 1, 0, 0], dtype=bool)
B[row_mask]
# This feature is very useful to conditionally select elements from an array, using for example comparison operators:
x = np.arange(0, 10, 0.5)
x
mask = (5 < x) * (x < 7.5)  # element-wise AND; equivalently (5 < x) & (x < 7.5)
mask
x[mask]
# ## Functions for extracting data from arrays and creating arrays
# ### where
# The index mask can be converted to position index using the `where` function
indices = np.where(mask)[0]
indices
x[indices] # this indexing is equivalent to the fancy indexing x[mask]
# ### diag
# With the diag function we can also extract the diagonal and subdiagonals of an array:
np.diag(A)
np.diag(A, -1)
# ### take
# The `take` function is similar to fancy indexing described above:
v2 = np.arange(-3,3)
v2
row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing
v2.take(row_indices)
# But `take` also works on lists and other objects:
np.take([-3, -2, -1, 0, 1, 2], row_indices)
# ## Array oriented algebra
# Vectorizing code is the key to writing efficient numerical calculations with Python/Numpy. That means that as much of a program as possible should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
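# For instance (a minimal sketch), a matrix-vector product written as explicit Python loops gives the same result as the vectorized form, which is the one to prefer:

```python
import numpy as np

A = np.arange(9).reshape(3, 3).astype(float)
v = np.array([1.0, 2.0, 3.0])

# explicit loops: clear, but slow for large arrays
result_loop = np.zeros(3)
for i in range(3):
    for j in range(3):
        result_loop[i] += A[i, j] * v[j]

# vectorized form: a single call into compiled code
result_vec = A @ v  # equivalently np.dot(A, v)

assert np.allclose(result_loop, result_vec)
print(result_vec)  # [ 8. 26. 44.]
```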
# ### Scalar-array operations
# We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
v1 = np.arange(5)
print(v1)
v1 * 2
v1 + 2
A * 2, A + 2
# ### Element-wise array-array operations
# When we add, subtract, multiply and divide arrays with each other, the default behaviour is **element-wise** operations:
A * A # element-wise multiplication
v1 * v1
# If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
A.shape, v1.shape
np.tile(v1, (5, 1))
v1[:, np.newaxis]
print(A, v1)
print((A.T * v1).T)
print(A * v1[:, np.newaxis])
print(A * np.tile(v1, (5, 1)).T)
v1[:, np.newaxis] * v1[np.newaxis, :]
v1.reshape(-1, 1)
# ### Matrix algebra
# What about matrix multiplication? We can use the `dot` function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
np.dot(A, A)
np.dot(A, v1)
np.dot(v1, v1)
# ### Data processing
# Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
np.shape(A)
# #### mean
np.mean(A[:, 3])
np.mean(A, axis=1)
# #### standard deviations and variance
np.std(A[:, 3]), np.var(A[:, 3])
# #### min and max
# lowest value
A[:, 3].min()
# highest value
A[:, 3].max()
# #### sum, prod, and trace
d = np.arange(0, 10)
d
# sum up all elements
np.sum(d)
# product of all elements
np.prod(d + 1)
# cumulative sum
np.cumsum(d)
# cumulative product
np.cumprod(d + 1)
# same as: diag(A).sum()
np.trace(A)
# ## EXERCISE :
#
# Compute an approximation of $\pi$ using Wallis' formula (no loops, just Numpy):
#
# <!-- <img src="files/images/spyder-screenshot.jpg" width="800"> -->
# <img src="http://scipy-lectures.github.io/_images/math/31913b3982be13ed2063b0ffccbcab9cf4931fdb.png" width="200">
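# One possible vectorized solution, shown as a sketch (try the exercise yourself first; the truncation point `N` is a free choice). Wallis' product gives $\pi = 2\prod_{n=1}^{\infty}\frac{4n^2}{4n^2-1}$:

```python
import numpy as np

N = 100_000  # truncation point; larger N gives a better approximation
n = np.arange(1, N + 1, dtype=np.float64)
pi_approx = 2.0 * np.prod(4.0 * n**2 / (4.0 * n**2 - 1.0))
print(pi_approx)  # close to np.pi; the error shrinks roughly like 1/N
```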
# ### Calculations with higher-dimensional data
# When functions such as `min`, `max`, etc., are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the `axis` argument we can specify how these functions should behave:
m = np.random.rand(3, 3)
m
# global max
m.max()
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1)
# Many other functions and methods in the `array` and `matrix` classes accept the same (optional) `axis` keyword argument.
# ## Reshaping, resizing and stacking arrays
# The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
A
n, m = A.shape
B = A.reshape((1, n*m))
B
# +
B[0,0:5] = 5 # modify the array
B
# -
A # and the original variable is also changed. B is only a different view of the same data
# We can also use the function `flatten` to make a higher-dimensional array into a vector. But this function creates a copy of the data.
B = A.flatten()
B
B[0:5] = 10
B
A # now A has not changed, because B's data is a copy of A's, not referring to the same data
# ## Adding a new dimension: newaxis
# With `newaxis`, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:
v = np.array([1,2,3])
v.shape
# make a column matrix of the vector v
v[:, np.newaxis]
# column matrix
v[:, np.newaxis].shape
# row matrix
v[np.newaxis, :].shape
# ## Stacking and repeating arrays
# Using function `repeat`, `tile`, `vstack`, `hstack`, and `concatenate` we can create larger vectors and matrices from smaller ones.
#
# **But in practice it is almost always possible to avoid them.**
# ### tile and repeat
a = np.array([[1, 2], [3, 4]])
# repeat each element 3 times
np.repeat(a, 3)
# tile the matrix 3 times
np.tile(a, 3)
# ### concatenate
b = np.array([[5, 6]])
np.concatenate((a, b), axis=0)
np.concatenate((a, b.T), axis=1)
# ### hstack and vstack
np.vstack((a, b))
np.hstack((a, b.T))
# ## Copy and "deep copy"
# To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
A = np.array([[1, 2], [3, 4]])
A
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0, 0] = 10
B
A
B = A[:, 1]
print(B)
B[0] = 100
print(A[:, [1]])
print(B)
# If we want to avoid this behavior, so that when we get a new completely independent object `B` copied from `A`, then we need to do a so-called "deep copy" using the function `copy`:
B = A.copy()
# +
# now, if we modify B, A is not affected
B[0,0] = -5
B
# -
A
# ## Iterating over array elements
# Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations.
#
# However, sometimes iterations are unavoidable. For such cases, the Python `for` loop is the most convenient way to iterate over an array:
# +
v = np.array([1, 2, 3, 4])
for element in v:
print(element)
# +
M = np.array([[1,2], [3,4]])
for row in M:
print("row", row)
for element in row:
print(element)
# -
# When we need to iterate over each element of an array and modify its elements, it is convenient to use the `enumerate` function to obtain both the element and its index in the `for` loop:
for row_idx, row in enumerate(M):
print("row_idx", row_idx, "row", row)
for col_idx, element in enumerate(row):
print("col_idx", col_idx, "element", element)
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
# each element in M is now squared
M
# ## Using arrays in conditions
# When using arrays in conditions in for example `if` statements and other boolean expressions, one needs to use `any` or `all`, which test whether any or all elements in the array evaluate to `True`:
M
if (M > 5).any():
print("at least one element in M is larger than 5")
else:
print("no element in M is larger than 5")
if (M > 5).all():
print("all elements in M are larger than 5")
else:
print("all elements in M are not larger than 5")
# ## Type casting
# Since Numpy arrays are *statically typed*, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the `astype` function (see also the similar `asarray` function). This always creates a new array of the new type:
M.dtype
# +
M2 = M.astype(float)
M2
# -
M2.dtype
# +
M3 = M.astype(bool)
M3
# -
# ## Further reading
# * http://numpy.scipy.org
# * http://scipy.org/Tentative_NumPy_Tutorial
# * http://scipy.org/NumPy_for_Matlab_Users - A Numpy guide for MATLAB users.
| 2017_03_Brussels/0b-Intro_Numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Debug SqueezeNet v1.1 OpenCL implementation with PyOpenCL and PyTorch
# Parts of the code are adapted from https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py
# SqueezeNet paper: https://arxiv.org/abs/1602.07360
# SqueezeNet 1.1 model from https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1
# SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters than SqueezeNet 1.0, without sacrificing accuracy.
#
# some setup
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import torch.utils.data
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from PIL import Image
import math
import time
from matplotlib.pyplot import imshow
# %matplotlib inline
# ## Test top-1 accuracy of the PyTorch pre-trained SqueezeNet v1.1
# ### Build fire unit and SqueezeNet model
# +
# squeeze_planes(1x1 conv) - (expand1x1_planes + expand3x3_planes)
class Fire(nn.Module):
def __init__(self, inplanes, squeeze_planes,
expand1x1_planes, expand3x3_planes):
super(Fire, self).__init__()
self.inplanes = inplanes
self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
self.squeeze_activation = nn.ReLU(inplace=True)
self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes,
kernel_size=1)
self.expand1x1_activation = nn.ReLU(inplace=True)
self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes,
kernel_size=3, padding=1)
self.expand3x3_activation = nn.ReLU(inplace=True)
def forward(self, x):
x = self.squeeze_activation(self.squeeze(x))
return torch.cat([
self.expand1x1_activation(self.expand1x1(x)),
self.expand3x3_activation(self.expand3x3(x))], 1)
class SqueezeNet(nn.Module):
def __init__(self, num_classes=1000):
super(SqueezeNet, self).__init__()
self.num_classes = num_classes
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(64, 16, 64, 64),
Fire(128, 16, 64, 64),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(128, 32, 128, 128),
Fire(256, 32, 128, 128),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(256, 48, 192, 192),
Fire(384, 48, 192, 192),
Fire(384, 64, 256, 256),
Fire(512, 64, 256, 256),
)
        # Final convolution is initialized differently from the rest
final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
self.classifier = nn.Sequential(
nn.Dropout(p=0.5),
final_conv,
nn.ReLU(inplace=True),
nn.AvgPool2d(13, stride=1)
)
def forward(self, x):
x = self.features(x)
x = self.classifier(x)
return x.view(x.size(0), self.num_classes)
# -
# ### Load validation image: ILSVRC2012 validation set
# +
# data loading code
# batch_size 64 consumes about 3.3GB of gpu memory
cudnn.benchmark = True
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean = [ 0.485, 0.456, 0.406 ],
std = [ 0.229, 0.224, 0.225 ]),
])
val_batch_size = 64
valdir = '/home/zjy/Workspace/dataset/ILSVRC2012_img_val/'
val = datasets.ImageFolder(valdir, transform)
val_loader = torch.utils.data.DataLoader(val, batch_size=val_batch_size,shuffle=False, pin_memory = True, num_workers=8)
# -
# ### Calculate top-1 accuracy of the pre-trained model
# +
model = SqueezeNet()
model.load_state_dict(torch.load('squeezenet1_1.pth'))
model.cuda()# using gpu
model.eval()# for dropout layer
correct = 0.0
total = 0.0
for i, (images, labels) in enumerate(val_loader):
images = Variable(images.cuda())
# get output
output = model(images)
_, predicted = torch.max(output.data, 1)
total += labels.size(0)
correct += (predicted.cpu() == labels).sum()
acc_val = 100.0 * correct / total
print("Top-1 accuracy on validation set: %.4f" % acc_val)
# -
# ## Test whether the OpenCL implementation is correct
# Compare the results of the OpenCL implementation and the PyTorch implementation, using a single image as input:
# error = ((OpenCL_class_scores - PyTorch_class_scores) ^ 2).sum(element-wise)
# If the OpenCL implementation is correct, the error should be relatively small.
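# The error metric described above can be sketched as a small Numpy helper (the names here are illustrative, not part of the original notebook):

```python
import numpy as np

def score_error(scores_a, scores_b):
    """Sum of squared element-wise differences between two class-score vectors."""
    a = np.asarray(scores_a, dtype=np.float64).ravel()
    b = np.asarray(scores_b, dtype=np.float64).ravel()
    return float(np.sum((a - b) ** 2))

# identical outputs give zero error; small numerical differences stay small
ref = np.array([0.1, 0.7, 0.2])
assert score_error(ref, ref) == 0.0
print(score_error(ref, ref + 1e-6))  # tiny, so the two implementations would agree
```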
# +
# load the test image
im_path = r'cat.jpg'
im = Image.open(im_path)
# visualize the image
imshow(im)
# preprocess the input image
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean = [ 0.485, 0.456, 0.406 ],
std = [ 0.229, 0.224, 0.225 ]),
])
im_input = transform(im)
# add a batch axis for PyTorch
im_input = im_input.numpy()
im_input = im_input[np.newaxis,:]
im_input = torch.from_numpy(im_input)
# -
model.cpu()
im_input_torch = Variable(im_input)
im_output_torch = model(im_input_torch)
_, predicted = torch.max(im_output_torch.data, 1)
print('label index predicted by the PyTorch implementation: %d' % predicted.numpy()[0])
# ### Load the OpenCL implementation
# deviceinfo.py and parts of the PyOpenCL code are from Hands On OpenCL
# https://handsonopencl.github.io/
# #### Step 1: OpenCL preparation
# OpenCL setup
import pyopencl as cl
import deviceinfo
from time import time
# +
# Ask the user to select a platform/device on the CLI
context = cl.create_some_context()
# Print out device info
deviceinfo.output_device_info(context.devices[0])
# Create a command queue
queue = cl.CommandQueue(context)
# -
# #### Step 2: import parameters from the PyTorch implementation
# +
params = model.state_dict()
# print parameter names and shapes
for k, v in params.items():
    print(k, params[k].numpy().shape)
conv1_weight = params['features.0.weight'].numpy().reshape(-1)
conv1_bias = params['features.0.bias'].numpy()
#fire - fire - maxpool block 1
fire1_squeeze_weight = params['features.3.squeeze.weight'].numpy().reshape(-1)
fire1_squeeze_bias = params['features.3.squeeze.bias'].numpy()
fire1_expand1x1_weight = params['features.3.expand1x1.weight'].numpy().reshape(-1)
fire1_expand1x1_bias = params['features.3.expand1x1.bias'].numpy()
fire1_expand3x3_weight = params['features.3.expand3x3.weight'].numpy().reshape(-1)
fire1_expand3x3_bias = params['features.3.expand3x3.bias'].numpy()
fire2_squeeze_weight = params['features.4.squeeze.weight'].numpy().reshape(-1)
fire2_squeeze_bias = params['features.4.squeeze.bias'].numpy()
fire2_expand1x1_weight = params['features.4.expand1x1.weight'].numpy().reshape(-1)
fire2_expand1x1_bias = params['features.4.expand1x1.bias'].numpy()
fire2_expand3x3_weight = params['features.4.expand3x3.weight'].numpy().reshape(-1)
fire2_expand3x3_bias = params['features.4.expand3x3.bias'].numpy()
#fire - fire - maxpool block 2
fire3_squeeze_weight = params['features.6.squeeze.weight'].numpy().reshape(-1)
fire3_squeeze_bias = params['features.6.squeeze.bias'].numpy()
fire3_expand1x1_weight = params['features.6.expand1x1.weight'].numpy().reshape(-1)
fire3_expand1x1_bias = params['features.6.expand1x1.bias'].numpy()
fire3_expand3x3_weight = params['features.6.expand3x3.weight'].numpy().reshape(-1)
fire3_expand3x3_bias = params['features.6.expand3x3.bias'].numpy()
fire4_squeeze_weight = params['features.7.squeeze.weight'].numpy().reshape(-1)
fire4_squeeze_bias = params['features.7.squeeze.bias'].numpy()
fire4_expand1x1_weight = params['features.7.expand1x1.weight'].numpy().reshape(-1)
fire4_expand1x1_bias = params['features.7.expand1x1.bias'].numpy()
fire4_expand3x3_weight = params['features.7.expand3x3.weight'].numpy().reshape(-1)
fire4_expand3x3_bias = params['features.7.expand3x3.bias'].numpy()
#fire - fire - fire - fire block 3
fire5_squeeze_weight = params['features.9.squeeze.weight'].numpy().reshape(-1)
fire5_squeeze_bias = params['features.9.squeeze.bias'].numpy()
fire5_expand1x1_weight = params['features.9.expand1x1.weight'].numpy().reshape(-1)
fire5_expand1x1_bias = params['features.9.expand1x1.bias'].numpy()
fire5_expand3x3_weight = params['features.9.expand3x3.weight'].numpy().reshape(-1)
fire5_expand3x3_bias = params['features.9.expand3x3.bias'].numpy()
fire6_squeeze_weight = params['features.10.squeeze.weight'].numpy().reshape(-1)
fire6_squeeze_bias = params['features.10.squeeze.bias'].numpy()
fire6_expand1x1_weight = params['features.10.expand1x1.weight'].numpy().reshape(-1)
fire6_expand1x1_bias = params['features.10.expand1x1.bias'].numpy()
fire6_expand3x3_weight = params['features.10.expand3x3.weight'].numpy().reshape(-1)
fire6_expand3x3_bias = params['features.10.expand3x3.bias'].numpy()
fire7_squeeze_weight = params['features.11.squeeze.weight'].numpy().reshape(-1)
fire7_squeeze_bias = params['features.11.squeeze.bias'].numpy()
fire7_expand1x1_weight = params['features.11.expand1x1.weight'].numpy().reshape(-1)
fire7_expand1x1_bias = params['features.11.expand1x1.bias'].numpy()
fire7_expand3x3_weight = params['features.11.expand3x3.weight'].numpy().reshape(-1)
fire7_expand3x3_bias = params['features.11.expand3x3.bias'].numpy()
fire8_squeeze_weight = params['features.12.squeeze.weight'].numpy().reshape(-1)
fire8_squeeze_bias = params['features.12.squeeze.bias'].numpy()
fire8_expand1x1_weight = params['features.12.expand1x1.weight'].numpy().reshape(-1)
fire8_expand1x1_bias = params['features.12.expand1x1.bias'].numpy()
fire8_expand3x3_weight = params['features.12.expand3x3.weight'].numpy().reshape(-1)
fire8_expand3x3_bias = params['features.12.expand3x3.bias'].numpy()
classifier_conv_weight = params['classifier.1.weight'].numpy().reshape(-1)
classifier_conv_bias = params['classifier.1.bias'].numpy()
# -
# Create OpenCL memory objects
# +
h_sample = im_input.numpy().reshape(-1)
h_result_conv = np.empty(1 * 64 * 111 * 111).astype(np.float32)
h_result_pool1 = np.empty(1 * 64 * 55 * 55).astype(np.float32)
h_result_fire1_squeeze = np.empty(1 * 16 * 55 * 55).astype(np.float32)
h_result_fire1_expand = np.empty(1 * 128 * 55 * 55).astype(np.float32)
h_result_fire2_squeeze = np.empty(1 * 16 * 55 * 55).astype(np.float32)
h_result_fire2_expand = np.empty(1 * 128 * 55 * 55).astype(np.float32)
h_result_pool2 = np.empty(1 * 128 * 27 * 27).astype(np.float32)
h_result_fire3_squeeze = np.empty(1 * 32 * 27 * 27).astype(np.float32)
h_result_fire3_expand = np.empty(1 * 256 * 27 * 27).astype(np.float32)
h_result_fire4_squeeze = np.empty(1 * 32 * 27 * 27).astype(np.float32)
h_result_fire4_expand = np.empty(1 * 256 * 27 * 27).astype(np.float32)
h_result_pool3 = np.empty(1 * 256 * 13 * 13).astype(np.float32)
h_result_fire5_squeeze = np.empty(1 * 48 * 13 * 13).astype(np.float32)
h_result_fire5_expand = np.empty(1 * 384 * 13 * 13).astype(np.float32)
h_result_fire6_squeeze = np.empty(1 * 48 * 13 * 13).astype(np.float32)
h_result_fire6_expand = np.empty(1 * 384 * 13 * 13).astype(np.float32)
h_result_fire7_squeeze = np.empty(1 * 64 * 13 * 13).astype(np.float32)
h_result_fire7_expand = np.empty(1 * 512 * 13 * 13).astype(np.float32)
h_result_fire8_squeeze = np.empty(1 * 64 * 13 * 13).astype(np.float32)
h_result_fire8_expand = np.empty(1 * 512 * 13 * 13).astype(np.float32)
h_result_classifier_conv = np.empty(1 * 1000 * 13 * 13).astype(np.float32)
h_result_classifier = np.empty(1 * 1000).astype(np.float32)
# device input buffer
d_sample = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=h_sample)
# device conv1 buffers
d_conv1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=conv1_weight)
d_conv1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=conv1_bias)
d_fire1_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire1_squeeze_weight)
d_fire1_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire1_squeeze_bias)
d_fire1_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire1_expand1x1_weight)
d_fire1_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire1_expand1x1_bias)
d_fire1_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire1_expand3x3_weight)
d_fire1_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire1_expand3x3_bias)
d_fire2_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire2_squeeze_weight)
d_fire2_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire2_squeeze_bias)
d_fire2_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire2_expand1x1_weight)
d_fire2_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire2_expand1x1_bias)
d_fire2_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire2_expand3x3_weight)
d_fire2_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire2_expand3x3_bias)
d_fire3_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire3_squeeze_weight)
d_fire3_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire3_squeeze_bias)
d_fire3_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire3_expand1x1_weight)
d_fire3_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire3_expand1x1_bias)
d_fire3_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire3_expand3x3_weight)
d_fire3_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire3_expand3x3_bias)
d_fire4_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire4_squeeze_weight)
d_fire4_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire4_squeeze_bias)
d_fire4_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire4_expand1x1_weight)
d_fire4_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire4_expand1x1_bias)
d_fire4_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire4_expand3x3_weight)
d_fire4_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire4_expand3x3_bias)
d_fire5_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire5_squeeze_weight)
d_fire5_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire5_squeeze_bias)
d_fire5_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire5_expand1x1_weight)
d_fire5_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire5_expand1x1_bias)
d_fire5_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire5_expand3x3_weight)
d_fire5_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire5_expand3x3_bias)
d_fire6_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire6_squeeze_weight)
d_fire6_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire6_squeeze_bias)
d_fire6_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire6_expand1x1_weight)
d_fire6_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire6_expand1x1_bias)
d_fire6_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire6_expand3x3_weight)
d_fire6_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire6_expand3x3_bias)
d_fire7_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire7_squeeze_weight)
d_fire7_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire7_squeeze_bias)
d_fire7_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire7_expand1x1_weight)
d_fire7_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire7_expand1x1_bias)
d_fire7_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire7_expand3x3_weight)
d_fire7_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire7_expand3x3_bias)
d_fire8_squeeze_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire8_squeeze_weight)
d_fire8_squeeze_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire8_squeeze_bias)
d_fire8_expand1x1_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire8_expand1x1_weight)
d_fire8_expand1x1_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire8_expand1x1_bias)
d_fire8_expand3x3_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire8_expand3x3_weight)
d_fire8_expand3x3_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=fire8_expand3x3_bias)
d_classifier_conv_weight = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=classifier_conv_weight)
d_classifier_conv_bias = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=classifier_conv_bias)
d_result_conv = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_conv.nbytes)
d_result_pool1 = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_pool1.nbytes)
d_result_fire1_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire1_squeeze.nbytes)
d_result_fire1_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire1_expand.nbytes)
d_result_fire2_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire2_squeeze.nbytes)
d_result_fire2_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire2_expand.nbytes)
d_result_pool2 = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_pool2.nbytes)
d_result_fire3_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire3_squeeze.nbytes)
d_result_fire3_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire3_expand.nbytes)
d_result_fire4_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire4_squeeze.nbytes)
d_result_fire4_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire4_expand.nbytes)
d_result_pool3 = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_pool3.nbytes)
d_result_fire5_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire5_squeeze.nbytes)
d_result_fire5_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire5_expand.nbytes)
d_result_fire6_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire6_squeeze.nbytes)
d_result_fire6_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire6_expand.nbytes)
d_result_fire7_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire7_squeeze.nbytes)
d_result_fire7_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire7_expand.nbytes)
d_result_fire8_squeeze = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire8_squeeze.nbytes)
d_result_fire8_expand = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_fire8_expand.nbytes)
d_result_classifier_conv = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_classifier_conv.nbytes)
d_result_classifier = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, h_result_classifier.nbytes)
# -
# #### Step 3: create kernels
# Create & build program
kernelSource = open("squeezenet.cl").read()
program = cl.Program(context, kernelSource).build()
# Create kernels
# +
conv3x3 = program.conv2d3x3
conv3x3.set_scalar_arg_dtypes([np.int32, np.int32, np.int32, np.int32, np.int32, np.int32, \
None, None, None, None])
maxpool = program.maxpool2d
maxpool.set_scalar_arg_dtypes([np.int32, np.int32, None, None])
conv1x1 = program.conv2d1x1
conv1x1.set_scalar_arg_dtypes([np.int32, np.int32, \
None, None, None, None])
avgpool = program.avgpool2d
avgpool.set_scalar_arg_dtypes([None, None])
# -
# #### OpenCL kernel: squeezenet.cl
# conv2d3x3: 2-D 3x3 convolution.
# conv2d1x1: 2-D 1x1 convolution. kernel size 1, stride 1
# maxpool2d: 2-D max pool. kernel size 3, stride 2
# avgpool2d: 2-D average pool. kernel size 13
# ```C
# //maxPool2d
# //kernel_size=3 stride=2
# //output one feature map per kernel
# __kernel void maxpool2d(
# const int input_size,
# const int output_size,
# __global float *input_im,
# __global float *restrict output_im)
# {
# int channel = get_global_id(0);//get output channel index
#
# input_im += channel * input_size * input_size;
# output_im += channel * output_size * output_size;
#
# //loop over output feature map
# for(int i = 0; i < output_size; i++)//row
# {
# for(int j = 0; j < output_size; j++)//col
# {
#             //find the max value in a 3x3 region
#             //to be one element in the output feature map
# float tmp = 0.0;
#
# for(int k = 0; k < 3; k++)//row
# {
# for(int l = 0; l < 3; l++)//col
# {
# float value = input_im[(i * 2 + k) * input_size + j * 2 + l ];
# if(value > tmp)
# tmp = value;
# }
# }
# //store the result to output feature map
# output_im[i * output_size + j] = tmp;
# }
# }
# }
#
# //3x3 convolution layer
# //output one feature map per kernel
# __kernel void conv2d3x3(
# const int input_channels, const int input_size,
# const int pad, const int stride,
# const int start_channel, //start_channel is for 1x1 feature map in fire layer
# const int output_size,
# __global float *restrict input_im,
# __global const float *restrict filter_weight,
# __global const float *restrict filter_bias,
# __global float *restrict output_im
# )
# {
# int filter_index = get_global_id(0); //get output channel index
#
# filter_weight += filter_index * input_channels * 9;
# float bias = filter_bias[filter_index];
# output_im += (start_channel + filter_index) * output_size * output_size;
#
# //loop over output feature map
# for(int i = 0; i < output_size; i++)
# {
# for(int j = 0; j < output_size; j++)
# {
# //compute one element in the output feature map
# float tmp = bias;
#
# //compute dot product of 2 input_channels x 3 x 3 matrix
# for(int k = 0; k < input_channels; k++)
# {
# for(int l = 0; l < 3; l++)
# {
# int h = i * stride + l - pad;
# for(int m = 0; m < 3; m++)
# {
# int w = j * stride + m - pad;
# if((h >= 0) && (h < input_size) && (w >= 0) && (w < input_size))
# {
#                         tmp += input_im[k * input_size * input_size + h * input_size + w]
#                                 * filter_weight[9 * k + 3 * l + m];
# }
# }
# }
# }
#
# //add relu activation after conv
# output_im[i * output_size + j] = (tmp > 0.0) ? tmp : 0.0;
# }
# }
# }
#
# //1x1 convolution layer
# //output one feature map per kernel
# __kernel void conv2d1x1(
# const int input_channels, const int input_size,
# __global float *input_im,
# __global const float *restrict filter_weight,
# __global const float *restrict filter_bias,
# __global float *restrict output_im)
# {
# int filter_index = get_global_id(0); // 0 - (output_channels - 1)
#
# filter_weight += filter_index * input_channels;
#
# float bias = filter_bias[filter_index];
#
# output_im += filter_index * input_size * input_size;//start_channel is for 1x1 feature map in fire layer
#
# //loop over output feature map
# //out
# for(int i = 0; i < input_size; i++)
# {
# for(int j = 0; j < input_size; j++)
# {
# float tmp = bias;
# for(int k = 0; k < input_channels; k++)
# {
# tmp += input_im[k * input_size * input_size + i * input_size + j] * filter_weight[k];
# }
# //add relu after conv
# output_im[i * input_size + j] = (tmp > 0.0) ? tmp : 0.0;
# }
# }
# }
#
# //last layer use a 13 x 13 avgPool layer as classifier
# //one class score per kernel
# __kernel void avgpool2d(
# __global float *restrict input_im,
# __global float *restrict output_im)
# {
# int class_index = get_global_id(0);//get class score index
#
# input_im += 169 * class_index;
#
# float tmp = 0.0f;
#
# for(int i = 0; i < 169; i++)
# {
# tmp += input_im[i];
# }
#
# output_im[class_index] = tmp / 169.0;
# }
# ```
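As a sanity check for the kernels above, the 1x1 convolution can be reproduced in NumPy and compared against the OpenCL output. This is an illustrative sketch with made-up array names, assuming the same layouts as the kernel: input as a flattened (C_in, H, W) volume, weights as (C_out, C_in), with the ReLU fused in.

```python
import numpy as np

# NumPy reference for the conv2d1x1 kernel (illustrative array names):
# input (C_in, H, W), weight (C_out, C_in), bias (C_out,), ReLU fused.
def conv2d1x1_ref(input_im, weight, bias):
    out = np.tensordot(weight, input_im, axes=([1], [0]))  # -> (C_out, H, W)
    out += bias[:, None, None]
    return np.maximum(out, 0.0)  # ReLU, as in the OpenCL kernel

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 5)).astype(np.float32)
w = rng.standard_normal((8, 4)).astype(np.float32)
b = rng.standard_normal(8).astype(np.float32)
y = conv2d1x1_ref(x, w, b)
print(y.shape)  # (8, 5, 5)
```

Reshaping `h_result_fire*` buffers to (channels, size, size) and comparing against this reference is a quick way to localize a bug to a single layer.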
# Run the OpenCL implementation
# +
rtime = time()
#first conv layer
conv3x3(queue,(64,), None, 3, 224, 0, 2, 0, 111, d_sample, d_conv1_weight, d_conv1_bias, d_result_conv)
maxpool(queue,(64,), None, 111, 55, d_result_conv, d_result_pool1)
#block1
conv1x1(queue,(16,), None, 64, 55, d_result_pool1, d_fire1_squeeze_weight, d_fire1_squeeze_bias, d_result_fire1_squeeze)
conv1x1(queue,(64,), None, 16, 55, d_result_fire1_squeeze, d_fire1_expand1x1_weight, d_fire1_expand1x1_bias, d_result_fire1_expand)
conv3x3(queue,(64,), None, 16, 55, 1, 1, 64, 55, d_result_fire1_squeeze, d_fire1_expand3x3_weight, d_fire1_expand3x3_bias, d_result_fire1_expand)
conv1x1(queue,(16,), None, 128, 55, d_result_fire1_expand, d_fire2_squeeze_weight, d_fire2_squeeze_bias, d_result_fire2_squeeze)
conv1x1(queue,(64,), None, 16, 55, d_result_fire2_squeeze, d_fire2_expand1x1_weight, d_fire2_expand1x1_bias, d_result_fire2_expand)
conv3x3(queue,(64,), None, 16, 55, 1, 1, 64, 55, d_result_fire2_squeeze, d_fire2_expand3x3_weight, d_fire2_expand3x3_bias, d_result_fire2_expand)
maxpool(queue,(128,), None, 55, 27, d_result_fire2_expand, d_result_pool2)
#block2
conv1x1(queue,(32,), None, 128, 27, d_result_pool2, d_fire3_squeeze_weight, d_fire3_squeeze_bias, d_result_fire3_squeeze)
conv1x1(queue,(128,), None, 32, 27, d_result_fire3_squeeze, d_fire3_expand1x1_weight, d_fire3_expand1x1_bias, d_result_fire3_expand)
conv3x3(queue,(128,), None, 32, 27, 1, 1, 128, 27, d_result_fire3_squeeze, d_fire3_expand3x3_weight, d_fire3_expand3x3_bias, d_result_fire3_expand)
conv1x1(queue,(32,), None, 256, 27, d_result_fire3_expand, d_fire4_squeeze_weight, d_fire4_squeeze_bias, d_result_fire4_squeeze)
conv1x1(queue,(128,), None, 32, 27, d_result_fire4_squeeze, d_fire4_expand1x1_weight, d_fire4_expand1x1_bias, d_result_fire4_expand)
conv3x3(queue,(128,), None, 32, 27, 1, 1, 128, 27, d_result_fire4_squeeze, d_fire4_expand3x3_weight, d_fire4_expand3x3_bias, d_result_fire4_expand)
maxpool(queue,(256,), None, 27, 13, d_result_fire4_expand, d_result_pool3)
#block3
conv1x1(queue,(48,), None, 256, 13, d_result_pool3, d_fire5_squeeze_weight, d_fire5_squeeze_bias, d_result_fire5_squeeze)
conv1x1(queue,(192,), None, 48, 13, d_result_fire5_squeeze, d_fire5_expand1x1_weight, d_fire5_expand1x1_bias, d_result_fire5_expand)
conv3x3(queue,(192,), None, 48, 13, 1, 1, 192, 13, d_result_fire5_squeeze, d_fire5_expand3x3_weight, d_fire5_expand3x3_bias, d_result_fire5_expand)
conv1x1(queue,(48,), None, 384, 13, d_result_fire5_expand, d_fire6_squeeze_weight, d_fire6_squeeze_bias, d_result_fire6_squeeze)
conv1x1(queue,(192,), None, 48, 13, d_result_fire6_squeeze, d_fire6_expand1x1_weight, d_fire6_expand1x1_bias, d_result_fire6_expand)
conv3x3(queue,(192,), None, 48, 13, 1, 1, 192, 13, d_result_fire6_squeeze, d_fire6_expand3x3_weight, d_fire6_expand3x3_bias, d_result_fire6_expand)
conv1x1(queue,(64,), None, 384, 13, d_result_fire6_expand, d_fire7_squeeze_weight, d_fire7_squeeze_bias, d_result_fire7_squeeze)
conv1x1(queue,(256,), None, 64, 13, d_result_fire7_squeeze, d_fire7_expand1x1_weight, d_fire7_expand1x1_bias, d_result_fire7_expand)
conv3x3(queue,(256,), None, 64, 13, 1, 1, 256, 13, d_result_fire7_squeeze, d_fire7_expand3x3_weight, d_fire7_expand3x3_bias, d_result_fire7_expand)
conv1x1(queue,(64,), None, 512, 13, d_result_fire7_expand, d_fire8_squeeze_weight, d_fire8_squeeze_bias, d_result_fire8_squeeze)
conv1x1(queue,(256,), None, 64, 13, d_result_fire8_squeeze, d_fire8_expand1x1_weight, d_fire8_expand1x1_bias, d_result_fire8_expand)
conv3x3(queue,(256,), None, 64, 13, 1, 1, 256, 13, d_result_fire8_squeeze, d_fire8_expand3x3_weight, d_fire8_expand3x3_bias, d_result_fire8_expand)
# classifier
conv1x1(queue,(1000,), None, 512, 13, d_result_fire8_expand, d_classifier_conv_weight, d_classifier_conv_bias, d_result_classifier_conv)
avgpool(queue,(1000,), None, d_result_classifier_conv, d_result_classifier)
# Wait for the commands to finish before reading back
queue.finish()
rtime = time() - rtime
print("The kernel ran in", rtime, "seconds")
# +
# Copy result from GPU
cl.enqueue_copy(queue, h_result_classifier, d_result_classifier)
queue.finish()
label_opencl = np.argmax(h_result_classifier)
print('the label index predicted by the OpenCL implementation: %d' % label_opencl)
correct_result = im_output_torch.data.numpy().reshape(-1)
error = ((correct_result - h_result_classifier) ** 2).sum()
print('OpenCL implementation error: ', error)
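Beyond the argmax check above, a softmax plus top-5 readout is a common extra sanity check on classifier outputs. The logits array below is a hypothetical stand-in for `h_result_classifier` so the snippet stays self-contained.

```python
import numpy as np

# Hypothetical logits standing in for h_result_classifier (1000 class scores).
logits = np.random.default_rng(4).standard_normal(1000).astype(np.float32)

# Numerically stable softmax, then the five highest-probability classes.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
top5 = np.argsort(probs)[::-1][:5]
print(top5[0] == np.argmax(logits))  # True: softmax preserves the argmax
```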
| src/pyopencl/SqueezeNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Readme
#
# ---
#
# **Advanced Lane Finding Project**
#
# The goals / steps of this project are the following:
#
# * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
# * Apply a distortion correction to raw images.
# * Use color transforms, gradients, etc., to create a thresholded binary image.
# * Apply a perspective transform to rectify binary image ("birds-eye view").
# * Detect lane pixels and fit to find the lane boundary.
# * Determine the curvature of the lane and vehicle position with respect to center.
# * Warp the detected lane boundaries back onto the original image.
# * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
#
# [//]: # (Image References)
#
#
# ## [Rubric](https://review.udacity.com/#!/rubrics/571/view) Points
#
# ### Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
#
# ---
#
# ### README
#
# All the code is in Advanced_Lane_Line.ipynb.
#
# ## 1. Camera calibration
#
# #### 1.1 Get the images for calibration and testing
#
# In this step, I read the chessboard images in the camera_cal folder to prepare for camera calibration.
#
# #### 1.2 Compute calibration
#
# I used the images and OpenCV functions, mainly 'cv2.findChessboardCorners()' and 'cv2.calibrateCamera()', to get the matrix used for calibration.
#
# ## 2. Undistortion
#
# Here, I test the chessboard image and the images in the test_images folder.
# The following images show the test images after undistortion.
#
# 
#
# 
#
#
# ## 3. Combine threshold
#
# There are some features of lane lines that can be used for lane line detection.
# Here, the color information of the yellow and white lane lines is used.
#
# 
#
# ## 4. Perspective transform
#
# To get a bird's-eye view, I select the following source points and destination points to get the perspective transform matrix.
#
# src points are [718,468],[1112,716],[212,716],[568,468]
#
# dst points are [990,0], [990,720], [320,720],[320,0]
#
# Then the bird's-eye view image can be obtained.
#
# 
#
# ## 5.Find the lines
#
# The function 'fit_line()' is defined to find the lines. It uses the histogram information of the bird's-eye-view image, which also serves as the base for the sliding window search.
#
# 
#
# ## 6. Calculate Curvature
#
# After the lines are found, a second-order polynomial curve is used to fit each lane line, and the radius of curvature of the lane line (in pixel space and real-world space) can be calculated.
#
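For a second-order fit x = A*y^2 + B*y + C, the radius of curvature at a point y is (1 + (2Ay + B)^2)^(3/2) / |2A|. A small sketch with hypothetical fit coefficients (not the project's actual values):

```python
import numpy as np

# Radius of curvature of x = A*y^2 + B*y + C, evaluated at the image bottom.
# The fit coefficients here are hypothetical, not the project's values.
A, B, C = 2e-4, -0.3, 400.0
y_eval = 719.0                              # bottom row of a 720-px-high image

radius_px = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
print(round(radius_px, 1))                  # radius in pixel units
```

Converting to real-world metres only requires rescaling the fit with metres-per-pixel factors before applying the same formula.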
# ## 7. Process Frame
#
# In this step, a function containing the methods from the previous steps is defined, which is used in the video processing.
#
# 
#
# The output video is 'test_videos_output/advanced_lane_line.mp4'
#
# ## Discussions
#
# #### Problems/issues faced
# There are several problems that should be considered. For example, different weather and lighting conditions can change the color information a lot, while the thresholds used to find the yellow and white lanes are fixed values, which are not flexible and might become invalid.
#
# Also, the curvature of the lane lines is calculated, but the accuracy of the curvature should also be considered.
#
# #### Algorithm/pipeline could be improved
#
# As discussed in the 'Problems/issues faced' part, getting higher accuracy for the curvature and choosing the parameters automatically are points that could be improved.
#
# #### Hypothetical cases that might fail
#
# This processing might work for highway cases, but daily city traffic, such as the morning commute, is more complex: there are more pedestrians, slow vehicles, traffic lights, etc.
#
| Writeup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# ## Understanding Motion on a Sphere
# Locally, the Earth's surface is effectively flat, so we can describe movements (like plate motions) as linear velocities. In reality, the Earth's surface is spherical, so large-scale motions need to be described as rotations.
#
# We can describe the motion of a point of interest (P) on a sphere as a rotation about an axis, known as an Euler pole (E). Thus, the units for this motion are degrees/radians of rotation around the Euler pole along a circular cross-section of the Earth over a given time period, or _angular velocity_ ($\omega$).
#
# The diagram below illustrates the geometry described above and shows how angular velocity can be described as a function of the Earth's radius (R), the linear velocity (v), and the angle between the Euler pole and a pole going through the points of interest ($\delta$ in this diagram, $\Theta$ in subsequent diagrams). Note that angular velocity using this equation is in units of __radians/time__:
#
# <img src='sphere.JPG' width='500' style="float:left">
# ## Find Angular Velocity from Linear Velocity and Euler Pole
# First, we need to use the location of interest and the Euler pole location to calculate the angular difference ($\Theta$) between the point and the Euler pole.
#
# Note that latitude and longitude are polar coordinates, which is where the trigonometry below comes from. In addition to the latitude (degrees N from the equator), we also need the __colatitude__ (degrees S from the north pole) for the point of interest and the Euler pole to do this calculation.
#
# <img src='euler.JPG' width='500' style="float:left">
# Input Location of interest
lat = np.radians(float(input("Latitude (N): ")))
lon = np.radians(float(input("Longitude (E): ")))
# Input location of Euler pole
elat = np.radians(float(input("Euler Pole Latitude (N): ")))
elon = np.radians(float(input("Euler Pole Longitude (E): ")))
# +
# Calculate colatitude for location and Euler pole
colat = np.radians(90) - lat
colat_e = np.radians(90) - elat
print('Colatitude of point (degrees): ',np.degrees(colat))
print('Colatitude of Euler pole (degrees): ',np.degrees(colat_e))
# +
# Find angular difference between EP and point of interest
ang_diff = np.arccos(
np.cos(colat_e)*np.cos(colat) + np.sin(colat)*np.sin(colat_e)*np.cos(lon-elon)
)
print('Angular Difference: ',round(np.degrees(ang_diff),1))
# -
# Now that you have the angle between the pole to the point of interest and the Euler pole, you can calculate the angular velocity ($\omega$) using the following formula. __Angular velocity is in units of radians/time__ and linear velocity will be in units of distance/time, with the distance units determined by the radius units:
#
# <img src='velocity.JPG' width='500' style="float:left">
# Input linear velocity at point of interest and radius of planet (6370 km for Earth)
lvel = float(input("Linear Velocity (mm/yr or km/Myr): "))
radius = float(input("Radius of Planet (km): "))
# +
# Calculate angular velocity from linear velocity and angular difference
ang_vel = lvel/(radius*np.sin(ang_diff)) # lvel is in km/Myr, so output units are radians/Myr
print('Angular Velocity (radians/Myr): ',round(ang_vel,3))
print('Angular Velocity (degrees/Myr): ',round(np.degrees(ang_vel),2))
# -
# ## Calculate Linear Velocity from Angular Velocity
# Here, we calculate linear velocity (v) from angular velocity ($\omega$) using the same equation as above. __Angular velocity is in units of radians/time__:
# <img src='velocity.JPG' width='500' style="float:left">
# Input angular velocity, angular difference, and radius
avel = float(input("Angular velocity (degrees/Myr): "))
adiff = float(input("Angular difference (degrees): "))
r = float(input("Radius of Planet (km): "))
# +
# Calculate linear velocity
lv = np.radians(avel)*r*np.sin(np.radians(adiff))
print('Linear Velocity (km/Myr or mm/yr): ',round(lv,1))
print('Linear Velocity (cm/yr): ',round(lv/10,2))
# -
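The interactive steps above can be collected into one self-contained function: colatitudes, angular difference via the spherical law of cosines, then angular velocity. Inputs are in degrees and km/Myr; the output is in radians/Myr.

```python
import numpy as np

# All three steps in one function: colatitudes, angular difference (spherical
# law of cosines), then angular velocity. Inputs in degrees and km/Myr.
def angular_velocity(lat, lon, elat, elon, v, R=6370.0):
    lat, lon, elat, elon = map(np.radians, (lat, lon, elat, elon))
    colat, colat_e = np.pi / 2 - lat, np.pi / 2 - elat
    theta = np.arccos(np.cos(colat_e) * np.cos(colat)
                      + np.sin(colat) * np.sin(colat_e) * np.cos(lon - elon))
    return v / (R * np.sin(theta))          # radians/Myr

# A point on the equator with the Euler pole at the north pole: v = omega * R.
omega = angular_velocity(0.0, 0.0, 90.0, 0.0, 63.7)
print(round(omega, 4))  # 0.01 rad/Myr
```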
| structural_geology/plate_tectonics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ZETA DATA
#
# Ground-state recombination fractions ($\zeta$) from <cite data-cite="2002ApJ...579..725L">Knox S. Long's paper</cite>.
from carsus.io.zeta import KnoxLongZeta
zeta = KnoxLongZeta()
zeta.base
| docs/io/zeta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import sys, os
sys.path.append('..')
from Data.TimeSeries import *
from Data import factors
import Quandl
import pandas as pd
import matplotlib
import cvxopt as opt
from cvxopt import blas, solvers
# %matplotlib inline
# -
# ## 0. Downloading data
# Download the ETF data
# +
sector_tickers = ['GOOG/NYSEARCA_XLB',
'GOOG/NYSEARCA_XLE',
'GOOG/NYSEARCA_XLF',
'GOOG/NYSEARCA_XLI',
'GOOG/NYSEARCA_XLK',
'GOOG/NYSEARCA_XLP',
'GOOG/NYSEARCA_XLU',
'GOOG/NYSEARCA_XLV',
'GOOG/NYSEARCA_XLY']
settings = Settings()
dp = TimeSeries(settings).get_agg_data(sector_tickers)
dp
# -
dp = dp.fillna(method='pad', axis=0)
dp = dp.fillna(method='bfill', axis=0)
#dp = dp[:,'2008-01-01'::,:]
#dp[:, :,'price'].plot(figsize=[20,10])
df_rets = dp[:,:,'price'].pct_change().dropna()
(1+df_rets).cumprod().plot(figsize=[20,12])
# # Adaptive Asset Allocation (AAA)
# +
import numpy as np
import zipline
from zipline.api import (add_history, history, set_slippage,
slippage, set_commission, commission,
order_target_percent)
from zipline import TradingAlgorithm
def initialize(context):
''' Called once at the very beginning of a backtest (and live trading).
Use this method to set up any bookkeeping variables. The context object
is passed to all the other methods in your algorithm. Parameters context:
An initialized and empty Python dictionary that has been augmented so
that properties can be accessed using dot notation as well as the
traditional bracket notation. Returns None '''
#Register history container to keep a window of the last 100 prices.
add_history(100, '1d', 'price')
# Turn off the slippage model
set_slippage(slippage.FixedSlippage(spread=0.0))
# Set the commission model (Interactive Brokers Commission)
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(context, data):
''' Called when a market event occurs for any of the algorithm's securities.
Parameters data: A dictionary keyed by security id containing the current
state of the securities in the algo's universe. context: The same context
object from the initialize function. Stores the up to date portfolio as
well as any state variables defined. Returns None '''
# Allow history to accumulate 100 days of prices before trading
# and rebalance every day thereafter.
context.tick += 1
if context.tick < 100:
return
if context.tick % 60 != 0:
return
# Get rolling window of past prices and compute returns
prices = history(100, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights, _, _ = optimal_portfolio(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
# -
# ## 1. 9 sector ETFs, Equal Weight, Rebalanced Monthly
#
ret_ports = pd.DataFrame()
class AAA:
def __init__(self, data) :
self.data = data
def initialize(self, context) :
pass
def handle_data(self, context, data) :
pass
def run_trading(self) :
algo = TradingAlgorithm(initialize=self.initialize, handle_data=self.handle_data)
results = algo.run(self.data)
return results
class Portfolio1(AAA):
def average_weights(self, returns):
n = len(returns)
wt = opt.matrix(1.0/n, (n, 1))
return np.asarray(wt)
def initialize(self, context):
add_history(30, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.005))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices = history(30, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights = self.average_weights(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
results = Portfolio1(dp.dropna()).run_trading()
ret_ports[1] = results.portfolio_value
# ## Portfolio 2: Equal Volatility Weight, Rebalanced Monthly
# Uses the same sector ETFs. Each asset is scaled to a 1% daily vol target (60-day moving vol), and no asset's weight exceeds its equal-weight share of the portfolio.
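The weighting rule can be sketched in isolation. This is a hedged NumPy version of Portfolio2's `vol_weighting`, assuming returns shaped (n_assets, n_days) and synthetic data in place of the ETF returns.

```python
import numpy as np

# Standalone version of the weighting: scale each asset to a 1% daily-vol
# target, capped at the equal weight 1/n. returns: (n_assets, n_days).
def vol_weights(returns, vol_target=0.01):
    cap = 1.0 / len(returns)
    wt = vol_target / np.std(returns, axis=1) * cap
    return np.minimum(wt, cap)

rng = np.random.default_rng(1)
daily_vols = np.array([0.005, 0.01, 0.02, 0.04])
rets = rng.standard_normal((4, 250)) * daily_vols[:, None]
w = vol_weights(rets)
print(np.round(w, 3))  # low-vol assets hit the 25% cap; high-vol assets get less
```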
class Portfolio2(AAA):
def vol_weighting(self, returns):
n = len(returns)
weight_target = 1.0 / n
vol = np.std(returns, axis=1)
vol_target = 0.01 # daily vol 1% target
wt = vol_target / vol * weight_target
wt[wt > weight_target] = weight_target
return np.asarray(wt)
def initialize(self, context):
add_history(60, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.0))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices = history(60, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights = self.vol_weighting(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
results = Portfolio2(dp.dropna()).run_trading()
ret_ports[2] = results.portfolio_value
ret_ports.plot(figsize=[20, 12])
# ## Portfolio 3: Top 5 in 6 month momentum, each position = 20%
#
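The selection rule amounts to ranking assets by cumulative return and giving 20% to each one above the median, roughly the top half. A small standalone sketch with synthetic returns (not the ETF data):

```python
import numpy as np

# Standalone version of Portfolio3's rule: 20% in every asset whose cumulative
# return beats the median. returns: (n_assets, n_days).
def momentum_weights(returns):
    mom = returns.sum(axis=1)
    return (mom > np.median(mom)) * 0.2

rng = np.random.default_rng(2)
rets = rng.standard_normal((9, 120)) * 0.01
w = momentum_weights(rets)
print(w.sum())  # with 9 assets, 4 beat the strict median -> total weight 0.8
```

Note that with an odd asset count the strict `>` keeps slightly fewer than half the assets, so part of the portfolio stays in cash.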
class Portfolio3(AAA) :
def MOM(self, returns):
mom = returns.sum(axis=1)
wt = (mom > np.median(mom)) * 0.2
return np.asarray(wt)
def initialize(self, context):
add_history(120, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.0))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices = history(120, '1d', 'price').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights = self.MOM(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
results = Portfolio3(dp.dropna()).run_trading()
ret_ports[3] = results.portfolio_value
ret_ports.plot(figsize=[20,12])
# ## Portfolio 4: Top 1/2 Momentum, with equal vol weighting
class Portfolio4(AAA) :
def initialize(self, context):
add_history(120, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.0))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices_6m = history(120, '1d', 'price').dropna()
returns_6m = prices_6m.pct_change().dropna()
prices_60d = history(60, '1d', 'price').dropna()
returns_60d = prices_60d.pct_change().dropna()
try:
# Get the strongest 5 in momentum
mom = returns_6m.T.sum(axis=1)
selected = (mom > np.median(mom)) * 1
# 60 days volatility
vol = np.std(returns_60d.T, axis=1)
vol_target = 0.01
wt = vol_target / vol * 0.2
wt[wt > 0.2] = 0.2
#
weights = wt * selected
# Rebalance portfolio accordingly
for stock, weight in zip(prices_60d.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
results = Portfolio4(dp.dropna()).run_trading()
ret_ports[4] = results.portfolio_value
ret_ports.plot(figsize=[20,12])
# ## Portfolio 5: Minimize Vol
# Construct the portfolio with the minimal volatility.
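For reference, the unconstrained minimum-variance portfolio has the closed form w proportional to inv(Sigma) @ 1. The NumPy sketch below (synthetic returns, long/short allowed) can serve as a cross-check for the cvxpy problem defined below, which additionally imposes w >= 0 and a return term.

```python
import numpy as np

# Closed-form minimum-variance weights, w proportional to inv(Sigma) @ 1.
# Unconstrained (weights may go negative), unlike the cvxpy formulation.
def min_var_weights(returns):
    Sigma = np.cov(returns)                  # returns: (n_assets, n_days)
    w = np.linalg.solve(Sigma, np.ones(len(Sigma)))
    return w / w.sum()

rng = np.random.default_rng(3)
rets = rng.standard_normal((5, 250)) * 0.01
w = min_var_weights(rets)
Sigma = np.cov(rets)
eq = np.ones(5) / 5
print(w @ Sigma @ w <= eq @ Sigma @ eq)  # True: no sum-1 portfolio has lower variance
```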
# +
from cvxpy import *
class Portfolio5(AAA) :
def minimize_vol(self,returns):
n = len(returns)
w = Variable(n)
gamma = Parameter(sign='positive')
mu = returns.mean(axis=1)
ret = np.array(mu)[np.newaxis] * w
Sigma = np.cov(returns)
risk = quad_form(w, Sigma)
prob = Problem(Maximize(ret - 200*risk), [sum_entries(w)==1, w >=0])
prob.solve()
#print w.value.T * Sigma * w.value
return np.asarray(w.value)
def initialize(self, context):
add_history(120, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.0))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick < 120 :
return
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices_6m = history(120, '1d', 'price').dropna()
returns_6m = prices_6m.pct_change().dropna()
prices_60d = history(60, '1d', 'price').dropna()
returns_60d = prices_60d.pct_change().dropna()
try:
# Get the strongest 5 in momentum
mom = returns_6m.T.sum(axis=1)
#selected_indices = mom[mom>0].order().tail(len(mom) /2).index
selected_indices = mom.index
#selected_indices = mom[mom > 0 ].index
selected_returns = returns_60d[selected_indices]
weights = self.minimize_vol(selected_returns.T)
# weights = minimize_vol(returns_60d.T)
# Rebalance portfolio accordingly
for stock, weight in zip(selected_returns.columns, weights):
order_target_percent(stock, weight)
        except ValueError:
            # Sometimes this error is thrown:
            # ValueError: Rank(A) < p or Rank([P; A; G]) < n
            pass
# -
results = Portfolio5(dp.dropna()).run_trading()
ret_ports[5] = results.portfolio_value
ret_ports.plot(figsize=[20,10])
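# `minimize_vol` above trades mean return off against variance using the pre-1.0 cvxpy API. For the pure global minimum-variance portfolio (no return term, no long-only constraint) there is also a closed form, w proportional to Sigma^-1 * 1, which is handy for sanity-checking the optimiser. A sketch with an assumed two-asset covariance matrix:

```python
import numpy as np

# Closed-form global minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1).
# The covariance matrix here is assumed purely for illustration.
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
ones = np.ones(2)
w = np.linalg.solve(Sigma, ones)
w = w / w.sum()
print(w)  # the lower-variance asset gets the larger weight
```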
# ## Portfolio 6: Modified Momentum
#
class Portfolio6(AAA) :
def minimize_vol(self,returns):
n = len(returns)
w = Variable(n)
gamma = Parameter(sign='positive')
mu = returns.mean(axis=1)
ret = np.array(mu)[np.newaxis] * w
Sigma = np.cov(returns)
risk = quad_form(w, Sigma)
prob = Problem(Maximize(ret - 200*risk), [sum_entries(w)==1, w >=0])
prob.solve()
#print w.value.T * Sigma * w.value
return np.asarray(w.value)
def initialize(self, context):
add_history(300, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.0))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices_3m = history(60, '1d', 'price').dropna()
prices_6m = history(120, '1d', 'price').dropna()
prices_12m = history(240, '1d', 'price').dropna()
prices_60d = history(60, '1d', 'price').dropna()
returns_3m = prices_3m.pct_change().dropna()
returns_6m = prices_6m.pct_change().dropna()
returns_12m = prices_12m.pct_change().dropna()
returns_60d = prices_60d.pct_change().dropna()
try:
            # Combine 3-, 6- and 12-month cross-sectional momentum ranks
rank_3m = returns_3m.T.sum(axis=1).rank(ascending=True)
rank_6m = returns_6m.T.sum(axis=1).rank(ascending=True)
rank_12m = returns_12m.T.sum(axis=1).rank(ascending=True)
mom = rank_3m + rank_6m + rank_12m
selected = (mom > np.median(mom)) * 1
# 60 days volatility
vol = np.std(returns_60d.T, axis=1)
vol_target = 0.01
wt = vol_target / vol * 0.2
wt[wt > 0.2] = 0.2
#
weights = wt * selected
# Rebalance portfolio accordingly
for stock, weight in zip(prices_60d.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
returns = Portfolio6(dp.dropna()).run_trading()
ret_ports[6] = returns.portfolio_value
ret_ports.plot(figsize=[20,12])
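# Portfolio 6 combines cross-sectional ranks of 3-, 6- and 12-month returns before applying the median filter. A toy version of that selection step with three assets and made-up returns:

```python
import pandas as pd

# Hypothetical cumulative returns over three lookback windows.
rets_3m  = pd.Series({'A': 0.05, 'B': -0.02, 'C': 0.10})
rets_6m  = pd.Series({'A': 0.12, 'B': 0.01,  'C': 0.08})
rets_12m = pd.Series({'A': 0.20, 'B': -0.05, 'C': 0.30})
# Higher rank = stronger momentum in each window; sum the three ranks.
mom = rets_3m.rank() + rets_6m.rank() + rets_12m.rank()
selected = (mom > mom.median()) * 1
print(selected)
```

# Note the strict `>` excludes the median-ranked asset itself when the universe has an odd number of names.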
# ## Portfolio 7: Using a Moving Average as the Filter
class Portfolio7(AAA) :
def minimize_vol(self,returns):
n = len(returns)
w = Variable(n)
gamma = Parameter(sign='positive')
mu = returns.mean(axis=1)
ret = np.array(mu)[np.newaxis] * w
Sigma = np.cov(returns)
risk = quad_form(w, Sigma)
prob = Problem(Maximize(ret - 200*risk), [sum_entries(w)==1, w >=0])
prob.solve()
#print w.value.T * Sigma * w.value
return np.asarray(w.value)
def initialize(self, context):
add_history(200, '1d', 'price')
set_slippage(slippage.FixedSlippage(spread=0.0))
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
def handle_data(self, context, data):
rebalance_period = 20
context.tick += 1
if context.tick % rebalance_period != 0:
return
# Get rolling window of past prices and compute returns
prices_1m = history(1, '1M', 'price').dropna()
prices_10m = history(10, '1M', 'price').dropna()
prices_200d = history(200, '1d', 'price').dropna()
prices_60d = history(60, '1d', 'price').dropna()
returns_60d = prices_60d.pct_change().dropna()
try:
            # Keep assets whose 1-month average price is below their 10-month moving average
            ma = np.mean(prices_10m)
            pr = np.mean(prices_1m)
            selected_indices = pr[pr < ma].index
selected_returns = returns_60d[selected_indices]
            if len(selected_indices) <= 1:
return
weights = self.minimize_vol(selected_returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(selected_returns.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
ret_ports[7] = Portfolio7(dp.dropna()).run_trading().portfolio_value
ret_ports.plot(figsize=[20, 10])
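# The filter in Portfolio 7's `handle_data` keeps assets whose recent 1-month average price is *below* their 10-month moving average, which is the opposite of the usual trend-following rule; worth double-checking whether `<` was intended. A minimal pure-Python sketch of the comparison (price histories are made up):

```python
# Hypothetical monthly price histories (most recent last).
prices_10m = {'A': [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
              'B': [20, 19, 18, 17, 16, 15, 14, 13, 12, 11]}
selected = []
for ticker, hist in prices_10m.items():
    ma_10m = sum(hist) / len(hist)   # 10-month moving average
    last = hist[-1]                  # stand-in for the 1-month price
    if last < ma_10m:                # same direction as `pr < ma` above
        selected.append(ticker)
print(selected)  # only the downtrending asset passes this filter
```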
ret_ports_ret = ret_ports.pct_change()
ret_str = ret_ports_ret * (ret_ports > ret_ports.shift(200)*1.07).shift(1)
(1+ret_str).cumprod().plot(figsize=[20,12])
ret_ports.plot(figsize=[20,12])
# +
prices = dp[:, :,'price']
df_m = prices
ticker_bm = 'GOOG/NYSEARCA_TLH'
df_benchmark = df_m[[ticker_bm]]
df_ret_12m = df_m / df_m.shift(240) - 1
df_ret_bm_12 = df_benchmark / df_benchmark.shift(240) - 1
df_ret = df_m.pct_change()
# -
df_str = (df_ret_12m[1] > df_ret_12m[ticker_bm]).shift(1) * df_ret_12m[[1]]
# +
df_str = (df_ret_12m > df_ret_12m[ticker_bm]).shift(1) * df_ret_12m
df_str1 = (df_ret_12m < df_ret_12m[ticker_bm]).shift(1) * df_ret_bm_12
#(1+df_str).cumprod().plot(figsize=[20,12])
(1+df_ret).cumprod().plot(figsize=[20,12])
# -
(1+df_str).cumprod().plot(figsize=[20,12])
df_str
selected_indices = prices_1d[prices_1d > ma].index
selected_rets = returns[selected_indices]
selected_rets
# +
returns = df_rets
price_1d = data.tail(1)
rank_3m = returns_3m.T.sum(axis=1).rank(ascending=True)
rank_6m = returns_6m.T.sum(axis=1).rank(ascending=True)
rank_12m = returns_12m.T.sum(axis=1).rank(ascending=True)
mom = rank_3m + rank_6m + rank_12m
selected = (mom > np.median(mom)) * 1
# 60 days volatility
vol = np.std(returns_60d.T, axis=1)
vol_target = 0.01
wt = vol_target / vol * 0.2
wt[wt > 0.2] = 0.2
#
weights = wt * selected
# -
rank_3m = df_rets.T.sum(axis=1).rank(ascending=False)
rank_6m = df_rets.T.sum(axis=1).rank(ascending=False)
rank_12m = df_rets.T.sum(axis=1).rank(ascending=False)
mom = rank_3m + rank_6m + rank_12m
selected = (mom > np.median(mom)) * 1
selected
# +
df_test = pd.DataFrame()
df_test['nav'] = ret_ports[4]
df_test['ma'] = pd.rolling_median(df_test['nav'], 20)  # .rolling(20).median() in pandas >= 0.18
df_test['signal'] = (df_test['nav'] > df_test['ma']) * 1
df_test['ret'] = df_test['nav'].pct_change()
df_test['ret_st'] = df_test['ret'] * df_test['signal'].shift(1)
(1+df_test[['ret_st', 'ret']]).cumprod().plot(figsize=[20, 12])
# -
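# The cell above gates the strategy's returns on its NAV being above a 20-day rolling median. With pandas >= 0.18 the deprecated `pd.rolling_median` is spelled `.rolling(...).median()`; a small sketch of the same trend filter on a toy NAV series (window shortened to 3 for illustration):

```python
import pandas as pd

nav = pd.Series([100, 102, 101, 105, 107, 104, 110, 112])
ma = nav.rolling(3).median()
signal = (nav > ma).astype(int)              # 1 = invested, 0 = flat
strat_ret = nav.pct_change() * signal.shift(1)
print(signal.tolist())
```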
# ### Micai (弥财) Allocation
# Let's now try the handful of ETFs selected by Micai and see whether the same approach can build a portfolio with higher returns.
# +
from Data.TimeSeries import *
micai_tickers = ['GOOG/NYSE_VWO',
'GOOG/NYSE_VPL',
'GOOG/NYSE_VGK',
'GOOG/NYSE_VTI',
'GOOG/NYSE_IYR']
data = TimeSeries(Settings()).get_agg_data(micai_tickers)
data = data.fillna(method='pad', axis=0)
data = data.fillna(method='bfill', axis=0)
data = data.dropna()
# -
data
# +
algo = TradingAlgorithm(initialize=initialize,
handle_data=handle_data)
# Run algorithm
results = algo.run(data)
results.portfolio_value.plot(figsize=[20,10])
algo2.run(data).portfolio_value.plot(figsize=[20,10])
algo4.run(data).portfolio_value.plot(figsize=[20,10])
# -
# # Correlation Checking
# +
def avg_corr(returns, size=20):
    corrs = pd.rolling_corr(returns, size)  # was hardcoded to 100, ignoring the size argument
corrs = corrs.dropna()
n = corrs.shape[1]
avg_corr = (corrs.sum().sum() - n)/(n*(n-1))
return avg_corr
df = pd.DataFrame()
df['ret'] = df_rets['GOOG/NYSE_SPY']
df['nav'] = (1+df['ret']).cumprod()
df['avg_corr'] = avg_corr(df_rets, 60)
df['ma_corr'] = pd.rolling_mean(df['avg_corr'], 30)
df['vol'] = pd.rolling_std(df['ret'], 60)*10
df['ma_vol'] = pd.rolling_mean(df['vol'], 20)
df = df.dropna()
# -
df[['nav', 'avg_corr', 'ma_corr', 'vol', 'ma_vol']].plot(figsize=[20,12])
df[['nav', 'avg_corr', 'vol']].pct_change().corr()
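# `avg_corr` averages the off-diagonal entries of a correlation matrix: the `- n` removes the n unit terms on the diagonal, and `n*(n-1)` counts the remaining entries. A standalone check of that formula on a full-sample correlation matrix built from synthetic data:

```python
import numpy as np

# Synthetic return matrix: 500 days x 4 independent assets.
rng = np.random.default_rng(0)
rets = rng.normal(size=(500, 4))
corr = np.corrcoef(rets, rowvar=False)
n = corr.shape[0]
avg = (corr.sum() - n) / (n * (n - 1))  # mean of the off-diagonal correlations
print(round(avg, 4))
```

# With independent columns the average pairwise correlation should be close to zero.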
avg_corr = pd.DataFrame(avg_corr)
avg_corr[avg_corr < pd.rolling_median(avg_corr, 20)].plot()
avg_corr[avg_corr > pd.rolling_median(avg_corr, 20)].plot()
# +
df_rets_low_corr = df_rets['GOOG/NYSE_SPY'] * (avg_corr < pd.rolling_median(avg_corr, 20))
df_rets_high_corr = df_rets['GOOG/NYSE_SPY'] * (avg_corr > pd.rolling_median(avg_corr, 20))
(1+df_rets_low_corr).cumprod().plot(figsize=[20,12])
(1+df_rets_high_corr).cumprod().plot(figsize=[20,12])
# -
avg_corr < pd.rolling_median(avg_corr, 20)
(1+df_rets.loc[avg_corr < avg_corr.median(), 'GOOG/NYSE_SPY']).cumprod().plot(figsize=[20, 10])
# +
# returns_6m = df_rets[-120::]
returns_60d = df_rets[-60::]
mu = returns_60d.T.mean(axis=1)
print(mu)
mom = returns_6m.T.sum(axis=1)
selected_indices = mom.sort_values().tail(len(mom) // 2).index  # .order() was removed from pandas
selected_returns = returns_60d[selected_indices]
# 60 days volatility
returns = selected_returns.T
n = len(returns)
w = Variable(n)
gamma = Parameter(sign='positive')
mu = returns.mean(axis=1)
ret = np.array(mu)[np.newaxis] * w
Sigma = np.cov(returns)
risk = quad_form(w, Sigma)
prob = Problem(Maximize(ret - 2*risk), [sum_entries(w)==1, w >=0])
prob.solve()
print(w.value.T * Sigma * w.value)
# -
np.array(mu)[np.newaxis]
rets = (1+np.dot(selected_returns_60, weights)).cumprod()
pd.DataFrame(rets).plot()
a = np.array(df_rets) * np.array(selected)[np.newaxis]
# +
returns = df_ret[3000:3500].T
n = len(returns)
returns = np.asmatrix(returns)
# # Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(0.0, (n, 1))
# # Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
b = np.array(selected)
c = np.array(range(10)) * (selected != 0)
d = c[c != 0]
A = np.eye(10)
A = A[d, :]
A = np.vstack([np.ones((1, n)), A])
A = opt.matrix(A)
b = np.zeros((len(d), 1))
b = np.vstack([1.0, b])
b = opt.matrix(b)
np.linalg.matrix_rank(A)  # np.rank was removed from NumPy
wt = solvers.qp(S, -pbar, G, h, A, b)['x']
# print wt
# return np.asarray(wt)
# -
print(wt)
print(b)
np.ones((5, 1))
def optimal_portfolio(returns):
n = len(returns)
returns = np.asmatrix(returns)
N = 100
mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
# Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# Calculate efficient frontier weights using quadratic programming
portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x'] for mu in mus]
## CALCULATE RISKS AND RETURNS FOR FRONTIER
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
m1 = np.polyfit(returns, risks, 2)
x1 = np.sqrt(m1[2] / m1[0])
# CALCULATE THE OPTIMAL PORTFOLIO
wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
return np.asarray(wt), returns, risks
# +
returns = df_ret[3000:3500].T
n = len(returns)
returns = np.asmatrix(returns)
N = 100
mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
# Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# Calculate efficient frontier weights using quadratic programming
portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x'] for mu in mus]
## CALCULATE RISKS AND RETURNS FOR FRONTIER
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
m1 = np.polyfit(returns, risks, 2)
x1 = np.sqrt(m1[2] / m1[0])
# CALCULATE THE OPTIMAL PORTFOLIO
wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
# -
solvers.qp(mus[1]*S, -pbar, G, h, A, b)['x']
n
# Source: Trading_Strategies/ETF/Adaptive Asset Allocation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Basics
# ## Whitespace Is Important
# +
listOfNumbers = [1, 2, 3, 4, 5, 6]
for number in listOfNumbers:
if (number % 2 == 0):
print(number, "is even")
else:
print(number, "is odd")
print("All done.")
# -
# ## Importing Modules
# +
import numpy as np
A = np.random.normal(25.0, 5.0, 10)
print(A)
# -
# ## Lists
x = [1, 2, 3, 4, 5, 6]
print(len(x))
x[:3]
x[3:]
x[-2:]
x.extend([7,8])
x
x.append(9)
x
y = [10, 11, 12]
listOfLists = [x, y]
listOfLists
y[1]
z = [3, 2, 1]
z.sort()
z
z.sort(reverse=True)
z
# ## Tuples
#Tuples are just immutable lists. Use () instead of []
x = (1, 2, 3)
len(x)
y = (4, 5, 6)
y[2]
listOfTuples = [x, y]
listOfTuples
(age, income) = "32,120000".split(',')
print(age)
print(income)
# ## Dictionaries
# +
# Like a map or hash table in other languages
captains = {}
captains["Enterprise"] = "Kirk"
captains["Enterprise D"] = "Picard"
captains["Deep Space Nine"] = "Sisko"
captains["Voyager"] = "Janeway"
print(captains["Voyager"])
# -
print(captains.get("Enterprise"))
print(captains.get("NX-01"))
for ship in captains:
print(ship + ": " + captains[ship])
# ## Functions
# +
def SquareIt(x):
return x * x
print(SquareIt(2))
# +
#You can pass functions around as parameters
def DoSomething(f, x):
return f(x)
print(DoSomething(SquareIt, 3))
# -
#Lambda functions let you inline simple functions
print(DoSomething(lambda x: x * x * x, 3))
# ## Boolean Expressions
print(1 == 3)
print(True or False)
print(1 == 3)
if 1 == 3:
print("How did that happen?")
elif 1 > 3:
print("Yikes")
else:
print("All is well with the world")
# ## Looping
for x in range(10):
print(x)
for x in range(10):
    if (x == 1):
continue
if (x > 5):
break
print(x)
x = 0
while (x < 10):
print(x)
x += 1
# ## Activity
# Write some code that creates a list of integers, loops through each element of the list, and only prints out even numbers!
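# One possible solution (of many):

```python
# Build a list of integers, then print only the even ones.
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
evens = []
for number in numbers:
    if number % 2 == 0:
        print(number)
        evens.append(number)
```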
# Source: surprise/source_code/IntroToPython/Python101.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3 ELA
# language: python
# name: ela
# ---
# + init_cell=true
import os
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import rasterio
from rasterio.plot import show
import geopandas as gpd
import pickle
# -
# Set to True only when co-developing ela from this use case:
ela_from_source = True
if ela_from_source:
if ('ELA_SRC' in os.environ):
root_src_dir = os.environ['ELA_SRC']
elif sys.platform == 'win32':
root_src_dir = r'C:\src\github_jm\pyela'
else:
username = os.environ['USER']
root_src_dir = os.path.join('/home', username, 'src/ela/pyela')
pkg_src_dir = root_src_dir
sys.path.insert(0, pkg_src_dir)
from ela.textproc import *
from ela.utils import *
from ela.classification import *
from ela.visual import *
from ela.visual3d import *
from ela.spatial import SliceOperation
# ## Importing data
#
data_path = None
# You probably want to explicitly set `data_path` to the location where you put the folder(s), e.g.:
# +
#data_path = '/home/myusername/data' # On Linux, if you now have the folder /home/myusername/data/Bungendore
#data_path = r'C:\data\Lithology' # windows, if you have C:\data\Lithology\Bungendore
# -
# Otherwise, fall back to defaults for the pyela developer(s):
if data_path is None:
if ('ELA_DATA' in os.environ):
data_path = os.environ['ELA_DATA']
elif sys.platform == 'win32':
data_path = r'C:\data\Lithology'
else:
username = os.environ['USER']
data_path = os.path.join('/home', username, 'data')
data_path
aem_datadir = os.path.join(data_path, 'AEM')
swan_datadir = os.path.join(data_path, 'swan_coastal')
scp_datadir = os.path.join(aem_datadir, 'Swan_coastal_plains')
scp_grids_datadir = os.path.join(scp_datadir, 'grids')
ngis_datadir = os.path.join(data_path, 'NGIS')
scp_shp_datadir = os.path.join(data_path, 'NGIS/swan_coastal')
# ## Reload processed data
#
dem = rasterio.open(os.path.join(swan_datadir,'Swan_DEM/CLIP.tif'))
cnd_slice_dir = os.path.join(scp_grids_datadir,'cnd')
cnd_000_005 = rasterio.open(os.path.join(cnd_slice_dir,'Swan_Coastal_Plain_CND_000m_to_005m_Final.ers'))
_, ax = plt.subplots(figsize=(12, 12))
show(cnd_000_005,title='Conductivity 0-5 metres depth (units?)', cmap='viridis', ax=ax)
band_0 = cnd_000_005.read(1)
type(band_0), band_0.shape, band_0[5,6]
fig, ax = plt.subplots(figsize=(12, 12))
show(cnd_000_005,title='Conductivity 0-5 metres depth (units?)', cmap='magma', ax=ax)
bore_locations_raw = gpd.read_file(os.path.join(scp_shp_datadir, 'scp.shp'))
# The DEM raster and the bore location shapefile do not use the same projection (coordinate reference system) so we reproject one of them. We choose the raster's UTM.
bore_locations = bore_locations_raw.to_crs(dem.crs)
# ### Subset to the location of interest
#
# The lithology logs cover all of Western Australia, which is much larger than the area of interest for which we have borehole geolocations. We subset to the location of interest.
# +
DEPTH_FROM_COL = 'FromDepth'
DEPTH_TO_COL = 'ToDepth'
TOP_ELEV_COL = 'TopElev'
BOTTOM_ELEV_COL = 'BottomElev'
LITHO_DESC_COL = 'Description'
HYDRO_CODE_COL = 'HydroCode'
# -
# to be reused in experimental notebooks:
interp_litho_filename = os.path.join(swan_datadir,'3d_primary_litho.pkl')
with open(interp_litho_filename, 'rb') as handle:
lithology_3d_array = pickle.load(handle)
lithologies = ['sand', 'clay','quartz','shale','sandstone', 'coal','pebbles','silt','pyrite','grit','limestone']
# And to capture any of these we devise a regular expression:
any_litho_markers_re = r'sand|clay|quart|ston|shale|silt|pebb|coal|pyr|grit|lime'
regex = re.compile(any_litho_markers_re)
my_lithologies_numclasses = create_numeric_classes(lithologies)
lithologies_dict = dict([(x,x) for x in lithologies])
lithologies_dict['sands'] = 'sand'
lithologies_dict['clays'] = 'clay'
lithologies_dict['shales'] = 'shale'
lithologies_dict['claystone'] = 'clay'
lithologies_dict['siltstone'] = 'silt'
lithologies_dict['limesand'] = 'sand' # ??
lithologies_dict['calcarenite'] = 'limestone' # ??
lithologies_dict['calcitareous'] = 'limestone' # ??
lithologies_dict['mudstone'] = 'silt' # ??
lithologies_dict['capstone'] = 'limestone' # ??
lithologies_dict['ironstone'] = 'sandstone' # ??
#lithologies_dict['topsoil'] = 'soil' # ??
lithologies_adjective_dict = {
'sandy' : 'sand',
'clayey' : 'clay',
'clayish' : 'clay',
'shaley' : 'shale',
'silty' : 'silt',
'pebbly' : 'pebble',
'gravelly' : 'gravel'
}
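# To illustrate how these lookups can be combined, here is a small hypothetical helper (not part of ela's API) that normalises tokens with a plural map like the one above and then matches the lithology marker regex:

```python
import re

any_litho_markers_re = r'sand|clay|quart|ston|shale|silt|pebb|coal|pyr|grit|lime'

def primary_litho(description, plural_map):
    """Return the first normalised token matching a lithology marker, else None.

    Hypothetical sketch; ela's own classification pipeline is more involved.
    """
    for token in description.lower().split():
        token = plural_map.get(token, token)  # e.g. 'clays' -> 'clay'
        if re.search(any_litho_markers_re, token):
            return token
    return None

plural_map = {'clays': 'clay', 'sands': 'sand', 'shales': 'shale'}
print(primary_litho('grey CLAYS over limestone', plural_map))  # -> clay
```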
# +
fp = os.path.join(swan_datadir, 'dem_array_data.pkl')
with open(fp, 'rb') as handle:
dem_array_data = pickle.load(handle)
# -
# ## 2D visualisations
lithology_color_names = ['yellow', 'olive', 'lightgrey', 'dimgray', 'teal', 'cornsilk', 'saddlebrown', 'rosybrown', 'chocolate', 'lightslategrey', 'gold']
lithology_cmap = discrete_classes_colormap(lithology_color_names) # Later for exporting to RGB geotiffs??
litho_legend_display_info = [(lithology_cmap[i], lithologies[i], lithology_color_names[i]) for i in range(len(lithologies))]
litho_legend = legend_fig(litho_legend_display_info)
cms = cartopy_color_settings(lithology_color_names)
dem_array_data.keys()
# +
ahd_min=-180
ahd_max=50
z_ahd_coords = np.arange(ahd_min,ahd_max,1)
dim_x,dim_y,dim_z = lithology_3d_array.shape
dims = (dim_x,dim_y,dim_z)
# -
# Burn DEM into grid
z_index_for_ahd = z_index_for_ahd_functor(b=-ahd_min)
fig, ax = plt.subplots(figsize=(12, 12))
imgplot = plt.imshow(to_carto(lithology_3d_array[:, :, z_index_for_ahd(0)]), cmap=cms['cmap'])
title = plt.title('Primary litho at +0mAHD')
# ## 3D visualisation
from ela.visual3d import *
from mayavi import mlab
vis_litho = LithologiesClassesVisual3d(lithologies, lithology_color_names, 'black')
vis_litho.render_classes_planar(lithology_3d_array, 'Primary lithology')
# ela has facilities to visualise overlaid information: DEM, classified bore logs, and volumes of interpolated lithologies. This is important to convey.
#
# First a bit of data filling for visual purposes, as NaN lithology class codes may cause issues.
# +
classified_logs_filename = os.path.join(swan_datadir, 'classified_logs.pkl')
with open(classified_logs_filename, 'rb') as handle:
df = pickle.load(handle)
# -
df_infilled = df.fillna({PRIMARY_LITHO_NUM_COL: -1.0})
# df_2 = df_1[(df_1[DEPTH_TO_AHD_COL] > (ahd_min-20))]
# A factor to apply to Z coordinates, otherwise things would look squashed along the vertical axis.
# A visual-only scaling factor would be preferable, but there seems to be no way to do that.
Z_SCALING = 20.0
z_coords = np.arange(ahd_min,ahd_max,1)
overlay_vis_litho = LithologiesClassesOverlayVisual3d(lithologies, lithology_color_names, 'black', dem_array_data, z_coords, Z_SCALING, df_infilled, PRIMARY_LITHO_NUM_COL)
def view_class(value):
f = overlay_vis_litho.view_overlay(value, lithology_3d_array)
return f
f = view_class(1.0)
f = view_class(2.0)
# 
vis_litho = LithologiesClassesVisual3d(lithologies, lithology_color_names, 'black')
vis_litho.render_classes_planar(lithology_3d_array, 'Primary lithology')
# Source: case_studies/swan_coastal_plain/scp_vis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# not sure which is better to clean strings
import csv
from string import punctuation, digits
from nltk.corpus import stopwords
from collections import Counter
imdb_path = './data/imdb_labelled.txt'
amzn_path = './data/amazon_cells_labelled.txt'
yelp_path = './data/yelp_labelled.txt'
# -
imdb_df = pd.read_csv(imdb_path, names=['review', 'label'], delimiter='\t', encoding='latin-1', header=None, quoting=csv.QUOTE_NONE, on_bad_lines='skip')  # error_bad_lines=False before pandas 1.3
print(imdb_df)
amzn_df = pd.read_csv(amzn_path, names=['review', 'label'], delimiter='\t', encoding='latin-1', header=None, quoting=csv.QUOTE_NONE, on_bad_lines='skip')
print(amzn_df)
yelp_df = pd.read_csv(yelp_path, names=['review', 'label'], delimiter='\t', encoding='latin-1', header=None, quoting=csv.QUOTE_NONE, on_bad_lines='skip')
print(yelp_df)
reviews = pd.concat([imdb_df, amzn_df, yelp_df])
print(reviews)
train_set, test_set = train_test_split(reviews, test_size=0.2, shuffle=True)
train_set['label'].value_counts().plot(kind='pie', legend=True, labels=['positive', 'negative'], autopct='%1.1f%%', fontsize=17, figsize=[7,7])
plt.title('Review Distribution in Training Set')
plt.show()
# +
all_words_counter = Counter()
pos_words_counter = Counter()
neg_words_counter = Counter()
pos_neg_ratios = Counter()
neg_pos_ratios = Counter()
extended_filter = stopwords.words('english') + ['arent', 'couldnt', 'didnt', 'doesnt', 'dont', 'hadnt', 'hasnt', 'havent', 'isnt', 'mightnt', 'mustnt', 'neednt', 'shant', 'shes', 'shouldnt', 'shouldve', 'thatll', 'wasnt', 'werent', 'wont', 'wouldnt', 'youd', 'youll', 'youre', 'youve']
for review, label in train_set.values:
words = [w for w in review.translate(str.maketrans('', '', punctuation + digits)).lower().split() if w.isalpha()]
for word in words:
if word not in extended_filter:
all_words_counter[word] += 1
(pos_words_counter if label == 1 else neg_words_counter)[word] += 1
num_top = 20
for word, _ in all_words_counter.most_common():
ratio = float(pos_words_counter[word]) / float(neg_words_counter[word] + 1.0)
pos_neg_ratios[word] = ratio
ratio = float(neg_words_counter[word]) / float(pos_words_counter[word] + 1.0)
neg_pos_ratios[word] = ratio
positive_words = [w for w, _ in pos_neg_ratios.most_common(num_top)]  # keep just the words: train() uses them as dict keys
negative_words = [w for w, _ in neg_pos_ratios.most_common(num_top)]
# -
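# The `+ 1.0` in each ratio denominator is a simple additive smoothing, so words never seen in the opposite class don't cause a division by zero. A tiny standalone version with made-up counts:

```python
from collections import Counter

pos_counts = Counter({'great': 30, 'slow': 2})   # hypothetical class counts
neg_counts = Counter({'great': 3, 'slow': 25})
pos_neg = {w: pos_counts[w] / (neg_counts[w] + 1.0) for w in pos_counts}
print(pos_neg)  # 'great' scores high (positive marker), 'slow' scores low
```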
positive_words
negative_words
# +
# reference https://github.com/shekhargulati/sentiment-analysis-python/tree/master/opinion-lexicon-English
positive_lexicon = None
negative_lexicon = None
with open('./data/positive.txt') as f:
lines = f.readlines()
positive_lexicon = [word.strip() for word in lines[35:]]
with open('./data/negative.txt') as f:
lines = f.readlines()
negative_lexicon = [word.strip() for word in lines[35:]]
print(len(positive_lexicon))
print(len(negative_lexicon))
positive_words += positive_lexicon
negative_words += negative_lexicon
# -
# ## Base Naive Bayes Classifier Model
class BetterNaiveBayesClassifier:
def __init__(self, dataset):
self.dataset = dataset
self.categories = dataset['label'].unique()
self.vocab = {}
self.prior = {}
self.likelihood_positive = {}
self.likelihood_negative = {}
def train(self, alpha=0):
review_by_categories = {c: self.dataset.loc[self.dataset['label'] == c] for c in self.categories}
self.prior = {c: len(review_by_categories[c]) / len(self.dataset) for c in self.categories}
for review in self.dataset['review']:
for word in self.get_words(review):
if word not in extended_filter:
if word not in self.vocab:
self.vocab[word] = 1
else:
self.vocab[word] += 1
# update likelihood
for word in self.vocab:
count_pos = 0
count_neg = 0
total_pos = 0
total_neg = 0
for review, sentiment in zip(self.dataset['review'], self.dataset['label']):
if sentiment == 1:
count_pos += review.count(word)
total_pos += len(self.get_words(review))
elif sentiment == 0:
count_neg += review.count(word)
total_neg += len(self.get_words(review))
self.likelihood_positive[word] = (count_pos + alpha) / (total_pos + alpha * len(self.vocab))
self.likelihood_negative[word] = (count_neg + alpha) / (total_neg + alpha * len(self.vocab))
for word in positive_words:
self.likelihood_positive[word] = 1
self.likelihood_negative[word] = 1e-5
for word in negative_words:
self.likelihood_positive[word] = 1e-5
self.likelihood_negative[word] = 1
def predict(self, reviews):
results = []
for review in reviews:
ppos = self.prior[1]
pneg = self.prior[0]
for word in self.get_words(review):
if word in self.vocab:
ppos *= self.likelihood_positive[word]
pneg *= self.likelihood_negative[word]
results.append(1 if ppos > pneg else 0)
return results
def get_words(self, review):
return [w for w in review.translate(str.maketrans('', '', punctuation + digits)).lower().split() if w.isalpha()]
bnbc = BetterNaiveBayesClassifier(train_set)
bnbc.train(alpha=1)
predictions = bnbc.predict(test_set['review'])
acc = accuracy_score(predictions, test_set['label'])
print(f'Accuracy of Better Naive Bayes is {acc*100.0:.3f}%')
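# One caveat with `predict` above: multiplying many likelihoods below 1 can underflow to 0.0 on long reviews, at which point the comparison silently defaults to the negative class. The standard remedy is to sum log-probabilities instead; a sketch with made-up likelihoods and equal priors:

```python
import math

# Hypothetical per-class word likelihoods.
likelihood_pos = {'love': 0.05, 'hate': 0.001}
likelihood_neg = {'love': 0.002, 'hate': 0.04}
log_prior = math.log(0.5)

def predict_log(words):
    lp, ln = log_prior, log_prior
    for w in words:
        if w in likelihood_pos:          # skip out-of-vocabulary words
            lp += math.log(likelihood_pos[w])
            ln += math.log(likelihood_neg[w])
    return 1 if lp > ln else 0

print(predict_log(['i', 'love', 'this']))  # -> 1
```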
# +
rev = ['I hate this', 'I love this', 'I hate that I love this']
preds = bnbc.predict(rev)
for r, p in zip(rev, preds):
sentiment = 'positive' if p == 1 else 'negative'
print(f'the review "{r}" is a {sentiment} review')
# -
# ## (More) Preprocessing
# +
from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
vectorizer = TfidfVectorizer(min_df = 5,
max_df = 0.8,
sublinear_tf = True,
use_idf = True)
def tokenizer(review):
    words = [w for w in review.translate(str.maketrans('', '', punctuation + digits)).lower().split() if w.isalpha()]
    stems = [stemmer.stem(word) for word in words]
    lemms = [lemmatizer.lemmatize(word) for word in stems]
    return lemms  # was `return words`, which discarded the stemming/lemmatisation above
my_vectorizer = TfidfVectorizer(use_idf=True,
encoding='latin-1',
strip_accents='unicode',
tokenizer=tokenizer,
stop_words=extended_filter)
# -
# ## Support Vector Machine
# +
from sklearn import svm
from sklearn.metrics import classification_report
train_vectors = vectorizer.fit_transform(train_set['review'])
test_vectors = vectorizer.transform(test_set['review'])
SVM_linear = svm.SVC(kernel='linear')
SVM_linear.fit(train_vectors, train_set['label'])
preds_svm_linear = SVM_linear.predict(test_vectors)
report = classification_report(test_set['label'], preds_svm_linear, output_dict=True)
print('positive: ', report['1'])
print('negative: ', report['0'])
# +
train_vectors = my_vectorizer.fit_transform(train_set['review'])
test_vectors = my_vectorizer.transform(test_set['review'])
SVM = svm.SVC(kernel='linear')
SVM.fit(train_vectors, train_set['label'])
preds_svm_linear = SVM.predict(test_vectors)
report = classification_report(test_set['label'], preds_svm_linear, output_dict=True)
print('positive: ', report['1'])
print('negative: ', report['0'])
# -
# ## Logistic Regression
# +
from sklearn.linear_model import LogisticRegression
train_vectors = my_vectorizer.fit_transform(train_set['review'])
test_vectors = my_vectorizer.transform(test_set['review'])
regressor = LogisticRegression(random_state = 0)
regressor.fit(train_vectors, train_set['label'])
preds_logis_regres = regressor.predict(test_vectors)
report = classification_report(test_set['label'], preds_logis_regres, output_dict=True)
print('positive: ', report['1'])
print('negative: ', report['0'])
# -
# ## SKLearn Naive Bayes Subclasses
from sklearn.naive_bayes import GaussianNB, MultinomialNB
# ### Gaussian Naive Bayes
# +
gnb = GaussianNB()
gnb.fit(train_vectors.toarray(), train_set['label'])
preds = gnb.predict(test_vectors.toarray())
report = classification_report(test_set['label'], preds, output_dict=True)
print('positive: ', report['1'])
print('negative: ', report['0'])
# +
mnb = MultinomialNB()
mnb.fit(train_vectors.toarray(), train_set['label'])
preds = mnb.predict(test_vectors.toarray())
report = classification_report(test_set['label'], preds, output_dict=True)
print('positive: ', report['1'])
print('negative: ', report['0'])
# -
# ## Live Demos
# +
from ipywidgets import interact, widgets
from IPython.display import display, Markdown, Latex, clear_output
text = widgets.Text(
value='',
placeholder='type your review here',
description='Review:',
disabled=False
)
display(text)
def callback(wdgt):
clear_output()
display(text)
review = wdgt.value
result = bnbc.predict([review]).pop()
result = 'positive' if result == 1 else 'negative'
display(Markdown(f'the review "{review}" is a {result} review.'))
text.on_submit(callback)
# -
# ## Pickling for Reuse
# +
import pickle
pickle.dump(bnbc, open('better_naive_bayes_classifier.pickle', 'wb'))
pickle.dump(SVM, open('SVM_80_acc.pickle', 'wb'))
# Source: assets/ipynb/5334-main-project.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ray Serve - Model Serving Challenges
#
# © 2019-2021, Anyscale. All Rights Reserved
#
# 
# ## The Challenges of Model Serving
#
# Model development happens in a data science research environment. There are many challenges, but also many tools at the data scientist's disposal.
#
# Model deployment to production faces an entirely different set of challenges and requires different tools, although it is desirable to bridge the divide as much as possible.
#
# Here is a partial list of the challenges of model serving:
#
# ### It Should Be Framework Agnostic
#
# Model serving frameworks must be able to serve models from popular systems like TensorFlow, PyTorch, scikit-learn, or even arbitrary Python functions. Even within the same organization, it is common to use several machine learning frameworks.
#
# Also, machine learning models are typically surrounded by lots of application logic. For example, some model serving is implemented as a RESTful service to which scoring requests are made. Often this is too restrictive, as some additional processing may be desired as part of the scoring process, and the performance overhead of remote calls may be suboptimal.
#
# ### Pure Python
#
# It has been common recently for model serving to be done using JVM-based systems, since many production enterprises are JVM-based. This is a disadvantage when model training and other data processing are done using Python tools only.
#
# In general, model serving should be intuitive for developers and simple to configure and run. Hence, it is desirable to use pure Python and to avoid verbose configurations using YAML files or other means.
#
# Data scientists and engineers use Python to develop their machine learning models, so they should also be able to use Python to deploy their machine learning applications. This need is growing more critical as online learning applications combine training and serving in the same applications.
#
# ### Simple and Scalable
#
# Model serving must be simple to scale on demand across many machines. It must also be easy to upgrade models dynamically, over time. Achieving production uptime and performance requirements are essential for success.
#
# ### DevOps Integrations
#
# Model serving deployments need to integrate with existing "DevOps" CI/CD practices for controlled, audited, and predictable releases. Patterns like [Canary Deployment](https://martinfowler.com/bliki/CanaryRelease.html) are particularly useful for testing the efficacy of a new model before replacing existing models, just as this pattern is useful for other software deployments.
#
# ### Flexible Deployment Patterns
#
# There are unique deployment patterns, too. For example, it should be easy to deploy a forest of models, to split traffic to different instances, and to score data in batches for greater efficiency.
#
# See also this [Ray blog post](https://medium.com/distributed-computing-with-ray/the-simplest-way-to-serve-your-nlp-model-in-production-with-pure-python-d42b6a97ad55) on the challenges of model serving and the way Ray Serve addresses them. It also provides an example of starting with a simple model, then deploying a more sophisticated model into the running application.
# ## Why Ray Serve?
#
# [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) is a scalable model-serving library built on [Ray](https://ray.io).
#
# For users, Ray Serve offers these benefits:
#
# * **Framework Agnostic**: You can use the same toolkit to serve everything from deep learning models built with [PyTorch](https://docs.ray.io/en/latest/serve/tutorials/pytorch.html#serve-pytorch-tutorial), [TensorFlow](https://docs.ray.io/en/latest/serve/tutorials/tensorflow.html#serve-tensorflow-tutorial), or [Keras](https://docs.ray.io/en/latest/serve/tutorials/tensorflow.html#serve-tensorflow-tutorial), to [scikit-learn](https://docs.ray.io/en/latest/serve/tutorials/sklearn.html#serve-sklearn-tutorial) models, to arbitrary business logic.
# * **Python First:** Configure your model serving with pure Python code. No YAML or JSON configurations required.
#
# As a library, Ray Serve enables the following:
#
# * [Splitting traffic between backends dynamically](https://docs.ray.io/en/latest/serve/advanced.html#serve-split-traffic) with zero downtime. This is accomplished by decoupling routing logic from response handling logic.
# * [Support for batching](https://docs.ray.io/en/latest/serve/advanced.html#serve-batching) to improve throughput and help you meet your performance objectives. You can also use the same model for batch and online processing.
# * Because Serve is a library, it's easy to integrate it with other tools in your environment, such as CI/CD.
#
# Since Serve is built on Ray, it also allows you to scale to many machines, in your datacenter or in cloud environments, and it allows you to leverage all of the other Ray frameworks.
# ## Two Simple Ray Serve Examples
#
# We'll explore a more detailed example in the next lesson, where we actually serve ML models. Here we explore how simple deployments are with Ray Serve! We will first use a function that does "scoring", sufficient for _stateless_ scenarios, then use a class, which enables _stateful_ scenarios.
#
# But first, initialize Ray as before:
# +
import ray
from ray import serve
import requests # for making web requests
# -
ray.init(ignore_reinit_error=True)
# Now we initialize Serve itself:
serve.init(name='serve-example-1') # Name for this Serve instance
# Next, define our stateless function for processing requests.
#
# Note that Serve leverages the [Flask API](https://flask.palletsprojects.com/en/1.1.x/api/), which is often familiar to developers, as it is a natural first approach for deploying models as RESTful services.
def echo(flask_request): # Uses the Flask API
return "hello " + flask_request.args.get("name", "serve!")
serve.create_backend("hello", echo)
serve.create_endpoint("hello", backend="hello", route="/hello")
for i in range(10):
response = requests.get(f"http://127.0.0.1:8000/hello?name=request_{i}").text
print(f'{i:2d}: {response}')
# You should see `hello request_N` in the output. Try making `requests.get()` invocations without the `?name=request_{i}` parameter. You should see `hello serve!`.
#
# We'll explain the concepts of _backends_ and _endpoints_ below.
#
# Now let's serve another "model" in the same service:
class Counter:
def __init__(self, initial_count = 0):
self.count = initial_count
def __call__(self, flask_request):
self.count += 1
return {"current_counter": self.count, "args": flask_request.args}
# When we create the _backend_, we can pass constructor arguments after the label and the name of the class:
serve.create_backend("counter", Counter, 0) # initial_count = 0
serve.create_endpoint("counter", backend="counter", route="/counter")
for i in range(10):
response = requests.get(f"http://127.0.0.1:8000/counter?i={i}").json()
print(f'{i:2d}: {response}')
# ## Exercise - Add Another New Backend and Endpoint
#
# Using either a function or a stateful class, add another _backend_ and _endpoint_, then try it out.
# ## Ray Serve Concepts
#
# Let's explain _backends_ and _endpoints_.
#
# For more details, see this [key concepts](https://docs.ray.io/en/latest/serve/key-concepts.html) documentation.
# ### Backends
#
# Backends define the implementation of your business logic or models that will handle requests when queries come in to _endpoints._
#
# To define a backend, first define the “handler” or business logic that will take requests and construct responses. Specifically, the handler should take as input a [Flask Request object](https://flask.palletsprojects.com/en/1.1.x/api/?highlight=request#flask.Request) and return any JSON-serializable object as output.
#
# Use a function when your response is _stateless_ and a class when your response is _stateful_ (although the class instances could be stateless, of course). Another advantage of using a class is the ability to specify constructor arguments in `serve.create_backend`, as was shown in the `counter` example above.
#
# Finally, a backend is defined using `serve.create_backend`, specifying a logical, unique name, and the handler.
# You can list all defined backends and delete them to reclaim resources. However, a backend cannot be deleted while it is in use by an endpoint, because then traffic to an endpoint could not be handled:
serve.create_backend("counter_toss", Counter, 0)
serve.list_backends()
serve.delete_backend("counter_toss")
serve.list_backends()
# ### Endpoints
#
# While a backend defines the request handling logic, an endpoint allows you to expose a backend via HTTP. Endpoints are “logical” and can have one or multiple backends that serve requests to them.
#
# To create an endpoint, you specify a name for the endpoint, the name of a backend to handle requests to the endpoint, and the route and list of HTTP methods (e.g., `[GET]`, which is the default) where it will be accessible. By default endpoints are serviced only by the backend provided to `serve.create_endpoint`, but in some cases you may want to specify multiple backends for an endpoint, e.g., for A/B testing or incremental rollout. For information on traffic splitting, please see [Splitting Traffic Between Backends](https://docs.ray.io/en/latest/serve/advanced.html#serve-split-traffic).
#
# Let's define a second endpoint for our `hello` backend, this one providing `POST` access. (We could have defined the original `hello` endpoint to support `POST` and `GET` using `methods = ['POST', 'GET']`.)
serve.create_endpoint("post_hello", backend="hello", route="/post_hello", methods=["POST"])
eds = serve.list_endpoints()
eds.keys(), eds
for i in range(10):
response = requests.post(f"http://127.0.0.1:8000/post_hello", data = {'name': f'request_{i}'}).text
print(f'{i:2d}: {response}')
# In this case, the data is not part of the `args` that our function expects to find, so the default `serve!` is returned.
# ### Splitting Traffic Between Backends
#
# There are [several more advanced customizations](https://docs.ray.io/en/latest/serve/advanced.html) you can do. Let's look at a common one that supports the model deployment patterns we discussed above: splitting traffic between different backends.
#
# We'll implement the [Canary Deployment](https://martinfowler.com/bliki/CanaryRelease.html) pattern for testing a new model.
#
# First, let's reuse our original `echo` function as the "old" backend and define a new backend function, `new_echo`, which will handle `POST` requests better. Recall that the `echo` backend was named `hello`.
def new_echo(flask_request): # Uses the Flask API
name = flask_request.args.get("name", None) or flask_request.form.get("name", "serve!")
return "hello " + name + " (new)"
serve.create_backend("new_hello", new_echo)
# Initially, set all traffic to be served by the "old" backend. Note that our endpoint handles both `GET` and `POST` requests.
serve.create_endpoint("canary_endpoint", backend="hello", route="/canary-test", methods=['GET', 'POST'])
# Normally, you would only direct 1% or less of the traffic to the new backend in a real Canary deployment, but we'll use 25% so we see a lot of "hits". Now define the traffic split between the two backends:
serve.set_traffic("canary_endpoint", {"hello": 0.75, "new_hello": 0.25})
for i in range(10):
gresponse = requests.get(f"http://127.0.0.1:8000/canary-test?name='request_{i}").text
presponse = requests.post(f"http://127.0.0.1:8000/canary-test", data = {'name': f'request_{i}'}).text
print(f'{i:2d}: GET: {gresponse:25s} POST: {presponse}')
# The "new" implementation correctly handles the data in `POST` requests, while the old implementation always returns `serve!` for them. Both implementations handle `GET` requests correctly.
#
# Now route all traffic to the new, better backend.
serve.set_traffic("canary_endpoint", {"new_hello": 1.0})
for i in range(10):
gresponse = requests.get(f"http://127.0.0.1:8000/canary-test?name='request_{i}").text
presponse = requests.post(f"http://127.0.0.1:8000/canary-test", data = {'name': f'request_{i}'}).text
print(f'{i:2d}: GET: {gresponse:25s} POST: {presponse}')
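# Conceptually, `set_traffic` treats the weight map as a probability
# distribution over backends. Here is a minimal pure-Python sketch of such
# weighted routing (illustrative only; this is not Ray Serve's actual
# implementation, which also has to route across processes and machines):

```python
import random

def make_router(weights, seed=None):
    """Return a function that picks a backend name according to `weights`,
    a dict mapping backend name to its fraction of traffic (summing to 1)."""
    names = list(weights)
    fractions = [weights[n] for n in names]
    rng = random.Random(seed)
    # Each call draws one backend name with the configured probabilities.
    return lambda: rng.choices(names, weights=fractions, k=1)[0]

route = make_router({"hello": 0.75, "new_hello": 0.25}, seed=42)
counts = {"hello": 0, "new_hello": 0}
for _ in range(10_000):
    counts[route()] += 1
print(counts)  # roughly 75% / 25% of requests per backend
```

Changing the split is then just swapping the weight map, which is essentially what `serve.set_traffic` lets you do without downtime.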
# By the way, notice that this example really has nothing to do with model serving, per se. We basically have a flexible framework for any kind of request-response serving, backed by the transparent ability to scale to a cluster with Ray.
# Recently, a new `shadow_traffic` method was added, `serve.shadow_traffic(endpoint, backend, fraction)`. It allows you to duplicate a fraction of the traffic to another backend. This is useful for sampling and for testing a new algorithm without affecting current handling.
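# The idea behind traffic shadowing can be sketched in a few lines of plain
# Python (again illustrative only; Ray Serve performs the duplication
# asynchronously inside the cluster rather than inline like this):

```python
import random

def with_shadow(handler, shadow_handler, fraction, seed=None):
    """Wrap `handler` so a random `fraction` of requests is also sent to
    `shadow_handler`; callers only ever see `handler`'s response."""
    rng = random.Random(seed)
    def wrapped(request):
        if rng.random() < fraction:
            shadow_handler(request)  # fire-and-forget in a real system
        return handler(request)
    return wrapped

mirror_log = []  # stands in for the backend under test
serve_request = with_shadow(lambda r: f"hello {r}", mirror_log.append, 0.3, seed=0)
responses = [serve_request(i) for i in range(1_000)]
print(len(mirror_log))  # about 300 of the 1,000 requests were mirrored
```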
# ## Serve is a Singleton in the Ray Cluster
#
# You may have noticed that when defining endpoints and backends, we called Serve API methods, not methods on a Serve _class instance_. Serve defaults to being a [singleton](https://en.wikipedia.org/wiki/Singleton_pattern) in the whole Ray cluster, not just the driver program. We passed a name argument to `serve.init()`, which creates a separate Ray actor internally, but the endpoints and backends defined are still global.
#
# This means that even when you terminate this notebook, our definitions above will persist! Hence, you need to clean up any endpoints and backends that are no longer needed or shut down Serve completely.
# If you don't want to shut down Serve but do want to remove everything currently defined, the following statements can be used:
eps = serve.list_endpoints()
for name in eps.keys():
serve.delete_endpoint(name)
bes = serve.list_backends()
for name in bes.keys():
serve.delete_backend(name)
eps = serve.list_endpoints()
bes = serve.list_backends()
print(f'endpoints: {eps}, backends {bes}')
# The previous steps aren't necessary if you want to shut down Serve completely. Just run the following:
serve.shutdown()
ray.shutdown() # "Undo ray.init()".
| ray-serve/01-Model-Serving-Challenges.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Linestyles
# ==========
#
# Plot the different line styles.
#
#
# +
import numpy as np
import matplotlib.pyplot as plt
def linestyle(ls, i):
X = i * .5 * np.ones(11)
Y = np.arange(11)
plt.plot(X, Y, ls, color=(.0, .0, 1, 1), lw=3, ms=8,
mfc=(.75, .75, 1, 1), mec=(0, 0, 1, 1))
plt.text(.5 * i, 10.25, ls, rotation=90, fontsize=15, va='bottom')
linestyles = ['-', '--', ':', '-.', '.', ',', 'o', '^', 'v', '<', '>', 's',
'+', 'x', 'd', '1', '2', '3', '4', 'h', 'p', '|', '_', 'D', 'H']
n_lines = len(linestyles)
size = 20 * n_lines, 300
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
plt.axes([0, 0.01, 1, .9], frameon=False)
for i, ls in enumerate(linestyles):
linestyle(ls, i)
plt.xlim(-.2, .2 + .5*n_lines)
plt.xticks([])
plt.yticks([])
plt.show()
| _downloads/plot_linestyles.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.0
# language: julia
# name: julia-1.4
# ---
include("../src/Qaintessent.jl")
using .Qaintessent
# construct and show a circuit gate chain
N = 5
cgc = CircuitGateChain{N}([
single_qubit_circuit_gate(3, HadamardGate(), N),
controlled_circuit_gate((1, 4), 2, RxGate(√0.2), N),
controlled_circuit_gate((2,4), (1,5), SwapGate(), N),
single_qubit_circuit_gate(2, PhaseShiftGate(0.2π), N),
single_qubit_circuit_gate(3, RotationGate(0.1π, [1, 0, 0]), N),
single_qubit_circuit_gate(1, RyGate(1.4π), N),
two_qubit_circuit_gate(1,2, SwapGate(), N),
controlled_circuit_gate(4, 5, TdagGate(), N),
single_qubit_circuit_gate(3, SGate(), N),
])
# quantum Fourier transform circuit
qft_circuit(4)
| examples/view_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from scipy.optimize import minimize
# +
def objective(x):
return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]
def constraint1(x):
return x[0]*x[1]*x[2]*x[3]-25.0
def constraint2(x):
sum_eq = 40.0
for i in range(4):
sum_eq = sum_eq - x[i]**2
return sum_eq
# +
# initial guesses
n = 4
x0 = np.zeros(n)
x0[0] = 1.0
x0[1] = 5.0
x0[2] = 5.0
x0[3] = 1.0
# show initial objective
print('Initial SSE Objective: ' + str(objective(x0)))
# -
# optimize
b = (1.0,5.0)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'eq', 'fun': constraint2}
cons = ([con1,con2])
solution = minimize(objective,x0,method='SLSQP',\
bounds=bnds,constraints=cons)
# +
x = solution.x
# show final objective
print('Final SSE Objective: ' + str(objective(x)))
# print solution
print('Solution')
print('x1 = ' + str(x[0]))
print('x2 = ' + str(x[1]))
print('x3 = ' + str(x[2]))
print('x4 = ' + str(x[3]))
# -
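# As a sanity check, this is the classic Hock-Schittkowski problem #71, whose
# known optimum is approximately x = (1.000, 4.743, 3.821, 1.379), so the
# solver above should land very close to it. The reference point below is
# quoted from the literature, not computed here; we simply verify that it
# satisfies both constraints and yields the known objective value of ~17.01:

```python
import math

# Reference optimum for Hock-Schittkowski #71 (approximate, for checking only).
x_star = (1.000, 4.743, 3.821, 1.379)

prod = math.prod(x_star)              # inequality constraint: should be >= 25
sum_sq = sum(x**2 for x in x_star)    # equality constraint: should be == 40
obj = x_star[0]*x_star[3]*(x_star[0]+x_star[1]+x_star[2]) + x_star[2]

print(prod, sum_sq, obj)  # ~25, ~40, ~17.01
```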
| Unit_03/Sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A simple toy example for robustness curve separation
import os
os.chdir("../")
import sys
import json
import math
import numpy as np
import numpy.matlib
import sklearn.svm
import matplotlib.pyplot as plt
import warnings
import seaborn as sns
sns.set(context='paper')
# ## Plot settings:
# +
SMALL_SIZE = 14
MEDIUM_SIZE = 18
BIGGER_SIZE = 26
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('text', usetex=True)
# dictionary that maps color string to 'good looking' seaborn colors that are easily distinguishable
colors = {
"orange": sns.xkcd_rgb["yellowish orange"],
"red": sns.xkcd_rgb["pale red"],
"green": sns.xkcd_rgb["medium green"],
"blue": sns.xkcd_rgb["denim blue"],
"yellow": sns.xkcd_rgb["amber"],
"purple": sns.xkcd_rgb["dusty purple"],
"cyan": sns.xkcd_rgb["cyan"]
}
# -
# ## Calculate and plot:
# +
save_name_clf = "fig_toy_curve_separation_clf"
save_name_adv = "fig_toy_curve_separation_adv"
random_state = np.random.RandomState(42)
dim = 16
n_per_class = 100
data = np.ones((2 * n_per_class, dim**2))
vmin = -1
vmax = 1
data *= vmax
data[:n_per_class, :int(dim**2 / 2)] = vmin
data[n_per_class:, int(dim**2 / 2):] = vmin
labels = np.zeros(2 * n_per_class)
labels[:n_per_class] = 1
s1 = data[0, :]
s2 = data[n_per_class, :]
clf_l2 = sklearn.svm.LinearSVC(random_state=0, tol=1e-5)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
clf_l2.fit(data, labels)
# print(clf_l2.coef_)
clf_l1 = sklearn.svm.LinearSVC(random_state=0,
tol=1e-5,
penalty="l1",
dual=False)
clf_l1.fit(data, labels)
# print(clf_l1.coef_)
wl1 = clf_l1.coef_.flatten()
wl2 = clf_l2.coef_.flatten()
shift = 1.01
# Adversarial for l_1 and l_2
adv1 = s1.copy()
adv1[:int(dim**2 / 2)] += shift
adv1[int(dim**2 / 2):] -= shift
# Adversarial for l_1 but not l_2
adv2 = s1.copy()
# i1 = np.argmax(wl1)
i2 = np.argmin(wl1)
# adv2[i1] -= shift
adv2[i2] += shift
cor = clf_l1.predict(s1.reshape(1, dim**2))
p1a1 = clf_l1.predict(adv1.reshape(1, dim**2))
p1a2 = clf_l1.predict(adv2.reshape(1, dim**2))
p2a1 = clf_l2.predict(adv1.reshape(1, dim**2))
p2a2 = clf_l2.predict(adv2.reshape(1, dim**2))
if p1a1 == cor:
raise Exception("Adv1 is not an adversarial for l1.")
if p2a1 == cor:
raise Exception("Adv1 is not an adversarial for l2.")
if p1a2 == cor:
raise Exception("Adv2 is not an adversarial for l1.")
if p2a2 != cor:
raise Exception("Adv2 is an adversarial for l2.")
repl1 = np.matlib.repmat(clf_l1.coef_, 16, 1)
repl2 = np.matlib.repmat(clf_l2.coef_, 16, 1)
vmin = min(
(min(s1), min(s2), min(min(clf_l1.coef_)), min(min(clf_l2.coef_)),
min(adv1), min(adv2)))
vmax = max(
(max(s1), max(s2), max(max(clf_l1.coef_)), max(max(clf_l2.coef_)),
max(adv1), max(adv2)))
n_cols = 4
n_rows = 1
fig, axx = plt.subplots(n_rows,
n_cols,
figsize=(6 * n_cols, 5 * n_rows),
dpi=400)
axx[0].imshow(s1.reshape(dim, dim), vmin=vmin, vmax=vmax)
axx[1].imshow(s2.reshape(dim, dim), vmin=vmin, vmax=vmax)
for ax in axx:
ax.tick_params(
        axis="both",          # changes apply to both axes
        which="both",         # both major and minor ticks are affected
        bottom=False,         # ticks along the bottom edge are off
        top=False,            # ticks along the top edge are off
        left=False,           # ticks along the left edge are off
        right=False,          # ticks along the right edge are off
labelbottom=False,
labelleft=False,
)
ax.grid(False)
axx[0].set_title('Class 1')
axx[1].set_title('Class 2')
# fig, axx = plt.subplots(n_rows, n_cols)
axx[2].imshow(repl1)
axx[3].imshow(repl2)
for ax in axx:
ax.tick_params(
        axis="both",          # changes apply to both axes
        which="both",         # both major and minor ticks are affected
        bottom=False,         # ticks along the bottom edge are off
        top=False,            # ticks along the top edge are off
        left=False,           # ticks along the left edge are off
        right=False,          # ticks along the right edge are off
labelbottom=False,
labelleft=False,
)
ax.grid(False)
axx[2].set_title('Sparse classifier')
axx[3].set_title('Dense classifier')
fig.tight_layout()
fig.savefig('res/{}.pdf'.format(save_name_clf))
n_rows = 1
n_cols = 2
fig, axx = plt.subplots(n_rows,
n_cols,
figsize=(6 * n_cols, 5 * n_rows),
dpi=400)
axx[1].imshow(adv1.reshape(dim, dim), vmin=vmin, vmax=vmax)
axx[0].imshow(adv2.reshape(dim, dim), vmin=vmin, vmax=vmax)
for ax in axx:
ax.tick_params(
        axis="both",          # changes apply to both axes
        which="both",         # both major and minor ticks are affected
        bottom=False,         # ticks along the bottom edge are off
        top=False,            # ticks along the top edge are off
        left=False,           # ticks along the left edge are off
        right=False,          # ticks along the right edge are off
labelbottom=False,
labelleft=False,
)
ax.grid(False)
axx[1].set_title('Adversarial for both classifiers')
axx[0].set_title('Adversarial for sparse classifier')
fig.tight_layout()
fig.savefig('res/{}.pdf'.format(save_name_adv))
| experiments/rob_curves_separation_toy_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Imports
# %matplotlib nbagg
# # %matplotlib osx
import os
import re
import shutil
import spectral
import matplotlib
import matplotlib.pyplot as plt
from tifffile import imsave
import numpy as np
from tkinter.filedialog import askdirectory, askopenfilenames
from tkinter import Tk
import general_funcs
# +
# Extract all thermal images taken by thermal sensor mounted to headwall sensor
# Tk().withdraw()
input_folder = askdirectory(title='Select headwall thermal data folder')
file_list = os.listdir(input_folder)
print(file_list)
regex = re.compile(r'raw_\d*\.hdr')
filtered_file_list = [i for i in file_list if regex.search(i)]
filtered_file_list.sort(key=lambda file_name: int(re.split(r'_|\.', file_name)[1]))
print('File folder:', input_folder)
print('Files used:', filtered_file_list)
# -
# Playground
img_flag = 0
for file_name in filtered_file_list:
img_chunk = spectral.open_image(os.path.join(input_folder, file_name))
h, w, d = img_chunk.shape
i_c = (img_chunk.asarray() - 49152)
print(file_name)
print('Size:', h, w, d)
print('Average:', i_c.mean())
print('Min:', i_c.min())
print('Max:', i_c.max())
print('Variance:', i_c.var())
print()
# +
# Extract all thermal images taken by thermal sensor mounted to headwall sensor
# Tk().withdraw()
input_folder = askdirectory(title='Select headwall thermal data folder')
file_list = os.listdir(input_folder)
regex = re.compile(r'raw_\d*\.hdr')
filtered_file_list = [i for i in file_list if regex.search(i)]
filtered_file_list.sort(key=lambda file_name: int(re.split(r'_|\.', file_name)[1]))
print('File folder:', input_folder)
print('Files used:', filtered_file_list)
# -
output_folder = askdirectory(title='Choose output folder')
img_chunks = []
for file_name in filtered_file_list:
file = os.path.join(input_folder, file_name)
img_chunk = spectral.open_image(file).asarray()
# img_chunk = np.einsum('ijk->kij', img_chunk)
print(img_chunk.shape)
img_chunks.append(img_chunk)
output_file = os.path.join(output_folder, file_name + '_test.tiff')
imsave(output_file, img_chunk)
binary_repr_v = np.vectorize(np.binary_repr)
test = binary_repr_v(img_chunk, 16)
img_chunks[5].mean()
| headwall_hyper_processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="dUeKVCYTbcyT"
# #### Copyright 2019 The TensorFlow Authors.
# + cellView="form" id="4ellrPx7tdxq"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="7JfLUlawto_D"
# # Classification on imbalanced data
# + [markdown] id="DwdpaTKJOoPu"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="mthoSGBAOoX-"
# This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data.
#
# This tutorial contains complete code to:
#
# * Load a CSV file using Pandas.
# * Create train, validation, and test sets.
# * Define and train a model using Keras (including setting class weights).
# * Evaluate the model using various metrics (including precision and recall).
# * Try common techniques for dealing with imbalanced data like:
# * Class weighting
# * Oversampling
#
# + [markdown] id="kRHmSyHxEIhN"
# ## Setup
# + id="JM7hDSNClfoK"
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# + id="c8o1FHzD-_y_"
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
# + [markdown] id="Z3iZVjziKHmX"
# ## Data processing and exploration
# + [markdown] id="4sA9WOcmzH2D"
# ### Download the Kaggle Credit Card Fraud data set
#
# Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
#
# Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and on the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project.
# + id="pR_SnbMArXr7"
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
# + id="-fgdQgmwUFuj"
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
# + [markdown] id="xWKB_CVZFLpB"
# ### Examine the class label imbalance
#
# Let's look at the dataset imbalance:
# + id="HCJFrtuY2iLF"
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
# + [markdown] id="KnLKFQDsCBUg"
# This shows the small fraction of positive samples.
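# One common response to such imbalance (previewing the class-weighting
# technique listed at the top of this tutorial) is to weight each class
# inversely to its frequency. A sketch using this dataset's published totals;
# the `total / 2` scaling keeps the overall loss magnitude comparable to
# uniform weighting:

```python
# Counts for the Kaggle credit-card dataset: 284,315 negatives, 492 positives.
neg, pos = 284_315, 492
total = neg + pos

# Weight each class inversely to its frequency, scaled so the total weight
# stays comparable to weighting every example equally.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)

print(f"Weight for class 0: {weight_for_0:.2f}")  # ~0.50
print(f"Weight for class 1: {weight_for_1:.2f}")  # ~289.44
```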
# + [markdown] id="6qox6ryyzwdr"
# ### Clean, split and normalize the data
#
# The raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
# + id="Ef42jTuxEjnj"
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
# + [markdown] id="uSNgdQFFFQ6u"
# Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics; however, the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
# + id="xfxhKg7Yr1-b"
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
# + [markdown] id="8a_Z_kBmr7Oh"
# Normalize the input features using the sklearn StandardScaler.
# This will set the mean to 0 and standard deviation to 1.
#
# Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
# + id="IO-qEUmJ5JQg"
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
# + [markdown] id="XF2nNfWKJ33w"
# Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
#
# + [markdown] id="uQ7m9nqDC3W6"
# ### Look at the data distribution
#
# Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
#
# * Do these distributions make sense?
# * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.
# * Can you see the difference between the distributions?
# * Yes the positive examples contain a much higher rate of extreme values.
# + id="raK7hyjd_vf6"
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
# + [markdown] id="qFK1u4JX16D8"
# ## Define the model and metrics
#
# Define a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
# + id="3JQDzUqT3UYG"
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
    if output_bias is not None:
        output_bias = tf.keras.initializers.Constant(output_bias)
    model = keras.Sequential([
        keras.layers.Dense(
            16, activation='relu',
            input_shape=(train_features.shape[-1],)),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(1, activation='sigmoid',
                           bias_initializer=output_bias),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),
        loss=keras.losses.BinaryCrossentropy(),
        metrics=metrics)
    return model
# + [markdown] id="SU0GX6E6mieP"
# ### Understanding useful metrics
#
# Notice that a few of the metrics defined above can be computed by the model and will be helpful when evaluating performance.
#
#
#
# * **False** negatives and **false** positives are samples that were **incorrectly** classified
# * **True** negatives and **true** positives are samples that were **correctly** classified
# * **Accuracy** is the percentage of examples correctly classified
# > $\frac{\text{true positives + true negatives}}{\text{total samples}}$
# * **Precision** is the percentage of **predicted** positives that were correctly classified
# > $\frac{\text{true positives}}{\text{true positives + false positives}}$
# * **Recall** is the percentage of **actual** positives that were correctly classified
# > $\frac{\text{true positives}}{\text{true positives + false negatives}}$
# * **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
# * **AUPRC** refers to Area Under the Curve of the Precision-Recall Curve. This metric computes precision-recall pairs for different probability thresholds.
#
# Note: Accuracy is not a helpful metric for this task. You can achieve 99.8%+ accuracy on this task by predicting `False` all the time.
#
# Read more:
# * [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
# * [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
# * [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
# * [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
# * [Relationship between Precision-Recall and ROC Curves](https://www.biostat.wisc.edu/~page/rocpr.pdf)
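These definitions can be checked by hand on a tiny made-up example (the labels below are hypothetical, chosen so every cell of the confusion matrix is populated):

```python
import numpy as np

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])  # 2 positives out of 10
y_pred = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0])  # one hit, one false alarm

tp = np.sum((y_true == 1) & (y_pred == 1))
fp = np.sum((y_true == 0) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fn = np.sum((y_true == 1) & (y_pred == 0))

accuracy = (tp + tn) / len(y_true)   # 0.8
precision = tp / (tp + fp)           # 0.5
recall = tp / (tp + fn)              # 0.5

# The accuracy caveat in action: always predicting the majority class also
# scores 0.8 here while catching zero positives.
baseline_accuracy = np.mean(y_true == 0)  # 0.8
```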
# + [markdown] id="FYdhSAoaF_TK"
# ## Baseline model
# + [markdown] id="IDbltVPg2m2q"
# ### Build the model
#
# Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048, this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, they would likely have no fraudulent transactions to learn from.
#
#
# Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
# + id="ouUkwPcGQsy3"
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_prc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# + id="1xlR_dekzw7C"
model = make_model()
model.summary()
# + [markdown] id="Wx7ND3_SqckO"
# Test run the model:
# + id="LopSd-yQqO3a"
model.predict(train_features[:10])
# + [markdown] id="YKIgWqHms_03"
# ### Optional: Set the correct initial bias.
# + [markdown] id="qk_3Ry6EoYDq"
# These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence.
# + [markdown] id="PdbfWDuVpo6k"
# With the default bias initialization the loss should be about `math.log(2) = 0.69314`
# + id="H-oPqh3SoGXk"
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
# + [markdown] id="hE-JRzfKqfhB"
# The correct bias to set can be derived from:
#
# $$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
# $$ b_0 = -log_e(1/p_0 - 1) $$
# $$ b_0 = log_e(pos/neg)$$
# + id="F5KWPSjjstUS"
initial_bias = np.log([pos/neg])
initial_bias
# + [markdown] id="d1juXI9yY1KD"
# Set that as the initial bias, and the model will give much more reasonable initial guesses.
#
# It should be near: `pos/total = 0.0018`
# + id="50oyu1uss0i-"
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
# + [markdown] id="4xqFYb2KqRHQ"
# With this initialization the initial loss should be approximately:
#
# $$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
# + id="xVDqCWXDqHSc"
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
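Both identities can be verified numerically; a standalone sketch using the quoted positive rate `p0 = pos/total ≈ 0.0018`:

```python
import numpy as np

p0 = 0.0018                   # pos / total, as quoted above
b0 = np.log(p0 / (1 - p0))    # equivalently log(pos / neg)

# The sigmoid of the bias recovers the positive rate.
recovered_p0 = 1.0 / (1.0 + np.exp(-b0))

# Binary cross-entropy of the constant prediction p0 against the base rate.
initial_loss = -p0 * np.log(p0) - (1 - p0) * np.log(1 - p0)
print(round(recovered_p0, 6), round(initial_loss, 5))  # 0.0018 0.01317
```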
# + [markdown] id="FrDC8hvNr9yw"
# This initial loss is about 50 times less than it would have been with naive initialization.
#
# This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
# + [markdown] id="0EJj9ixKVBMT"
# ### Checkpoint the initial weights
#
# To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
# + id="_tSUm4yAVIif"
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
# + [markdown] id="EVXiLyqyZ8AX"
# ### Confirm that the bias fix helps
#
# Before moving on, quickly confirm that the careful bias initialization actually helped.
#
# Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
# + id="Dm4-4K5RZ63Q"
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
# + id="j8DsLXHQaSql"
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
# + id="E3XsMBjhauFV"
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train ' + label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val ' + label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
# + id="dxFaskm7beC7"
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
# + [markdown] id="fKMioV0ddG3R"
# The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
# + [markdown] id="RsA_7SEntRaV"
# ### Train the model
# + id="yZKAc8NCDnoR"
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
# + [markdown] id="iSaDBYU9xtP6"
# ### Check training history
# In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).
#
# Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
# + id="WTSkhT1jyGu6"
def plot_metrics(history):
metrics = ['loss', 'prc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
# + id="u6LReDsqlZlk"
plot_metrics(baseline_history)
# + [markdown] id="UCa4iWo6WDKR"
# Note that the validation metrics generally look better than the training metrics. This is mainly because the dropout layer is not active when evaluating the model.
# + [markdown] id="aJC1booryouo"
# ### Evaluate metrics
#
# You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
# + id="aNS796IJKrev"
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
# + id="MVWBGfADwbWI"
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
# + [markdown] id="nOTjD5Z5Wp1U"
# Evaluate your model on the test dataset and display the results for the metrics you created above.
# + id="poh_hZngt2_9"
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
# + [markdown] id="PyZtSr1v6L4t"
# If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
# + [markdown] id="P-QpQsip_F2Q"
# ### Plot the ROC
#
# Now plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
# + id="lhaxsLSvANF9"
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
# + id="DfHHspttKJE0"
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
# + [markdown] id="Y5twGRLfNwmO"
# ### Plot the AUPRC
# Now plot the [AUPRC](https://developers.google.com/machine-learning/glossary?hl=en#PR_AUC): the area under the interpolated precision-recall curve, obtained by plotting (recall, precision) points for different values of the classification threshold. Depending on how it's calculated, PR AUC may be equivalent to the average precision of the model.
#
# + id="XV6JSlFGEqGI"
def plot_prc(name, labels, predictions, **kwargs):
    precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)
    # Plot recall on the x-axis and precision on the y-axis to match the labels.
    plt.plot(recall, precision, label=name, linewidth=2, **kwargs)
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.grid(True)
    ax = plt.gca()
    ax.set_aspect('equal')
# + id="FdQs_PcqEsiL"
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
# + [markdown] id="gpdsFyp64DhY"
# It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
# + [markdown] id="cveQoiMyGQCo"
# ## Class weights
# + [markdown] id="ePGp6GUE1WfH"
# ### Calculate class weights
#
# The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
# + id="qjGWErngGny7"
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
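The scaling can be sanity-checked: with these weights each class contributes exactly `total/2` to the weighted example count. The counts below are illustrative (roughly the imbalance of the Kaggle credit-card fraud dataset commonly used with this tutorial); the exact values depend on your split:

```python
pos, neg = 492, 284315          # illustrative positive/negative counts
total = pos + neg

weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)

# The weighted example count equals the unweighted one, because each class
# contributes total/2 regardless of its size.
weighted_total = weight_for_0 * neg + weight_for_1 * pos
print(abs(weighted_total - total) < 1e-6)          # True
print(round(weight_for_0, 2), round(weight_for_1, 2))
```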
# + [markdown] id="Mk1OOE2ZSHzy"
# ### Train a model with class weights
#
# Now try re-training and evaluating the model with class weights to see how that affects the predictions.
#
# Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
# + id="UJ589fn8ST3x"
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
# + [markdown] id="R0ynYRO0G3Lx"
# ### Check training history
# + id="BBe9FMO5ucTC"
plot_metrics(weighted_history)
# + [markdown] id="REy6WClTZIwQ"
# ### Evaluate metrics
# + id="nifqscPGw-5w"
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
# + id="owKL2vdMBJr6"
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
# + [markdown] id="PTh1rtDn8r4-"
# Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.
# + [markdown] id="hXDAwyr0HYdX"
# ### Plot the ROC
# + id="3hzScIVZS1Xm"
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
# + [markdown] id="_0krS8g1OTbD"
# ### Plot the AUPRC
# + id="7jHnmVebOWOC"
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
# + [markdown] id="5ysRtr6xHnXP"
# ## Oversampling
# + [markdown] id="18VUHNc-UF5w"
# ### Oversample the minority class
#
# A related approach would be to resample the dataset by oversampling the minority class.
# + id="sHirNp6u7OWp"
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
# + [markdown] id="WgBVbX7P7QrL"
# #### Using NumPy
#
# You can balance the dataset manually by choosing the right number of random
# indices from the positive examples:
# + id="BUzGjSkwqT88"
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# + id="7ie_FFet6cep"
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
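A quick sanity check of this resampling idea on toy arrays (the shapes are made up) confirms the result is balanced:

```python
import numpy as np

rng = np.random.default_rng(0)
pos_features = rng.normal(size=(50, 3))      # minority class
neg_features = rng.normal(size=(5000, 3))    # majority class

# Sample positive rows with replacement until they match the negatives.
choices = rng.choice(len(pos_features), size=len(neg_features))
res_pos = pos_features[choices]

features = np.concatenate([res_pos, neg_features])
labels = np.concatenate([np.ones(len(res_pos)), np.zeros(len(neg_features))])

order = rng.permutation(len(labels))
features, labels = features[order], labels[order]
print(labels.mean())  # 0.5: the resampled set is balanced
```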
# + [markdown] id="IYfJe2Kc-FAz"
# #### Using `tf.data`
# + [markdown] id="usyixaST8v5P"
# If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
# + id="yF4OZ-rI6xb6"
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
# + [markdown] id="RNQUx-OA-oJc"
# Each dataset provides `(feature, label)` pairs:
# + id="llXc9rNH7Fbz"
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
# + [markdown] id="sLEfjZO0-vbN"
# Merge the two together using `experimental.sample_from_datasets`:
# + id="e7w9UQPT9wzE"
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
# + id="EWXARdTdAuQK"
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
# + [markdown] id="irgqf3YxAyN0"
# To use this dataset, you'll need the number of steps per epoch.
#
# The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
# + id="xH-7K46AAxpq"
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
# + [markdown] id="XZ1BvEpcBVHP"
# ### Train on the oversampled data
#
# Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
#
# Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
# + id="soRQ89JYqd6b"
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
# + [markdown] id="avALvzUp3T_c"
# If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
#
# But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
#
# This smoother gradient signal makes it easier to train the model.
# + [markdown] id="klHZ0HV76VC5"
# ### Check training history
#
# Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
# + id="YoUGfr1vuivl"
plot_metrics(resampled_history)
# + [markdown] id="1PuH3A2vnwrh"
# ### Re-train
#
# + [markdown] id="KFLxRL8eoDE5"
# Because training is easier on the balanced data, the above training procedure may overfit quickly.
#
# So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
# + id="e_yn9I26qAHU"
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
# + [markdown] id="UuJYKv0gpBK1"
# ### Re-check training history
# + id="FMycrpJwn39w"
plot_metrics(resampled_history)
# + [markdown] id="bUuE5HOWZiwP"
# ### Evaluate metrics
# + id="C0fmHSgXxFdW"
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
# + id="FO0mMOYUDWFk"
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
# + [markdown] id="_xYozM1IIITq"
# ### Plot the ROC
# + id="fye_CiuYrZ1U"
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
# + [markdown] id="vayGnv0VOe_v"
# ### Plot the AUPRC
#
# + id="wgWXQ8aeOhCZ"
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
# + [markdown] id="3o3f0ywl8uqW"
# ## Applying this tutorial to your problem
#
# Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data first and do your best to collect as many samples as possible and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade offs between different types of errors.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Simulate exoplanet yield from TESS
# The purpose of this code is to simulate the exoplanet yield from the TESS Mission. We do this by taking the various fields that TESS observes and, using a galaxy model, placing planets around the stars and checking whether we can detect each planet.
# +
from __future__ import division, print_function
import numpy as np
import pandas as pd
import astroquery
import matplotlib.pyplot as plt
import glob
# %matplotlib inline
msun = 1.9891E30
rsun = 695500000.
G = 6.67384E-11
AU = 149597870700.
# -
# let's read our galaxy model files
# +
import matplotlib
matplotlib.style.use('ggplot')
names = ['Dist','Mv','CL','Typ','LTef','logg','Age',
'Mass','BV','UB','VI','VK','V','FeH',
'l','b','Av','Mbol']
galmodfiles = glob.glob('../data/besmod*.csv')
thisGalmodidx = 0
galmodarea = np.genfromtxt('bess_reas.txt',usecols=2)[thisGalmodidx]
intial_q = pd.read_csv(galmodfiles[thisGalmodidx], skiprows=2, names=names)
# +
intial_q['isMdwarf'] = pd.Series((intial_q.CL == 5) & (intial_q.Typ >= 7.), name='isMdwarf' )
intial_q['I'] = pd.Series(-1. * (intial_q.VI - intial_q.V), name='I')
intial_q['Teff'] = pd.Series(10**intial_q.LTef , name='Teff')
g = 10**intial_q.logg * 0.01
intial_q['Radius'] = pd.Series(np.sqrt(G*intial_q.Mass*msun / g) / rsun, name='Radius')
# -
# We previously downloaded the Besancon models; they are in the directory ../data/.
# We also saved the areas for each field in bess_reas.txt.
# We are doing this in a Monte Carlo fashion: the outer loop is over each field. The row closest to the equator is easiest because there is no overlap.
# A few helper functions are saved in occSimFuncs.py.
# +
from occSimFuncs import (Fressin_select, Dressing_select, per2ars,
get_duration, TESS_noise_1h, nearly_equal, get_transit_depth)
consts = {'obslen': 75, #days
'sigma_threshold': 10.,
'simsize': 8, #size of the galmod field in sq deg
'full_fov': True, # if true do whole 24x24deg ccd
}
#make the catalog equal to the full fov area
if consts['full_fov'] and (consts['simsize'] < galmodarea):
    multiple = galmodarea / consts['simsize']
    numstars = int(intial_q.shape[0] * multiple)
    rows = np.random.choice(intial_q.index.values, size=numstars)
    newq = intial_q.loc[rows]
    q = newq.set_index(np.arange(newq.shape[0]))
elif consts['simsize'] > galmodarea:
    raise ValueError('Galmod area is too small!')
else:
    q = intial_q
    numstars = q.shape[0]
#some planet parameters we will need later
q['cosi'] = pd.Series(np.random.random(size=q.shape[0]),name='cosi')
q['noise_level'] = TESS_noise_1h(q.I)
#reload(occSimFuncs)
# -
# draw a bunch of planets and associate them with each star
# +
#draw a bunch of planets; we run Dressing for cool stars and Fressin for more massive stars
mstar_planets = Dressing_select(numstars)
ms_planets = Fressin_select(numstars)
q['planetRadius'] = pd.Series(np.where(q.isMdwarf,mstar_planets[0],ms_planets[0]), name='planetRadius')
q['planetPeriod'] = pd.Series(np.where(q.isMdwarf,mstar_planets[1],ms_planets[1]), name='planetPeriod')
# for i, thisStar in enumerate(q.isMdwarf):
# if thisStar == True:
# q.loc[i,'planetRadius'], q.loc[i,'planetPeriod'] = Dressing_select()
# else:
# q.loc[i,'planetRadius'], q.loc[i,'planetPeriod'] = Fressin_select()
q['Ntransits'] = np.floor(consts['obslen'] / q.planetPeriod)
q['ars'] = per2ars(q.planetPeriod,q.Mass,q.Radius)
q['impact'] = q.ars * q.cosi  # impact parameter b = (a/R*) cos(i)
q['duration'] = get_duration(q.planetPeriod,q.ars,b=q.impact)
q['duration_correction'] = np.sqrt(q.duration) # correction for CDPP because transit dur != 1 hour
q['transit_depth'] = get_transit_depth(q.planetRadius,q.Radius)
# -
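The imported `per2ars` helper presumably converts orbital period to the scaled semi-major axis a/R* via Kepler's third law; a standalone sketch under that assumption, reusing the constants defined at the top of the notebook:

```python
import numpy as np

G = 6.67384E-11      # m^3 kg^-1 s^-2
msun = 1.9891E30     # kg
rsun = 695500000.    # m

def per2ars_sketch(period_days, mass_msun, radius_rsun):
    """a/R* from Kepler's third law: a^3 = G M P^2 / (4 pi^2)."""
    period_s = period_days * 86400.0
    a = (G * mass_msun * msun * period_s**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
    return a / (radius_rsun * rsun)

print(round(per2ars_sketch(365.25, 1.0, 1.0), 1))  # ~215 for an Earth analog
```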
# now lets see if those planets are detected
q['needed_for_detection'] = (q.transit_depth * q.duration_correction *
np.sqrt(q.Ntransits)) / consts['sigma_threshold']
q['has_planets'] = q.planetRadius > 0.0
q['detected'] = (q.noise_level < q.needed_for_detection) & (q.Ntransits > 1) & q.has_planets
total_planets = (q.ars**-1)[q.detected]
print('total planets = {}'.format(np.sum(total_planets)))
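The detection rule above compresses to a single inequality; a sketch with made-up numbers (units assumed from the cells above: fractional transit depth, duration in hours so that sqrt(duration) rescales the 1-hour noise, and fractional 1-hour noise):

```python
import numpy as np

def is_detected(depth, duration_hr, n_transits, noise_1h, sigma_threshold=10.0):
    # Per-point precision the star must reach for a sigma_threshold detection.
    needed = depth * np.sqrt(duration_hr) * np.sqrt(n_transits) / sigma_threshold
    return (noise_1h < needed) and (n_transits > 1)

# A deep, frequently transiting hot-Jupiter-like signal...
print(is_detected(0.01, 3.0, 20, 1e-4))    # True
# ...versus an Earth-like depth on the same star.
print(is_detected(8.4e-5, 13.0, 2, 1e-4))  # False
```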
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="dhN2myCLZSKY"
# # Installing and importing the Spark and PySpark libraries
# + colab={"base_uri": "https://localhost:8080/"} id="Zi6DL5-XYGVX" outputId="6fbeb984-891d-4da1-86a9-3aee820183c8"
# !pip install pyspark
# !pip install -U -q PyDrive
# !apt install openjdk-8-jdk-headless -qq
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
# + [markdown] id="wVkZCBk3Zvlc"
# # Import the required packages
# + id="Gy7uIqJ0ZyQd"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import pyspark
from pyspark.sql import *
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark import SparkContext, SparkConf
# + id="P3iVvi4oZ8R0"
# create the session
conf = SparkConf().set("spark.ui.port", "4050")
# create the context
sc = pyspark.SparkContext(conf=conf)
spark = SparkSession.builder.getOrCreate()
# + colab={"base_uri": "https://localhost:8080/", "height": 223} id="xFi19SZ3abVi" outputId="a212e767-3080-4cfc-db22-60c6a655cf8a"
# run spark
spark
# + [markdown] id="cZOu_TG4bzil"
# ## Spark ML
#
#  + 
# ## PEC3: Diciembre 2020
# ### Practica sobre cómo generar un flujo de ejecución en un problema de Machine Learning
#
# Esta práctica simula un ejercicio completo de ETL (Extract-Transform-Load) junto a un análisis exploratorio de un dataset real, para posteriormente aplicar differentes algoritmos de aprendizaje automático que resuelvan un problema de regresión.
#
# #### Contenido del ejercicio
#
# * *Conocimiento del dominio*
# * *Parte 1: Extracción, transformación y carga [ETL] del dataset* (2 punto sobre 10)
# * *Parte 2: Explorar los datos* (2 puntos sobre 10)
# * *Parte 3: Visualizar los datos* (2 puntos sobre 10)
# * *Parte 4: Preparar los datos* (1 puntos sobre 10)
# * *Parte 5: Modelar los datos* (3 puntos sobre 10)
#
# *Nuestro objetivo será predecir de la forma más exacta posible la energía generada por un conjunto de plantas eléctricas usando los datos generados por un conjunto de sensores.*
#
#
# ## Conocimiento del dominio
#
# ### Background
#
# La generación de energía es un proceso complejo, comprenderlo para poder predecir la potencia de salida es un elemento vital en la gestión de una planta energética y su conexión a la red. Los operadores de una red eléctrica regional crean predicciones de la demanda de energía en base a la información histórica y los factores ambientales (por ejemplo, la temperatura). Luego comparan las predicciones con los recursos disponibles (por ejemplo, plantas, carbón, gas natural, nuclear, solar, eólica, hidráulica, etc). Las tecnologías de generación de energía, como la solar o la eólica, dependen en gran medida de las condiciones ambientales, pero todas las centrales eléctricas son objeto de mantenimientos tanto planificados y como puntuales debidos a un problema.
#
# En esta practica usaremos un ejemplo del mundo real sobre la demanda prevista (en dos escalas de tiempo), la demanda real, y los recursos disponibles de la red electrica de California: http://www.caiso.com/Pages/TodaysOutlook.aspx
#
# 
#
# El reto para un operador de red de energía es cómo manejar un déficit de recursos disponibles frente a la demanda real. Hay tres posibles soluciones a un déficit de energía: construir más plantas de energía base (este proceso puede costar muchos anos de planificación y construcción), comprar e importar de otras redes eléctricas regionales energía sobrante (esta opción puede ser muy cara y está limitado por las interconexiones entre las redes de transmisión de energía y el exceso de potencia disponible de otras redes), o activar pequeñas [plantas de pico](https://en.wikipedia.org/wiki/Peaking_power_plant). Debido a que los operadores de red necesitan responder con rapidez a un déficit de energía para evitar un corte del suministro, estos basan sus decisiones en una combinación de las dos últimas opciones. En esta práctica, nos centraremos en la última elección.
#
# ### La lógica de negocio
#
# Debido a que la demanda de energía solo supera a la oferta ocasionalmente, la potencia suministrada por una planta de energía pico tiene un precio mucho más alto por kilovatio hora que la energía generada por las centrales eléctricas base de una red eléctrica. Una planta pico puede operar muchas horas al día, o solo unas pocas horas al año, dependiendo de la condición de la red eléctrica de la región. Debido al alto coste de la construcción de una planta de energía eficiente, si una planta pico solo va a funcionar por un tiempo corto o muy variable, no tiene sentido económico para que sea tan eficiente como una planta de energía base. Además, el equipo y los combustibles utilizados en las plantas base a menudo no son adecuados para uso en plantas de pico.
#
# La salida de potencia de una central eléctrica pico varía dependiendo de las condiciones ambientales, por lo que el problema de negocio a resolver se podría describir como _predecir la salida de potencia de una central eléctrica pico en función de la condiciones ambientales_ - ya que esto permitiría al operador de la red hacer compensaciones económicas sobre el número de plantas pico que ha de conectar en cada momento (o si por el contrario le interesa comprar energía más cara de otra red).
#
# Una vez descrita esta lógica de negocio, primero debemos proceder a realizar un análisis exploratorio previo y trasladar el problema de negocio (predecir la potencia de salida en función de las condiciones medio ambientales) en un tarea de aprendizaje automático (ML). Por ejemplo, una tarea de ML que podríamos aplicar a este problema es la regresión, ya que tenemos un variable objetivo (dependiente) que es numérica. Para esto usaremos [Apache Spark ML Pipeline](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark-ml-package) para calcular dicha regresión.
#
# Los datos del mundo real que usaremos en esta práctica se componen de 9.568 puntos de datos, cada uno con 4 atributos ambientales recogidos en una Central de Ciclo Combinado de más de 6 años (2006-2011), proporcionado por la Universidad de California, Irvine en [UCI Machine Learning Repository Combined Cycle Power Plant Data Set](https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant)). Para más detalles sobre el conjunto de datos visitar la página de la UCI, o las siguientes referencias:
#
# * <NAME>, [Prediction of full load electrical power output of a base load operated combined cycle power plant using machine learning methods](http://www.journals.elsevier.com/international-journal-of-electrical-power-and-energy-systems/), International Journal of Electrical Power & Energy Systems, Volume 60, September 2014, Pages 126-140, ISSN 0142-0615.
# * <NAME>, <NAME> and <NAME>: [Local and Global Learning Methods for Predicting Power of a Combined Gas & Steam Turbine](http://www.cmpe.boun.edu.tr/~kaya/kaya2012gasturbine.pdf), Proceedings of the International Conference on Emerging Trends in Computer and Electronics Engineering ICETCEE 2012, pp. 13-18 (Mar. 2012, Dubai).
# + colab={"base_uri": "https://localhost:8080/"} id="1N2vCfPrb1n_" outputId="3391ab8b-e026-4550-9262-5ac5114da30b"
# Unzip the pra2.zip folder
# !unzip /content/pra2.zip
# + [markdown] id="Ir6LowQodgDU"
# **Task to complete during the first part:**
#
# Review the documentation and references for:
# * [Spark Machine Learning Pipeline](https://spark.apache.org/docs/latest/ml-guide.html#main-concepts-in-pipelines).
# + [markdown] id="GIQ1rzbQeKll"
#
# ## Part 1: Extract, transform and load [ETL] the dataset
#
# Now that we understand what we are trying to do, the first step is to load the data into a format we can easily query and work with. This is known as ETL, or "extract, transform and load". First, we will load our file from HDFS. Our data is available at the following path:
#
# ```
# /carpeta-datos/pra2
# ```
# + [markdown] id="Z4S0KC8bekNS"
# ### Exercise 1(a)
#
# We will start by viewing a sample of the data. To do so, we will use the hdfs commands to explore the contents of the working directory: /carpeta/datos/pra2
# + colab={"base_uri": "https://localhost:8080/"} id="R4PSHpQJepVG" outputId="214a6d54-f8bf-4feb-dbca-26178178cf94"
# !hdfs dfs -ls /nombre-carpeta/data/pra2
# + [markdown] id="Y-rgbydFe5r6"
# Use the `cat` command and `| head -10` to view the contents of the first 10 rows of the first file in the list
# + colab={"base_uri": "https://localhost:8080/"} id="_DBXI9YTe6WU" outputId="6f7bface-18f5-48c7-c430-d5055e6dce71"
# !hdfs dfs -cat /content/pra2/sheet1.csv | head -10
# + [markdown] id="obGoTIeyfNCT"
# ### Exercise 1(b)
#
# Now we will use PySpark to view the first 5 lines of the data.
#
# *Hint*: First create an RDD from the data using [`sc.textFile()`](https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext.textFile).
#
# *Hint*: Then think about how to use the RDD you created to show the data; the [`take()`](https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.take) method may be a good option to consider.
# + id="Aw7VUWnafVIN"
# In local mode; getOrCreate() reuses the SparkContext created above if one is still active
sc = pyspark.SparkContext.getOrCreate(pyspark.SparkConf().setMaster("local[1]").setAppName("pra2_mrusso"))
# + colab={"base_uri": "https://localhost:8080/", "height": 200} id="w7qo0cmLfPnO" outputId="0aadb5d3-f675-4a55-f6c0-410261b5e9bd"
sc
# + colab={"base_uri": "https://localhost:8080/"} id="99bDU9POfuCR" outputId="c7c09672-19f2-4a93-b401-716f7e4f6e39"
# Load the files and store them in a variable
textosRDD = sc.textFile("/content/pra2")
print(textosRDD.take(5))
# + colab={"base_uri": "https://localhost:8080/"} id="YfF29tysgaQj" outputId="b8c7ed28-c090-4da4-cf6b-e78b3922070a"
# store the result
data = textosRDD
type(textosRDD)
# + colab={"base_uri": "https://localhost:8080/"} id="RmWbcj9_gzO1" outputId="239ce73b-bd88-469c-ec1f-469d90161b3e"
data.take(20)
# + [markdown] id="epTOxfV6hmyH"
# From our initial exploration of a sample of the data, we can make several observations about the ETL process:
# - The data is a set of .csv files (comma-separated values)
# - There is a header row containing the column names
# - The data type in each column appears to be consistent (i.e., every column is of type double)
#
# The data schema we obtained from UCI is:
# - AT = Atmospheric Temperature in C
# - V = Exhaust Vacuum Speed
# - AP = Atmospheric Pressure
# - RH = Relative Humidity
# - PE = Power Output. This is the dependent variable we want to predict from the other four
#
# To use the Spark CSV package, we will use the [sqlContext.read.format()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.format) method to specify the format of the input data source: `'csv'`
#
# We can specify different options for how the data is imported using the [options()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.options) method.
#
# We will use the following options:
# - `delimiter=','` because our data is comma-delimited
# - `header='true'` because our dataset has a row containing the column headers
# - `inferschema='true'` because we believe all the data consists of real numbers, so the library can infer the type of each column automatically
#
# The last component needed to create a DataFrame is the location of the data, specified with the [load()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.load) method.
#
# Putting it all together, we will use the following operation:
#
# `sqlContext.read.format().options().load()`
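# + [markdown]
# As a rough sketch of what `header='true'` and `inferschema='true'` amount to, plain Python can read comma-delimited text with a header row and coerce every column to float. This is only an illustration: the rows below are made-up values laid out like the power-plant files, and Spark's real inference is more general.

# +
import csv
import io

# in-memory stand-in for one of the .csv files (illustrative values only)
raw = "AT,V,AP,RH,PE\n14.96,41.76,1024.07,73.17,463.26\n25.18,62.96,1020.04,59.08,444.37\n"

# DictReader consumes the header row and names each field, like header='true';
# the float() cast plays the role of inferring a double type for every column
reader = csv.DictReader(io.StringIO(raw), delimiter=",")
rows = [{name: float(value) for name, value in line.items()} for line in reader]

print(rows[0]["PE"])  # -> 463.26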
# + [markdown] id="1ox4Ckbbh_zo"
# ### Exercise 1(c)
#
# Create a DataFrame from the data.
# - The format is csv
#
# In the options field we will include 3 options, each made up of an option name and a value, separated by commas.
# - The separator is the comma
# - The file contains a header ('header')
# - To create a dataframe we need a schema. Spark can try to infer the schema from the data, so we will pass 'true'.
#
# The directory to load is the one specified above. It is important to tell Spark that this is a location already mounted in the dbfs filesystem, as shown in exercise 2a.
# + id="MCwZ-nZAiAre"
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
# + id="ss6qrDG7iWxa"
# Create the dataframe
powerPlantDF = sqlContext.read.format('csv').options(delimiter=',',
header='true',
inferschema='true').load('/content/pra2/*.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="h42tIWO1jLy2" outputId="e9e6be3b-d2de-4ac2-fe5c-1f890210ffa9"
type(powerPlantDF)
# + colab={"base_uri": "https://localhost:8080/"} id="5ftVOf5zjTmQ" outputId="7478ae39-7614-41cc-cd20-515121f18a16"
# Count how many records our dataframe contains
powerPlantDF.count()
# + id="xUZ04D2gkH55"
# TEST
expected = set([(s, 'double') for s in ('AP', 'AT', 'PE', 'RH', 'V')])
assert expected==set(powerPlantDF.dtypes), "Incorrect schema for powerPlantDF"
# + colab={"base_uri": "https://localhost:8080/"} id="A4eXelfykPX6" outputId="f999d9e6-d24e-4528-ad90-a0eba1180d07"
# Check the column types
print(powerPlantDF.dtypes)
# + colab={"base_uri": "https://localhost:8080/"} id="NMTtQO49kemW" outputId="209b3d36-b4a4-4413-f830-595b7c06a0fe"
# Show the dataframe
powerPlantDF.head(10)
# + colab={"base_uri": "https://localhost:8080/"} id="yMGTl1_GkpLO" outputId="89ccdb18-45b8-4209-dbed-fba6c4993a17"
# show the first 10 results, or use show() for the first 20
powerPlantDF.show(10)
# + [markdown] id="WSbfe0xqh_mz"
# Now, instead of using [spark csv](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html) to infer (inferSchema()) the column types, we will specify the schema as a [DataType](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.types.DataType), which is a list of [StructField](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.types.StructType).
#
# The full list of types can be found in the [pyspark.sql.types](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.types) module. For our data, we will use [DoubleType()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.types.DoubleType).
#
# For example, to specify the name of a column we use: `StructField(`_name_`,` _type_`, True)`. (The third parameter, `True`, means we allow the column to contain null values.)
#
# ### Exercise 1(d)
#
# Create a custom schema for the dataset.
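# + [markdown]
# A minimal sketch of why an explicit schema helps: with a declared mapping of column names to types, each parsed row can be converted (and validated) without scanning the data first to guess the types. The schema and row below are illustrative, mirroring the StructType built in the next cell.

# +
# declared schema: column name -> converter, standing in for StructField(name, DoubleType(), True)
schema = {"AT": float, "V": float, "AP": float, "RH": float, "PE": float}

# parsed-but-untyped rows, as they come out of a csv reader (made-up values)
raw_rows = [["14.96", "41.76", "1024.07", "73.17", "463.26"]]

# apply the declared type of each column to the corresponding field
typed_rows = [
    {name: cast(value) for (name, cast), value in zip(schema.items(), row)}
    for row in raw_rows
]

print(typed_rows[0]["AT"])  # -> 14.96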
# + id="f0J5KMotlK8h"
# to create a custom schema, import the type classes
from pyspark.sql.types import *
# build the schema
customSchema = StructType([ \
StructField('AT', DoubleType(), True), \
StructField('V', DoubleType(), True), \
StructField('AP', DoubleType(), True), \
StructField('RH', DoubleType(), True), \
StructField('PE', DoubleType(), True)])
# + id="n8eTWHjamWa4"
# TEST
assert set([f.name for f in customSchema.fields])==set(['AT', 'V', 'AP', 'RH', 'PE']), 'Incorrect column names in schema.'
assert set([f.dataType for f in customSchema.fields])==set([DoubleType(), DoubleType(), DoubleType(), DoubleType(), DoubleType()]), 'Incorrect column types in schema.'
# + [markdown] id="t18J4tXXmdzH"
# ### Exercise 1(e)
#
# Now we will use the schema we just created to read the data. To do so, we modify the previous `sqlContext.read.format` step. We can specify the schema by:
# - Adding `schema = customSchema` to the load method (just add it with a comma right after the file name)
# - Removing the `inferschema='true'` option, since we now specify the schema the data must follow
# + id="Dz0hokFhmeby"
# create the dataframe with a customSchema
alt_powerPlantDF = sqlContext.read.format('csv') \
.options(delimiter=',', header='true') \
.schema(customSchema) \
.load('/content/pra2/*.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="lKSAYIjlm-VB" outputId="706e7869-d0d1-4133-e865-397ab7dde259"
alt_powerPlantDF.show(10)
# + colab={"base_uri": "https://localhost:8080/"} id="3iwfczTAmXvI" outputId="97e61dfe-6072-4849-d0b1-4aec315de869"
print(alt_powerPlantDF.dtypes)
# + [markdown] id="fm1wWhuNqtoY"
# ## Part 2: Explore your data
#
# ### Exercise 2(a)
#
# Now that we have loaded the data, the next step is to explore it and perform some basic analysis and visualizations.
#
# This is a step you should always take **before** trying to fit a model to the data, since it will often reveal a great deal of information about the data.
#
# First we will register our DataFrame as a SQL table named power_plant. Since you may repeat this assignment several times, we will take the precaution of dropping any existing table first.
#
# Once the previous step has run, we can register our DataFrame as a SQL table using sqlContext.registerDataFrameAsTable().
#
# Create a table named power_plant following the directions given.
#
# + id="0R6t02DJqufN"
sqlContext.registerDataFrameAsTable(alt_powerPlantDF, "power_plant")
# + colab={"base_uri": "https://localhost:8080/"} id="-JAn1prjrIfM" outputId="a97e00e6-4baf-4b44-849e-d16a9103f7d5"
# To run queries against the table we use SQL
sqlContext.sql("SELECT * FROM power_plant")
# + [markdown] id="BfFLmhRxrZkK"
# Now that our DataFrame exists as a SQL table, we can explore it using SQL commands and `sqlContext.sql(...)`. Use the `show()` function to display the resulting dataframe.
# + colab={"base_uri": "https://localhost:8080/"} id="Trj5RdqgraFJ" outputId="0159e349-423c-4808-8f36-5175ae8880a4"
dfAll = sqlContext.sql("SELECT * FROM power_plant")
dfAll.show()
# + colab={"base_uri": "https://localhost:8080/"} id="LlHnRjnOrSZA" outputId="1a4226d0-f828-4d4c-f750-02d32f4e6dc5"
# Try a query with two columns
sqlContext.sql("SELECT AT, V from power_plant").show()
# + colab={"base_uri": "https://localhost:8080/"} id="AKsJGO9CsNX4" outputId="66da812d-1a50-4cb6-c22d-fb22036e9195"
dfAll.printSchema()
# + [markdown] id="N2wWGWiRsjdo"
# **Schema definition**
#
# Once again, our schema is:
#
# - AT = Atmospheric Temperature in C
# - V = Exhaust Vacuum Speed
# - AP = Atmospheric Pressure
# - RH = Relative Humidity
# - PE = Power Output
#
# PE is our target variable. This is the value we are trying to predict using the other measurements.
#
# *Reference: [UCI Machine Learning Repository Combined Cycle Power Plant Data Set](https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant)*
# + [markdown] id="SWp490t9tKtG"
# ### Exercise 2(b)
#
# Now let's perform a basic statistical analysis of every column.
#
# Compute and display the results in table form (the `show()` function may help):
# * Number of records in our data
# * Mean of each column
# * Maximum and minimum of each column
# * Standard deviation of each column
#
# Hint: Review [DataFrame](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame), since it contains methods that make these computations straightforward.
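# + [markdown]
# The statistics that `describe()` reports can be reproduced by hand. A minimal pure-Python sketch over an illustrative column (note that Spark's `describe()` reports the sample standard deviation, dividing by n - 1):

# +
import math

# illustrative values, standing in for one column of the dataset
col = [4.0, 8.0, 6.0, 2.0]

n = len(col)
mean = sum(col) / n
# sample standard deviation (ddof = 1), matching describe()
stddev = math.sqrt(sum((x - mean) ** 2 for x in col) / (n - 1))

print(n, mean, min(col), max(col), round(stddev, 4))  # -> 4 5.0 2.0 8.0 2.582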
# + colab={"base_uri": "https://localhost:8080/"} id="qpMFKsLQtMtV" outputId="8b0e8b94-fb8c-4e18-ea46-1d36378c3645"
df = sqlContext.table('power_plant')
type(df)
# + colab={"base_uri": "https://localhost:8080/"} id="25YA66HktbgS" outputId="d31199d1-d4b6-4514-cf8e-809b3d693498"
df.describe().show()
# + [markdown] id="xuK-xyvatxm9"
# ## Part 3: Visualize the data
#
# To understand our data, we look for correlations between the different features and their corresponding labels. This can matter when choosing a model. For example, if a label correlates linearly with its features, a linear regression model will perform well; if instead the relationship is nonlinear, more complex models such as decision trees may be a better choice. We can use visualization tools to plot each candidate predictor against the label as a scatter plot and see the correlation between them.
#
# ### Exercise 3(a)
#
# #### Add the following figures:
# Let's see whether there is a correlation between temperature and power output. We can use a SQL query to create a new table containing only temperature (AT) and power (PE), and then use a scatter plot with temperature on the X axis and power on the Y axis to visualize the relationship (if any) between temperature and power.
#
# Perform the following steps:
# - Load a random sample of 1000 value pairs for PE and AT. You can use a random ordering or a sample() over the result. To draw the plot you can use collect().
# - Use matplotlib and Pandas to draw a scatter plot (https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.scatter.html)
# + id="cRjy6o7qt4KN"
from matplotlib import pyplot as plt
import pandas as pd
from pyspark.sql.functions import rand
x_y = sqlContext.sql("SELECT AT, PE FROM power_plant") \
.orderBy(rand()) \
.limit(1000) \
.collect()
# + colab={"base_uri": "https://localhost:8080/"} id="GqDfOKAHuyXa" outputId="e9cef98d-cb03-4e9c-e5a8-ca97b74039d8"
type(x_y)
# + colab={"base_uri": "https://localhost:8080/"} id="qs43clcAu2el" outputId="c7179ab7-6849-4f2e-8b26-3279312d10bf"
x_y
# + id="-S1fgrDGuklB"
x_y_DF = pd.DataFrame(x_y, columns=['AT', 'PE'])
# + colab={"base_uri": "https://localhost:8080/", "height": 363} id="2Ls0C2azutyV" outputId="4cf2db23-d5cf-48cd-b22d-3999bb013b6f"
x_y_DF.head(10)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="MHROaWMUthtm" outputId="5e94d69a-4390-43ca-99b9-e9bee8188b1f"
# create a scatter plot (temperature on the X axis, power on the Y axis, as the exercise asks)
plt.scatter(x = x_y_DF['AT'],
            y = x_y_DF['PE'])
plt.xlabel("Temperature")
plt.ylabel("Power")
plt.title("Temperature vs power scatter plot")
plt.show()
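# + [markdown]
# To put a number on the linear relationship the scatter plot suggests, we can compute Pearson's correlation coefficient. A minimal sketch over made-up (AT, PE) pairs with a clear negative trend; the real columns could be fed in the same way after a `collect()`:

# +
import math

# made-up sample values, not drawn from the real dataset
at = [10.0, 15.0, 20.0, 25.0, 30.0]
pe = [480.0, 465.0, 452.0, 440.0, 425.0]

n = len(at)
mean_at, mean_pe = sum(at) / n, sum(pe) / n
# covariance divided by the product of the deviations' norms
num = sum((x - mean_at) * (y - mean_pe) for x, y in zip(at, pe))
den = math.sqrt(sum((x - mean_at) ** 2 for x in at) * sum((y - mean_pe) ** 2 for y in pe))
r = num / den

print(round(r, 3))  # -> -0.999, a strong negative linear correlation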
# + [markdown] id="TABQ8mjVw9ID"
# ### Exercise 3(b)
#
# Repeating the process above, use a SQL statement to create a scatter plot of the variables Power (PE) and Exhaust Vacuum Speed (V).
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="CintSFUtw9vm" outputId="dda6b835-8615-4439-8d6a-1f0d5455fd70"
x_y_vacuum = sqlContext.sql('SELECT PE, V FROM power_plant').orderBy(rand()).limit(1000).collect()
x_y_DF_vacuum = pd.DataFrame(x_y_vacuum, columns=['PE', 'V'])
plt.scatter(x=x_y_DF_vacuum['PE'],
            y=x_y_DF_vacuum['V'])
plt.xlabel("Power")
plt.ylabel("Exhaust Vacuum Speed")
plt.title("Power vs exhaust vacuum speed scatter plot")
plt.show()
# + [markdown] id="CAtDEVTo1ru0"
# Now let's repeat this exercise with the remaining variables and the Power Output label.
#
# ### Exercise 3(c)
#
# Use a SQL statement to create a scatter plot of the variables Power (PE) and Pressure (AP).
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="9s1irroA1uio" outputId="9d942ac0-c156-490f-aa02-19719438f756"
x_y_pressure = sqlContext.sql('SELECT PE, AP FROM power_plant').orderBy(rand()).limit(1000).collect()
x_y_DF_pressure = pd.DataFrame(x_y_pressure, columns=['PE', 'AP'])
plt.scatter(x=x_y_DF_pressure['PE'],
            y=x_y_DF_pressure['AP'])
plt.xlabel("Power")
plt.ylabel("Pressure")
plt.title("Power vs pressure scatter plot")
plt.show()
# + [markdown] id="Godaz0Q6xGh7"
# ### Exercise 3(d)
#
# Use a SQL statement to create a scatter plot of the variables Power (PE) and Humidity (RH).
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="zRZ4U1QlxHWq" outputId="bf4ce6b7-e693-489f-9468-272515f64476"
x_y_humidity = sqlContext.sql('SELECT PE, RH FROM power_plant').orderBy(rand()).limit(1000).collect()
x_y_DF_humidity = pd.DataFrame(x_y_humidity, columns=['PE', 'RH'])
plt.scatter(x=x_y_DF_humidity['PE'],
            y=x_y_DF_humidity['RH'])
plt.xlabel("Power")
plt.ylabel("Humidity")
plt.title("Power vs humidity scatter plot")
plt.show()
# + [markdown] id="qhlLtyxbiKFu"
# ## Part 4: Prepare the data
#
# The next step is to prepare the data for the regression. Since the entire dataset is numeric and consistent, this will be a simple, direct task.
#
# The goal is to use regression to determine a function that gives us the power output as a function of a set of predictor features. The first step in building our regression is to convert the predictor features of our DataFrame into a feature vector using the [pyspark.ml.feature.VectorAssembler()](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.VectorAssembler) method.
#
# The VectorAssembler is a transformation that combines a given list of columns into a single vector. This transformation is very useful when we want to combine raw features of the data with features generated by applying different functions to the data into a single feature vector before running a machine learning algorithm. To do so, the VectorAssembler takes a list of input column names (a list of strings) and the name of the output column (a string).
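# + [markdown]
# Conceptually, the assembling step is simple: for every row, the input columns are gathered into one features vector and the label is kept aside. A minimal pure-Python sketch over illustrative rows:

# +
# the same input columns the exercise below passes to VectorAssembler
input_cols = ["AT", "V", "AP", "RH"]

# illustrative rows in the dataset's layout (made-up values)
rows = [
    {"AT": 14.96, "V": 41.76, "AP": 1024.07, "RH": 73.17, "PE": 463.26},
    {"AT": 25.18, "V": 62.96, "AP": 1020.04, "RH": 59.08, "PE": 444.37},
]

# one features vector per row, with the label left as its own column
assembled = [{"features": [row[c] for c in input_cols], "PE": row["PE"]} for row in rows]

print(assembled[0]["features"])  # -> [14.96, 41.76, 1024.07, 73.17]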
# + [markdown] id="nSCUNERKjth1"
# ### Exercise 4
#
# - Read the documentation and usage examples for [VectorAssembler](https://spark.apache.org/docs/latest/ml-features.html#vectorassembler)
# - Convert the SQL table `power_plant` into a `dataset` named datasetDF
# - Set the VectorAssembler's input columns to `["AT", "V", "AP", "RH"]`
# - Set the output column to `"features"`
# + [markdown] id="se2FaprEpbOv"
# In a **BATCH** setting we can use either Scikit-Learn with Pandas dataframes or stay with Spark ML or MLlib (in this assignment we will use Spark ML).
#
# NOTE: In **STREAMING**, it is always advisable to use streaming-capable ML tooling (Spark Streaming ML, Apache Mahout, MLflow...); it is better not to use scikit-learn.
# + id="2-E9wzc8o1eY"
# create the dataset so we can vectorize it
from pyspark.ml.feature import VectorAssembler
datasetDF = sqlContext.table('power_plant')
vectorizer = VectorAssembler()
# + colab={"base_uri": "https://localhost:8080/"} id="Ci2UYZxZqr7l" outputId="f21ea4b3-7af8-42db-ec40-752f21343c41"
# Set the INPUT columns
vectorizer.setInputCols(['AT','V','AP','RH'])
vectorizer.setOutputCol('features')
# + colab={"base_uri": "https://localhost:8080/"} id="g3y8_CNorrQa" outputId="66c83682-3fa3-4624-bac4-2fd8f271cdfc"
# apply the vectorizer to the dataframe
vectorizer.transform(datasetDF).head().features
# + [markdown] id="wU5iwpIXtWID"
# ## Part 5: Model the data
#
# Now we will model our data in order to predict what power output will result from a given set of sensor readings.
#
# The [Apache Spark MLlib](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml) API offers several implementations of regression techniques for modeling datasets. In this exercise we will model our data to predict the power output from a set of sensor readings using a simple linear regression, since we saw some linear patterns in our data in the scatter plots during the exploration stage.
#
# We need a way to evaluate how well our [linear regression](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression) model predicts power output as a function of the input parameters. We can do this by splitting our initial dataset into a _training set_ used to train our model and a _test set_ used to evaluate the model's performance. We can use the DataFrame's native [randomSplit()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.randomSplit) method to split our dataset. The method takes a list of weights and an optional random seed. The seed is used to initialize the random number generator used by the split function.
#
# NOTE: We encourage students to explore the different regression techniques available in the [Spark ML API](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml.classification)
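# + [markdown]
# For intuition on what "fitting a linear regression" means, ordinary least squares with a single predictor has a closed form: the line that minimizes the mean squared error. A sketch over made-up points (Spark's LinearRegression performs the multi-dimensional, regularized version of this fit):

# +
# made-up points with a roughly linear trend
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
# slope = covariance(x, y) / variance(x); the intercept follows from the means
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(round(a, 2), round(b, 2))  # -> 1.15 1.94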
# + [markdown] id="774WV1xhtjev"
# ### Exercise 5(a)
#
# Use the [randomSplit()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.randomSplit) method to split `datasetDF` into trainingSetDF (80% of the input DataFrame) and testSetDF (20% of the input DataFrame); to always reproduce the same result, use the seed 1800009193. Finally, cache (cache()) each dataframe in memory to maximize performance.
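# + [markdown]
# The split returned by randomSplit() is approximate: each row is assigned to one side at random, so an 80/20 split yields roughly, not exactly, 80% and 20% of the rows. A minimal pure-Python sketch of the same idea, with a fixed seed making the split reproducible:

# +
import random

# stand-in rows; the seed below mirrors the one used in the exercise
rows = list(range(1000))
rng = random.Random(1800009193)

# each row lands in train with probability 0.8, otherwise in test
choices = [rng.random() < 0.8 for _ in rows]
train = [r for r, keep in zip(rows, choices) if keep]
test_rows = [r for r, keep in zip(rows, choices) if not keep]

print(len(train) + len(test_rows))  # -> 1000, every row on exactly one side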
# + id="yG6chR4ytnNb"
# create the train and test splits
from pyspark.storagelevel import StorageLevel
# set a seed so the random split is reproducible
seed = 1800009193
weights = [0.2, 0.8]
(split20DF, split80DF) = datasetDF.randomSplit(weights=weights, seed=seed)
# + colab={"base_uri": "https://localhost:8080/"} id="NHmJP1mOv46h" outputId="9b538307-ef41-49d4-c4d2-d73ea9428e62"
# Cache the splits
split20DF.persist(StorageLevel.MEMORY_ONLY)
split80DF.persist(StorageLevel.MEMORY_ONLY)
# + colab={"base_uri": "https://localhost:8080/"} id="yA_iV6sIwaRJ" outputId="59537bde-d066-4302-d7ea-e8587ef8659b"
split20DF.count()
# + colab={"base_uri": "https://localhost:8080/"} id="pUVLsYWBwdjK" outputId="6b0cf1b6-67d7-4d09-c6b3-c924c76cf6a4"
split80DF.count()
# + id="7NPRnNC1wfW2"
# store them in variables
trainingSetDF = split80DF.cache()
testSetDF = split20DF.cache()
# + [markdown] id="P4dqkSdRx3wZ"
# Next we will create a linear regression model and use its help output to understand how to train it. See the [Linear Regression](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression) API for more details.
#
# ### Exercise 5(b)
#
# - Read the documentation and examples for [Linear Regression](https://spark.apache.org/docs/latest/ml-classification-regression.html#linear-regression)
# - Run the next cell
# + id="gRf-vauBx4hj"
from pyspark.ml.regression import LinearRegression
from pyspark.ml import Pipeline
# + colab={"base_uri": "https://localhost:8080/"} id="j7sDc7EV72JW" outputId="8701a751-cd36-4f5d-a5bd-a5d43eefc96e"
# initialize the estimator
lr = LinearRegression()
print(lr.explainParams())
# + [markdown] id="H91E2HAU9X52"
# The next cell is based on the [Spark ML Pipeline API for Linear Regression](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression).
#
# The first step is to set the values of the following parameters:
# - Set the name of the column where the prediction will be stored to "Predicted_PE"
# - Set the name of the column containing the label to "PE"
# - Set the maximum number of iterations to 100
# - Set the regularization parameter to 0.1
#
# Next, we will create the [ML Pipeline](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.Pipeline) (execution flow) and set its stages to vectorize first and then apply the linear regressor we have defined.
#
# Finally, we will create the model by training it on the `trainingSetDF` DataFrame.
#
# ### Exercise 5(c)
#
# - Read the [Linear Regression](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression) documentation
# - Complete and run the next cell, supplying the parameters described for our regression.
# + colab={"base_uri": "https://localhost:8080/"} id="M3S9sTPs9YnJ" outputId="3485d9ae-bb10-4b50-a0ab-c0a1751e942d"
# For Spark ML we configure the parameters like this:
lr.setPredictionCol("Predicted_PE") \
.setLabelCol("PE") \
.setMaxIter(100) \
.setRegParam(0.1)
# + id="_PbVendF-ISD" colab={"base_uri": "https://localhost:8080/"} outputId="4dc0e526-d6fe-4af0-f647-5d937f9e544e"
# use the Pipeline API to create the workflow (very similar to scikit-learn's)
lrPipeline = Pipeline()
lrPipeline.setStages([vectorizer, lr])
# + id="npgUdWuBOegR"
# Create our first model
lrModel = lrPipeline.fit(trainingSetDF)
# + colab={"base_uri": "https://localhost:8080/"} id="jYrtnOiDOuaB" outputId="b1c951f4-9a39-40c8-8578-3182eb27300d"
lrModel
# + [markdown] id="3Ecs5ylfO6F2"
# From the Wikipedia article on [Linear Regression](https://en.wikipedia.org/wiki/Linear_regression):
# > In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable \\( y \\) and one or more explanatory variables (or independent variables) denoted \\(X\\). In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models.
#
# Linear regression models have many practical uses. Most fall into one of the following two categories:
# - If the goal is prediction or error reduction, linear regression can be used to fit a predictive model to an observed dataset of \\(y\\) and \\(X\\) values. After developing such a model, given a value of \\(X\\) for which \\(y\\) is unknown, the fitted model can be used to make a prediction of the likely value of \\(y\\).
# - Given a variable \\(y\\) and a number of variables \\( X_1 \\), ..., \\( X_p \\) that may be related to \\(y\\), a linear regression analysis can be applied to quantify how strong the relationship is between \\(y\\) and each \\( X_j \\), to assess which \\( X_j \\) may have no relationship with \\(y\\) at all, and to identify which subsets of the \\( X_j \\) contain redundant information about \\(y\\).
#
# Since we are interested in both uses, we would like to predict the power output as a function of the input variables, and we would like to know which input variables are weakly or strongly correlated with the power output.
#
# Since a linear regression simply computes the line that minimizes the mean squared error over the training dataset, given multiple input dimensions we can express each predictor as a linear function of the form:
#
# \\[ y = a + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p \\]
#
# where \\(a\\) is the intercept (the value at the point 0) and the \\(b_i\\) are the coefficients.
#
# To express the coefficients of that line, we can retrieve the model's Estimator stage from the pipeline and extract the weights and the intercept of the function.
#
# ### Exercise 5(d)
#
# Run the next cell and make sure you understand what is happening.
# + colab={"base_uri": "https://localhost:8080/"} id="Vfw2dWLHS_as" outputId="b1eef0d4-ce24-4a0d-a811-60830735a310"
lrModel.stages[1]
# + colab={"base_uri": "https://localhost:8080/"} id="EECf4jnKO7HU" outputId="380e1d49-b927-46f6-f858-062d1978b467"
# retrieve the intercept
intercept = lrModel.stages[1].intercept
# retrieve the coefficients
weights = lrModel.stages[1].coefficients
# build a list with the attribute names except PE (our dependent variable)
featuresNoLabel = [col for col in datasetDF.columns if col != "PE"]
# pair each coefficient with its attribute name
coefficients = zip(weights, featuresNoLabel)
# build the linear regression equation
equation = "y = {intercept}".format(intercept=intercept)
for x in coefficients:
    weight = abs(x[0])
    name = x[1]
    symbol = "+" if (x[0] > 0) else "-"
    equation += (" {} ({} * {})".format(symbol, weight, name))
# Finally here is our equation
print("Linear Regression Equation: " + equation)
# + [markdown] id="0ORHDJGvXOpp"
# #### Example output
#
# Linear Regression Equation: y = 436.42968944499756 - (1.9177667442995632 * AT) - (0.2541937108619571 * V) + (0.07919159694384864 * AP) - (0.1473348449135295 * RH)
# + [markdown] id="urBACmakXtIq"
# ### QUESTION
#
# - What do we get as the result of this execution?
#
# The equation returns the intercept plus or minus one term per independent attribute (predictor), with the sign and magnitude given by each coefficient. Predictors with small coefficients barely move the prediction away from the intercept, while predictors with large coefficients (positive or negative) move it far from the intercept. The residuals should also be examined: a model fitted too tightly to the training data leads to **overfitting**, while one fitted too loosely leads to **underfitting**.
# + [markdown] id="tU88j44mbUiJ"
# ***
#
# ### Exercise 5(e)
#
# Now we will study how our predictions behave with this model. We apply our linear regression model to the 20% of the data that we held out from the input dataset. The model output will be a column of predicted electricity production called "Predicted_PE".
#
# - Run the following cell
# - Scroll through the results table and observe how the values in the power output column (PE) compare with the corresponding values in the predicted power output (Predicted_PE)
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="4qjzOU7fbVFa" outputId="227d271b-90dd-4add-ecef-3dc9f1ba6141"
# apply lrModel to our test data
predictionsAndLabelDF = lrModel.transform(testSetDF).select("AT","V","AP",
"RH","PE",
"Predicted_PE")
display(predictionsAndLabelDF)
# + [markdown] id="RuopUWaTdXj-"
# From a visual inspection of the predictions, we can see that they are close to the actual values.
#
# However, we would like an exact, quantitative measure of the model's goodness of fit. For this we can use an evaluation metric such as the [Root Mean Squared Error](https://en.wikipedia.org/wiki/Root-mean-square_deviation) (RMSE) to validate our model.
#
# RMSE is defined as: \\( RMSE = \sqrt{\frac{\sum_{i = 1}^{n} (x_i - y_i)^2}{n}}\\) where \\(y_i\\) is the observed value and \\(x_i\\) is the predicted value.
#
# RMSE is a very common measure of the differences between the values predicted by a model or estimator and the values actually observed. The lower the RMSE, the better our model.
#
# The Spark ML Pipeline provides several metrics for evaluating regression models, including [RegressionEvaluator()](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.RegressionEvaluator).
#
# After creating a [RegressionEvaluator](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.RegressionEvaluator) instance, we will set the target column name to "PE" and the prediction column name to "Predicted_PE". Then we will invoke the evaluator on the predictions.
#
# ### Exercise 5(f)
# Complete and run the following cell:
# + id="2orr8v-XdYZt"
from pyspark.ml.evaluation import RegressionEvaluator
# create our RMSE evaluation metric
regEval = RegressionEvaluator(predictionCol = "Predicted_PE",
labelCol = "PE",
metricName = "rmse"
)
# + colab={"base_uri": "https://localhost:8080/"} id="pdeDOmpLeNDG" outputId="ef41bbb7-eba7-48da-de2b-8e4a276745d6"
# run our evaluator
rmse = regEval.evaluate(predictionsAndLabelDF)
print("Root Mean Squared Error is: %.2f" % rmse)
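# As a sanity check on the evaluator, RMSE can also be computed directly from its definition. The sketch below uses NumPy on small hypothetical arrays (not the actual Spark DataFrame); the values are made up purely for illustration:

```python
import numpy as np

# Hypothetical observed (PE) and predicted (Predicted_PE) values
observed = np.array([480.0, 445.2, 438.8, 453.1])
predicted = np.array([478.5, 444.0, 440.2, 452.0])

# RMSE = sqrt(mean((predicted - observed)^2))
rmse_manual = np.sqrt(np.mean((predicted - observed) ** 2))
print("Manual RMSE: %.4f" % rmse_manual)
```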
# + [markdown] id="ygtvoTHZfGmA"
# Another very useful statistical evaluation measure is the coefficient of determination, denoted \\(R ^ 2 \\) or \\(r ^ 2\\) and pronounced "R squared". It is a number that indicates the proportion of the variation in the dependent variable that is predictable from the independent variables, and provides a measure of how well the observed outcomes are replicated by the model, based on the proportion of the total variation of outcomes explained by the model. The coefficient of determination ranges from 0 to 1, and the closer the value is to 1, the better our model.
#
#
# To compute \\(r^2\\), we run the evaluator with `regEval.metricName: "r2"`
#
# Let's compute it by running the following cell.
# + colab={"base_uri": "https://localhost:8080/"} id="0keav9vvfHDf" outputId="ac9f2073-1309-4ffb-9b03-de50ada673c8"
# use the other evaluation metric
#r2 = regEval.evaluate(predictions, {regEval.metricName: "r2"})
r2 = regEval.evaluate(predictionsAndLabelDF, {regEval.metricName: "r2"})
print("r2: {0:.2f}".format(r2))
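# The r2 value can likewise be reproduced by hand. A minimal NumPy sketch on hypothetical numbers, using the standard definition \\(r^2 = 1 - SS_{res}/SS_{tot}\\):

```python
import numpy as np

# Hypothetical observed and predicted values (illustration only)
observed = np.array([480.0, 445.2, 438.8, 453.1])
predicted = np.array([478.5, 444.0, 440.2, 452.0])

ss_res = np.sum((observed - predicted) ** 2)        # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)  # total sum of squares
r2_manual = 1 - ss_res / ss_tot
print("Manual r2: %.4f" % r2_manual)
```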
# + [markdown] id="4Zpp39l_ffkZ"
# - What value of \\(r^2\\) did we obtain? Based on this parameter, do you think the fitted model fits the data well?
#
# ### ANSWER
# \\(R^2\\) indicates the proportion of the variance in the dependent variable that is explained by the independent attributes. Since it ranges between 0 and 1, the closer it is to 1 (i.e. 100%), the better. In this case we obtained 0.93, which looks like a good indicator.
#
# ***
#
# + [markdown] id="MEiTPpgKge0F"
# In general, assuming a Gaussian distribution of errors, a good model will have 68% of its predictions within 1 RMSE and 95% within 2 RMSE of the actual value (see http://statweb.stanford.edu/~susan/courses/s60/split/node60.html).
#
# Let's examine the predictions and see whether an RMSE like the one obtained meets this criterion.
#
# We will create a new DataFrame using [selectExpr()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.selectExpr) to generate a set of SQL expressions, and register the DataFrame as a SQL table using [registerTempTable()](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.registerTempTable).
#
# ### Exercise 5(g)
#
# Run the following cell and make sure you understand what happens.
# + id="k6YAO4kVgff4"
# compute the residual errors and divide them by the RMSE
predictionsAndLabelDF.selectExpr("PE", "Predicted_PE",
"PE - Predicted_PE Residual_Error",
"(PE - Predicted_PE) / {} Within_RMSE".format(rmse)) \
.registerTempTable("Power_Plant_RMSE_Evaluation")
# + [markdown] id="XC-n-2JQgzO-"
# We can use SQL statements to explore the `Power_Plant_RMSE_Evaluation` table. First, let's look at the data in the table using a SQL SELECT statement.
#
# Complete the following query so it returns the rows of the `Power_Plant_RMSE_Evaluation` table, and display a few of them with the show() action.
# + colab={"base_uri": "https://localhost:8080/"} id="_tR5dy0Rgzxx" outputId="28f1b972-271a-4ae3-a303-9e6119ce46e7"
sqlContext.sql("SELECT * FROM Power_Plant_RMSE_Evaluation") \
.show()
# + [markdown] id="0GCUt45aiuln"
# ## HOMEWORK - part 1
#
# ## Answer the following questions
# - Based on the results of exercise 5, do you think the model fits the data well?
# - What other regression techniques would you use given the data analysis in section 3? Are they all available in Spark? If not, what other Machine Learning libraries and tools could we use?
#
#
# + [markdown] id="A6gJchfqhruC"
# ### Exercise 5(h): Plot the RMSE as a histogram
#
# https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.hist.html
#
# You can play with the different histogram parameters, such as the number of bins or the density parameter, which switches between counting the elements in each bin and normalizing so that the histogram area is 1.
# As in exercise 3, take a random sample of 2000 elements and collect it to make the plot.
# + id="kJs1-HtNh2IG"
import numpy as np
df_x=sqlContext.sql(<FILL_IN>)
#MATPLOTLIB
#YOU CAN MODIFY THIS TO SHOW THE RMSE DISTRIBUTION AS A HISTOGRAM
#THIS IS ONLY A STARTING POINT; THERE ARE DIFFERENT WAYS TO DO IT
x = pd.DataFrame(<FILL_IN>).to_numpy()
num_bins=<FILL_IN>
plt.hist(<FILL_IN>)
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="W2IfznbrfgCF" outputId="6aaba53a-1482-4a3f-d9e7-209a05fdc726"
import numpy as np
from pyspark.sql.functions import rand  # needed for orderBy(rand())
df_x=sqlContext.sql("SELECT Within_RMSE FROM Power_Plant_RMSE_Evaluation") \
               .orderBy(rand())\
               .limit(2000).collect()
#MATPLOTLIB
#YOU CAN MODIFY THIS TO SHOW THE RMSE DISTRIBUTION AS A HISTOGRAM
#THIS IS ONLY A STARTING POINT; THERE ARE DIFFERENT WAYS TO DO IT
df_x[:10]
# + [markdown] id="zB8wqOzxiAHb"
# Note that the histogram should clearly show that the RMSE is centered around 0, with the vast majority of errors within 2 RMSE.
#
# Using a slightly more complex SQL SELECT statement, we can count the number of predictions within +/- 1.0 and +/- 2.0.
#
# How many predictions fall within each interval (±1 RMSE, ±2 RMSE, and beyond)? Complete the missing part of the code to find out.
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="f6cuUJhqfacT" outputId="e8ab840c-8f82-42c6-fde2-8ddb0548aba6"
#Check that the Pandas DataFrame was created correctly
x = pd.DataFrame(df_x, columns=['Within_RMSE'], index=None)
x.head()
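# One possible way to complete the histogram step (a sketch: synthetic standard-normal data stands in for the Within_RMSE column so the cell is self-contained; in the notebook you would use the collected sample instead):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
within_rmse = rng.normal(0.0, 1.0, 2000)  # stand-in for the Within_RMSE sample

num_bins = 50
counts, edges, _ = plt.hist(within_rmse, bins=num_bins, density=True)
plt.xlabel('Residual error / RMSE')
plt.ylabel('Density')
plt.show()
```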
# + id="1NmZ5TJ4iCF_"
sqlContext.sql("SELECT case when Within_RMSE <= 1.0 AND Within_RMSE >= -1.0 then 1 when Within_RMSE <= 2.0 AND Within_RMSE >= -2.0 then 2 else 3 end RMSE_Multiple, COUNT(*) AS count FROM Power_Plant_RMSE_Evaluation GROUP BY case when Within_RMSE <= 1.0 AND Within_RMSE >= -1.0 then 1 when Within_RMSE <= 2.0 AND Within_RMSE >= -2.0 then 2 else 3 end").show()
#MATPLOTLIB
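# A sketch for the MATPLOTLIB part: in the notebook the grouped counts would come from the query above via .toPandas(); the counts below are hypothetical so the cell is self-contained:

```python
import pandas as pd
import matplotlib.pyplot as plt

# In the notebook: counts_df = sqlContext.sql("...").toPandas()
counts_df = pd.DataFrame({'RMSE_Multiple': [1, 2, 3],   # hypothetical values
                          'count': [1200, 150, 30]})

counts_df.plot(kind='bar', x='RMSE_Multiple', y='count', legend=False)
plt.xlabel('RMSE multiple (1 = within ±1 RMSE, 2 = within ±2 RMSE, 3 = beyond)')
plt.ylabel('Number of predictions')
plt.show()
```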
# + [markdown] id="Ir_ZqGlwiDo3"
# - How many predictions are at most 1 RMSE away from the actual values?
# - And how many predictions are at most 2 RMSE away from the actual values?
# + [markdown] id="V-MvBB-giFSJ"
# ## Answer the following questions
# 1. Based on the results of exercise 5, do you think the model fits the data well?
# 2. What other regression techniques would you use given the data analysis in section 3? Are they all available in Spark? If not, what other Machine Learning libraries and tools could we use?
# 3. Investigate the **LinearRegression** model and observe the results when changing the remaining parameters
# 4. You can also explore the rest of the estimators and observe and compare the results
| 06-big-data/practices/pra03/03_Spark_Standalone_en_Google_Colab_022021_SparkML.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Yesterday we learned about operators, which are mostly used for solving mathematical expressions. Arithmetic operators follow a precedence when solving expressions, commonly remembered as **BODMAS** (Brackets, Orders, Division, Multiplication, Addition, Subtraction) or **PEMDAS** (Parentheses, Exponents, Multiplication, Division, Addition, Subtraction)
#
# NOTE: In BODMAS and PEMDAS, multiplication and division have the same precedence (as do addition and subtraction), so operators of equal precedence are evaluated from left to right.
# + active=""
# 4+8-7*(5**2)/3*2+7
# 4+8-7*(25)/3*2+7
# 4+8-(175)/3*2+7
# 4+8-58.33*2+7
# 4+8-116.66+7
# 12-116.66+7
# -104.66+7
# -97.66
# -
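# Python itself can confirm the hand evaluation above (note that 7(5**2) must be written with an explicit * in Python):

```python
result = 4 + 8 - 7 * (5 ** 2) / 3 * 2 + 7
print(round(result, 2))  # -97.67
```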
# ## Control Flow
# + active=""
# if logic:
# body
# elif logic:
# body
# else:
# body
# -
if True:
print('Pakistan')
else:
print('Pak-Navy')
# +
if True:
print('Pakistan')
else:
print('Pak-Navy')
if False:
print('Pakistan')
else:
print('Pak-Navy')
# +
# if statement
test_Value = 100
if test_Value > 1:
    print('This code is executable!!!')
if test_Value > 100:
    print('This code is not executable!!!')
print('Remaining program continue here...')
# +
# elif statements
pet_type = 'Fish'
if pet_type == 'Dog':
print('You have Dog')
elif pet_type == 'Cat':
print('You have Cat')
elif pet_type == 'Fish':
print('You have a Fish')
else:
print('Not sure')
# +
# else statement
test_Value = 50
if test_Value > 1:
print('Value is greater than 1')
else:
print('Value is less than 1')
test_string = 'VALID'
if test_string == 'NOT_VALID':
print('string equals NOT_VALID')
else:
print('string equals something else')
# +
user = input('User Name: ')
password = input('password: ')
if (user == 'Qasim' or user == 'Muneeb') and password == '<PASSWORD>':
print('Valid User')
else:
print('Invalid User')
# -
user = input('User Name: ')
password = input('Password: ')
if user in ['Qasim', 'Hassan', 'Muneeb'] and password == '<PASSWORD>':
print('Valid user')
else:
print('Invalid user')
# ## Loops
# + active=""
# for <temporary Variable> in <list variable>:
# <action statement>
# <action statement>
# -
nums = [1,2,3,4,5]
for i in nums:
print(i)
# +
#continue
big_number_list = [1,2,-1,4,-5,5,2,-9]
for i in big_number_list:
if i < 0:
continue
print(i)
# +
#break
big_number_list = [1,2,-1,4,-5,5,2,-9]
for i in big_number_list:
if i < 0:
print('Negative number Detected..!!')
break
print(i)
# +
# nested loop
groups = [['Qasim','Muneeb'],['Haseeb', 'Amir'], ['Zaidi','Abbas']]
for group in groups:
print(group)
for name in group:
print(name)
# -
for i in range(5):
print(str(i)+ ' Pakistan Zindabad')
for i in range(5,10):
print(str(i)+ ' Pakistan Zindabad')
for i in range(5,10,2):
print(str(i)+ ' Pakistan Zindabad')
for i in range(1,11):
# print(2,"x",i,'=', 2*i)
# print("2 x "+str(i)+" ="+str(2*i))
print(f"2 x {i} = {2*i}")
for i in range(10,0,-1):
# print(2,"x",i,'=', 2*i)
# print("2 x "+str(i)+" ="+str(2*i))
print(f"2 x {i} = {2*i}")
# +
table_no = int(input('Enter table no: '))
start = int(input('starting point of table: '))
end = int(input('Ending point of table: '))
for i in range(start,end+1):
print(f"{table_no} x {i} = {table_no * i} ")
# +
table_no = int(input('Enter table no: '))
start = int(input('starting point of table: '))
end = int(input('Ending point of table: '))
for table in range(1,table_no+1):
print("------------------")
for i in range(start,end+1):
        print(f"{table} x {i} = {table * i} ")
# +
table_no = int(input('Enter table no: '))
start = int(input('starting point of table: '))
end = int(input('Ending point of table: '))
for i in range(start,end+1):
for table in range(1,table_no+1):
        print(f"{table} x {i} = {table * i} ", end='\t')
print()
# -
hungry = True
while hungry:
print('Time to eat ..!!')
    hungry = False # if we do not set this to False, the loop will run forever
x = 5
while x > 0:
print(x)
x-=1
x = 1
while x < 5:
print(x)
x +=1
# +
counter = 1
while counter <=10:
print('line no : ',counter)
counter +=1
# +
counter = 1000
while counter >=990:
print('line no :', counter)
counter-=1
# -
| Day3 control_flow & loops/control_flow & loops.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## DepthwiseConv2D
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
# %matplotlib inline
colnames = ['Name', 'FileSize', 'Blocks', 'Image', 'Output', 'Params', 'J_runtime_h', 'J_frames_h', 'J_runtime_l', 'J_frames_l', 'S_runtime', 'S_frames', 'nan_1', 'nan_2']
data = pd.read_csv('DepthwiseConv2D.csv', names=colnames, skiprows=1)
data.head()
ax = plt.gca()
data.plot(kind='scatter',x='Image',y='S_runtime',color='b', ax=ax, figsize=(14,10), label = 'Sipeed')
data.plot(kind='scatter',x='Image',y='J_runtime_h',color='g', ax=ax, figsize=(14,10), label='JeVois @ 1344MHz')
data.plot(kind='scatter',x='Image',y='J_runtime_l',color='r', ax=ax, figsize=(14,10), label='JeVois @ 408MHz')
plt.title('DepthwiseConv2d - JeVois, Sipeed')
plt.xlabel('Image Size [Pixels]')
plt.ylabel('Runtime [ms]')
plt.show()
ax = plt.gca()
data.plot(kind='scatter',x='Params',y='S_frames',color='b', ax=ax, figsize=(14,10), label = 'Sipeed')
data.plot(kind='scatter',x='Params',y='J_frames_h',color='g', ax=ax, figsize=(14,10), label='JeVois @ 1344MHz')
data.plot(kind='scatter',x='Params',y='J_frames_l',color='r', ax=ax, figsize=(14,10), label='JeVois @ 408MHz')
plt.title('DepthwiseConv2d - JeVois, Sipeed')
plt.xlabel('Number of parameters')
plt.ylabel('Frames [fps]')
plt.yscale('symlog')
plt.ylim([5,1000])
plt.show()
| Graph_generator/graph_generator_v2/graphs_depthwiseConv2d-compare.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: gantutorial
# language: python
# name: gantutorial
# ---
# # Building and training your first Network with PyTorch
# ### hello nn.module world
# <div class="alert alert-block alert-success">
# <b>__init__</b> is a reserved method in Python classes. It is known as a constructor in object-oriented programming. <br>
# This method is called when an object is created from the class, and it allows the class to initialize the attributes of the class. <br>
# We will define how our network is created in this method!
# </div>
# +
# %%writefile mycoolnetwork.py
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyCoolNetwork(nn.Module):
def __init__(self):
super(MyCoolNetwork, self).__init__()
# 1 input image channel, 6 output channels, 3x3 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
# +
from mycoolnetwork import MyCoolNetwork
net = MyCoolNetwork()
print(net)
# -
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
# <div class="alert alert-block alert-success">
# <b>PyTorch has your back!</b> You just have to define the <b>forward function</b>, and the backward function (where gradients are computed) is automatically defined for you using <b>autograd</b>. You can use any of the Tensor operations in the forward function.
# </div>
#
# ### Where are the weights?
# The weights or learnable parameters of a model are returned by net.parameters(), or for a single layer by mylayer.parameters()
params = list(net.parameters())
print(len(params))
print(params[0].size())
print(list(net.fc2.parameters()))
# <div class="alert alert-block alert-info">
# <b>Exercise:</b> Make a network that combines an input x with another input z through separate fully-connected layers.</div>
# ## Writing a supervised learning training loop
# how do we fit this thing?
# ### Loss Functions
# A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.
# A simple loss is: nn.MSELoss which computes the mean-squared error between the input and the target.
# There are several different loss functions under the nn package
[x for x in dir(nn) if x.endswith('Loss')]
# +
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
# -
# ### Backprop
# To backpropagate the error, all we have to do is call loss.backward().
# You need to clear the existing gradients first, otherwise new gradients will be accumulated onto the existing ones.
#
# Now we shall call loss.backward(), and have a look at conv1’s bias gradients before and after the backward.
# +
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
# -
# ### Updating the weights
# To put this search for the minimum error into practice we use **Stochastic Gradient Descent**: it consists of showing the input vectors of a subset of the training data, computing the outputs and their errors, calculating the gradient for those examples, and adjusting the weights accordingly. This process is repeated over several subsets of examples until the average of the objective function stops decreasing.
#
# 
#
# Check out http://sebastianruder.com/optimizing-gradient-descent/
# We can do it with simple Python code ...
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
#
# <div class="alert alert-block alert-success">
# <b>PyTorch has your back!</b> However, as you use neural networks, you want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package: <b>torch.optim</b> that implements all these methods.
# </div>
#
#
from torch import optim
[x for x in dir(optim) if '__' not in x]
# Or ... with Pytorch
# +
# create your optimizer with Pytorch
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
# -
# ## References & Interesting links
# - [PyTorch Tutorials](https://pytorch.org/tutorials/)
# - [NIPS 2016 Tutorial: Generative Adversarial Networks](https://arxiv.org/abs/1701.00160)
| notebooks/part-03-GANs-with-pytorch/00-firstGAN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime
date_time_str = '2019-12-06 09:45:00.0'
date_time_obj = datetime.datetime.strptime(date_time_str,'%Y-%m-%d %H:%M:%S.%f')
txt_data = pd.read_csv('/home/deaddemocracy/Documents/Major_Project/Bridge data/No3.txt', skiprows = 1,
sep ='\t')
# +
# txt_data
# -
txt_data['rtime'] = np.arange(len(txt_data))
txt_data['rtime'] = txt_data['rtime'].map(lambda coun:(pd.Timestamp(date_time_obj) + datetime.timedelta(milliseconds = 100 * coun)))
print(txt_data.head());
txt_data.to_csv("/home/deaddemocracy/Documents/Major_Project/Bridge data/Output.csv", sep = ';')
df = pd.read_csv('/home/deaddemocracy/Documents/Major_Project/Bridge data/Output.csv', sep =';')
df.isnull().sum(); # Displays the number of NaN cells in each column
df = df.drop(['Unnamed: 0', 'Time(s)', 'ChipTime'],axis = 1)
df = df.dropna(axis = 1, how = 'all') # Drops columns with NaN values
df['rtime'] = pd.to_datetime(df['rtime'])
df = df.set_index('rtime')
df.index
df
df.to_csv('preprocessed.csv')
| src/Preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
import nibabel as nib
import glob
import matplotlib.pyplot as plt
import nibabel as nib
from BSUnet import *
num_of_epochs = 1
global_best_metric = 0
def read_ct(path):
img = nib.load(path)
img = img.get_data()
return img
def loadCT(path):
images = glob.glob(path+"/volume*")
segmentations = glob.glob(path+"/segmentation*", )
images = sorted(images)
segmentations = sorted(segmentations)
return images , segmentations
from skimage.transform import resize  # assumed source of resize(); img_rows / img_cols are assumed to be defined globally
def preprocess(imgs):
    imgs_p = np.ndarray((imgs.shape[0], img_rows, img_cols), dtype=np.uint8)
    for i in range(imgs.shape[0]):
        imgs_p[i] = resize(imgs[i], (img_cols, img_rows), preserve_range=True)
    imgs_p = imgs_p[..., np.newaxis]
    return imgs_p
def trainUnet(model,model_checkpoint,num_channels=2,num_ct=1,folders=2,batch_size=8):
"""
Training by taking ct scans of only num_ct files and each data point of shape
(512,512,num_channels)
"""
path = "../data/batch"
images ,segmentations = loadCT(path)
for i in range(0,len(images),num_ct):
print("image " + str(i)+" out of "+str(len(images)))
X_train = []
y_train = []
img = read_ct(images[i])
seg = read_ct(segmentations[i])
print("Shape of img : ", img.shape)
##img shape: (512,512,X) X is the sum of all slices of num_ct files
for j in range(0,img.shape[2]):
## simg shape (512,512)
simg = img[:,:,j].astype(float)
sseg = seg[:,:,j]
##HU clipping
simg[simg >250 ] = 250
simg[simg < -200] = -200
## Normalization
simg -= -200
simg /= 450
## treating tumor as part of liver
sseg[sseg > 0] = 1
if np.sum(sseg == 1)>0 :
X_train.append(simg)
y_train.append(sseg)
print("Len of X_train ",len(X_train))
X_train = np.array(X_train)
y_train = np.array(y_train)
X_train = X_train[...,np.newaxis]
y_train = y_train[...,np.newaxis]
print("shape of X_train ",X_train.shape)
print("Shape of y_train ",y_train.shape)
model.fit(X_train,y_train,callbacks=[model_checkpoint],batch_size=batch_size) ## set epoch to 1
return model
# +
def evaluate(model,fromIndex,batch_size=8):
path = "../data/Test"
images ,segmentations = loadCT(path)
histot = []
for i in range(fromIndex,len(images)):
print("image " + str(i))
X_test = []
y_test = []
img = read_ct(images[i])
seg = read_ct(segmentations[i])
print("Shape of img : ", img.shape)
##img shape: (512,512,X) X is the sum of all slices of num_ct files
for j in range(0,img.shape[2]):
simg = img[:,:,j].astype(float)
sseg = seg[:,:,j]
## simg shape (512,512)
##HU clipping
simg[simg >250 ] = 250
simg[simg < -200] = -200
## Normalization
simg -= -200
simg /= 450
## treating tumor as part of liver
sseg[sseg > 0] = 1
if np.sum(sseg == 1)>0 :
X_test.append(simg)
y_test.append(sseg)
print("Len of X_test ",len(X_test))
X_test = np.array(X_test)
y_test = np.array(y_test)
X_test = X_test[...,np.newaxis]
y_test = y_test[...,np.newaxis]
# mean = np.mean(X_test) # mean for data centering
# std = np.std(X_test) # std for data normalization
# X_test -= mean
# X_test /= std
        print("shape of X_test ",X_test.shape)
        print("Shape of y_test ",y_test.shape)
history = model.evaluate(X_test,y_test,batch_size=batch_size)
print(history)
histot.append(history)
return histot
# -
num_channels = 1
num_ct = 1
# model = liverUnet(input_size=(512,512,num_channels))
# model = get_unet_sorr(input_size=(512,512,num_channels))
model = segmentedUnet(input_size=(512,512,num_channels),output_ch=(512,512,num_channels))
model_checkpoint = ModelCheckpoint('./weights/BSUnet/best_weights.hdf5', monitor='loss',verbose=1, save_best_only=True)
model.summary()
# +
# num_epochs = 2
# for e in range(num_epochs):
# print("*"*50)
# print("** epoch ",e)
# model = trainUnet(model,model_checkpoint,num_channels=num_channels,num_ct=num_ct,folders=1,batch_size=10)
# model.save_weights('./weights/BSUnet/after_epoch{}.hdf5'.format(e))
# model.save_weights('./weights/BSUnet/final_weights.hdf5')
# -
model.load_weights('./weights/BSUnet/final_weights.hdf5')
# # Validation
# model.load_weights('unet_liver_preprocess.hdf5')
# model.load_weights('unet_liver_after_epoch.hdf5')
h = evaluate(model,0,batch_size=2)
alp = np.array(h)
y = np.mean(alp,axis=0)
print(y)
print(global_best_metric)
# if y[1]>global_best_metric[1]:
# global_best_metric = y
values = {}
histories = {}
for e in range(12):
model.load_weights('./weights/BSUnet/after_epoch{}.hdf5'.format(e))
h = evaluate(model,0,batch_size = 2)
alp = np.array(h)
mean = np.mean(h,axis=0)
values[e] = mean
histories[e] = h
# # Testing
images ,segmentations = loadCT("../data/Test")
img = read_ct(images[11])
seg = read_ct(segmentations[11])
X_test = []
y_test = []
for j in range(0,img.shape[2]):
simg = img[:,:,j].astype(float)
sseg = seg[:,:,j]
## simg shape (512,512)
##HU clipping
simg[simg >250 ] = 250
simg[simg < -200] = -200
## Normalization
simg -= -200
simg /= 450
## treating tumor as part of liver
sseg[sseg > 0] = 1
if np.sum(sseg == 1)>0 :
X_test.append(simg)
y_test.append(sseg)
print("Len of X_test ",len(X_test))
X_test = np.array(X_test)
y_test = np.array(y_test)
X_test = X_test[...,np.newaxis]
y_test = y_test[...,np.newaxis]
print("shape of X_test ",X_test.shape)
print("Shape of y_test ",y_test.shape)
# history = model.evaluate(X_test,y_test,batch_size=batch_size)
idx = 100
plt.imshow(X_test[idx][:,:,0].reshape(512,512), cmap="gray")
plt.imshow(y_test[idx][:,:,0].reshape(512,512), cmap='jet', alpha=0.5)
result = model.predict(X_test[idx:idx+1])
result.shape
plt.imshow(X_test[idx][:,:,0].reshape(512,512), cmap="gray")
plt.imshow(result[0][:,:,0].reshape(512,512), cmap='jet', alpha=0.5)
np.sum(result>0)
values
histories[9]
| BSUnetTrainForLiver.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NumPy
#
# NumPy (or Numpy) is a Linear Algebra Library for Python, the reason it is so important for Data Science with Python is that almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks.
#
# Numpy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use Arrays instead of lists, check out this great [StackOverflow post](http://stackoverflow.com/questions/993984/why-numpy-instead-of-python-lists).
#
import numpy as np
# Numpy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of Numpy: vectors,arrays,matrices, and number generation. Let's start by discussing arrays.
#
# # Numpy Arrays
#
# Numpy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).
#
# Let's begin our introduction by exploring how to create NumPy arrays.
#
# ## Creating NumPy Arrays
#
# ### From a Python List
#
# We can create an array by directly converting a list or list of lists:
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
# ## Built-in Methods
#
# There are lots of built-in ways to generate Arrays
# ### arange
#
# Return evenly spaced values within a given interval.
np.arange(0,10)
np.arange(0,11,2)
# ### zeros and ones
#
# Generate arrays of zeros or ones
np.zeros(3)
np.zeros((5,5))
np.ones(3)
np.ones((3,3))
# ### linspace
# Return evenly spaced numbers over a specified interval.
np.linspace(0,10,3)
np.linspace(0,10,50)
# ## eye
#
# Creates an identity matrix
np.eye(4)
# ## Random
#
# Numpy also has lots of ways to create random number arrays:
#
# ### rand
# Create an array of the given shape and populate it with
# random samples from a uniform distribution
# over ``[0, 1)``.
np.random.rand(2)
np.random.rand(5,5)
# ### randn
#
# Return a sample (or samples) from the "standard normal" distribution. Unlike rand which is uniform:
np.random.randn(2)
np.random.randn(5,5)
# ### randint
# Return random integers from `low` (inclusive) to `high` (exclusive).
np.random.randint(1,100)
np.random.randint(1,100,10)
# ## Array Attributes and Methods
#
# Let's discuss some useful attributes and methods of an array:
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)
arr
ranarr
# ## Reshape
# Returns an array containing the same data with a new shape.
arr.reshape(5,5)
# ### max,min,argmax,argmin
#
# These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
# ## Shape
#
# Shape is an attribute that arrays have (not a method):
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,25)
arr.reshape(1,25).shape
arr.reshape(25,1)
arr.reshape(25,1).shape
# ### dtype
#
# You can also grab the data type of the object in the array:
arr.dtype
| 01-Arrays.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
points = np.arange(-10,10,0.5)
# points would be all the numbers in the range provided
points.shape
# points above are coordinate vectors
# we can get the coordinate matrix by using the meshgrid function
dx, dy = np.meshgrid(points,points)
dx.shape
dy.shape
dx
dy
plt.imshow(dx)
plt.imshow(dx)
plt.colorbar()
t = ( np.tan(dx) + np.tan(dy) )
plt.imshow(t)
plt.colorbar()
s = ( np.sin(dx) + np.sin(dy) )
plt.imshow(s)
plt.colorbar()
plt.title('sin(x) + sin(y)')
selectA = np.arange(1, 11)
selectA
selectB = np.arange(201,211)
selectB
cond = np.array([False, True, True, False, False, True, False, True, False, True])
# cond, selectA, and selectB must all have the same length (or dimension)
answer = np.where(cond, selectA, selectB)
answer
# let's create a random matrix
r = np.random.randn(10,10)
r
ra = np.where(r<0, 1, r)
ra
| numpy/Visualize Array.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
y = np.array([1 , 2 , 3, 4])
y
print(type(y))
y = np.array([3.14 , 5 , 4])
y
#
#
# Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type. If types do not match, NumPy will upcast if possible (here, integers are up-cast to floating point):
#
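# As an illustrative aside (not part of the original notebook), a quick way to see which type NumPy settles on when mixing types is `np.result_type`:

```python
import numpy as np

print(np.array([1, 2, 3]).dtype)             # a platform-dependent integer dtype
print(np.array([3.14, 5, 4]).dtype)          # float64: the integers are upcast
print(np.result_type(np.int64, np.float64))  # float64: the promotion rule itself
```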
y = np.array([4,5,6,3] , dtype='float')
y
np.array([range(i , i+3) for i in [2 , 4 , 6]])
# # Basic array Form Np
np.zeros((2,4) , dtype='int')
np.ones((3,4))
np.full((3,5) , 3.14)
np.random.randn(3,5) #it fills array as per standard normal distribution value
np.random.random((2,3)) # np.random.random performs random sampling: it returns an array of the specified shape filled with random floats in the half-open interval [0.0, 1.0)
# +
# Create a 3x3 array of normally distributed random values
# with mean 0 and standard deviation 1
np.random.normal(0, 1, (3, 3))
# -
np.random.randint(0 , 10 , (3 , 4)) # Create a 3x4 array of random integers in the interval [0, 10)
np.eye(3,4) #create identity matrix
np.empty(3)
# Create an uninitialized array of three integers
# The values will be whatever happens to already exist at that memory location
# 1. bool_ Boolean (True or False) stored as a byte
# 2. int_ Default integer type (same as C long; normally either int64 or int32)
# 3. intc Identical to C int (normally int32 or int64)
# 4. intp Integer used for indexing (same as C ssize_t; normally either int32 or int64)
# 5. int8 Byte (-128 to 127)
# 6. int16 Integer (-32768 to 32767)
# 7. int32 Integer (-2147483648 to 2147483647)
# 8. int64 Integer (-9223372036854775808 to 9223372036854775807)
# 9. uint8 Unsigned integer (0 to 255)
# 10. uint16 Unsigned integer (0 to 65535)
# 11. uint32 Unsigned integer (0 to 4294967295)
# 12. uint64 Unsigned integer (0 to 18446744073709551615)
# 13. float_ Shorthand for float64.
# 14. float16 Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
# 15. float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
# 16. float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
# 17. complex_ Shorthand for complex128.
# 18. complex64 Complex number, represented by two 32-bit floats
# 19. complex128 Complex number, represented by two 64-bit floats
# # Basic Numpy array
# Attribute of the array
# +
np.random.seed(0) # ensures the same random values are generated each time this is run
x1 = np.random.randint(10 , size = 6)
x2 = np.random.randint(10 , size = (3,4))
x3 = np.random.randint(10 , size = (3,4,5))
# -
print(x3.ndim , x3.shape , x3.size)
print(x3.dtype)
print(str(x3.itemsize) +" bytes " , str(x3.nbytes) + " bytes")
x1
x1[-1]
x2
x2[0 ,0]
# +
#array slicing x[start:stop:step]
# -
y = np.arange(10 , dtype='int')
y[:7]
y[4:]
y[4:6]
y[::4] #every other element
y[2::2] #every other element starting from index 2
y[::-1] # all elements in reversed order
x2
x2[:2 , :3] #two rows 3 column
x2[: , :3]
x2[::-1 , ::-1]
x2[:2 , :2]
# Now if we modify this subarray, we'll see that the original array is changed! Observe:
x2_sub = x2[:2 , :2]
x2_sub[0 , 0]=4
x2_sub
x2
#
#
# This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer.
#
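# As an illustrative check (added here, not in the original notebook), `np.shares_memory` can confirm that a basic slice is a view into the original buffer, while `.copy()` produces an independent array:

```python
import numpy as np

base = np.arange(12).reshape(3, 4)
view = base[:2, :2]         # basic slicing returns a view
copy = base[:2, :2].copy()  # an explicit, independent copy

print(np.shares_memory(base, view))  # True: the view reuses base's buffer
print(np.shares_memory(base, copy))  # False: the copy has its own buffer

view[0, 0] = 99
print(base[0, 0])  # 99: modifying the view changed the original array
```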
# # Creating copies of the array
#
#
# If we now modify this subarray, the original array is not touched:
#
x2_sub_copy = x2[:2 , :3].copy()
x2_sub_copy
x2_sub_copy[0 , 2]=45
x2_sub_copy
x2
# # Reshaping
# Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix. This can be done with the reshape method, or more easily done by making use of the newaxis keyword within a slice operation:
grid = np.arange(1,10).reshape(3,3)
grid
y = np.array([1,5,6,87])
print(y.shape)
y.reshape((4 , 1))
y[: , np.newaxis]
# # Array Concatenation and Splitting
# np.concatenate, np.vstack, and np.hstack. np.concatenate takes a tuple or list of arrays as its first argument
x = np.array([1 , 3 , 4 ])
y = np.array([3 , 45 , 6 , 6])
np.concatenate([x,y])
z = np.array([4 ,8,9,3])
np.concatenate([x , y , z])
grid = np.arange(2 , 8).reshape(2,3)
grid
np.concatenate([grid , grid]) # concatenate along the first axis
# concatenate along the second axis (zero-indexed)
np.concatenate([grid , grid] , axis=1)
np.concatenate([x , grid]) # raises ValueError: a 1-d array cannot be concatenated with a 2-d array directly
# For working with arrays of mixed dimensions, it can be clearer to use the np.vstack (vertical stack) and np.hstack (horizontal stack) functions:
np.vstack([x , grid])
h= np.array([[1],[2]])
np.hstack([grid ,h])
# # Splitting of the arrays
# np.split, np.hsplit, and np.vsplit. For each of these, we can pass a list of indices giving the split points:
x = [2 , 3 ,4, 48,5 ,5 ,65 , 4 ]
x1 , x2 , x3 , x4 = np.split(x , [2, 4, 6]) # split points should be given in increasing order
print(x1 , x2 , x3 , x4)
# +
grid = np.arange(16).reshape((4, 4))
grid
# -
uppr , lower= np.vsplit(grid , [1])
print(uppr , lower)
left, right = np.hsplit(grid, [2])
print(left)
print(right)
| NUMPY/numpy_practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Highmaps Demos
# ===============
# Drilldown: http://www.highcharts.com/maps/demo/map-drilldown
# -------------------------------------------------------------
# +
from highcharts import Highmap
from highcharts.highmaps.highmap_helper import jsonp_loader, js_map_loader, geojson_handler
"""
Drilldown is a technique to present data at different levels of detail.
This example shows how to generate a drilldown map with both state- and county-level data in the US.
However, the example here requires complicated JS functions added into the chart.events options,
because it needs to query data from the cloud every time the user zooms into a state on click.
A similar result can be achieved by querying the whole data set and adding it into the chart
using the add_drilldown_data_set method, which is shown in a different example: map-drilldown-2.py
"""
Drilldown_functions_dict = {
'US_States': """function (e) {
if (!e.seriesOptions) {
var chart = this,
mapKey = 'countries/us/' + e.point.drilldown + '-all',
fail = setTimeout(function () {
if (!Highcharts.maps[mapKey]) {
chart.showLoading('<i class="icon-frown"></i> Failed loading ' + e.point.name);
fail = setTimeout(function () {
chart.hideLoading();
}, 1000);
}
}, 3000);
chart.showLoading('<i class="icon-spinner icon-spin icon-3x"></i>');
$.getScript('http://code.highcharts.com/mapdata/' + mapKey + '.js', function () {
data = Highcharts.geojson(Highcharts.maps[mapKey]);
$.each(data, function (i) {
this.value = i;
});
chart.hideLoading();
clearTimeout(fail);
chart.addSeriesAsDrilldown(e.point, {
name: e.point.name,
data: data,
dataLabels: {
enabled: true,
format: '{point.name}'
}
});
});
}
this.setTitle(null, { text: e.point.name });
}""",
}
H = Highmap()
"""
Drilldown map requires an additional JS library from highcharts, which can be added using
add_JSsource method
Also, it needs a bootstrap CSS file, which is added using add_CSSsource method
"""
H.add_JSsource('http://code.highcharts.com/maps/modules/drilldown.js')
H.add_CSSsource('http://netdna.bootstrapcdn.com/font-awesome/3.2.1/css/font-awesome.css')
map_url = 'http://code.highcharts.com/mapdata/countries/us/us-all.js'
geojson = js_map_loader(map_url)
data = geojson_handler(geojson)
for i, item in enumerate(data):
item.update({'drilldown':item['properties']['hc-key']})
item.update({'value': i}) # add bogus data
options = {
'chart' : {
'events': {
'drilldown': Drilldown_functions_dict['US_States'],
'drillup': "function () {\
this.setTitle(null, { text: 'USA' });\
}",
}
},
'title' : {
'text' : 'Highcharts Map Drilldown'
},
'subtitle': {
'text': 'USA',
'floating': True,
'align': 'right',
'y': 50,
'style': {
'fontSize': '16px'
}
},
'legend': {} if H.options['chart'].__dict__.get('width', 0) < 400 else { # make sure the width of chart is enough to show legend
'layout': 'vertical',
'align': 'right',
'verticalAlign': 'middle'
},
'colorAxis': {
'min': 0,
'minColor': '#E6E7E8',
'maxColor': '#005645'
},
'mapNavigation': {
'enabled': True,
'buttonOptions': {
'verticalAlign': 'bottom'
}
},
'plotOptions': {
'map': {
'states': {
'hover': {
'color': '#EEDD66'
}
}
}
},
'drilldown': {
'activeDataLabelStyle': {
'color': '#FFFFFF',
'textDecoration': 'none',
'textShadow': '0 0 3px #000000'
},
'drillUpButton': {
'relativeTo': 'spacingBox',
'position': {
'x': 0,
'y': 60
}
}
}
}
H.add_data_set(data,'map','USA',dataLabels = {
'enabled': True,
'format': '{point.properties.postal-code}'
})
H.set_dict_options(options)
H
| examples/ipynb/highmaps/map-drilldown.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from splinter import Browser
from bs4 import BeautifulSoup as bs
import pandas as pd
import requests
import time
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
# ## NASA Mars News
news_url = 'https://mars.nasa.gov/news'
browser.visit(news_url)
time.sleep(3)
news_html = browser.html
news_soup = bs(news_html, "html.parser")
news_title_find = news_soup.find('div', class_="content_title")
news_title = news_title_find.text
# news_title
news_p_find = news_soup.find('div', class_ = "article_teaser_body")
news_p = news_p_find.text
# news_p
print(news_title)
print(news_p)
# ## JPL Space Images - Featured Image
# +
jpl_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(jpl_url)
jpl_html = browser.html
jpl_soup = bs(jpl_html, "html.parser")
# print(jpl_soup.prettify())
featured_image = jpl_soup.find('a', class_="button fancybox")
featured_image_url = 'https://www.jpl.nasa.gov'+ featured_image['data-fancybox-href']
print(featured_image_url)
# -
# ## Mars Weather
# +
mars_weather_url = ('https://twitter.com/marswxreport?lang=en')
browser.visit(mars_weather_url)
time.sleep(3)
weather_html = browser.html
weather_soup = bs(weather_html, "html.parser")
# print(weather_soup.prettify())
mars_tweets = [weather_soup.find_all('p', class_="TweetTextSize"), weather_soup.find_all('span', class_="css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0")]
# -
for tweets in mars_tweets:
mars_tweet = tweets
for tweet in mars_tweet:
if 'InSight' in tweet.text:
mars_weather = tweet.text
if tweet.a in tweet:
mars_weather = mars_weather.strip(tweet.a.text)
break
print(mars_weather)
# ## Mars Facts
mars_facts_url = 'https://space-facts.com/mars/'
browser.visit(mars_facts_url)
tables = pd.read_html(mars_facts_url)
mars_table = tables[0]
mars_table.rename(columns={0:'Facts',1:'Value'}, inplace=True)
mars_table
mars_facts = tables[0].to_html()
mars_facts
# ## Mars Hemisphere
# +
hemisphere_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(hemisphere_url)
hemisphere_html = browser.html
hemisphere_soup = bs(hemisphere_html, "html.parser")
hemispheres = hemisphere_soup.find_all('a', class_="itemLink")
# hemispheres[0].get('href')
# -
link_list = []
for hemi in hemispheres:
if hemi.get('href') not in link_list:
link_list.append(hemi.get('href'))
links = ['https://astrogeology.usgs.gov' + link for link in link_list]
links
hemisphere_image_urls = []
for link in links:
url = link
browser.visit(url)
mars_html = browser.html
soup = bs(mars_html, "html.parser")
title_text = soup.find('h2', class_="title")
img_url = soup.find('div', class_="downloads")
hemi_dict = {'title': title_text.text, 'img_url': img_url.a.get('href')}
hemisphere_image_urls.append(hemi_dict)
hemisphere_image_urls
| Mission_to_Mars/mission_to_mars.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # IMDB Movie Reviews Sentiment Analysis using Pretrained Word Embeddings
# ### Pretrained word embeddings are said to be useful only when the available training data is scarce. To prove this, this model will be trained with a small subset of the IMDB train data, and its evaluation performance on test data will be compared with that of a model created with jointly learned word embeddings and trained with the full training data.
#
# ### Preprocessed IMDB data that comes packaged with Keras is used here.
#
# +
# Imports required packages
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
import numpy as np
import matplotlib.pyplot as plt
import os
# -
max_features = 10000 # count of most common words
embedding_dim = 100 # dimension of embedding
max_input_length = 500 # number of review words to take into consideration
# +
# Loads train and test data from preprocessed IMDB database that comes with Keras
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words = max_features)
# -
print("Train samples count:", x_train.shape[0],
"\nTest samples count:", x_test.shape[0])
# +
# Pads reviews to be of same size
x_train = pad_sequences(x_train, maxlen = max_input_length)
x_test = pad_sequences(x_test, maxlen = max_input_length)
# +
# Restricts number of training samples to 200 to prove the understanding mentioned in the title of this notebook
training_sample_count = 200
x_val = x_train[training_sample_count:]
y_val = y_train[training_sample_count:]
x_train = x_train[:training_sample_count]
y_train = y_train[:training_sample_count]
# +
# Downloads and unzip GloVe - the pretrained word embeddings
# !wget http://nlp.stanford.edu/data/glove.6B.zip
# !unzip -q glove.6B.zip
# +
# Creates embedding index
embedding_index = {}
glove_file_name = "glove.6B.100d.txt"
glove_file_path = os.path.join(os.getcwd(), glove_file_name)
file_stream = open(glove_file_path)
for line in file_stream:
values = line.split()
embedding_index[values[0]] = np.asarray(values[1:], dtype = "float32")
file_stream.close()
# -
print("Number of words in the embedding index is:", len(embedding_index))
# +
# The steps below prepare the embedding matrix used to initialize the weights of the embedding layer
# Reverses the (word, index) pairs to get an index-to-word mapping, so that only the
# top 10000 (var max_features) words are looked up in the GloVe index
index_word_dict = dict([(index, word) for (word, index) in imdb.get_word_index().items()])
# Creates embedding matrix with zero (0) as placeholder
embedding_matrix = np.zeros((max_features, embedding_dim))
for word_index in range(1, max_features+1):
embedding_vector = embedding_index.get(index_word_dict[word_index])
if embedding_vector is not None:
embedding_matrix[word_index - 1] = embedding_vector
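# The matrix-building loop above can be illustrated with a tiny self-contained sketch (toy vocabulary and 3-dimensional vectors are assumptions here, not the actual GloVe data): words missing from the embedding index simply keep their all-zero rows.

```python
import numpy as np

# Toy stand-ins for the real GloVe index and the IMDB index-to-word mapping
embedding_index = {"good": np.array([0.1, 0.2, 0.3]),
                   "bad":  np.array([0.4, 0.5, 0.6])}
index_word = {1: "good", 2: "bad", 3: "unseen"}

vocab_size, dim = 3, 3
matrix = np.zeros((vocab_size, dim))
for idx in range(1, vocab_size + 1):
    vec = embedding_index.get(index_word[idx])
    if vec is not None:          # out-of-vocabulary words keep all-zero rows
        matrix[idx - 1] = vec

print(matrix[2])  # the row for "unseen" stays all zeros
```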
# +
# Creates the same model that was used with jointly learned embeddings (refer to the other notebook)
model = Sequential()
model.add(Embedding(max_features, embedding_dim, input_length = max_input_length))
model.add(Flatten())
model.add(Dense(32, activation = "relu"))
model.add(Dense(1, activation = "sigmoid"))
# +
# Shows the model summary
model.summary()
# +
# Sets GloVe embedding matrix as weigths of the embedding layer and sets the layer as non-trainable
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
# +
# Compiles the model
model.compile(optimizer = "rmsprop", loss = "binary_crossentropy", metrics = ["acc"])
# +
# Fits the model
epochs = 10
batch_size = 32
history = model.fit(x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_val, y_val))
# +
# Evaluates training and validation performance
history_dict = history.history
epoch_range = range(1, len(history_dict["acc"]) + 1)
train_losses = history_dict["loss"]
val_losses = history_dict["val_loss"]
train_accuracies = history_dict["acc"]
val_accuracies = history_dict["val_acc"]
plt.plot(epoch_range, val_losses, "b", label = "Validation Loss")
plt.plot(epoch_range, train_losses, "bo", label = "Training Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.title("Validation & Training Loss")
plt.figure()
plt.plot(epoch_range, val_accuracies, "b", label = "Validation Accuracy")
plt.plot(epoch_range, train_accuracies, "bo", label = "Training Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Validation & Training Accuracy")
plt.show()
# +
# Prints the validation performance
print("Validation Loss:", history_dict["val_loss"][-1],
"\nValidation Accuracy:", history_dict["val_acc"][-1])
# +
# Prints the test performance
eval_result = model.evaluate(x_test, y_test)
print("Test Loss:", eval_result[0],
"\nTest Accuracy:", eval_result[1])
# -
# ### Now, fit the model with same training data, but without the weights from the pretrained embeddings and find if pretrained embeddings actually helped to score any better on the same training set.
# +
# Re-creates the same model
model = Sequential()
model.add(Embedding(max_features, embedding_dim, input_length = max_input_length))
model.add(Flatten())
model.add(Dense(32, activation = "relu"))
model.add(Dense(1, activation = "sigmoid"))
# Compiles the model
model.compile(optimizer = "rmsprop", loss = "binary_crossentropy", metrics = ["acc"])
# Fits the model
history = model.fit(x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_val, y_val))
# +
# Plots the training and validation performance, once again
history_dict = history.history
epoch_range = range(1, len(history_dict["acc"]) + 1)
train_losses = history_dict["loss"]
val_losses = history_dict["val_loss"]
train_accuracies = history_dict["acc"]
val_accuracies = history_dict["val_acc"]
plt.plot(epoch_range, val_losses, "b", label = "Validation Loss")
plt.plot(epoch_range, train_losses, "bo", label = "Training Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.title("Validation & Training Loss")
plt.figure()
plt.plot(epoch_range, val_accuracies, "b", label = "Validation Accuracy")
plt.plot(epoch_range, train_accuracies, "bo", label = "Training Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Validation & Training Accuracy")
plt.show()
# +
# Prints the validation performance
print("Validation Loss:", history_dict["val_loss"][-1],
"\nValidation Accuracy:", history_dict["val_acc"][-1])
# +
# Prints the test performance
eval_result = model.evaluate(x_test, y_test)
print("Test Loss:", eval_result[0],
"\nTest Accuracy:", eval_result[1])
# -
# ### CONCLUSION:
#
# #### There are two observations.
#
# #### 1) When data is scarce - 200 training samples in this case - there was no significant gain in test accuracy from using pretrained word embeddings; in fact, the model with jointly learned word embeddings even did a little better, winning by about 2% (~52% vs. ~50%).
#
# #### 2) When word embeddings were jointly learned in the previous notebook with all 25000 training samples, a test accuracy as high as ~83% was achieved. But it could not beat the baseline model (in notebook <em>1_IMDB_Movie_Review_Sentiment_Analysis_using_Densed_Neural_Network.ipynb</em>) that achieved ~85% test accuracy.
#
# #### It is clear from the above test data evaluation results that task-specific jointly learned embeddings perform better when data is not scarce, and pretrained embeddings might (or might not) help when less data is available.
#
# #### Other configuration choices, such as 100-dimensional embedding vectors, the top 10000 words, and the first 500 review words, were kept the same across all the models.
#
# #### Refer next notebook that will use Recurrent Neural Networks (RNN) to check if it performs better than any of these approaches used so far.
| Deep_Learning/IMDB_Movie_Reviews_Sentiment_Analysis/3_IMDB_Movie_Review_Sentiment_Analysis_using_Pretrained_Embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" width="500 px" align="center">
# ## Grover Search for Combinatorial Problems
#
# This notebook is based on an official notebook by Qiskit team, available at https://github.com/qiskit/qiskit-tutorial under the [Apache License 2.0](https://github.com/Qiskit/qiskit-tutorial/blob/master/LICENSE) license.
# Initially done by <NAME> and <NAME> (based on [this paper](https://arxiv.org/abs/1708.03684)).
#
# Your **TASK** is to follow the tutorial, as you implement the Grover algorithm (studied this week) to a real SAT problem!
# ## Introduction
#
# Grover search is one of the most popular algorithms for searching for a solution among many possible candidates using quantum computers. If there are $N$ possible candidates among which there is exactly one solution (that can be verified by some function evaluation), then Grover search can be used to find the solution with $O(\sqrt{N})$ function evaluations. This is in contrast to classical computers that require $\Omega(N)$ function evaluations: Grover search is a quantum algorithm that provably searches for the correct solution quadratically faster than its classical counterparts.
#
# Here, we are going to illustrate the use of Grover search to solve a combinatorial problem called the [Exactly-1 3-SAT problem](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#Exactly-1_3-satisfiability). The Exactly-1 3-SAT problem is NP-complete, namely, it is one of the hardest problems in NP, which are interconnected (meaning that an efficient solution to any one of them would essentially solve all of them). Unfortunately, many natural problems are NP-complete, such as the Traveling Salesman Problem (TSP), the Maximum Cut (MaxCut), and so on. To date, no known classical or quantum algorithm can efficiently solve such NP-complete problems.
#
# We begin with an example of the Exactly-1 3-SAT problem. Then, we show how to design an evaluation function which is also known as the oracle (or, blackbox) which is essential to Grover search. Finally, we show the circuit of Grover search using the oracle and present their results on simulator and real-device backends.
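# As a back-of-the-envelope illustration of the quadratic speedup (added here, not part of the original notebook), the number of Grover iterations typically applied for $N = 2^n$ candidates with a single solution is about $\lfloor \frac{\pi}{4}\sqrt{N} \rfloor$:

```python
import math

def grover_iterations(n_qubits: int) -> int:
    """Approximate optimal number of Grover iterations for 2**n_qubits candidates."""
    N = 2 ** n_qubits
    return math.floor(math.pi / 4 * math.sqrt(N))

for n in (3, 10, 20):
    print(n, grover_iterations(n))  # grows like sqrt(N), not N
```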
# ## Exactly-1 3-SAT problem
#
# The Exactly-1 3-SAT problem is best explained with the following concrete problem. Let us consider a Boolean function $f$ with three Boolean variables $x_1, x_2, x_3$ as below.
#
# $$
# f(x_1, x_2, x_3) = (x_1 \vee x_2 \vee \neg x_3) \wedge (\neg x_1 \vee \neg x_2 \vee \neg x_3) \wedge (\neg x_1 \vee x_2 \vee x_3)
# $$
#
# In the above function, the terms on the right-hand side equation which are inside $()$ are called clauses. Each clause has exactly three literals. Namely, the first clause has $x_1$, $x_2$ and $\neg x_3$ as its literals. The symbol $\neg$ is the Boolean NOT that negates (or, flips) the value of its succeeding literal. The symbols $\vee$ and $\wedge$ are, respectively, the Boolean OR and AND. The Boolean $f$ is satisfiable if there is an assignment of $x_1, x_2, x_3$ that evaluates to $f(x_1, x_2, x_3) = 1$ (or, $f$ evaluates to True). The Exactly-1 3-SAT problem requires us to find an assignment such that $f = 1$ (or, True) and there is *exactly* one literal that evaluates to True in every clause of $f$.
#
# A naive way to find such an assignment is by trying every possible combination of input values of $f$. Below is the table obtained from trying all possible combinations of $x_1, x_2, x_3$. For ease of explanation, we interchangeably use $0$ and False, as well as $1$ and True.
#
# |$x_1$ | $x_2$ | $x_3$ | $f$ | Comment |
# |------|-------|-------|-----|---------|
# | 0 | 0 | 0 | 1 | Not a solution because there are three True literals in the second clause |
# | 0 | 0 | 1 | 0 | Not a solution because $f$ is False |
# | 0 | 1 | 0 | 1 | Not a solution because there are two True literals in the first clause |
# | 0 | 1 | 1 | 1 | Not a solution because there are three True literals in the third clause |
# | 1 | 0 | 0 | 0 | Not a solution because $f$ is False |
# | 1 | 0 | 1 | 1 | **Solution**. BINGO!! |
# | 1 | 1 | 0 | 1 | Not a solution because there are three True literals in the first clause |
# | 1 | 1 | 1 | 0 | Not a solution because $f$ is False |
#
#
# From the table above, we can see that the assignment $x_1x_2x_3 = 101$ is the solution of the Exactly-1 3-SAT problem for $f$. In general, the Boolean function $f$ can have many clauses and more Boolean variables.
#
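# The table above can be reproduced with a short classical brute-force check (added here for illustration). Each clause is written as a tuple of literals, where a negative integer denotes a negated variable, the same convention the blackbox function below uses:

```python
from itertools import product

# f = (x1 v x2 v -x3) ^ (-x1 v -x2 v -x3) ^ (-x1 v x2 v x3)
formula = [(1, 2, -3), (-1, -2, -3), (-1, 2, 3)]

def exactly_one_sat(assignment, formula):
    """True iff every clause has exactly one True literal under the assignment."""
    def lit(l):
        v = assignment[abs(l) - 1]
        return v if l > 0 else 1 - v
    return all(sum(lit(l) for l in clause) == 1 for clause in formula)

solutions = [a for a in product((0, 1), repeat=3) if exactly_one_sat(a, formula)]
print(solutions)  # [(1, 0, 1)] - the unique solution, matching the table
```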
# ## A blackbox function to check the assignment of Exactly-1 3-SAT problem
#
# Here, we describe a method to construct a circuit to check the assignment of Exactly-1 3-SAT problem. The circuit can then be used as a blackbox (or, oracle) in Grover search. To design the blackbox, we do not need to know the solution to the problem in advance: it suffices to design a blackbox that checks if the assignment results in $f$ evaluates to True or False. It turns out that we can design such a blackbox efficiently (in fact, any NP-complete problem has the property that although finding the solution is difficult, checking the solution is easy).
#
# For each clause of $f$, we design a sub-circuit that outputs True if and only if there is exactly one True literal in the clause. Combining all sub-circuits for all clauses, we can then obtain the blackbox that outputs True if and only if all clauses are satisfied with exactly one True literal each.
#
# For example, let us consider the clause $(x_1 \vee \neg x_2 \vee x_3)$. It is easy to see that $y$ defined as
#
# $$
# y = x_1 \oplus \neg x_2 \oplus x_3 \oplus ( x_1 \wedge \neg x_2 \wedge x_3),
# $$
#
# is True if and only if exactly one of $x_1$, $\neg x_2$, and $x_3$ is True. Using two working qubits, $y$ can be computed by the following sub-circuit. Below, $x_1x_2x_3$ is renamed as $q_1q_2q_3$, $q_4$ is used as a working qubit, and $q_5$ is used to store the value of $y$.
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import available_backends, execute, register, get_backend, compile
from qiskit.tools import visualization
from qiskit.tools.visualization import circuit_drawer
# -
q = QuantumRegister(6)
qc = QuantumCircuit(q)
qc.x(q[2])
qc.cx(q[1], q[5])
qc.cx(q[2], q[5])
qc.cx(q[3], q[5])
qc.ccx(q[1], q[2], q[4])
qc.ccx(q[3], q[4], q[5])
qc.ccx(q[1], q[2], q[4])
qc.x(q[2])
circuit_drawer(qc)
# In the sub-circuit above, the three `ccx` gates on the right are used to compute $( q_1 \wedge \neg q_2 \wedge q_3)$ and write the result to $q_5$, while the three `cx` gates on the left are used to compute $q_1 \oplus \neg q_2 \oplus q_3$ and write the result to $q_5$. Notice that the right-most `ccx` gate is used to reset the value of $q_4$ so that it can be reused in the succeeding sub-circuits.
#
# From the above sub-circuit, we can define a blackbox function to check the solution of the Exactly-1 3-SAT problem as follows.
def black_box_u_f(circuit, f_in, f_out, aux, n, exactly_1_3_sat_formula):
"""Circuit that computes the black-box function from f_in to f_out.
Create a circuit that verifies whether a given exactly-1 3-SAT
formula is satisfied by the input. The exactly-1 version
requires exactly one literal out of every clause to be satisfied.
"""
num_clauses = len(exactly_1_3_sat_formula)
for (k, clause) in enumerate(exactly_1_3_sat_formula):
# This loop ensures aux[k] is 1 if an odd number of literals
# are true
for literal in clause:
if literal > 0:
circuit.cx(f_in[literal-1], aux[k])
else:
circuit.x(f_in[-literal-1])
circuit.cx(f_in[-literal-1], aux[k])
# Flip aux[k] if all literals are true, using auxiliary qubit
# (ancilla) aux[num_clauses]
circuit.ccx(f_in[0], f_in[1], aux[num_clauses])
circuit.ccx(f_in[2], aux[num_clauses], aux[k])
# Flip back to reverse state of negative literals and ancilla
circuit.ccx(f_in[0], f_in[1], aux[num_clauses])
for literal in clause:
if literal < 0:
circuit.x(f_in[-literal-1])
# The formula is satisfied if and only if all auxiliary qubits
# except aux[num_clauses] are 1
if (num_clauses == 1):
circuit.cx(aux[0], f_out[0])
elif (num_clauses == 2):
circuit.ccx(aux[0], aux[1], f_out[0])
elif (num_clauses == 3):
circuit.ccx(aux[0], aux[1], aux[num_clauses])
circuit.ccx(aux[2], aux[num_clauses], f_out[0])
circuit.ccx(aux[0], aux[1], aux[num_clauses])
else:
raise ValueError('We only allow at most 3 clauses')
# Flip back any auxiliary qubits to make sure state is consistent
# for future executions of this routine; same loop as above.
for (k, clause) in enumerate(exactly_1_3_sat_formula):
for literal in clause:
if literal > 0:
circuit.cx(f_in[literal-1], aux[k])
else:
circuit.x(f_in[-literal-1])
circuit.cx(f_in[-literal-1], aux[k])
circuit.ccx(f_in[0], f_in[1], aux[num_clauses])
circuit.ccx(f_in[2], aux[num_clauses], aux[k])
circuit.ccx(f_in[0], f_in[1], aux[num_clauses])
for literal in clause:
if literal < 0:
circuit.x(f_in[-literal-1])
# -- end function
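# The `cx`/`ccx` construction above is easiest to trust after a purely classical check (plain Python, added for illustration): XOR of the literals is 1 for an odd number of true literals, and the extra flip when all three are true removes the three-true case, leaving exactly the exactly-1 condition.

```python
from itertools import product

def exactly_one_via_parity(a, b, c):
    # cx gates: XOR the literals, giving 1 for an odd number of true
    # literals (one or three true).
    # ccx pair: flip again when all three are true, removing the
    # three-true case, so only "exactly one true" remains.
    return (a ^ b ^ c) ^ (a & b & c)

for bits in product([0, 1], repeat=3):
    assert exactly_one_via_parity(*bits) == (sum(bits) == 1)
print("parity construction matches exactly-1 on all 8 inputs")
```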
# ## Inversion about the mean
#
# Another important procedure in Grover search is an operation that performs the *inversion-about-the-mean* step, namely, the following transformation:
#
# $$
# \sum_{j=0}^{2^{n}-1} \alpha_j |j\rangle \rightarrow \sum_{j=0}^{2^{n}-1}\left(2 \left( \sum_{k=0}^{k=2^{n}-1} \frac{\alpha_k}{2^n} \right) - \alpha_j \right) |j\rangle
# $$
#
# The above transformation can be used to amplify the probability amplitude $\alpha_s$ when s is the solution and $\alpha_s$ is negative (and small), while $\alpha_j$ for $j \neq s$ is positive. Roughly speaking, the value of $\alpha_s$ increases by twice the mean of the amplitudes, while others are reduced. The inversion-about-the-mean can be realized with the sequence of unitary matrices as below:
#
# $$
# H^{\otimes n} \left(2|0\rangle \langle 0 | - I \right) H^{\otimes n}
# $$
#
# The first and last $H$ are just Hadamard gates applied to each qubit. The operation in the middle requires us to design a sub-circuit that flips the probability amplitude of the component of the quantum state corresponding to the all-zero binary string. The sub-circuit can be realized by the following function, which is a multi-qubit controlled-Z which flips the probability amplitude of the component of the quantum state corresponding to the all-one binary string. Applying X gates to all qubits before and after the function realizes the sub-circuit.
def n_controlled_Z(circuit, controls, target):
"""Implement a Z gate with multiple controls"""
if (len(controls) > 2):
raise ValueError('The controlled Z with more than 2 ' +
'controls is not implemented')
elif (len(controls) == 1):
circuit.h(target)
circuit.cx(controls[0], target)
circuit.h(target)
elif (len(controls) == 2):
circuit.h(target)
circuit.ccx(controls[0], controls[1], target)
circuit.h(target)
# -- end function
# Finally, the inversion-about-the-mean circuit can be realized by the following function:
def inversion_about_mean(circuit, f_in, n):
"""Apply inversion about the mean step of Grover's algorithm."""
# Hadamards everywhere
for j in range(n):
circuit.h(f_in[j])
# D matrix: flips the sign of the state |000> only
for j in range(n):
circuit.x(f_in[j])
n_controlled_Z(circuit, [f_in[j] for j in range(n-1)], f_in[n-1])
for j in range(n):
circuit.x(f_in[j])
# Hadamards everywhere again
for j in range(n):
circuit.h(f_in[j])
# -- end function
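# As a NumPy-only sanity check (independent of Qiskit and added for illustration), the operator $H^{\otimes n} \left(2|0\rangle \langle 0 | - I \right) H^{\otimes n}$ that the two functions above implement should send each amplitude $\alpha_j$ to $2\mu - \alpha_j$, where $\mu$ is the mean amplitude:

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)  # H applied to each of the n qubits

flip_zero = -np.eye(2 ** n)
flip_zero[0, 0] = 1.0          # the 2|0><0| - I operator
D = Hn @ flip_zero @ Hn        # inversion about the mean

alpha = np.random.rand(2 ** n)
alpha /= np.linalg.norm(alpha)
assert np.allclose(D @ alpha, 2 * alpha.mean() - alpha)
print("D maps alpha_j to 2*mean(alpha) - alpha_j")
```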
# Here is a circuit of the inversion about the mean on three qubits.
qr = QuantumRegister(3)
qInvAvg = QuantumCircuit(qr)
inversion_about_mean(qInvAvg, qr, 3)
circuit_drawer(qInvAvg)
# ## Grover Search: putting all together
#
# The complete steps of Grover search are as follows.
#
# 1. Create the superposition of all possible solutions as the initial state (with working qubits initialized to zero)
# $$ \sum_{j=0}^{2^{n}-1} \frac{1}{\sqrt{2^n}} |j\rangle |0\rangle$$
# 2. Repeat for $T$ times:
#
# * Apply the `blackbox` function
#
# * Apply the `inversion-about-the-mean` function
#
# 3. Measure to obtain the solution
#
# The code for the above steps is as below:
# +
"""
Grover search implemented in Qiskit.
This module contains the code necessary to run Grover search on 3
qubits, both with a simulator and with a real quantum computing
device. This code is the companion for the paper
"An introduction to quantum computing, without the physics",
<NAME>, https://arxiv.org/abs/1708.03684.
"""
def input_state(circuit, f_in, f_out, n):
"""(n+1)-qubit input state for Grover search."""
for j in range(n):
circuit.h(f_in[j])
circuit.x(f_out)
circuit.h(f_out)
# -- end function
# Make a quantum program for the n-bit Grover search.
n = 3
# Exactly-1 3-SAT formula to be satisfied, in conjunctive
# normal form. We represent literals with integers, positive or
# negative, to indicate a Boolean variable or its negation.
exactly_1_3_sat_formula = [[1, 2, -3], [-1, -2, -3], [-1, 2, 3]]
# Define three quantum registers: 'f_in' is the search space (input
# to the function f), 'f_out' is bit used for the output of function
# f, aux are the auxiliary bits used by f to perform its
# computation.
f_in = QuantumRegister(n)
f_out = QuantumRegister(1)
aux = QuantumRegister(len(exactly_1_3_sat_formula) + 1)
# Define classical register for algorithm result
ans = ClassicalRegister(n)
# Define quantum circuit with above registers
grover = QuantumCircuit()
grover.add(f_in)
grover.add(f_out)
grover.add(aux)
grover.add(ans)
input_state(grover, f_in, f_out, n)
T = 2
for t in range(T):
# Apply T full iterations
black_box_u_f(grover, f_in, f_out, aux, n, exactly_1_3_sat_formula)
inversion_about_mean(grover, f_in, n)
# Measure the output register in the computational basis
for j in range(n):
grover.measure(f_in[j], ans[j])
# Execute circuit
backend = Aer.get_backend('qasm_simulator')
job = execute([grover], backend=backend, shots=1000)
result = job.result()
# Get counts and plot histogram
counts = result.get_counts(grover)
visualization.plot_histogram(counts)
# -
# ## Running the circuit in real devices
#
# We have seen that the simulator can find the solution to the combinatorial problem. We would like to see what happens when we use real quantum devices, which have noise and imperfect gates.
#
# However, due to the restriction on the length of strings that can be sent over the network to the real devices (the QASM of the circuit is more than sixty thousand characters), at the moment the above circuit cannot be run on real-device backends. We can inspect the QASM compiled for the real-device `ibmq_16_melbourne` backend as follows.
IBMQ.load_account()
# +
# get ibmq_16_melbourne configuration and coupling map
backend = IBMQ.get_backend('ibmq_16_melbourne')
backend_config = backend.configuration()
backend_coupling = backend_config['coupling_map']
# compile the circuit for ibmq_16_melbourne
grover_compiled = compile(grover, backend=backend, coupling_map=backend_coupling, seed=1)
grover_compiled_qasm = grover_compiled.experiments[0].header.compiled_circuit_qasm
print("Number of gates for", backend.name(), "is", len(grover_compiled_qasm.split("\n")) - 4)
# -
# The number of gates is on the order of thousands, which is beyond what current near-term quantum computers can execute within their decoherence time. It is a challenge to design a quantum circuit for Grover search that can solve large optimization problems.
#
# ### Free flow
#
# In addition to using too many gates, the circuit in this notebook uses auxiliary qubits. It is left as future work to improve the efficiency of the circuit so that it can be run on real devices. Below is the original circuit.
circuit_drawer(grover)
# ## References
#
# [1] "[A fast quantum mechanical algorithm for database search](https://arxiv.org/abs/quant-ph/9605043)", <NAME>, Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC 1996)
#
# [2] "[Tight bounds on quantum searching](https://arxiv.org/abs/quant-ph/9605034)", Boyer et al., Fortsch.Phys.46:493-506,1998
# Source: awards/teach_me_quantum_2018/TeachMeQ/Week_6-Quantum_Search/exercises/w6_01.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mindaeng/rep1/blob/main/Untitled0.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="qJ2ANzBahf_5"
import numpy as np
x = np.array(12)
x
# + id="fjLkSycsjB6J"
x.ndim
# + id="nWuz6YJLj603"
x = np.array([12, 3, 6, 14, 7])
x
# + id="rnd2YKVzj643"
x.ndim
# + id="fSEXH6cJkAyu"
x = np.array([[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]])
x.ndim
# + id="oInPYL7tkA1F"
x = np.array([[[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]],
[[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]],
[[5, 78, 2, 34, 0],
[6, 79, 3, 35, 1],
[7, 80, 4, 36, 2]]])
x.ndim
# + id="TRfg75AzkA3M"
import numpy as np
X = np.random.random((32, 10))
y = np.random.random((10,))
# + id="QAxnYSSdkA5U"
y = np.expand_dims(y, axis=0)
# + id="RrO_vCOqkA-t"
Y = np.concatenate([y] * 32, axis=0)
# + id="26-sy95TkbeE"
def naive_add_matrix_and_vector(x, y):
assert len(x.shape) == 2
assert len(y.shape) == 1
assert x.shape[1] == y.shape[0]
x = x.copy()
for i in range(x.shape[0]):
for j in range(x.shape[1]):
x[i, j] += y[j]
return x
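# A quick self-contained check (added for illustration) that the explicit double loop above computes the same result as NumPy's built-in broadcasting:

```python
import numpy as np

x = np.random.random((4, 5))
y = np.random.random((5,))

out = x.copy()
for i in range(out.shape[0]):        # same element-by-element loop as above
    for j in range(out.shape[1]):
        out[i, j] += y[j]

# NumPy broadcasting stretches y across every row of x in one step.
assert np.allclose(out, x + y)
```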
# + id="2Ovt6AxGkbie"
import numpy as np
x = np.random.random((64, 3, 32, 10))
y = np.random.random((32, 10))
z = np.maximum(x, y)
# Source: Untitled0.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Observations and Insights
#
# ## Dependencies and starter code
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
combined_study_data=pd.merge(study_results,mouse_metadata,how='outer', on="Mouse ID")
combined_study_data.head(20)
# -
# ## Summary statistics
# +
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
Drug_Data=combined_study_data.groupby("Drug Regimen")["Tumor Volume (mm3)"]
Tumor_Data=Drug_Data.agg(['mean','median','var','std','sem'])
Tumor_Data
# -
# ## Bar plots
# +
# Generate a bar plot showing number of data points for each treatment regimen using pandas
multi_plot= Tumor_Data.plot(kind="bar", figsize=(20,5))
# +
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
x_axis = np.arange(len(Tumor_Data))
drug_regimen=[value for value in x_axis]
Tumor_Data.plot(x="Drug Regimen", y=['mean','median','var','std','sem'], kind="bar")
#plt.figure(figsize=(20,3))
#plt.bar(x_axis, Tumor_Data['mean'], color='r', alpha=0.7, align="center")
#plt.xticks(drug_regimen, Tumor_Data["Drug Regimen"], rotation="vertical")
#plt.tight_layout()
# -
# ## Pie plots
# +
# Generate a pie plot showing the distribution of female versus male mice using pandas
# +
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# -
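# One possible sketch for the two pie-plot tasks above. The `Sex` column name and the tiny stand-in frame are assumptions for illustration only; in the assignment you would use `combined_study_data` itself:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Tiny stand-in frame; replace with combined_study_data in the assignment.
demo = pd.DataFrame({"Mouse ID": ["a1", "b2", "c3", "d4"],
                     "Sex": ["Female", "Male", "Male", "Female"]})
# Count each mouse once, then tally the sexes.
sex_counts = demo.drop_duplicates("Mouse ID")["Sex"].value_counts()

# pandas version
sex_counts.plot(kind="pie", autopct="%1.1f%%")
plt.ylabel("")

# pyplot version
plt.figure()
plt.pie(sex_counts.values, labels=sex_counts.index, autopct="%1.1f%%")
```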
# ## Quartiles, outliers and boxplots
# +
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# +
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
# -
# ## Line and scatter plots
# +
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# +
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# +
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
# -
# Source: Pymaceuticals/.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## What's SQL?
#
# SQL (Structured Query Language): a language used for relational database management and data manipulation.
#
# SQL can be read as 'S-Q-L', or 'sequel'.
#
# SQL is a non-procedural language. The reasons can be found in [Quora discussions](https://www.quora.com/Why-is-SQL-called-a-structured-and-a-non-procedural-language). It is simple and cannot be used to build applications on its own, but it is very powerful.
#
# SQL is all about data. It can 1) read/retrieve data, 2) write data (add data to a table), and 3) update data (modify existing data).
#
# ## Popular relational database management systems
#
# Microsoft SQL Server; MySQL; IBM DB2; Oracle; Sybase ASE; PostgreSQL; SQLite
#
# In the following sections and notebooks, I will use SQLite.
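# A minimal taste of those three operations using Python's built-in `sqlite3` module (the table and data here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT, age INTEGER)")
cur.execute("INSERT INTO people VALUES (?, ?)", ("Ada", 36))   # write: add data
cur.execute("UPDATE people SET age = 37 WHERE name = 'Ada'")   # update: modify data
rows = cur.execute("SELECT name, age FROM people").fetchall()  # read: retrieve data
print(rows)  # [('Ada', 37)]
conn.close()
```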
# # 03_data-models-part-1-thinking-about-your-data
# Think before you code, that is important.
# Source: machine-learning/coursera/sql-for-data-science/course01.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python Intel Pytorch
# language: python
# name: pytorch
# ---
# !pip install pandas
import pandas as pd
train = pd.read_csv('mobilenet_mixed_train2.csv')
# eval = pd.read_csv('mobilenet5_pedraza__test_features.csv')
test = pd.read_csv('mobilenet_mixed_test2.csv')
all = pd.concat([train,test])
train.shape
all_feat = all.drop(['idx','prob0','target'],axis=1)
test_feat = test.drop(['idx','prob0','target'],axis=1)
all_feat.describe()
feats = all_feat.values
positive = all['target'] == 1
positive
negative = all['target'] == 0
negative
from sklearn.decomposition import PCA
pca = PCA(5)
X_pca = pca.fit_transform(feats)
test_pca = pca.transform(test_feat)
import matplotlib.pyplot as plt
# +
colors = ['navy', 'turquoise', 'darkorange']
X_transformed, title = X_pca, "PCA"
plt.figure(figsize=(8, 8))
plt.scatter(X_pca[positive, 0], X_pca[positive, 1], color='r', label='positive')
plt.scatter(X_pca[negative, 0], X_pca[negative, 1], color='b', label='negative')
# plt.scatter(test_pca[:,0],test_pca[:,1],color='g')
plt.title(title + " of the extracted features")
plt.legend(loc="best", shadow=False, scatterpoints=1)
plt.show()
# -
from sklearn.manifold import TSNE
tsne = TSNE(2, perplexity=15)
X_tsne = tsne.fit_transform(feats)
# test_tsne = tsne.transform(test_feat)
# +
X_transformed, title = X_tsne, "t-SNE"
plt.figure(figsize=(8, 8))
plt.scatter(X_tsne[positive, 1], X_tsne[positive, 0], color='r', label='positive')
plt.scatter(X_tsne[negative, 1], X_tsne[negative, 0], color='b', label='negative')
# plt.scatter(X_tsne[-46:,0],X_tsne[-46:,1],color='g')
plt.title(title + " of the extracted features")
plt.legend(loc="best", shadow=False, scatterpoints=1)
plt.show()
# -
test.shape
# note: `eval` shadows the Python builtin of the same name
eval = pd.read_csv('mobilenet_mixed_train2.csv')
test = pd.read_csv('mobilenet_mixed_test2.csv')
eval.columns
from sklearn.metrics import accuracy_score
eval['prob1'] = eval['prob0']
test['prob1'] = test['prob0']
from sklearn.metrics import roc_auc_score
roc_auc_score(eval['target'], eval['prob1'])
roc_auc_score(test['target'], test['prob1'])
eval[['prob1','target']].describe()
test[['prob0','target']].describe()
from sklearn.metrics import classification_report
threshold = 0.1
eval['predt'] = (eval['prob0'] > threshold)
test['predt'] = (test['prob0'] > threshold)
print(classification_report(eval['target'], eval['predt']))
print(classification_report(test['target'], test['predt']))
accuracy_score(eval['target'],eval['predt'])
accuracy_score(test['target'],test['predt'])
utsw_test = pd.read_csv('resnet_mixed_utsw_test1.csv')
utsw_test.columns
roc_auc_score(utsw_test['target'], utsw_test['prob0'])
threshold = 0.21
utsw_test['predt'] = (utsw_test['prob0'] > threshold)
accuracy_score(utsw_test['target'],utsw_test['predt'])
# Source: model/thyroidemb/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import astropy.coordinates as coord
import astropy.units as u
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
# gala
import gala.coordinates as gc
import gala.dynamics as gd
import gala.potential as gp
from gala.units import galactic
# -
galcen_frame = coord.Galactocentric()
sun_w0 = gd.PhaseSpacePosition(
pos=[-8.2, 0, 0.02] * u.kpc,
vel=galcen_frame.galcen_v_sun
)
# +
w0s = []
for rv in np.linspace(-100, 100, 8):
c = coord.SkyCoord(
ra="17:51:40.2082",
dec="-29:53:26.502",
unit=(u.hourangle, u.degree),
distance=1.58*u.kpc,
pm_ra_cosdec=-4.36*u.mas/u.yr,
pm_dec=3.06*u.mas/u.yr,
radial_velocity=rv*u.km/u.s
)
w0 = gd.PhaseSpacePosition(c.transform_to(galcen_frame).data)
w0s.append(w0)
w0s = gd.combine(w0s)
# -
c.galactic
(c.galactic.pm_l_cosb * c.distance).to(u.km/u.s, u.dimensionless_angles())
# +
pot = gp.MilkyWayPotential()
orbits = pot.integrate_orbit(w0s, dt=-1, t1=0, t2=-4*u.Gyr)
sun_orbit = pot.integrate_orbit(sun_w0, t=orbits.t)
fig, ax = plt.subplots(figsize=(8, 8))
_ = orbits.cylindrical.plot(['rho', 'z'], axes=[ax], color='k', alpha=0.5)
_ = sun_orbit.cylindrical.plot(['rho', 'z'], axes=[ax], color='tab:red')
ax.set_xlim(6, 10)
ax.set_ylim(-1, 1)
# -
# Source: notebooks/adrian_code.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# language: ''
# name: pysparkkernel
# ---
spark
df = sqlContext.read.csv('s3a://linear-regression-mlc/train.csv', header=True, inferSchema=True)
df = df.limit(10_000_000)
df = df.dropna()
df.cache()
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
# +
# NYC lies between 73 and 75 degrees West, and 40 and 42 degrees north.
TOP, BOTTOM, LEFT, RIGHT = 42, 40, -75, -73
df = df.filter(df['pickup_latitude'] >= BOTTOM)
df = df.filter(df['pickup_latitude'] <= TOP)
df = df.filter(df['pickup_longitude'] <= RIGHT)
df = df.filter(df['pickup_longitude'] >= LEFT)
df = df.filter(df['dropoff_latitude'] >= BOTTOM)
df = df.filter(df['dropoff_latitude'] <= TOP)
df = df.filter(df['dropoff_longitude'] <= RIGHT)
df = df.filter(df['dropoff_longitude'] >= LEFT)
# -
df = df.filter(df['passenger_count'] > 0)
df = df.filter(df['fare_amount'] > 0)
df = df.withColumn('datetime', df['pickup_datetime'].substr(0, 19))
# df.select('datetime').show(truncate=False)
from pyspark.sql.functions import to_timestamp
df = df.withColumn('timestamp', to_timestamp(df['datetime']))
# df.select('timestamp').show(truncate=False)
from pyspark.sql.functions import from_utc_timestamp
df = df.withColumn('NYTime', from_utc_timestamp(df['timestamp'], 'EST'))
# df.select('NYTime').show()
# +
from pyspark.sql.functions import year, month, dayofweek, hour
df = df.withColumn('year', year(df['NYTime']))
df = df.withColumn('month', month(df['NYTime']))
df = df.withColumn('dayofweek', dayofweek(df['NYTime']))
df = df.withColumn('hour', hour(df['NYTime']))
# +
x1 = df['pickup_longitude']
y1 = df['pickup_latitude']
x2 = df['dropoff_longitude']
y2 = df['dropoff_latitude']
from pyspark.sql.functions import abs as psabs
df = df.withColumn('l1', psabs(x1 - x2) + psabs(y1 - y2))
# -
inputCols = [
'pickup_latitude',
'pickup_longitude',
'dropoff_latitude',
'dropoff_longitude',
'passenger_count', 'year', 'month', 'dayofweek', 'hour', 'l1'
]
assembler = VectorAssembler(inputCols=inputCols, outputCol='features')
dataset = assembler.transform(df)
train, test = df.randomSplit([0.66, 0.33])
trainDataset = assembler.transform(train)
testDataset = assembler.transform(test)
lr = LinearRegression(featuresCol='features', labelCol='fare_amount')
model = lr.fit(trainDataset)
summary = model.evaluate(testDataset)
summary.r2
# Source: lab-solution2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MaritzaBelmont/ms-learn-ml-crash-course-python/blob/master/01.%20Introduction%20to%20Jupyter%20Notebooks%20and%20Data%20-%20Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BypGDN2ya8cb"
# # Welcome to Azure Notebooks!
#
# Python is a free, open source programming language which is extremely popular for statistical analysis and AI.
#
# Here, we will give you a taste of what using python is like.
#
# Let's get started. We’ve provided the data for you, and cleaned it up so it’s ready for analysis. You can __move through the steps by clicking on the run button__ just above this notebook.
# + [markdown] id="GCZE7CZEa8cd"
# Exercise 1 - Introduction To Jupyter Notebooks
# ==========================
#
# The purpose of this exercise is to get you familiar with using Jupyter Notebooks. Don't worry if you find the coding difficult - this is not a Python course. You will slowly learn more as you go and you definitely don't need to understand every line of code.
#
# Step 1
# --------
#
# These notebooks contain places where you can execute code, like below.
#
# Give it a go. Click on the code below, then press `Run` in the toolbar above (or press __Shift+Enter__) to run the code.
# + id="7uUZwMdAdcXG" outputId="ea712e7d-f919-4652-9b05-8279b8b785ed" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
# + id="cQlK8_nha8ce"
print("The code ran successfully!")
# + [markdown] id="EjVQOsyoa8cj"
# If all went well, the code should have printed a message for you.
#
# At the start of most programming exercises we have to load things to help us do things easily, like creating graphs.
#
# Click on the code below, then __hit the `Run` button to load graphing capabilities for later in the exercise__.
# + id="TcSCTPrea8cj"
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as graph
# + [markdown] id="wib0GPQva8cn"
# Step 2
# --------
#
# Let's get it to print a message you choose this time.
#
# #### Below, write a message between the quotation marks then run the cell.
#
# It is okay to use spaces, numbers, or letters. Your message should look red. For example, `print("this is my message")`.
# + id="3BUtg9-qa8cw" outputId="bdd93323-3a58-491a-d8ae-1ea6e7cf80c4" colab={"base_uri": "https://localhost:8080/", "height": 34}
###
# WRITE A MESSAGE BETWEEN THE SPEECH MARKS IN THE LINE BELOW, THEN PRESS RUN
###
my_message = ""
###
print(my_message)
# + id="c9L6rXhGa8co"
###
# WRITE A MESSAGE BETWEEN THE SPEECH MARKS IN THE LINE BELOW, THEN HIT RUN.
###
print("type something here!")
###
# It's ok to use spaces, numbers, or letters. Your message should look red.
# For example: print("this is my message")
# + [markdown] id="5dbieF1Ja8cs"
# You will notice hash symbols (`#`). Anything after a `#` is ignored by the computer. This lets us leave notes for you to read so that you understand the code better.
# + [markdown] id="pX10jRgca8cu"
# Step 3
# --------
#
# Python lets us save things and use them later. In this exercise we will save your message
# + [markdown] id="36SF-lHIa8c1"
# Okay, what's happened here?
#
# In the real world we might put something in an envelope (like a letter, or picture). On the envelope we write something (give it a name), like "my_letter_for_alice".
#
# In a computer, we do something similar. The thing that holds information (like the envelope) is called a **variable**. We also give each one a name.
#
# Actually, you've already done this.
#
# First, you made a message, then you saved it to a **variable** called 'my_message':
# ```
# my_message = "this is my message!"
# ↑↑↑
# the message you made
#
# my_message = "this is my message!"
# ↑↑↑
# the equals sign means to save it to the variable on the left
#
# my_message = "this is my message!"
# ↑↑↑
# this is the name of your variable. They must never have spaces in them.
# ```
# + [markdown] id="ln16_3cKa8c2"
# Step 4
# -------
#
# Let's try using variables again, but save a number inside our variable this time. Remember, the variable is on the *left hand side* of the `=` assignment symbol and is the equivalent of a labelled box. The information on the *right hand side* is the information we want to store inside the variable (or a box in our analogy).
#
# #### In the cell below replace `<addNumber>` with any number you choose.
#
# Then __run the code__.
# + id="_SJslS_0a8c2"
###
# REPLACE <addNumber> BELOW WITH ANY NUMBER
###
my_first_number = <addNumber>
###
print(my_first_number)
print(my_first_number)
# + [markdown] id="Iv5aKLWJa8c5"
# What happened here?
#
# In the real world, we might then do something with this information. For example, we might choose to read it. We can read it as many times as we like.
#
# On the computer, we can also do things with this information. Here, you asked the computer to print the message to the screen twice.
#
# ```
# print(my_first_number)
# print(my_first_number)
# ```
# + [markdown] id="FUOVva22a8c6"
# How did you do this though?
#
# ```
# print(....)
# ↑↑↑
# ```
# this is what you are asking the computer to do. It is a **method** called print. There are many methods available. Soon, we will use methods that make graphs.
# ```
# print(....)
# ↑ ↑
# ```
# methods have round brackets. What you write here between these is given to the method.
# ```
# print(my_first_number)
# ↑↑↑
# ```
# In this case, we gave it 'my_first_number', and it took it and printed it to the screen.
#
#
# Step 5
# -------
#
# Ok, let's make a graph from some data.
#
# #### In the cell below replace the `<addNumber>`'s with any number you choose
#
# Then __run the code__ to make a graph.
# + id="ffEJjSpUa8c7" outputId="feffa2c0-23e1-4e35-fafd-620b08889702" colab={"base_uri": "https://localhost:8080/", "height": 282}
# These are our x values
x_values = [1, 2, 3]
###
# BELOW INSIDE THE SQUARE BRACKETS, REPLACE THE <addNumber>'S WITH EACH WITH A NUMBER
###
y_values = [3, 9, 10]
###
# When you've done that, run the cell
# For example, you could change like this: y_values = [3, 1, 7]
# This makes a bar graph. We give it our x and y values
graph.bar(x_values, y_values)
# + [markdown] id="EEHX2JFPa8dA"
# This is very simple, but here x and y are our data.
#
# If you'd like, have a play with the code:
# * change x and y values and see how the graph changes. Make sure they have the same count of numbers in them.
# * change `graph.bar` to `graph.scatter` to change the type of graph
#
#
# Step 6
# ----------------
#
# From time to time, we will load data from text files, rather than write it into the code. You can't see these text files in your browser because they are saved on the server running this website. We can load them using code, though. Let's load one up, look at it, then graph it.
#
# #### In the cell below write `print(dataset.head())` then __run the code__.
# + id="2Wxhgj_8a8dB" outputId="026231d9-273a-4fd1-e0a4-d674236365de" colab={"base_uri": "https://localhost:8080/", "height": 119}
import pandas as pd
# The next line loads information about chocolate bars and saves it in a variable called 'dataset'
dataset = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Data/chocolate data.txt', index_col = False, sep = '\t')
###
# WRITE print(dataset.head()) BELOW TO PREVIEW THE DATA
###
print(dataset.head())
###
# + [markdown] id="Yls8v_R7a8dE"
# Each row (horizontal) shows information about one chocolate bar. For example, the first chocolate bar was:
# * 185 grams
# * 65% cocoa
# * 11% sugar
# * 24% milk
# * and a customer said they were 47% happy with it
#
# We would probably say that our chocolate bar features were weight, cocoa %, sugar % and milk %
#
# Conclusion
# ----------------
#
# __Well done__ that's the end of programming exercise one.
#
# You can now go back to the course and click __'Next Step'__ to move onto some key concepts of AI - models and error.
#
#
# Optional Step 7
# ----------------
# When we say "optional" we mean exercises that might help you learn, but you don't have to do.
#
# We can graph some of these features in scatter plot. Let's put cocoa_percent on the x-axis and customer happiness on the y axis.
#
# #### In the cell below replace `<addYValues>` with `customer_happiness` and then __run the code__.
# + id="NLXWBBoAa8dE" outputId="7f3df553-a3d3-4307-d944-ca62ae068597" colab={"base_uri": "https://localhost:8080/", "height": 282}
x_values = dataset.cocoa_percent
###
# REPLACE <addYValues> BELOW WITH customer_happiness
###
y_values = dataset.customer_happiness
###
graph.scatter(x_values, y_values)
# + [markdown] id="oHfLk-LBa8dM"
# In this graph, every chocolate bar is one point. Later, we will analyse this data with AI.
# Source: 01. Introduction to Jupyter Notebooks and Data - Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import qrcode
from PIL import Image
img_bg = Image.open('data/src/lena.jpg')
qr = qrcode.QRCode(box_size=2)
qr.add_data('I am Lena')
qr.make()
img_qr = qr.make_image()
pos = (img_bg.size[0] - img_qr.size[0], img_bg.size[1] - img_qr.size[1])
img_bg.paste(img_qr, pos)
img_bg.save('data/dst/qr_lena.png')
face = Image.open('data/src/lena.jpg').crop((175, 90, 235, 150))
qr_big = qrcode.QRCode(
error_correction=qrcode.constants.ERROR_CORRECT_H
)
qr_big.add_data('I am Lena')
qr_big.make()
img_qr_big = qr_big.make_image().convert('RGB')
pos = ((img_qr_big.size[0] - face.size[0]) // 2, (img_qr_big.size[1] - face.size[1]) // 2)
img_qr_big.paste(face, pos)
img_qr_big.save('data/dst/qr_lena2.png')
# Source: notebook/qrcode_image_paste.ipynb