# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Base Python)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:ap-southeast-2:452832661640:image/python-3.6
# ---
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# SPDX-License-Identifier: MIT-0
# # Video metadata extraction and knowledge graph workshop
# # Objectives:
# This repository contains a series of Jupyter notebooks demonstrating how AWS AI services like Amazon Rekognition, Amazon Transcribe and Amazon Comprehend can help you extract valuable metadata from your video assets and store that information in a graph database like Amazon Neptune for maximum query performance and flexibility.
# At the end of the workshop you'll be able to search for a specific label or entity and get back a list of 1-minute video segments related to your search across your videos.
#
# To extract metadata from a video, we'll use the following AWS AI services:
# - Amazon Rekognition to cut the video into scenes and detect labels from the video itself
# - Amazon Transcribe to convert audio into text
# - Amazon Comprehend to extract entities and topics from the transcribed text via Topic Modelling and Named Entity Recognition.
#
# The metadata related to the video, segments, scenes, entities and labels will be stored in Amazon Neptune.
# Amazon Neptune is a fully managed, low-latency graph database service that allows us to store metadata as nodes (aka vertices) connected by edges to represent relationships between the nodes.
# https://aws.amazon.com/neptune/
#
# The diagram below summarises the workflow:
#
# 
#
# Topics addressed within the different notebooks:
#
# Part 0:<br>
# Create the environment (S3 bucket, IAM roles/polices, SNS topic, etc) and upload your sample video
#
# Part 1:<br>
# Use Amazon Rekognition to detect scenes and labels from your video
#
# Part 2:<br>
# Use Amazon Transcribe and Amazon Comprehend to respectively transcribe audio to text and extract metadata (topics, named entities) from the transcripts.
#
# Part 3:<br>
# Store all the previously extracted metadata in Amazon Neptune and query the graph.
#
# Part 4:<br>
# Resources clean-up
# ## Costs
# Please note that you might incur costs by running these notebooks. Most of these AI services have a free tier, but depending on how much of it you've already used, or on the size of the video assets you're using, you might go over the limit.
#
# Finally, if you're not planning to use those resources anymore at the end of the workshop, don't forget to shut down/delete your Amazon Neptune instance and your SageMaker Studio notebook instances, and run the part4-cleanup notebook to delete all the other resources created throughout the notebooks (S3 buckets, IAM roles, SNS topics, etc).
#
# Before proceeding, please check the related services pricing pages:
#
# https://aws.amazon.com/transcribe/pricing/
#
# https://aws.amazon.com/comprehend/pricing/
#
# https://aws.amazon.com/rekognition/pricing/
#
# https://aws.amazon.com/neptune/pricing/
# # Part 0 - Environment setup - S3 Bucket creation, SNS topic and IAM role
# In the steps below we're going to create the S3 bucket where we'll upload our video, the SNS topic that some AWS services will use to publish outcomes of the jobs as well as the required policies/roles for the various AWS services to access those objects.<br>
#
# <b>Please note that you will need to provide a valid .mp4 video stored in an S3 bucket as input for this workshop. It is NOT included in the GitHub repo assets.</b>
#
# This video will be used for the different metadata extraction steps. We suggest you use a ~5 min editorial video or trailer for which you have the required copyrights.
#
# The example we used to run the various jobs and generate the graphs is a video trailer from an Amazon Studios production.
# !pip install boto3
# !pip install sagemaker
# +
import boto3
import sagemaker
import random
import json
import time
import os
import shutil
import logging
import sys
logging.basicConfig(format='%(asctime)s | %(levelname)s : %(message)s',
                    level=logging.INFO, stream=sys.stdout)
log = logging.getLogger('knowledge-graph-logger')
s3 = boto3.client('s3')
# -
# IMPORTANT:<br>
#
# Make sure before you start executing this notebook that the execution role you've configured for your notebook or studio instance has the following permissions:
# - read/write permission to your S3 buckets
# - IAM permission to create the policy/role
# - SNS permission to create a SNS topic
# - permissions to invoke Amazon Rekognition, Amazon Comprehend, Amazon Transcribe APIs (e.g. AmazonRekognitionFullAccess, ComprehendFullAccess, AmazonTranscribeFullAccess)
#
# You'll get "AuthorizationErrorException" messages otherwise.
# +
iam = boto3.client("iam")
#get sagemaker execution role Arn
sagemaker_role = sagemaker.get_execution_role()
#get the role's name
sagemaker_role_name = sagemaker_role.split('/')[-1]
print(f'sagemaker role name:{sagemaker_role_name} \n')
# -
# The cell below lists all managed IAM policies attached to your SageMaker execution role. Check that it has the required permissions before proceeding. Note that this cell will not run if your SageMaker execution role doesn't have the required IAM rights.
# +
#retrieve the managed iam policies attached to the role
response = iam.list_attached_role_policies(RoleName=sagemaker_role_name)
#listing
for policy in response['AttachedPolicies']:
    print(policy)
# -
# ### SNS topic creation
# We're creating a simple topic that will later be used by Amazon Rekognition notably to publish the outcome/status of the video analysis jobs.
# +
sns = boto3.client('sns')
def create_sns_topic(name):
    try:
        topic = sns.create_topic(Name=name)
    except Exception:
        log.exception("Couldn't create topic %s.", name)
        raise
    else:
        return topic['TopicArn']
sns_topic_arn = create_sns_topic('knowledge-graph-lab-rek-sns-topic')
print(sns_topic_arn)
# -
# ### S3 bucket creation
# Amazon S3 bucket names are globally unique. To create a unique bucket name, we're appending your account ID and a random int at the end of the bucket name.
# +
region = 'ap-southeast-2' #specify the region of your choice
#retrieving your account ID
account_id = boto3.client('sts').get_caller_identity().get('Account')
#bucket name
bucket = 'sagemaker-knowledge-graph-' + region + '-' + account_id + '-' + str(random.randint(0,100000))
log.info(f'bucket name: {bucket}')
#create the bucket
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={'LocationConstraint': region}
)
# -
# ### Create IAM policy
# Amazon Rekognition, Transcribe and Comprehend will need to read the contents of your S3 bucket, so add a bucket policy that allows that.
# +
s3_bucket_policy = {
    "Version": "2012-10-17",
    "Id": "KnowledgeGraphS3BucketAccessPolicy",
    "Statement": [
        {
            "Sid": "KnowledgeGraphS3BucketAccessPolicy",
            "Effect": "Allow",
            "Principal": {
                #a Python dict cannot hold duplicate "Service" keys
                #(later ones silently overwrite earlier ones), so the
                #services are listed in a single array
                "Service": [
                    "rekognition.amazonaws.com",
                    "transcribe.amazonaws.com",
                    "comprehend.amazonaws.com"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::{}".format(bucket),
                "arn:aws:s3:::{}/*".format(bucket)
            ]
        }
    ]
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(s3_bucket_policy))
# -
# ### IAM Role creation
# We create the role that Amazon Rekognition, Comprehend, Transcribe will need to run jobs.
# +
role_name = account_id + "-knowledgeGraphLab"
assume_role_policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                #a single "Service" key holding all three services:
                #duplicate dict keys would silently overwrite each other
                "Service": [
                    "rekognition.amazonaws.com",
                    "transcribe.amazonaws.com",
                    "comprehend.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
try:
    create_role_response = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(assume_role_policy_document)
    )
except iam.exceptions.EntityAlreadyExistsException as e:
    print('Warning: role already exists:', e)
    create_role_response = iam.get_role(RoleName=role_name)
role_arn = create_role_response["Role"]["Arn"]
# Pause to allow the role to become fully consistent
time.sleep(10)
print('IAM Role: {}'.format(role_arn))
# -
# <br>
# We create 2 policies, for S3 and SNS, that we attach to the role we created above.
#
# +
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::{}".format(bucket),
                "arn:aws:s3:::{}/*".format(bucket)
            ]
        }
    ]
}
#creating the s3 policy
s3_policy_response = iam.create_policy(
    PolicyName='s3AccessForRekCompTrans',
    PolicyDocument=json.dumps(s3_policy),
)
s3_policy_arn = s3_policy_response['Policy']['Arn']
print(s3_policy_arn)
# +
#attaching the above policy to the role
attach_s3_policy_response = iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn=s3_policy_arn)
print('Response:{}'.format(attach_s3_policy_response['ResponseMetadata']['HTTPStatusCode']))
# +
sns_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "sns:*"
            ],
            "Effect": "Allow",
            "Resource": sns_topic_arn
        }
    ]
}
#creating the sns policy
sns_policy_response = iam.create_policy(
    PolicyName='snsAccessForRekognition-' + str(random.randint(0, 1000)),
    PolicyDocument=json.dumps(sns_policy),
)
sns_policy_arn = sns_policy_response['Policy']['Arn']
print(sns_policy_arn)
# +
#attaching the sns policy created above to the role
attach_sns_policy_response = iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn=sns_policy_arn)
print('Response:{}'.format(attach_sns_policy_response['ResponseMetadata']['HTTPStatusCode']))
# -
# ### Uploading the video to the newly created S3 bucket
# Please specify below the S3 location where you've stored the video file you'll use to run the notebooks. Keep in mind that it needs to be a valid .mp4 and that your SageMaker execution role needs access to that S3 bucket; you'll get an access denied exception otherwise.
# +
#S3 URL where you have uploaded your video
your_s3_original_video = 's3://< your s3 bucket>/< path to the .mp4 file>'
#extracting video names and prefix
your_s3_bucket = your_s3_original_video.split('/')[2]
your_s3_prefix = '/'.join(your_s3_original_video.split('/')[3:])
video_file = your_s3_original_video.split('/')[-1]
video_name = video_file.split('.')[0]
# -
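# As a side note, the manual split() parsing above can also be done with the stdlib urllib.parse module, which treats the s3:// scheme like any other. A minimal sketch with a hypothetical URL:

```python
from urllib.parse import urlparse

url = "s3://my-bucket/path/to/video.mp4"  # hypothetical S3 URL
parsed = urlparse(url)
# for an s3:// URL, netloc is the bucket and path is the key (with a leading slash)
bucket = parsed.netloc
key = parsed.path.lstrip("/")
video_file = key.split("/")[-1]
video_name = video_file.rsplit(".", 1)[0]
```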
# Downloading the file locally from your S3 bucket to the notebook instance and uploading it to the target S3 bucket for processing.
#creating a temporary folder on your instance to store the video locally.
tmp_local_folder = './tmp'
if not os.path.exists(tmp_local_folder):
    #create folder
    os.makedirs(tmp_local_folder)
else:
    #remove folder and files
    shutil.rmtree(tmp_local_folder)
    #wait for deletion to finish
    while os.path.exists(tmp_local_folder):
        pass
    #create folder
    os.makedirs(tmp_local_folder)
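# The recreate-an-empty-folder logic above can also be written more compactly with shutil.rmtree(..., ignore_errors=True), which is a no-op when the folder doesn't exist. A sketch using an illustrative path:

```python
import shutil
from pathlib import Path

tmp = Path("./tmp_demo")                 # illustrative folder name
shutil.rmtree(tmp, ignore_errors=True)   # remove it if it exists, else do nothing
tmp.mkdir(parents=True)                  # recreate it empty
assert tmp.is_dir() and not any(tmp.iterdir())
shutil.rmtree(tmp)                       # clean up the demo folder again
```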
# +
#download the file locally
s3.download_file(your_s3_bucket, your_s3_prefix, os.path.join(tmp_local_folder, video_file))
#upload the video file to the target S3 bucket
s3_video_input_path = 'input'
s3.upload_file(os.path.join(tmp_local_folder, video_file), bucket, os.path.join(s3_video_input_path, video_file))
# -
# ## Amazon Neptune
#
# For part3 of the workshop, you will need to create a Neptune DB cluster.
#
# <b>IMPORTANT: please make sure you create a brand new Neptune instance for this workshop as we'll be cleaning it of its content</b>
#
# The easiest way is to create your DB via the console.
#
# Make sure you are in the same region where you previously created your jupyter notebook instance.
#
# Engine options: at the time this workshop was developed, 1.0.5.1.R2 was the latest version.
#
# DB cluster identifier: specify a relevant name
#
# Templates: "Development and Testing"
# 
# DB instance size: db.t3.medium
#
# Multi-AZ deployment: No
#
# Connectivity: make sure you choose the same VPC as the one you're using for your notebook instance. In my case I am using the default one.
#
# 
# Notebook configuration: uncheck "Create notebook". We are going to create a separate notebook in SageMaker.
#
# Leave the rest as default and click "Create Database".
# 
# Once your cluster's status is "Available", retrieve the endpoint url and port and update the endpoint variable below.
# 
your_neptune_endpoint_url = 'wss://<your neptune endpoint>:<port>/gremlin'
# Defining some variables we'll use later for the different metadata extraction jobs
# %store tmp_local_folder
# %store bucket
# %store s3_video_input_path
# %store video_file
# %store video_name
# %store role_arn
# %store role_name
# %store sns_topic_arn
# %store s3_policy_arn
# %store sns_policy_arn
# %store your_neptune_endpoint_url
# Source: notebooks/part0-setup.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Mandelbrot set - Sinusoidal
# Import necessary libraries
from PIL import Image
import numpy as np
import colorsys
from tqdm import tqdm
import math
# +
def rgb_conv(i):
    """
    Function to return a colour as a tuple of rgb integer values
    :param i: the iteration count at which the point escaped
    :return: the tuple of rgb colors
    """
    # hsv_to_rgb expects s and v in [0, 1]; the hue wraps around automatically
    color = 255 * np.array(colorsys.hsv_to_rgb(i * 2 / 50, 1.0, 1.0))
    return tuple(color.astype(int))

def mandelbrot(x, y, n):
    """
    Function that computes the sinusoidal Mandelbrot set
    :param x: the real part of the complex number
    :param y: the imaginary part of the complex number
    :param n: the maximum number of iterations
    :return: the rgb color code
    """
    if not isinstance(x, (float, int)):
        raise TypeError('Input x must be a float or an integer')
    if not isinstance(y, (float, int)):
        raise TypeError('Input y must be a float or an integer')
    c0 = complex(x, y)
    ct = 1
    c = 0
    for i in range(1, n):
        if abs(c) > 2:
            return rgb_conv(i)
        if ct == 1:
            c = c0 * complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
        else:
            c = c * complex(math.sin(c.real) * math.cosh(c.imag), math.cos(c.real) * math.sinh(c.imag))
        ct += 1
    return (0, 0, 0)
def getValid(prompt):
    while True:
        try:
            # try the input without any conditions at first
            this = int(input(prompt))
        except ValueError:
            # prompt the user to input again since the input was not valid
            print('Sorry, could not understand. Please enter again.')
            continue
        # the input is a number: check that it is a natural number
        if this <= 0:
            print('The input must be a natural number. Please try again.')
            continue
        else:
            # valid input
            break
    return this
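# The validation loop above is interactive; the same logic can be sketched as a testable function that consumes an iterable of candidate inputs instead of calling input() (the name get_valid_from is illustrative):

```python
def get_valid_from(candidates):
    """Return the first candidate that parses as a natural number."""
    for raw in candidates:
        try:
            value = int(raw)
        except ValueError:
            continue          # not an integer, try the next candidate
        if value > 0:
            return value      # first valid natural number wins
    raise ValueError("no valid natural number supplied")
```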
# +
def main():
    N = getValid('Enter the number of iterations to investigate the Mandelbrot set -> ')
    # Define the width of your image in pixels
    width = getValid('Enter the width of the image in pixels -> ')
    assert width > 200, 'Choose a larger width for the visual'
    # creating the new image in RGB mode
    img = Image.new('RGB', (width, int(width / 2)))
    pixels = img.load()
    # Rendering the image, one column of pixels at a time
    print('Loading image ...')
    pbar = tqdm(total=img.size[0])
    for x in range(img.size[0]):
        for y in range(img.size[1]):
            pixels[x, y] = mandelbrot((x - (0.75 * width)) / (width / 4),
                                      (y - (width / 4)) / (width / 4), N)
        pbar.update(1)
    pbar.close()
    img.save('MandelBrot_sinusoid.jpg')

if __name__ == '__main__':
    main()
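# For comparison with the sinusoidal variant above, the classic escape-time iteration z -> z*z + c can be sketched as a self-contained function (the name and bailout choices here are illustrative):

```python
def escape_iterations(x, y, max_iter=100):
    """Number of iterations before z -> z*z + c escapes |z| > 2."""
    c = complex(x, y)
    z = 0
    for i in range(max_iter):
        if abs(z) > 2:
            return i          # diverged after i iterations
        z = z * z + c
    return max_iter           # never escaped: treated as inside the set
```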
# Source: notebooks/Mandelbrot_sinusoidal.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # QUICK REVISION NOTES
# ## 1. INTRODUCTION TO PROGRAMMING
# ### 1.1 What is Programming Language and Operating System and why do we need them?
# A programming language is a set of predefined words that are combined into a program according to predefined rules (syntax). Programming languages are used to develop the applications that get our tasks done.
#
# An operating system is software that allows a user to run other applications on a computing device. The main job of an operating system is to manage the computer's software and hardware resources (I/O devices, network devices, storage devices, etc.), run applications on the system and allocate the resources they require.
# ### 1.2 A brief introduction to computer memory
# There are mainly two types of memory in a computer:-
# 1. Volatile Memory - Primary Memory - Temporary Storage - RAM, Cache, Register
# 2. Non-volatile Memory - Secondary Memory - Permanent Storage - HDD, SSD, Floppy, CD, Pen Drive
#
# -> Every program runs in RAM.
# -> Secondary memory is used for permanent storage.
# ### 1.3 Types of Programming Languages
# + active=""
# There are mainly two types of programming language.
# 1. Low Level Programming Language
# I. Machine Level Language or Binary Code
# II. Assembly Language
# 2. High Level Programming Language
# I. Based on purpose
# i. General Purpose PL
# ii. Special Purpose PL
# II. Based on converters used
# i. Compiled —> C/C++/Java/Python
# ii. Interpreted —> Perl/Java Script/Python
# -
# ### 1.4 What is Source code, Byte code and Machine code in Python?
# + active=""
# Source Code:- The code written to develop the application. In Python, the (.py) file is the source code.
#
# Byte code:- A fixed set of instructions that represents all operations like arithmetic, comparison, memory-related operations etc. It is the code generated after the compilation of the source code; the (.pyc) file is byte code. Byte code is system and platform independent. It is also called p-code (portable code) or object code.
#
# PVM:- Stands for Python Virtual Machine, a piece of software which comes with Python. The PVM includes the interpreter.
#
# Machine code:- The code generated after interpretation of the byte code. It is system and platform dependent.
# -
# ### 1.5 Type of Errors in Python
# + active=""
# Error handling is one of the most important features of Python; it was a main deciding factor in the development of Python.
# There are mainly two types of errors in Python:-
# I. Compile-time errors, caught at the compilation stage
# II. Run-time errors, caught at run time, i.e. the interpretation stage.
#
# Compile-time Errors:- Every programming language comes with a set of rules defining how code must be written to be compiled or interpreted. This set of rules is called syntax, and not following the rules leads to errors called syntax errors. In Python, the compilation stage checks for syntax errors and IndentationError. If there are no syntax errors in your program, it compiles successfully and byte code is generated; if there are syntax errors, compilation fails and no byte code is generated. A syntax error happens when Python can't understand what you are saying.
#
# Run-time Errors:- A run-time error happens when Python can understand what you are saying but runs into a problem while carrying out your instructions. To find the list of all built-in run-time errors you may execute the code below:-
#
# ---> print(dir(__builtins__))
#
# We can handle run-time errors using exception handling in Python, unlike syntax errors, which have to be rectified before compilation. Run-time errors may cause partial execution of your code.
#
# [Reference:-]
# 1. https://cscircles.cemc.uwaterloo.ca/1e-errors/
# -
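# The compile-time vs run-time distinction above can be demonstrated with the built-in compile() and exec(): a syntax error is raised before any code runs, whereas a ZeroDivisionError only appears when the compiled byte code is actually executed.

```python
caught_at_compile = caught_at_run = False

try:
    compile("print('hi'", "<demo>", "exec")   # missing closing parenthesis
except SyntaxError:
    caught_at_compile = True                  # rejected at the compilation stage

code = compile("1 / 0", "<demo>", "exec")     # compiles fine: the syntax is valid
try:
    exec(code)                                # ...but fails at run time
except ZeroDivisionError:
    caught_at_run = True
```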
# ### 1.6 Compiled Languages vs Interpreted Languages
# + active=""
# I. Compiled languages need a compiler to translate the code whereas interpreted languages need an interpreter to translate the code from one form to another.
#
# II. Compiler and interpreter are both translators and are computer programs.
#
# III. A compiler translates the whole code from one form to another all at once, while an interpreter executes instruction by instruction.
#
# IV. In Python, a compiler is used to translate source code to byte code whereas an interpreter is used to translate the byte code to machine code.
#
# V. All syntactical errors are caught at the compilation stage in Python whereas all run-time errors are caught at the interpretation stage.
#
# NOTE:- Originally, Python was an interpreted language. But over time, Python has been made both compiled and interpreted, as it established itself as a full-fledged programming language acceptable across a wide area of applications.
#
#
# [References:-]
# 1.https://en.wikipedia.org/wiki/Interpreted_language
# 2.https://www.freecodecamp.org/news/compiled-versus-interpreted-languages/
# -
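# Point IV above - the compiler producing byte code that the interpreter then runs - can be observed with the stdlib dis module, which disassembles the byte code the compiler produced for a function (exact opcode names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# each entry below is one byte code instruction produced by the compiler
instructions = [ins.opname for ins in dis.get_instructions(add)]
```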
# ### 1.7 Why do we need translator?
# + active=""
# A computer's hardware components are electronic devices and they cannot understand human language; rather, they communicate with each other with the help of signals. Signals are nothing but binary code, which is a completely low-level language. To translate high-level languages from human-readable format to machine-readable format (binary code), we need translators.
# -
# ### 1.8 Why do we need Byte code?
# + active=""
# Byte code, also called object code, p-code or portable code, may be directly shipped to your client. It is platform independent, so it can run on any platform, which makes service providers' and developers' lives easier as they do not have to rewrite the code for the same application to be used on different platforms.
# -
# ### 1.9 Why Python is compiled and interpreted both? Can’t we make it either compiled or interpreted?
# + active=""
# Traditional programming languages like C or C++ were compiled languages. If you compile any source code in C or C++, it first creates an executable file which is platform dependent, and then you run this file to get the output on that particular platform.
#
# Source Code —> Compiler —> Executable Code(.exe file) —> Output
#
# You can see here that the intermediate code which is generated is not platform independent. But remember that the source code written in these languages is portable.
#
# So, to counter this problem, languages like Java and Python were developed which are first compiled to object code or byte code or p-code (portable code). These object codes are completely platform independent. Now when you move this object code to another platform you need to translate it to machine code. For translation at this stage, you may use either a compiler or an interpreter - any translator which again translates the byte code to machine code. For instance, Java, PyPy and Jython use a JIT compiler, whereas CPython uses an interpreter.
#
# Source Code —> Compiler —> Byte Code —> Interpreter or JIT compiler(PVM/JVM) —> Machine Code —> Output
#
# So basically, we may say that to bridge the gap between portable programming languages and platform-independent programming languages we came up with this idea of having two translators. The first is used to generate shippable code and the second is used to convert the byte code to machine code.
#
#
# [References:-]
# 1.https://en.wikipedia.org/wiki/Python_(programming_language)
# 2.https://www.geeksforgeeks.org/history-of-python/
# 3.http://effbot.org/pyfaq/why-was-python-created-in-the-first-place.htm
# 4.https://www.artima.com/intv/python.html
# -
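# The first translation step above (source code -> byte code) can be triggered explicitly with the stdlib py_compile module. A hedged sketch using a throwaway temporary file:

```python
import os
import py_compile
import tempfile

src = os.path.join(tempfile.mkdtemp(), "hello.py")  # illustrative source file
with open(src, "w") as f:
    f.write("print('hello')\n")

# py_compile.compile returns the path of the generated .pyc byte code file,
# which lands in a __pycache__ folder next to the source
pyc_path = py_compile.compile(src)
```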
# ## 2. WHAT IS PYTHON?
# + active=""
# Python is General purpose, High level programming language.
# -
# ### 2.1 What are some good features of Python?
# + active=""
# 1.Simple & easy to learn:- Python is one of the simplest languages ever. Syntaxes are simple, easy to remember and quite expressive. When it comes to learning, it has been found that the learning curve for Python is quite gentle compared to other programming languages.
#
#
# Sample Program in Java to add two numbers:-
# public class Add {
#     public static void main(String[] args) {
#         int a = 10;
#         int b = 20;
#         int c = a + b;
#         System.out.println(c);
#     }
# }
#
# Sample Program in Python to add two numbers:-
# a = 10
# b = 20
# c = a+b
# print(c)
#
#
# 2.Most expressive language:- Being one of the most expressive languages, it is easy to write code in fewer lines, which leads to less cluttered programs that are easier to debug and maintain.
#
# 3.Freeware and open source:- Python being freeware, you don't have to spend on licensing. And since it is open source, its original source code is freely available and can be redistributed and modified.
#
#
# 4.Compiled and Interpreted both:- Originally, Python was developed to bridge the gap between C and shell scripting and also to include the exception handling feature from the ABC language. So we can say that initially Python was an interpreted language, but later it was made both compiled and interpreted.
#
# Generally, the steps involved in the execution of any Python program are -
# -
# ### 2.2 How a python program runs on our system?
# + active=""
# Source code ---> compiler ---> byte code ---> interpreter(PVM) ---> Machine code ---> Hardware ---> output
#
# In the above flow diagram, the compiler is responsible for compiling the source code; all syntactical errors are checked at the compilation stage. The interpreter is responsible for interpreting the byte code, and run-time errors are checked at this stage.
# Compiler and interpreter are both translators. The compiler translates the source code to byte code (.pyc file) all at once, whereas the interpreter translates byte code to machine code one instruction at a time, interacting with the hardware to give the final output.
#
# Few important commands:-
# 1.python File_Name.py —> To run the file
# 2.python -m py_compile File_Name.py —> To compile the file
# 3.python -m dis File_Name.py —> To see the byte code.
# 4.python File_Name.cpython-38.pyc —> To run the compiled code.
#
#
# 5.Portable and Platform Independent:- Python is both portable and platform independent. The source code is portable whereas the byte code (.pyc file) is platform independent. C is portable but not platform independent.
#
# 6.Dynamically typed programming language:- In Python, everything is an object. Objects are stored in the heap area and their memory addresses are referenced by the variables or names to which they are bound. Unlike other programming languages, in Python a memory location does not work like a container; rather it works on a reference model. There is no need to specify the data type explicitly in Python.
#
# 7.Extensible and Embeddable both:- Extensibility - extending Python with code from other programming languages. Embedding - embedding Python code into code written in other programming languages.
#
# 8.Batteries included:- Python comes with the huge library support required for full usability.
#
# 9.Community Support:- Huge Python community support is available for beginner, intermediate and advanced level programmers.
# 10.The CBSE 10th standard syllabus now includes Python.
# 11.The growth and acceptance Python has shown over the years is exceptional.
#
# [References:-]
# 1.https://en.wikipedia.org/wiki/Python_(programming_language)
# -
# ### 2.3 Garbage Collection in Python
# + active=""
# Python supports automatic garbage collection. It uses a combination of reference counting and a cycle-detecting garbage collector for memory management and optimisation. The gc module in Python stands for garbage collector; it runs from time to time, and whichever object's reference count reaches zero gets garbage collected automatically. As a user you are not supposed to interfere in memory management; it is the task of the PVM to manage the memory. But if you want to check whether the gc module is enabled or disabled, or if you want to enable or disable it, you may use it like below:-
# -
import gc
print(dir(gc),'\n')
print(gc.isenabled(),'\n')
gc.disable()
print(gc.isenabled(),'\n')
gc.enable()
print(gc.isenabled(),'\n')
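# The reference-counting half of the mechanism above can be observed with sys.getrefcount. This is a CPython-specific sketch; note that the count includes the temporary reference created by the getrefcount call itself:

```python
import sys

x = []                      # fresh object with one name bound to it
rc = sys.getrefcount(x)     # baseline count (includes the call's own temp ref)
y = x                       # bind a second name to the same object
assert sys.getrefcount(x) == rc + 1
del y                       # drop the extra reference again
assert sys.getrefcount(x) == rc
```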
# ### 2.4 Different Flavours of Python
# + active=""
# There are different flavours or implementations of Python available, of which the most common are:-
#
# 1.CPython:- The reference implementation of Python, written in C; it compiles Python source to byte code and runs it on the PVM.
# 2.Jython:- Java implementation of Python, developed to run Python on the JVM and integrate with Java applications.
# 3.PyPy:- Python implementation of Python. It implements a JIT compiler to improve the performance of Python applications.
# 4.RubyPython:- A bridge between the Ruby and Python interpreters, developed to run Python from Ruby applications.
# 5.Stackless Python:- Was developed to implement concurrency in Python.
# 6.Anaconda Python:- A distribution of Python developed for handling and processing big data.
# -
# ## 3. COMPONENTS OF PYTHON
# + active=""
# 1.Literals
# 2.Constants
# 3.Identifiers
# 4.Variables
# 5.Reserved words
# 6.Expressions
# 7.Statements
# 8.Comments
# 9.Suites
# 10.Blocks and indentation
# -
# ### 3.1 Literals in Python
# + active=""
# ---> a = 15
#
# -A literal is the data or constant value stored in a variable.
# -In the above example, the constant value '15' is stored in the variable 'a'; 15, here, is the literal.
# -15 is an integer value, so it is also called an integer literal.
#
# Python supports different type of literals:-
# 1.Numeric Literals
# - Integer Literals - 200, -15
# - Binary Literals - 0b1010
# - Octal Literals - 0o12
# - Hexadecimal Literals - 0xa
# - Float Literals - 10.20, -20.6
# - Complex Literals - 10+20j, 10-20j
#
# 2.Boolean Literals
# - True, False
#
# 3.Special Literal
# - None
#
# 4.String Literals
# - Sequence of characters enclosed between single quotation, double quotation or triple quotation.
# -
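# The literal categories above can be checked directly in an interpreter; note that the binary, octal and hexadecimal forms are just alternative spellings of the same int value:

```python
assert type(0b1010) is int and 0b1010 == 10   # binary literal
assert type(0o12) is int and 0o12 == 10       # octal literal
assert type(0xa) is int and 0xa == 10         # hexadecimal literal
assert type(10.20) is float                   # float literal
assert type(10 + 20j) is complex              # complex literal
assert type(True) is bool                     # boolean literal
assert None is None                           # the special literal
```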
# ### 3.2 Identifiers in Python
# + active=""
# -A name given to a variable, function, class or object.
# -Allowed characters in Python:-
# - Numbers - 0-9
# - Underscore - (_)
# - Alphabets Capital(A to Z) and small (a to z)
# -Not Allowed:-
# - Special Symbols are not allowed - ($,#,@)
# -Rules to define an Identifier:-
# - Identifier should not start with a number.
# - Identifiers are case sensitive.
# - Never use reserved words as identifier.
# - Not recommended to take lengthy identifier.
# - Identifier starting with _ - Private
# - Identifier starting with __ - Strongly Private
# - Identifier starting and ending with __ - Language defined special identifier
# -
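# The rules above can be checked programmatically with str.isidentifier() and keyword.iskeyword(); a quick sketch:

```python
import keyword

assert "my_var2".isidentifier()      # letters, digits and underscore are fine
assert not "2var".isidentifier()     # must not start with a number
assert not "my$var".isidentifier()   # special symbols are not allowed
assert keyword.iskeyword("for")      # reserved word: never use as an identifier
```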
# ### 3.3 Variables in Python
# + active=""
# -A variable in Python is a name whose associated data may change over time in a program, as and when required.
# -The rules to define a variable are the same as those for an identifier.
# -It is just a name used to create a reference to the data or object associated with it in a given program.
# -A variable always refers to the memory location in the heap where the data associated with it is stored.
# -Once the user changes the data associated with a variable, the memory address the variable refers to also changes.
# -Variables are used to store data in memory and pass it to the processor for processing.
# -
# ### 3.4 Constants in Python
# + active=""
# -A constant is similar to a variable, with one exception: it cannot be changed once it is set.
# -In Python, however, you may change the value associated with a "constant"; the language does not enforce it.
# -So, how is a constant different from a variable in Python:-
# - Constants in Python should follow the same rules used to define an identifier.
# - Constants in Python should use only capital letters.
# - Do not use a generic name like NUM; use something like MAX_NUM or MIN_NUM.
# - Use a separate "constant.py" file to define all of the constants in your application and use them by importing this module into your main module.
# -
# ### 3.5 Reserved words in Python
# + active=""
# -Words with a special meaning and task associated with them.
# -There are in total 35 reserved words in Python.
# -There are two types of reserved words:-
# - Reserved literals:- True, False, None
# - Keywords:- Reserved words associated with some functionality. Apart from the reserved literals, all reserved words are keywords.
# -All reserved words contain only alphabetic characters.
# -All keywords contain only lowercase letters.
# -
import keyword
print(keyword.kwlist)
# ### 3.6 Comments in Python
# + active=""
# -Writing comments in Python is a very good programming practice.
# -Writing comments in your program helps your peer coders understand the reason for including a part of the code in your program.
# -To create a single-line comment we use #.
# -Multiline comments are written inside triple quotation marks.
# -Triple quotation marks are also used for writing docstrings.
# -
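# A small sketch of the two comment styles and a docstring, as described above:

```python
# A single-line comment starts with '#'.

def greet(name):
    """Return a greeting for the given name.

    A triple-quoted string placed directly under a def/class is its
    docstring, readable via help(greet) or greet.__doc__.
    """
    return 'Hello, ' + name

print(greet('Python'))
print(greet.__doc__.splitlines()[0])
```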
# ### 3.7 Expressions in Python
# + active=""
# -An expression is a combination of values, variables, operators and calls to functions.
# -Expressions need to be evaluated.
# -If you pass an expression to the print function, it evaluates the expression and prints the result.
# -An expression generally evaluates to a value, which is why expressions are written on the right-hand side of an assignment statement.
# -A single value by itself is a simple expression.
#
# Example:
#
# 10
# 10.20
# a=100
# a+80
# import math
# math.sqrt(10)
#
# ---> Here each line that produces a value is an expression (the assignment and the import are statements)
# -
# ### 3.8 Statements in Python
# + active=""
# -Instructions written in the source code for execution are called statements.
# -Different types of statements in Python are:-
#     - Assignment statements
#     - Compound assignment statements
#     - Conditional statements
#     - Loop statements
# - Statements in Python can be extended over one or more lines using parentheses (), braces {}, square brackets [] or the continuation character \; a semicolon ; lets you place multiple statements on one line.
#
# Example:
#
# a = 10 + 80 ---> assignment operation statement
#
# for i in range(10):
# print(i) ---> loop operation statement
#
# if a == 100:
# print(a) ---> conditional operator statement
# -
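# The statement-extension rules above can be sketched as follows:

```python
# Explicit continuation with the backslash character
total = 1 + 2 + 3 + \
        4 + 5

# Implicit continuation inside brackets (no backslash needed)
values = [10, 20,
          30, 40]

# A semicolon places two statements on one line
a = 10; b = 20
print(total, values, a + b)
```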
# ### 3.9 Blocks or suites and indentation in Python
# + active=""
# -A group of statements in Python is called a block or suite.
# -Other programming languages like C, C++ and Java use curly brackets to define a block; Python does not.
# -Python uses whitespace for indentation, and indentation is what defines a block or suite.
#
# Example:
#
# ___________________________
# | x = 10 |----> Block of code
# | y = 20 |
# | _______________ |
# | |if x < y: | |
# | | print(True)| -------|----> Block/suite inside
# | |_______________| |
# | else: |
# | ___ |
# Indentation space<-----|--|___|print(False)           |
# |___________________________|
# -
# ### 3.10 Escape sequence
# + active=""
# 1 ---> \  ---> Lets you continue a single-line string onto the next line.
#
# 2 ---> \\ ---> To print \
#
# 3 ---> \n ---> New line
#
# 4 ---> \t ---> Horizontal tab
#
# 5 ---> \v ---> Vertical tab
#
# 6 ---> \' ---> To treat a single quote (') as a string character
#
# 7 ---> \" ---> To treat a double quote (") as a string character
# -
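# The escape sequences above in action:

```python
print('line one\nline two')   # \n starts a new line
print('col1\tcol2')           # \t inserts a horizontal tab
print('a backslash: \\')      # \\ prints a single backslash
print('it\'s python')         # \' keeps a single quote inside the string
print("she said \"hi\"")      # \" keeps a double quote inside the string
```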
# ## 4. ZEN OF PYTHON
import this
# ## 5. FUNDAMENTAL DATATYPES
# + active=""
# ---> int:
#      A whole number without a decimal point is an integer ---> decimal -> int(10)
#                                                                binary (0 & 1) -> bin(), literal prefix 0b
#                                                                octal (0-7) -> oct(), literal prefix 0o
#                                                                hexadecimal (0-9 and a-f) -> hex(), literal prefix 0x
#
# ---> float:
#      any number with a decimal point is a float ---> print(float(10))
#
# ---> complex:
#      combination of a real and an imaginary part ---> 6+12j --> 6 is the real part, 12j is the imaginary part and j is the imaginary unit
#
# ---> string:
#      combination of characters kept inside quotes --> print('python class') & print("python's class") & print('''python's & "java"''')
#
# ---> boolean:
#      represented by True or False (1 & 0) --> everything other than 0 is True; 0 and the empty string are False
#
# ---> None:
#      represents null/no value; None is not the same as 0. It can be used as a placeholder for future changes, and id(None) is the same everywhere.
#
# -
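# A quick tour of the fundamental datatypes described above:

```python
print(0b1010, 0o12, 0xa)            # binary, octal and hex literals of 10
print(bin(10), oct(10), hex(10))    # '0b1010' '0o12' '0xa'
print(float(10))                    # 10.0
z = 6 + 12j
print(z.real, z.imag)               # 6.0 12.0
print(bool(0), bool(''), bool(42))  # False False True
print(None is None)                 # there is a single None object
```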
# ## 6. SEQUENCE, INDEXING, SLICING, CONCATENATION AND REPETITION
# + active=""
# ---> Sequence:
#      A sequence is an ordered collection of elements --> string, list, tuple, bytes, bytearray, range.
#      Indexing and slicing are allowed.
#
# ---> Index:
#      An index represents the position of an element. Python allows both positive and negative indices.
#
# -6 -5 -4 -3 -2 -1
# Negative index (right to left) <-------------------------- starts from -1
# p y t h o n
# Positive index (left to right) ---------------------------> starts from 0
# 0 1 2 3 4 5
#
# ---> Indexing:
#      Indexing is the concept of accessing a single element of a sequence; only one element can be accessed at a time.
#      syntax :
#      --> list_name[index_number] --> to access the element
#      --> list_name.index('element',start,stop) --> to get the index of the first occurrence of the element
#
# ---> Slicing:
#      Slicing is the concept of accessing multiple elements of a sequence at a time.
#      syntax :
#      --> list_name[start:stop:step] --> to access multiple elements
#          start --> position to start slicing, default = 0
#          stop --> position one past where we want to end slicing, default = length of the sequence
#          step --> step of slicing, default = 1.
#                   step = 1 --> slices from left to right.
#                   step = -1 --> slices from right to left.
#
# ---> Concatenation:
#      The process of combining two sequences of the same datatype. print('python' + ' ' + 'is awesome')
#
# ---> Repetition:
#      The process of repeating a sequence multiple times. It takes one sequence and one integer.
#      ex: print('python is cool, '*3)
#
#
# -
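# Indexing, slicing, concatenation and repetition on a string:

```python
word = 'python'
print(word[0], word[-1])   # first and last characters: p n
print(word[1:4])           # slice: yth
print(word[::2])           # every second character: pto
print(word[::-1])          # reversed: nohtyp
print('py' + 'thon')       # concatenation: python
print('ha' * 3)            # repetition: hahaha
```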
# ## 7. OPERATORS
# + active=""
# There are 7 groups of operators in Python. They are,
#
# - Arithmetic operators.
# - Comparison operators.
# - Equality operators.
# - Logical operators.
# - Bitwise operators.
# - Assignment operators.
# - Membership and identity operators.
# -
# ### 7.1 Arithmetic operator:
# + active=""
# Addition ----------> +
# Subtraction -------> -
# Multiplication ----> *
# Float division ----> /  (True division)
# Floor division ----> // (Floor value or quotient)
# Modulo operation --> %  (Remainder)
# Exponent ----------> ** (Power of 'n')
#
# Floor and ceil value:
#     --> Floor returns the nearest lower whole number.
#     --> Ceil returns the nearest higher whole number.
#     --> The math module has to be imported to perform these operations.
# +
# EXAMPLE
import math
print('Floor value of 56.82 is', math.floor(56.82))
print('Ceil value of 56.82 is', math.ceil(56.82))
# -
# ### 7.2 Comparison operator (<, <=, >, >=)
# + active=""
# - Comparison operators return a boolean value.
# - Comparison between different datatypes is not possible, except between int and float (and bool, which is a subtype of int).
# - Sequence comparison can be done:
#     - it is lexicographic: char by char, using the ASCII code of each character
#     - if one string is a prefix of the other, the shorter one is smaller
# +
# EXAMPLE
print(10<20.1) # comparing int and float
print(10>20) # comparing int with int
print('python'>'kris') # lexicographic: 'p' (112) > 'k' (107)
print('Suresh'>='suresh') # 'S' (83) < 's' (115), so False
print(True<321) # bool compares as int (True == 1)
print()
# To convert ASCII code to value and vise vera
print(ord('F')) # character to ordinal
print(chr(97)) # ordinal to character
# -
# ### 7.3 Equality operator (== and !=)
# + active=""
# - It compares the content and returns a boolean value.
# - Equality operators can be used between different datatypes.
# +
# EXAMPLE
print(20!=20.0) # int and float with the same value are equal, so False
print('python'!='python') # identical strings are equal, so False
print('P'!='p') # different ASCII codes, so True
print('India'=='India') # same characters throughout, so True
# -
# ### 7.4 Logical operator (and, or, not)
# + active=""
# - and operator --> Returns A if A is falsy, else it returns B
#
# A operator B Output
# ________________________________________
#
# True and True True
# True and False False
# False and True False
# False and False False
#
# - or operator --> Returns A if A is truthy, else returns B
#
# A operator B Output
# ________________________________________
#
# True or True True
# True or False True
# False or True True
# False or False False
#
# - not operator --> Reverses the boolean value of its operand
# +
# Example
# and operator
print((10<20) and (5!=100)) #---> Returns boolean incase of comparision and equality operator
print((10>20) and (5!=100)) #---> Returns boolean incase of comparision and equality operator
print((10+20) and (20+3)) #---> Returns B since A is not False
print((80+1) and (60*0)) #---> Returns B since A is not False
print(0 and 0.0) #---> Returns A since A is False
print()
# or operator
print((10<20) or (10==100)) #---> Returns boolean incase of comparision and equality operator
print((10>20) or (10!=10)) #---> Returns boolean incase of comparision and equality operator
print((20*0) or (10+30)) #---> Returns B since A is not True
print(False or True) #---> Returns B since A is not True
print('hai' or 'bye') #---> Returns A since A is True
print()
# not operator
print(not((10<20) or (10==100))) #--> Reverses the output to False
print(not('hai' and 0)) #--> reverses the output to True
# -
# ### 7.5 Bitwise operator
# + active=""
# - These operators work at the bit (binary) level.
#
# - Bitwise and operator (&)
#
# Bit operator Bit Output
# ________________________________________
#
# 1 & 1 1
# 1 & 0 0
# 0 & 1 0
# 0 & 0 0
#
# - Bitwise or operator (|)
#
# Bit operator Bit Output
# ________________________________________
#
# 1 | 1 1
# 1 | 0 1
# 0 | 1 1
# 0 | 0 0
#
# - Bitwise xor operator (^)
#
# Bit operator Bit Output
# ________________________________________
#
# 1 ^ 1 0
# 1 ^ 0 1
# 0 ^ 1 1
# 0 ^ 0 0
#
# - Bitwise negation operator (~)
#
# Bitwise negation follows the formula ~n = -(n+1)
#
# - Bitwise left shift (<<) and right shift (>>)
#
# Bitwise left shift and right shift works on binary level.
# +
# EXAMPLE
# Bitwise and operator (&)
print('Bitwise and of 25 & 30 is', 25 & 30)
# 25 = 11001
# 30 = 11110
# -------
# & = 11000 ---> 24
# Bitwise or operator (|)
print('Bitwise or of 25 & 30 is', 25 | 30)
# 25 = 11001
# 30 = 11110
# -------
# & = 11111 ---> 31
# Bitwise xor operator (^)
print('Bitwise xor of 25 & 30 is', 25 ^ 30)
# 25 = 11001
# 30 = 11110
# -------
# & = 00111 ---> 7
# Bitwise negation operator (~)
print('Bitwise negation of True is', ~True) # --> -(n+1) ---> -(True+1) = -2
print('Bitwise negation of False is', ~False) # --> -(n+1) ---> -(False+1) = -1
print('Bitwise negation of 8 is', ~8) # --> -(n+1) ---> -(8+1) = -9
print('Bitwise negation of -16 is', ~-16) # --> -(n+1) ---> -(-16+1) = 15
# Bitwise left shift operator (<<)
print('Bitwise left shift of 25 is', 25 << 3)
# 25 = 11001
# <<3 = 11001000 ---> 200
# Bitwise right shift operator (>>)
print('Bitwise right shift of 25 is', 25 >> 3)
# 25 = 11001
# >>3 = 11 ---> 3
# -
# ### 7.6 Assignment operator
# + active=""
# --> a = a+2
#
# - In the example above, an arithmetic operation is performed on the current value of the variable 'a' and the result is assigned back to 'a'. In Python the same line of code can be written as 'a += 2'.
#
# ie. (a = a+2) is equal to (a += 2)
# (b = b/2) is equal to (b /= 2)
#
# - Other similar assignment operators are
# (+=, -=, *=, /=, //=, **=, %=, &=, |=, ^=, >>=, <<=)
# -
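# Chaining a few of the compound assignment operators listed above:

```python
a = 10
a += 2    # a = a + 2   -> 12
a *= 3    # a = a * 3   -> 36
a //= 5   # a = a // 5  -> 7
a **= 2   # a = a ** 2  -> 49
a %= 10   # a = a % 10  -> 9
print(a)
```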
# ### 7.7 Membership & identity operator
# + active=""
# --> Membership operator (in, not in)
#     - This operator checks whether an element is a member of a sequence or not.
#
# --> Identity operator (is, is not)
#     - This operator checks whether both operands have the same id (identity) or not.
#
#
# +
# EXAMPLE
# Membership operator:
print('p' in 'python')
print('th' in 'python')
print('tn' in 'python')
print('R' not in 'rat')
print('t' not in 'rat')
print()
# Identity operator:
a = 101
b = 101
print(a is b)
a = 1000
b = 1000
print(a is b) # Does not obey object reusability
a = 210
b = 220
print(a is not b)
a = 1010
b = 1010
print(a is not b) # Does not obey object reusability
# -----> integer objects from -5 to 256 are cached in the heap and obey object reusability (interning)
# -
# ## 8. DERIVED DATATYPES
# + active=""
# ---> list:
#      Collection of heterogeneous elements written inside square brackets --> [20,{30:40},(56,96)]
#      It is a mutable datatype and a sequence, so indexing and slicing are possible.
#      list_name.append('element') ---> to add an element at the end of the list.
#      list_name.remove('element') ---> to remove the element from the list.
#
#
# ---> tuple:
#      Collection of ordered and immutable data written inside round brackets --> (20,{30:40},(56,96))
#      It is a sequence, so indexing and slicing are possible.
#
# ---> set:
#      Collection of unordered unique elements written inside curly brackets --> {10,20,30,40}
#      A set is mutable, but it won't allow mutable datatypes as elements.
#      It is not a sequence, so indexing and slicing are not possible.
#      A set does not allow duplicate elements inside it.
#      set_name.add('immutable_element') --> to add an element to the set.
#      set_name.remove('immutable_element') --> to remove an element from the set.
#
# ---> frozenset:
#      Collection of elements written inside round brackets with the prefix 'frozenset' --> frozenset({10,20,30,40})
#      frozenset is an immutable datatype. Indexing and slicing are not possible.
#
# ---> dictionary:
#      Unordered, mutable collection of key-value pairs written inside curly brackets --> {1:10,2:20,3:30}
#      Keys can't be duplicated, and mutable datatypes are not allowed as keys.
#      Values can be duplicated, and mutable datatypes are allowed as values.
#      dict_name.update({key:value}) --> adds a pair to the end of the dictionary.
#      dict_name.pop(key,'return_this_if_key_not_found') --> removes the element with the specified key.
#      dict_name.popitem() --> removes the last item from the dictionary.
#
# ---> range:
#      Creates a sequence of numbers based on a pattern.
#      Syntax :
#          range(start,stop,step)
#          range(stop) ---> generates 0 to (stop-1)
#      range can be wrapped in list, tuple or set to get the range in a particular datatype.
#
# ---> bytes and bytearray:
#      They are used to store binary data such as images, PDFs or sound clips.
#      allowed values are 0-255.
#      bytes --> immutable --> print(bytes([10,20,30]))
#      bytearray --> mutable --> print(bytearray([10,20,30]))
# -
# ## 9. Input, output and print functions
# + active=""
# ---> Input function
#
# - The input function is used to take input from the user.
# - It returns everything as a string, so the result has to be type cast as needed.
# - Direct type casting works for the fundamental datatypes.
# - 'eval' is used to evaluate the input and infer the appropriate datatype.
# - 'eval' can be used for both fundamental and derived datatypes.
#
# ---> Output formatting
#
# - Prefixing 'f' to the output string enables formatted string literals (f-strings).
# - The required variable has to be placed inside curly braces '{}'.
#
# ---> Print function
#
# - The print function can take multiple arguments.
# - The 'sep' and 'end' keyword arguments can be used to modify the display format of the output.
# - For a user-defined separator use 'sep'. (default separator is a space)
# - To display multiline output in a single line use 'end'. (default line ending is \n)
# +
# EXAMPLE
# Input function:
inp = input('Enter the number: ')
print(inp, type(inp)) # Type will be string (we can type cast to int)
inp_int = int(input('Enter a number: ')) # Type casting to int (similarly other fundamental datatypes can be used)
print(inp_int, type(inp_int))
inp_eval = eval(input('Enter something: '))
print(inp_eval, type(inp_eval)) # Evaluating the appropriate datatype
print()
# Output function (Formating the output):
num_1 = 23
num_2 = 56
res = num_1 * num_2
print(f'Multiplication of {num_1} and {num_2} is {res}')
# This can also be written as
print('Multiplication of {} and {} is {}'.format(num_1, num_2, res))
print()
# Print function:
print(10,20,30,sep=' and ') # using 'sep' attribute. Other escape sequence can also be used.
print(10, end=' ')
print(20, end=' and ')
print(30) # using 'end' attribute. Other escape sequence can also be used.
# -
# ## 10. Conditional statement
# + active=""
# ---> if - elif - else
#
# - An if statement is written using the 'if' keyword. Every conditional construct should start with an if statement.
# - The 'elif' keyword is used to try the next condition if the previous condition is not satisfied.
# - The 'else' keyword catches anything which isn't caught by the preceding conditions.
#
# +
# EXAMPLE:
# To display the user entered number in string
num = int(input('Enter a number between 0 to 9: '))
if num==0:
print('The number entered is Zero')
elif num==1:
print('The number entered is One')
elif num==2:
print('The number entered is Two')
elif num==3:
print('The number entered is Three')
elif num==4:
print('The number entered is Four')
elif num==5:
print('The number entered is Five')
elif num==6:
print('The number entered is Six')
elif num==7:
print('The number entered is Seven')
elif num==8:
print('The number entered is Eight')
elif num==9:
print('The number entered is Nine')
else:
print('Number not in range. Try between 0 to 9')
# +
# Write a program for a calculator that performs an arithmetic operation on two numbers taken from the user.
# The operation has to be chosen by the user
x = int(input('Enter the first number: '))
y = int(input('Enter the second number: '))
op = int(input('''Enter an operation by choosing between 1 and 7 \n
\t 1. Addition
\t 2. Subtraction
\t 3. Multiplication
\t 4. Division
\t 5. Floor division
\t 6. Modulo division
\t 7. Exponent
\t Enter here: '''))
print()
if op == 1:
print(f'Addition of {x} and {y} is {x+y}')
elif op == 2:
print(f'Subtraction of {x} and {y} is {x-y}')
elif op == 3:
print(f'Multiplication of {x} and {y} is {x*y}')
elif op == 4:
if y == 0:
        print('Cannot divide by zero')
else:
print(f'Division of {x} and {y} is {x/y}')
elif op == 5:
if y == 0:
        print('Cannot divide by zero')
else:
print(f'Floor division of {x} and {y} is {x//y}')
elif op == 6:
if y == 0:
        print('Cannot divide by zero')
else:
print(f'Modulo division of {x} and {y} is {x%y}')
elif op == 7:
print(f'{x} to the power of {y} is {x**y}')
else:
print('Operation not in range enter between 1 to 7')
# +
# Find the largest of three numbers. Take input from user.
num1 = int(input('Enter the first number: '))
num2 = int(input('Enter the second number: '))
num3 = int(input('Enter the third number: '))
print()
if num1>num2 and num1>num3:
print(f'{num1} is greater than {num2} and {num3}')
elif num2>num3:
print(f'{num2} is greater than {num1} and {num3}')
else:
print(f'{num3} is greater than {num1} and {num2}')
# The above program can be written using ternary operator
# syntax - print(exp_1 if cond_1 else exp_2 if cond_2 else exp_3)
print(f'{num1} is greater than {num2} and {num3}' if num1>num2\
else f'{num2} is greater than {num1} and {num3}' if num2>num3\
else f'{num3} is greater than {num1} and {num2}')
# +
# To find whether the year entered by the user is a leap year or not
# Rules for a leap year:
# 1. The year must be divisible by 4
# 2. A year divisible by 4 but not by 100 is a leap year
# 3. A year divisible by 100 but not by 400 is not a leap year
# 4. A year divisible by 400 is a leap year
year = int(input('Enter a year: '))
if year%4==0:
if year%100==0:
if year%400==0:
print(f'{year} is a leap year')
else:
print(f'{year} is not a leap year')
else:
print(f'{year} is a leap year')
else:
print(f'{year} is not a leap year')
# -
# ## 11. Concept of loops
# + active=""
# ---> for loop (use when the number of iterations is known and the data is iterable, i.e. has an iterator/generator)
# ---> while loop (use when the number of iterations is not known or the data is not iterable)
#
# - for loop syntax
# for variable in sequence:
# code
# code
# - while expression (True/False):
# code
# code
# increment/decrement
#
# ---> loop control
#
# - break --> to come out of the loop.
# - continue --> to skip the current iteration.
# +
# EXAMPLE:
sen = 'This sentence will be iterated over in a loop'
# for loop
for i in sen:
print(i,end='')
print()
c=0
for i in sen:
if i in ['a','e','i','o','u']:
c += 1
print('Number of vowels in the sentence =',c)
for i in range(len(sen)):
if i%2 != 0:
print(sen[i], end='')
print('\n')
#while loop
step=0
while step<len(sen):
print(sen[step], end='')
step+=1
step=0
c=0
while step<len(sen):
if sen[step] in ['a','e','i','o','u']:
c+=1
step+=1
print('\nNumber of vowels in the sentence =',c)
step=0
while step<len(sen):
if step%2 != 0:
print(sen[step], end='')
step+=1
# Loop control
# Break
print('\n')
for i in list(range(10)):
if i == 6:
print('Coming out of loop')
break
else:
print(i)
# Continue
print()
for i in list (range(10)):
if i == 6 or i == 8:
print('skipping iteration')
continue
else:
print(i)
| Python/001. Introduction to python and essentials.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Acceptance-rejection method
#
# This method arose because, for many continuous distributions, the inverse-transform method is not feasible: $x= F^{-1}(U)$ cannot be computed (or at least not computed efficiently). Acceptance-rejection methods are often considerably faster than the inverse-transform method. We now illustrate the **acceptance-rejection method** with a simple example.
# Suppose we have the probability density function (PDF) of a beta distribution, given by:
# $$f(x)=\frac{x^{\alpha_1-1}(1-x)^{\alpha_2-1}}{B(\alpha_1,\alpha_2)} \quad x\in[0,1] \longrightarrow B(\alpha_1,\alpha_2)\equiv \int_{0}^{1}x^{\alpha_1-1}(1-x)^{\alpha_2-1}dx, \ \alpha_1,\alpha_2>1$$
#
# **Discuss the disadvantages**
# We now define the method formally:
#
# *Note that $f(x)$ must be a bounded function with a finite domain* $a\leq x \leq b$, as shown below:
# 
#
# Given such a function $f(x)$, the method proceeds as follows. Assume we can find a function $t(x)$ such that
# $$t(x)\geq f(x), \quad \forall x$$
# Note that the function $t(x)\geq 0$ is not a PDF, because
# $$\int_{-\infty}^{\infty}t(x)dx\geq \int_{-\infty}^{\infty}f(x)dx =1$$
# Let
# $$c=\int_{-\infty}^{\infty}t(x)dx\geq 1$$
# For a constant bound on the unit interval, $c$ can be taken as the maximum of the function $f(x)$. Define the function $g(x)=t(x)/c \rightarrow g(x)$ **is a density**. It then follows that
# $$\frac{f(x)}{g(x)}\leq c,\quad \forall x$$
# The following algorithm generates a random variable $X$ distributed according to the density $f(x)$:
# 1. Generate $R_1$ with density $g(x)$.
# 2. Generate $R_2 \rightarrow U \sim U(0,1)$, independent of the $R_1$ from step 1.
# 3. Evaluate the density function at $R_1$.
# 4. Check whether the following inequality holds: $$R_2\leq \frac{f(R_1)}{t(R_1)}$$
# If it holds, take $X=R_1$; otherwise go back to step 1, repeating as many times as necessary.
#
# > It can be shown that $P(\mathrm{accept})=1/c$
# ### Example 1: Beta density
#
# $$f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1}
# (1 - x)^{\beta - 1}$$
# ### a). Particular case: $\alpha=\beta=3$
# With these values the PDF is
# $$f(x)=30(x^2-2x^3+x^4)$$
# Optimization library
from scipy import optimize
from scipy.stats import beta
import matplotlib.pyplot as plt
import numpy as np
# # %matplotlib notebook
# %matplotlib inline
# Acceptance-rejection function using a for loop
def Acep_rechazo2(R2:'Samples distributed U~U(0,1)',
                  R1:'Samples distributed as g(x)',
                  f:'target density to generate',
                  t:'function that bounds f from above'):
# R1 = np.random.rand(N)
f_x = f(R1)
t_x = t(R1)
condition = np.multiply(R2,t_x)<=f_x
for i in range(len(R1)):
if condition[i]:
plt.plot(R1[i],R2[i]*t_x[i],'ob')
else:
plt.plot(R1[i],R2[i]*t_x[i],'o')
plt.show()
# Acceptance-rejection function using a list comprehension
def Acep_rechazo(R2:'Samples distributed U~U(0,1)',
                 R1:'Samples distributed as g(x)',
                 f:'target density to generate',
                 t:'function that bounds f from above'):
# R1 = np.random.rand(N)
f_x = f(R1)
t_x = t(R1)
condition = np.multiply(R2,t_x)<=f_x
[plt.plot(R1[i],R2[i]*t_x[i],'ob') if condition[i] else plt.plot(R1[i],R2[i]*t_x[i],'o') \
for i in range(len(R1))]
plt.show()
# +
# Illustration of the acceptance-rejection method when t(x) is constant
# Target density
f = lambda x:30*(x**2-2*x**3+x**4)
# Maximum of the function f
max_f = f(optimize.fmin(lambda x:-f(x),0,disp=False))
# Function t -> constant function
t = lambda x: max_f*np.ones(len(x)) # constant function
x = np.arange(0,1,0.01) # Range over which the functions are plotted
print('The maximum of f is:',max_f)
# Plots of the functions
plt.plot(x,f(x),label='f(x)')
plt.plot(x,t(x),label='t(x)')
plt.legend()
# Validation of the method
N = 200 # number of points to simulate
# Since t(x) is constant, we only need to generate uniform U~(0,1) values
R2 = np.random.rand(N)
R1 = np.random.rand(N)
Acep_rechazo(R2,R1,f,t)
# -
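# A quick empirical check (a sketch, assuming NumPy is available) that the acceptance probability matches the theoretical value $1/c$ when the constant bound $t(x)=c$ is used. The density `f` is re-created locally so the snippet is self-contained.

```python
import numpy as np

# f is the beta(3,3) density used above; its maximum on [0,1] is f(0.5) = 1.875
f = lambda x: 30 * (x**2 - 2 * x**3 + x**4)
c = f(0.5)

rng = np.random.default_rng(0)
N = 100_000
R1 = rng.random(N)            # candidates drawn from g(x) = U(0,1)
R2 = rng.random(N)            # independent uniform draws for the accept test
accept_rate = np.mean(R2 * c <= f(R1))
print('empirical:', accept_rate, 'theoretical:', 1 / c)
```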
# ### b). General case: $\alpha,\beta>0$
# +
# Parameters of the beta density
a =10; b=3
N = 500 # number of points
# Target density
f = lambda x: beta.pdf(x,a,b)
x = np.arange(0,1,0.01)
plt.plot(x,f(x),'k')
# Find the maximum of the function f
c = float(f(optimize.fmin(lambda x:-f(x),0,disp=False)))
print('The maximum of the function is:',c)
t = lambda x: c*np.ones(len(x))
plt.plot(x,f(x),'k')
plt.plot(x,t(x),'b')
R2 = np.random.rand(N)
R1 = np.random.rand(N)
Acep_rechazo(R2,R1,f,t)
plt.show()
# -
# # Homework
# Given that we want to generate random variables for the following density function
# $$f(x)=30(x^2-2x^3+x^4)$$
# Answer the following items:
# 1. Use $t(x)=a \sin(\pi x)$ as the function bounding $f(x)$, where a is the maximum of the function $f(x)$, and plot both on the same figure to validate that the condition $t(x)\geq f(x)$ actually holds.
# 2. Find the density function $g(x)$ as seen in class. Report all the calculations performed to find this function using Markdown (LaTeX).
# 3. Use the function found in item 2 and apply the inverse-transform method seen in class 9 to generate random variables following the distribution $g(x)$. **Note:** Remember that the inverse-transform method works with the cumulative distribution function, not with the density. As in the previous item, report all calculations using Markdown (LaTeX).
# 4. Following item 3, generate 10000 random points following the distribution $g(x)$ and compare with their histogram to validate that the generated points follow the desired distribution. The result should look like this:
# 
# 5. Generate 500 random points using the acceptance-rejection method and the functions $f(x)$ and $t(x)$ to validate that all the previous calculations are correct. The result should look like this:
# 
# 6. Compare the acceptance percentage when using a constant $t(x)$ versus a sinusoidal pulse $t(x)$. Draw conclusions.
# 7. Generate a random variable $X$ from the following PDF
# $$f(x)=20x(1-x)^3$$
# using the acceptance-rejection method
# ## Submission details
# I will enable a link on Moodle where you must upload your Python notebook with the solution to the stated problems, individually. It can be submitted no later than Thursday, October 26 at 11 pm.
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by <NAME>.
# </footer>
| TEMA-2/Clase10_MetodoAceptacionRechazo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# ## Preprocessing
# +
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
application_df = pd.read_csv("../Resources/charity_data.csv")
application_df.head()
# -
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df = application_df.drop(columns = ['EIN', 'NAME'])
# Determine the number of unique values in each column.
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
application_df['APPLICATION_TYPE'].value_counts(ascending=True)
# +
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = ['T17','T15','T29','T14','T25','T2','T12','T13','T9']
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# -
# Look at CLASSIFICATION value counts for binning
application_df['CLASSIFICATION'].value_counts(ascending=True)
# You may find it helpful to look at CLASSIFICATION value counts >1
class_edit = pd.DataFrame(application_df['CLASSIFICATION'].value_counts())
# +
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
class_edit = class_edit.reset_index()
class_edit = class_edit.loc[class_edit['CLASSIFICATION'] < 1500]
classifications_to_replace = class_edit['index'].tolist()
# Replace in dataframe
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
# -
application_cat = application_df.dtypes[application_df.dtypes == "object"].index.tolist()
# +
from sklearn.preprocessing import OneHotEncoder
# Convert categorical data to numeric with `OneHotEncoder`
enc = OneHotEncoder(sparse=False)
encode_df = pd.DataFrame(enc.fit_transform(application_df[application_cat]))
encode_df.columns = enc.get_feature_names(application_cat)
encode_df.head()
# -
application_df = application_df.merge(encode_df,left_index=True, right_index=True)
application_df = application_df.drop(columns=application_cat)
application_df.head()
# +
# Split our preprocessed data into our features and target arrays
y = application_df['IS_SUCCESSFUL'].values
X = application_df.drop(columns = 'IS_SUCCESSFUL').values
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 42)
# +
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# -
# ## Compile, Train and Evaluate the Model
# +
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
nn_model = tf.keras.models.Sequential()
# First hidden layer
nn_model.add(tf.keras.layers.Dense(units=80, activation = 'relu', input_dim=43))
# Second hidden layer
nn_model.add(tf.keras.layers.Dense(units=30, activation = 'relu'))
# Output layer
nn_model.add(tf.keras.layers.Dense(units=1, activation = 'sigmoid'))
# Check the structure of the model
nn_model.summary()
# -
# Compile the model
nn_model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
# Train the model
fit_model = nn_model.fit(X_train_scaled, y_train, epochs = 50)
fit_model_df = pd.DataFrame(fit_model.history)
fit_model_df = fit_model_df.iloc[::5, :]
fit_model_df
# Evaluate the model using the test data
model_loss, model_accuracy = nn_model.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss:.3f}, Accuracy: {model_accuracy:.3f}")
# Export our model weights to an HDF5 file
nn_model.save_weights('output/nn_model_weights.h5')
| Starter_Code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Machine Learning Model Building Pipeline: Feature Selection
#
# In the following videos, we will take you through a practical example of each one of the steps in the Machine Learning model building pipeline, which we described in the previous lectures. There will be a notebook for each one of the Machine Learning Pipeline steps:
#
# 1. Data Analysis
# 2. Feature Engineering
# 3. Feature Selection
# 4. Model Building
#
# **This is the notebook for step 3: Feature Selection**
#
#
# We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.
#
# ===================================================================================================
#
# ## Predicting Sale Price of Houses
#
# The aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses.
#
# ### Why is this important?
#
# Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated.
#
# ### What is the objective of the machine learning model?
#
# We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance using the mean squared error (mse) and the root mean squared error (rmse).
#
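# As a quick illustration with made-up numbers (not the house price data), the mse and rmse can be computed as follows:

```python
import numpy as np

# Hypothetical true and predicted (log) sale prices, for illustration only
y_true = np.array([12.1, 11.8, 12.5])
y_pred = np.array([12.0, 12.0, 12.3])

mse = np.mean((y_true - y_pred) ** 2)   # mean squared error
rmse = np.sqrt(mse)                     # root mean squared error

print(round(mse, 4), round(rmse, 4))  # 0.03 0.1732
```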
# ### How do I download the dataset?
#
# To download the House Price dataset go to this website:
# https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data
#
# Scroll down to the bottom of the page, and click on the link 'train.csv', and then click the 'download' blue button towards the right of the screen, to download the dataset. Rename the file as 'houseprice.csv' and save it to a directory of your choice.
#
# **Note the following:**
# - You need to be logged in to Kaggle in order to download the datasets.
# - You need to accept the terms and conditions of the competition to download the dataset
# - If you save the file to the same directory where you saved this jupyter notebook, then you can run the code as it is written here.
#
# ====================================================================================================
# ## House Prices dataset: Feature Selection
#
# In the following cells, we will select a group of variables, the most predictive ones, to build our machine learning model.
#
# ### Why do we select variables?
#
# - For production: Fewer variables mean smaller client input requirements (e.g. customers filling out a form on a website or mobile app), and hence less code for error handling. This reduces the chances of introducing bugs.
#
# - For model performance: Fewer variables mean simpler, more interpretable, better generalizing models
#
#
# **We will select variables using the Lasso regression: Lasso has the property of setting the coefficient of non-informative variables to zero. This way we can identify those variables and remove them from our final model.**
#
#
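# The zeroing property can be seen on a small synthetic example (illustrative only, not the house price data): only the informative columns keep non-zero coefficients.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
# only the first two columns carry signal; the remaining three are pure noise
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(200)

lasso = Lasso(alpha=0.1, random_state=0)
lasso.fit(X, y)
print(np.round(lasso.coef_, 3))  # the noise columns are shrunk to exactly 0
```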
# ### Setting the seed
#
# It is important to note, that we are engineering variables and pre-processing data with the idea of deploying the model. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code.
#
# This is perhaps one of the most important lessons that you need to take away from this course: **Always set the seeds**.
#
# Let's go ahead and load the dataset.
# +
# to handle datasets
import pandas as pd
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# to build the models
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# +
# load the train and test set with the engineered variables
# we built and saved these datasets in the previous lecture.
# If you haven't done so, go ahead and check the previous notebook
# to find out how to create these datasets
X_train = pd.read_csv('xtrain.csv')
X_test = pd.read_csv('xtest.csv')
X_train.head()
# +
# capture the target (remember that the target is log transformed)
y_train = X_train['SalePrice']
y_test = X_test['SalePrice']
# drop unnecessary variables from our training and testing sets
X_train.drop(['Id', 'SalePrice'], axis=1, inplace=True)
X_test.drop(['Id', 'SalePrice'], axis=1, inplace=True)
# -
# ### Feature Selection
#
# Let's go ahead and select a subset of the most predictive features. There is an element of randomness in the Lasso regression, so remember to set the seed.
# +
# We will do the model fitting and feature selection
# altogether in a few lines of code
# first, we specify the Lasso Regression model, and we
# select a suitable alpha (equivalent of penalty).
# The bigger the alpha, the fewer features will be selected.
# Then we use the SelectFromModel object from sklearn, which
# will automatically select the features whose coefficients are non-zero
# remember to set the seed, the random state in this function
sel_ = SelectFromModel(Lasso(alpha=0.005, random_state=0))
# train Lasso model and select features
sel_.fit(X_train, y_train)
# +
# let's visualise those features that were selected.
# (selected features marked with True)
sel_.get_support()
# +
# let's print the number of total and selected features
# this is how we can make a list of the selected features
selected_feats = X_train.columns[(sel_.get_support())]
# let's print some stats
print('total features: {}'.format((X_train.shape[1])))
print('selected features: {}'.format(len(selected_feats)))
print('features with coefficients shrunk to zero: {}'.format(
np.sum(sel_.estimator_.coef_ == 0)))
# -
# print the selected features
selected_feats
# ### Identify the selected variables
# +
# this is an alternative way of identifying the selected features
# based on the non-zero regularisation coefficients:
selected_feats = X_train.columns[(sel_.estimator_.coef_ != 0).ravel().tolist()]
selected_feats
# -
pd.Series(selected_feats).to_csv('selected_features.csv', index=False)
# That is all for this notebook. In the next video, we will go ahead and build the final model using the selected features. See you then!
| Section-2-Machine-Learning-Pipeline-Overview/Machine-Learning-Pipeline-Step3-Feature-Selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:testing-zd]
# language: python
# name: conda-env-testing-zd-py
# ---
# +
import imp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
import axelrod as axl
import axelrod.interaction_utils as iu
import testzd as zd
C, D = axl.Action.C, axl.Action.D
# -
parameters = imp.load_source('parameters', 'data/raw/parameters.py')
# # Extortionate zero determinant.
#
# In [1], given a match between 2 memory one strategies the concept of Zero Determinant strategies is introduced. It was shown that a player $p\in\mathbb{R}^4$ against a player $q\in\mathbb{R}^4$ could force a linear relationship between the scores.
#
# Assuming the following:
#
# - The utilities for player $p$: $S_x = (R, S, T, P)$ and for player $q$: $S_y = (R, T, S, P)$.
# - The normalised long run score for player $p$: $s_x$ and for player $q$: $s_y$.
# - Given $p=(p_1, p_2, p_3, p_4)$ a transformed (but equivalent) vector: $\tilde p=(p_1 - 1, p_2 - 1, p_3, p_4)$, similarly: $\tilde q=(1 - q_1, 1 - q_2, q_3, q_4)$
#
# The main result of [1] is that:
#
# if $\tilde p = \alpha S_x + \beta S_y + \gamma 1$ **or** if $\tilde q = \alpha S_x + \beta S_y + \gamma 1$ then:
#
# $$
# \alpha s_x + \beta s_y + \gamma = 0
# $$
#
# where $\alpha, \beta, \gamma \in \mathbb{R}$
#
# As an example consider the `extort-2` strategy defined in [2]. This is given by:
#
# $$p=(8/9, 1/2, 1/3, 0)$$
#
# Let us use the `Axelrod` library [4, 5] to simulate some matches, here it is against some of the best strategies in the Axelrod library:
extort2 = axl.ZDExtort2()
players = (extort2, axl.EvolvedFSM16())
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
scores = match.final_score_per_turn()
np.round((scores[0] - 1) / (scores[1] - 1), 3)
players = (extort2, axl.EvolvedANN5())
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
scores = match.final_score_per_turn()
np.round((scores[0] - 1) / (scores[1] - 1), 3)
players = (extort2, axl.PSOGamblerMem1())
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
scores = match.final_score_per_turn()
np.round((scores[0] - 1) / (scores[1] - 1), 3)
players = (extort2, extort2)
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
scores = match.final_score_per_turn()
(scores[0] - 1) / (scores[1] - 1)
# We see that `extort2` beats all these strategies but gets a low score against itself.
#
# In [1], in fact a specific type of Zero Determinant strategy is considered: if $\gamma=-(\alpha + \beta)P$ then the relationship $s_x - P = \chi (s_y - P)$ holds, where $\chi = \frac{-\beta}{\alpha}$, so that $s_x - P$ will be $\chi$ times bigger than $s_y - P$ as long as $\chi > 1$. We can obtain a simple linear equation and an inequality that check whether a strategy is of this form:
p = np.array([8 / 9, 1 / 2, 1 / 3, 0])
zd.is_ZD(p)
np.round(p, 3)
# Note however that even if there is a slight measurement error then these equations will fail:
np.random.seed(0)
approximate_p = p + 10 ** -5 * np.random.random(4)
np.round(np.max(np.abs(p - approximate_p)), 3)
zd.is_ZD(approximate_p)
# Thus, this work proposes a statistical approach for recognising extortionate behaviour, based on least squares minimisation of the underlying linear algebraic problem.
x, SSError = zd.compute_least_squares(approximate_p)
alpha, beta = x
chi = -beta / alpha
np.round(chi, 3)
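# The idea behind the least squares computation can be sketched with plain numpy (a hedged re-derivation under the standard payoffs $(R, S, T, P) = (3, 0, 5, 1)$, not the `testzd` package's actual code): with $\gamma = -(\alpha + \beta)P$ the condition becomes $\tilde p = \alpha (S_x - P) + \beta (S_y - P)$, which we solve in the least squares sense.

```python
import numpy as np

R, S, T, P = 3, 0, 5, 1
Sx = np.array([R, S, T, P], dtype=float)
Sy = np.array([R, T, S, P], dtype=float)

p = np.array([8 / 9, 1 / 2, 1 / 3, 0])               # extort-2
p_tilde = np.array([p[0] - 1, p[1] - 1, p[2], p[3]])

# solve p_tilde = alpha * (Sx - P) + beta * (Sy - P) in the least squares sense
A = np.column_stack([Sx - P, Sy - P])
x, residuals, rank, _ = np.linalg.lstsq(A, p_tilde, rcond=None)
alpha, beta = x
print(np.round(-beta / alpha, 3))  # chi = 2.0 for extort-2
```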
# In the paper, exact algebraic expressions for these measures have also been obtained:
x, SSError = zd.get_least_squares(approximate_p)
alpha, beta = x
chi = -beta / alpha
np.round(chi, 3)
# Using the large data set of collected matches we can confirm the obtained formulae:
try:
df = pd.read_csv("./data/processed/full/std/overall/main.csv")
assert (np.all(np.isclose(df["residual"], df["computed_residual"])) and
np.all(np.isclose(df["alpha"], df["computed_alpha"])) and
np.all(np.isclose(df["beta"], df["computed_beta"])))
except FileNotFoundError:
pass
# We see that in the case of an approximation of `extort2` we recover the value of $\chi=2$ (to the third decimal place).
#
# The value that is in fact being minimised is called: $\text{SSError}$. This in fact gives us a measure of how far from being an extortionate strategy a given strategy vector $p$ is.
#
# Not all strategies are memory one, so they do not necessarily have a representation as a 4-dimensional vector. However, their transition rates from each state to each action can still be measured.
#
# Let us see how this works, using the 3 strategies above:
def get_p_from_interactions(interactions):
    """Measure each player's cooperation probability in every state
    ((C, C), (C, D), (D, C), (D, D)); for states that were never visited,
    fall back to the player's overall cooperation rate."""
    vectors = []
    cooperations = iu.compute_cooperations(interactions)
for player, (coop_count, state_counter) in enumerate(zip(
cooperations,
iu.compute_state_to_action_distribution(interactions)
)):
p = []
for state in ((C, C), (C, D), (D, C), (D, D)):
if player == 1:
state = state[::-1]
try:
p.append(state_counter[(state, C)] / (state_counter[(state, C)] + state_counter[(state, D)] ) )
except ZeroDivisionError:
p.append(coop_count / len(interactions))
vectors.append(p)
return np.array(vectors)
players = (extort2, axl.EvolvedFSM16())
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
p = get_p_from_interactions(interactions=interactions)[1]
np.round(p, 3)
x, SSError = zd.get_least_squares(p)
np.round(SSError, 3)
players = (extort2, axl.EvolvedANN5())
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
p = get_p_from_interactions(interactions=interactions)[1]
x, SSError = zd.get_least_squares(p)
SSError
# This particular strategy in fact does not visit all states:
iu.compute_normalised_state_distribution(interactions=interactions)
# but the overall cooperation rate is used for the missing values:
iu.compute_normalised_cooperation(interactions=interactions)[1]
p
players = (extort2, axl.PSOGambler2_2_2())
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
p = get_p_from_interactions(interactions=interactions)[1]
x, SSError = zd.get_least_squares(p)
np.round(SSError, 3)
# So it seems that the `PSOGambler2_2_2` is "less" extortionate than `EvolvedANN5`. Note: it is certainly not an extortionate strategy as $p_4 > 0$:
np.round(p, 3)
# We can actually classify all potential extortionate strategies which is Figure 1 of the paper.
#
# The paper extends this work to consider a LARGE number of strategies, and identifies if and when strategies actually exhibit extortionate behaviour.
#
# We note that the strategies that exhibit strong evolutionary fitness are ones that are able to adapt their behaviour: they do not extort strong strategies (thus cooperation evolves) but they do extort weaker ones. For example, here is a list of strategies against which `EvolvedANN5` is close to being ZD (\\(\text{SS}_{\text{error}} < 0.05\\)):
for opponent in parameters.PLAYER_GROUPS["full"]:
players = (axl.EvolvedANN5(), opponent)
axl.seed(0)
match = axl.Match(players, turns=parameters.TURNS)
interactions = match.play()
p = get_p_from_interactions(interactions=interactions)[0]
x, SSError = zd.compute_least_squares(p)
if SSError < 0.05:
alpha, beta = x
scores = match.final_score_per_turn()
print(f"vs {opponent}, chi={round(-beta / alpha, 2)}, (S_X - 1)/(S_Y - 1)={round((scores[0] - 1) / (scores[1] - 1), 2)}")
# This work shows that not only is there a mathematical basis for suspicion (the calculation of $\text{SSError}$), but also that some high performing strategies seem to exhibit suspicious behaviour that allows them to adapt.
# ## References
#
# [1] Press, William H., and Freeman J. Dyson. "Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent." Proceedings of the National Academy of Sciences 109.26 (2012): 10409-10413
#
# [2] Stewart, Alexander J., and Joshua B. Plotkin. "Extortion and cooperation in the Prisoner’s Dilemma." Proceedings of the National Academy of Sciences 109.26 (2012): 10134-10135.
#
# [3] Golub, Gene H., and Charles F. Van Loan. Matrix computations. Vol. 3. JHU Press, 2012.
#
# [4] The Axelrod project developers. Axelrod: v4.2.0. 2016. http://doi.org/10.5281/zenodo.1252994
#
# [5] Knight, Vincent, et al. "An Open Framework for the Reproducible Study of the Iterated Prisoner’s Dilemma." Journal of Open Research Software 4.1 (2016).
| main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + id="TCVgYdgSOD31" jupyter={"outputs_hidden": true} outputId="52078a97-2954-4be7-9ddd-3f666518eac0"
# ! pip install transformers
# # ! pip install scipy sklearn
# # ! pip install farasapy
# # ! pip install pyarabic
# # ! git clone https://github.com/UBC-NLP/marbert
# # ! git clone https://github.com/aub-mind/arabert
# ! pip install datasets
# + id="qrUK9Oby058W" outputId="7fae207f-1b13-4c3a-efbc-8cc9fb35b5b3"
# # ! pip install huggingface_hub
# # ! apt install git-lfs
# # ! git config --global user.email "<EMAIL>"
# # ! git config --global user.name "jabalov"
# + id="kXAU4Sr501vm" outputId="8d92cd31-d54e-434a-d94a-505d95380b77"
# from huggingface_hub import notebook_login
# notebook_login()
# + id="qSF-J0ZoQ1HK"
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()
# + id="GVCZJ9USPVGO"
from transformers import AutoTokenizer
from transformers import DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
from transformers import TFAutoModelForSequenceClassification
import tensorflow as tf
from transformers import create_optimizer
from datasets import list_datasets, load_dataset, Dataset
from pprint import pprint
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
# -
# ## Loading Cleaned dataset, splitting it into train test
# + id="KrpP49hkpLz6"
df_cl = pd.read_csv("../input/cleaned-df/cleaned_df.csv", engine="python")
df_cl["label"] = df_cl["dialect"]
df_cl.drop(df_cl[df_cl.text == '[]'].index, inplace=True, axis=0)
df_cl.dropna(inplace=True)
df_cl.label = LabelEncoder().fit_transform(df_cl.label)
df_train, df_test = train_test_split(df_cl, test_size=0.3, random_state=911, shuffle=True)
# -
df_train[["label", "text"]].to_csv("df_train.csv", encoding="utf-8-sig", index=False)
# ## Loading the training dataset using hugging-face dataset
# + id="0fGDjDWuQi6l" outputId="f81313aa-bfab-4a07-98a9-3431b44601b9"
df = load_dataset('csv', script_version="master", data_files=["./df_train.csv"], delimiter=",", split="train")
df = df.train_test_split()
# + id="bxykhl5PoN1m" outputId="770f3dbc-a21e-455d-b007-786a2646a8c8"
df
# -
# ## Using MARBERT Tokenizer
# + id="-1eSJJdjSpOb" outputId="e5cae811-c812-47b3-b476-9f6e3e20244a"
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/MARBERT")
def preprocess_function(examples):
return tokenizer(examples["text"], truncation=True)
# + id="KX9Mgz4dk6sV" outputId="b74fe640-314b-4e32-c734-b9bc6582e79e"
tokenized_df = df.map(preprocess_function, batched=True)
tokenized_df = tokenized_df.remove_columns(["text"])
# + id="YYIoHUEbznvE"
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
# + id="giXjsXUyo5tQ"
tf_train_dataset = tokenized_df["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "label"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
tf_validation_dataset = tokenized_df["test"].to_tf_dataset(
columns=["attention_mask", "input_ids", "label"],
shuffle=False,
batch_size=16,
collate_fn=data_collator,
)
# + id="SB8kYU_ow_-P"
batch_size = 16
num_epochs = 5
batches_per_epoch = len(tokenized_df["train"]) // batch_size
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
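# `create_optimizer` pairs AdamW with a learning rate schedule that, with zero warmup steps, decays linearly from `init_lr` to zero over the training steps. Conceptually, and only as a hedged sketch rather than the library's exact implementation:

```python
def linear_schedule(step, init_lr=2e-5, num_warmup_steps=0, num_train_steps=1000):
    """Linear warmup followed by linear decay to zero (illustrative sketch)."""
    if step < num_warmup_steps:
        # warmup phase: ramp up from 0 to init_lr
        return init_lr * step / max(1, num_warmup_steps)
    # decay phase: ramp down linearly to 0 at num_train_steps
    remaining = max(0, num_train_steps - step)
    return init_lr * remaining / max(1, num_train_steps - num_warmup_steps)

print(linear_schedule(0), linear_schedule(500), linear_schedule(1000))
```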
# + id="llxfZ10lxa8r" outputId="772c73db-8c5f-43c5-e4e6-3c4c7fe15b20"
model = TFAutoModelForSequenceClassification.from_pretrained("UBC-NLP/MARBERT", num_labels=18)
# + id="s-GcjHqq1lHe" outputId="91f63d2e-f7ce-4ae8-91d1-8d5b83140ac4"
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath='./',
save_weights_only=True,
monitor='val_loss',
mode='min',
save_best_only=True)
callbacks = [model_checkpoint_callback]
# -
# ## Model Training
# + id="MC6a879HxEsP" outputId="03092922-1edc-484d-ff7b-65b1efcdba8e"
model.compile(optimizer=optimizer)
model.fit(
    tf_train_dataset,
    validation_data=tf_validation_dataset,
    epochs=num_epochs  # keep consistent with the optimizer schedule sized above
)
# + id="lrf40mEAxSr2"
# model.save_pretrained("./")
model.load_weights("./tf_model.h5")
# +
# predict_input = tokenizer.encode(df_cl["text"][0],
# truncation=True,
# padding=True,
# return_tensors="tf")
# -
# ## Encoding the test data text, and doing the evaluation
input_seq_test = [tokenizer.encode(lst, truncation=True, padding=True, return_tensors="tf")
for lst in df_test["text"]]
len(input_seq_test)
input_seq_test[0]
tf_output = [np.argmax(model.predict(lst)[0], axis=1) for lst in input_seq_test]
prediction = [lst[0] for lst in tf_output]
dialects_dict = {
3: "EG",
11: "PL",
6: "KW",
8: "LY",
12: "QA",
5: "JO",
7: "LB",
13: "SA",
0: "AE",
1: "BH",
10: "OM",
15: "SY",
2: "DZ",
4: "IQ",
9: "MA",
17: "YE",
16: "TN",
14: "SD"
}
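# The mapping above is consistent with `LabelEncoder`, which assigns ids in sorted (alphabetical) label order. Decoding predictions back to dialect codes is then a dictionary lookup (the prediction ids below are hypothetical, for illustration only):

```python
dialects_dict = {3: "EG", 11: "PL", 6: "KW", 8: "LY", 12: "QA", 5: "JO",
                 7: "LB", 13: "SA", 0: "AE", 1: "BH", 10: "OM", 15: "SY",
                 2: "DZ", 4: "IQ", 9: "MA", 17: "YE", 16: "TN", 14: "SD"}

# the ids follow the alphabetical order of the dialect codes
assert [d for _, d in sorted(dialects_dict.items())] == sorted(dialects_dict.values())

prediction = [3, 13, 0]  # hypothetical model outputs
decoded = [dialects_dict[i] for i in prediction]
print(decoded)  # ['EG', 'SA', 'AE']
```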
# +
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(df_test["label"], prediction))
# -
# ## The fine-tuned MARBERT reached a 58% F1 score, which is better than LinearSVC
| Notebooks/MARBERT-FineTuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# _Python_ package [neurolab](https://github.com/zueve/neurolab) provides a working environment for ANN.
# # Perceptrons
# We can implement a neural network having a single layer of perceptrons (apart from input units) using _neurolab_ package as an instance of the class `newp`. In order to do so we need to provide the following parameters:
# * `minmax`: a list with the same length as the number of input neurons. The $i$-th element on this list is a list of two numbers, indicating the range of input values for the $i$-th neuron.
# * `cn`: number of output neurons.
# * `transf`: activation function (default value is threshold).
#
# Therefore, when we choose 1 as the value of parameter `cn`, we will be representing a simple perceptron having as many inputs as the length of the list associated to `minmax`.
# Let us start by creating a simple perceptron with two inputs, both of them ranging in $[0, 1]$, and with threshold activation function.
# +
from neurolab import net
perceptron = net.newp(minmax=[[0, 1], [0, 1]], cn=1)
# -
# The instance that we just created has the following attributes:
# * `inp_minmax`: range of input values.
# * `co`: number of output neurons.
# * `trainf`: training function (the only one specific for single-layer perceptrons is the Delta rule).
# * `errorf`: error function (default value is half of SSE, _sum of squared errors_)
#
# The layers of the neural network (input layer does not count, thus in our example there is only one) are stored in a list associated with the attribute `layers`. Each layer is an instance of the class `Layer` and has the following attributes:
# * `ci`: number of inputs.
# * `cn`: number of neurons on it.
# * `co`: number of outputs.
# * `np`: dictionary with an element `'b'` that stores an array with the neurons' biases (terms $a_0 w_0$, default value is 0) and an element `'w'` that stores an array with the weights associated with the incoming connections arriving on each neuron (default value is 0).
print(perceptron.inp_minmax)
print(perceptron.co)
print(perceptron.trainf)
print(perceptron.errorf)
layer = perceptron.layers[0]
print(layer.ci)
print(layer.cn)
print(layer.co)
print(layer.np)
# Next, let us train the perceptron so that it models the logic gate _and_.
#
# First of all, let us define the training set. We shall do it indicating on one hand an array or list of lists with the imput values corresponding to the examples, and on the other hand a different array or list of lists with the expected ouput for each example.
# +
import numpy
input_values = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])
expected_outcomes = numpy.array([[0], [0], [0], [1]])
# -
# The method `step` allows us to calculate the output of the neural network for a single example, and the method `sim` for all the examples.
perceptron.step([1, 1])
perceptron.sim(input_values)
# Let us check which is the initial error of the perceptron, before the training.
#
# __Important__: the arguments of the error function must be arrays.
perceptron.errorf(expected_outcomes, perceptron.sim(input_values))
# Let us next proceed to train the perceptron. We shall check that, as expected (since the training set is linearly separable), we are able to decrease the value of the error down to zero.
#
# __Note__: the method `train` that runs the training algorithm on the neural network returns a list showing the value of the network error after each of the _epochs_. More precisely, an epoch represents the set of operations performed by the training algorithm until all the examples of the training set have been considered.
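# For intuition, the Delta rule update that `train` applies can be sketched in plain numpy (a minimal re-implementation for the _and_ gate, not neurolab's actual code): whenever the output differs from the target, the weights and bias move by the learning rate times the error.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])               # the "and" gate

w = np.zeros(2)                          # weights, initially zero
b = 0.0                                  # bias, initially zero
lr = 0.1                                 # learning rate

for epoch in range(20):
    errors = 0
    for xi, ti in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0     # threshold activation
        delta = ti - out
        if delta != 0:                       # Delta rule update
            w = w + lr * delta * xi
            b = b + lr * delta
            errors += 1
    if errors == 0:                          # converged: a full epoch with no mistakes
        break

print(w, b, [1 if xi @ w + b > 0 else 0 for xi in X])
```

Since the _and_ gate is linearly separable, the perceptron convergence theorem guarantees this loop reaches zero errors.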
perceptron.train(input_values, expected_outcomes)
print(perceptron.layers[0].np)
print(perceptron.errorf(expected_outcomes, perceptron.sim(input_values)))
# # Feed forward perceptrons
# Package _neurolab_ implements a feed forward artificial neural network as an instance of the class `newff`. In order to do so, we need to provide the following parameters:
# * `minmax`: a list with the same length as the number of input neurons. The $i$-th element on this list is a list of two numbers, indicating the range of input values for the $i$-th neuron.
# * `size`: a list with the same length as the number of layers (except the input layer). The $i$-th element on this list is a number, indicating the number of neurons for the $i$-th layer.
# * `transf`: a list with the same length as the number of layers (except the input layer). The $i$-th element on this list is the activation function for the neurons of the $i$-th layer (default value is the [hyperbolic tangent](https://en.wikipedia.org/wiki/Hyperbolic_functions)).
# Next, let us create a neural network with two inputs ranging over $[0, 1]$, one hidden layer having two neurons and an output layer with only one neuron. All neurons should have the sigmoid function as activation function (you may look for further available activation functions at https://pythonhosted.org/neurolab/lib.html#module-neurolab.trans).
# +
from neurolab import trans
sigmoid_act_fun = trans.LogSig()
my_net = net.newff(minmax=[[0, 1], [0, 1]], size=[2, 1], transf=[sigmoid_act_fun]*2)
# -
# The instance that we just created has the following attributes:
# * `inp_minmax`: range of input values.
# * `co`: number of output neurons.
# * `trainf`: training function (default value is [Broyden–Fletcher–Goldfarb–Shanno algorithm](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm)).
# * `errorf`: error function (default value is half of SSE, _sum of squared errors_)
#
# The layers of the neural network (input layer excluded) are stored in a list associated with the attribute `layers`. Each layer is an instance of the class `Layer` and has the following attributes:
# * `ci`: number of inputs.
# * `cn`: number of neurons on it.
# * `co`: number of outputs.
# * `np`: dictionary with an element `'b'` that stores an array with the neurons' biases (terms $a_0 w_0$) and an element `'w'` that stores an array with the weights associated with the incoming connections arriving on each neuron. Default values for the biases and the weights are calculated following the [Nguyen-Widrow initialization algorithm](https://web.stanford.edu/class/ee373b/nninitialization.pdf).
print(my_net.inp_minmax)
print(my_net.co)
print(my_net.trainf)
print(my_net.errorf)
hidden_layer = my_net.layers[0]
print(hidden_layer.ci)
print(hidden_layer.cn)
print(hidden_layer.co)
print(hidden_layer.np)
output_layer = my_net.layers[1]
print(output_layer.ci)
print(output_layer.cn)
print(output_layer.co)
print(output_layer.np)
# It is possible to modify the initialization of the biases and weights, you may find available initialization options at https://pythonhosted.org/neurolab/lib.html#module-neurolab.init.<br>
# Let us for example set all of them to zero, using the following instructions:
# +
from neurolab import init
for l in my_net.layers:
l.initf = init.init_zeros
my_net.init()
print(hidden_layer.np)
print(output_layer.np)
# -
# It is also possible to modify the training algorithm, you may find available implemented options at https://pythonhosted.org/neurolab/lib.html#module-neurolab.train.<br>
# Let us for example switch to the _gradient descent backpropagation_, using the following instructions:
# +
from neurolab import train
my_net.trainf = train.train_gd
# -
# Finally, we can also modify the error function to be used when training, you may find available options at https://pythonhosted.org/neurolab/lib.html#module-neurolab.error.<br>
# Let us for example choose the _mean squared error_, using the following instructions:
# +
from neurolab import error
my_net.errorf = error.MSE()
# -
# Next, let us train our neural network so that it models the behaviour of the _xor_ logic gate.
#
# First, we need to split our training set into two components: on one hand an array or a list of lists with the input data corresponding to each example, *xor_in* , and on the other hand an array or list of lists with the correct expected ouput for each example, *xor_out* (remember that this time the training set is **not** linearly separable).
xor_in = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor_out = numpy.array([[0], [1], [1], [0]])
# Let us measure which is the error associated to the initial neural network before the training starts:
print(my_net.sim(xor_in))
print(my_net.errorf(xor_out, my_net.sim(xor_in)))
# Let us now proceed to run the training process on the neural network. The functions involved in the training work over the following arguments:
# * `lr`: _learning rate_, default value 0.01.
# * `epochs`: maximum number of epochs, default value 500.
# * `show`: number of epochs that should be executed between two messages in the output log, default value 100.
# * `goal`: maximum error accepted (halting criterion), default value 0.01.
my_net.train(xor_in, xor_out, lr=0.1, epochs=50, show=10, goal=0.001)
my_net.sim(xor_in)
# Let us now try a different setting. If we reset the neural network and we choose random numbers as initial values for the weights, we obtain the following:
numpy.random.seed(3287426346) # we set this init seed only for class, so that we always get
# the same random numbers and we can compare
my_net.reset()
for l in my_net.layers:
l.initf = init.InitRand([-1, 1], 'bw') # 'b' means biases will be modified,
# and 'w' the weights
my_net.init()
my_net.train(xor_in, xor_out, lr=0.1, epochs=10000, show=1000, goal=0.001)
my_net.sim(xor_in)
# # _Iris_ dataset
# _Iris_ is a classic multivariant dataset that has been exhaustively studied and has become a standard reference when analysing the behaviour of different machine learning algorithms.
#
# _Iris_ gathers four measurements (length and width of sepal and petal) of 50 flowers of each one of the following three species of lilies: _Iris setosa_, _Iris virginica_ and _Iris versicolor_.
#
# Let us start by reading the data from the file `iris.csv` that has been provided together with the practice. It suffices to evaluate the following expressions:
# +
import pandas
iris = pandas.read_csv('iris.csv', header=None,
names=['Sepal length', 'sepal width',
'petal length', 'petal width',
'Species'])
iris.head(10) # Display ten first examples
# -
# Next, let us use a numerical encoding of the species instead.<br>
# Then, we should distribute the examples into two groups: training and test, and split each group into two components: input and expected output (goal).
# +
#this piece of code might cause an error if wrong version of sklearn
#from sklearn import preprocessing
#from sklearn import model_selection
#iris_training, iris_test = model_selection.train_test_split(
# iris, test_size=.33, random_state=2346523,
# stratify=iris['Species'])
#ohe = preprocessing.OneHotEncoder(sparse = False)
#input_training = iris_training.iloc[:, :4]
#goal_training = ohe.fit_transform(iris_training['Species'].values.reshape(-1, 1))
#input_test = iris_test.iloc[:, :4]
#goal_test = ohe.transform(iris_test['Species'].values.reshape(-1,1))
#################
#try this instead if the previous does not work
import pandas
from sklearn import preprocessing
from sklearn import model_selection
iris2 = pandas.read_csv('iris_enc.csv', header=None,
names=['Sepal length', 'sepal width',
'petal length', 'petal width',
'Species'])
#iris2.head(10) # Display ten first examples
iris_training, iris_test = model_selection.train_test_split(
iris2, test_size=.33, random_state=2346523,
stratify=iris['Species'])
ohe = preprocessing.OneHotEncoder(sparse = False)
input_training = iris_training.iloc[:, :4]
goal_training = ohe.fit_transform(iris_training['Species'].values.reshape(-1, 1))
goal_training[:10] # this displays the 10 first expected output vectors (goal)
# associated with the training set examples
# -
input_test = iris_test.iloc[:, :4]
goal_test = ohe.transform(iris_test['Species'].values.reshape(-1,1))
print(input_training.head(10))
print(goal_training[:10])
print(input_test.head(10))
print(goal_test[0:10])
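# Note: the `sparse=False` flag used above was renamed to `sparse_output` in scikit-learn 1.2
# and removed in 1.4. A version-agnostic construction (a sketch, not part of the original
# practice) could be:

```python
from sklearn import preprocessing

def make_dense_ohe():
    # OneHotEncoder's dense-output flag was renamed from `sparse` to
    # `sparse_output` in scikit-learn 1.2, and `sparse` was removed in 1.4
    try:
        return preprocessing.OneHotEncoder(sparse_output=False)  # scikit-learn >= 1.2
    except TypeError:
        return preprocessing.OneHotEncoder(sparse=False)         # scikit-learn < 1.2
```

# An encoder built this way can replace the direct constructor call in the cell above.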
# __Exercise 1__: define a function **lily_species** that, given an array of three numbers as input, returns the position of the maximum value.
# +
import numpy

def lily_species1(a):
    # Linear scan for the position of the maximum value
    valM = max(a)
    for i in range(len(a)):
        if a[i] == valM:
            return i

def lily_species2(a):
    # Convert to a list and use list.index
    b = list(a)
    return b.index(max(b))

def lily_species3(a):
    # Idiomatic solution: numpy.argmax
    return numpy.argmax(a)

print(lily_species1(numpy.array([2, 5, 0])))
print(lily_species2(numpy.array([2, 5, 0])))
print(lily_species3(numpy.array([2, 5, 0])))
# -
# __Exercise 2__: Create a feed forward neural network having the following features:
# 1. Has four input neurons, one for each attribute of the iris dataset.
# 2. Has three output neurons, one for each species.
# 3. Has one hidden layer with two neurons.
# 4. All neurons of all layers use the sigmoid as activation function.
# 5. The initial biases and weights are all equal to zero.
# 6. Training method is gradient descent backpropagation.
# 7. The error function is the mean squared error.
#
# Once you have created it, train the network over the sets `input_training` and `goal_training`.
# +
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx2 = net.newff(minmax=[[4.0, 8.5], [1.5, 5.0], [0.5, 7.5], [0.0, 3.0]], size=[2, 3],
                   transf=[sigmoid_act_fun]*2)
for l in netEx2.layers:
    l.initf = init.init_zeros
netEx2.init()
netEx2.trainf = train.train_gd
netEx2.errorf = error.MSE()
# -
# __Exercise 3__: Calculate the performance of the network that was trained on the previous exercise, using to this aim the sets `input_test` and `goal_test`. That is, calculate which fraction of the test set is getting the correct classification predicted by the network.<br>
# __Hint:__ In order to translate the output of the network and obtain which is the species predicted, use the function from exercise 1.
# +
netEx2.init()
netEx2.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx2.sim(input_test)
res = 0
for i in range(len(flist)):
    if lily_species1(flist[i]) == lily_species1(goal_test[i]):
        res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
# -
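# The per-example counting loop above can also be written as one vectorised comparison.
# A small sketch with `numpy.argmax` (the helper name is ours, not part of the practice):

```python
import numpy

def classification_accuracy(outputs, goals):
    # Fraction of rows where the predicted class (position of the maximum
    # network output) matches the expected class in the one-hot goal vector
    predicted = numpy.argmax(numpy.asarray(outputs), axis=1)
    expected = numpy.argmax(numpy.asarray(goals), axis=1)
    return float(numpy.mean(predicted == expected))

# Two correct predictions out of three
outputs = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1], [0.2, 0.3, 0.5]]
goals = [[0, 1, 0], [1, 0, 0], [0, 1, 0]]
print(classification_accuracy(outputs, goals) * 100, "%")
```

# The same function works directly on the network outputs and the one-hot test goals.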
# __Exercise 4__: try to create different variants of the network from exercise 2, by modifying the number of hidden layers and/or the amount of neurons per layer, in such a way that the performance over the test set is improved.
# +
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx4 = net.newff(minmax=[[4.0, 8.5], [1.5, 5.0], [0.5, 7.5], [0.0, 3.0]], size=[3, 3],
                   transf=[sigmoid_act_fun]*2)
for l in netEx4.layers:
    l.initf = init.init_zeros
netEx4.init()
netEx4.trainf = train.train_gd
netEx4.errorf = error.MSE()
netEx4.init()
netEx4.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx4.sim(input_test)
res = 0
for i in range(len(flist)):
    if lily_species1(flist[i]) == lily_species1(goal_test[i]):
        res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
# +
#################################################################
# +
# Importing the dataset
# +
import pandas
from sklearn import preprocessing
from sklearn import model_selection
transfusion = pandas.read_csv('transfusion.csv', header=None,
                              names=['Recency (months)', 'Frequency (times)',
                                     'Monetary (c.c. blood)', 'Time (months)',
                                     'Whether he/she donated blood in March 2007'])
transfusion.head(10)  # display the first ten examples
# +
transfusion_training, transfusion_test = model_selection.train_test_split(
    transfusion, test_size=.33, random_state=2346523,
    stratify=transfusion['Whether he/she donated blood in March 2007'])
ohe = preprocessing.OneHotEncoder(sparse=False)  # use sparse_output=False on scikit-learn >= 1.2
input_training = transfusion_training.iloc[:, :4]
goal_training = ohe.fit_transform(transfusion_training['Whether he/she donated blood in March 2007'].values.reshape(-1, 1))
goal_training[:10]
# -
input_test = transfusion_test.iloc[:, :4]
goal_test = ohe.transform(transfusion_test['Whether he/she donated blood in March 2007'].values.reshape(-1,1))
# +
# Exercise 1
# +
import numpy

def transfusions(a):
    # Position of the maximum value (see exercise 1 of the iris part)
    return numpy.argmax(a)

print(transfusions(numpy.array([2, 5, 0])))
# +
# Exercise 2
# +
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx2 = net.newff(minmax=[[0.0, 75.5], [1.0, 50.0], [250.0, 12500.5], [2.0, 98.5]], size=[2, 2],
                   transf=[sigmoid_act_fun]*2)
for l in netEx2.layers:
    l.initf = init.init_zeros
netEx2.init()
netEx2.trainf = train.train_gd
netEx2.errorf = error.MSE()
# +
# Exercise 3
# +
netEx2.init()
netEx2.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx2.sim(input_test)
res = 0
for i in range(len(flist)):
    if transfusions(flist[i]) == transfusions(goal_test[i]):
        res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
# +
# Exercise 4
# +
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx4 = net.newff(minmax=[[0.0, 75.5], [1.0, 50.0], [250.0, 12500.5], [2.0, 98.5]], size=[300,2],
transf=[sigmoid_act_fun]*2)
for l in netEx4.layers:
    l.initf = init.init_zeros
netEx4.init()
netEx4.trainf = train.train_gd
netEx4.errorf = error.MSE()
netEx4.init()
netEx4.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx4.sim(input_test)
res = 0
for i in range(len(flist)):
    if transfusions(flist[i]) == transfusions(goal_test[i]):
        res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
| Practica 3/Practice_03_ANN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
#
# ### Egeria Hands-On Lab
# # Welcome to the Configuring Egeria Servers Lab
# ## Introduction
#
# Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.
#
# In this hands-on lab you will learn how to configure the metadata servers used by [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).
# ## The scenario
#
# <img src="https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png" style="float:left">
#
# Coco Pharmaceuticals is going through a major business transformation that requires them to drastically reduce their cycle times, collaborate laterally across the different parts of the business and react quickly to the changing needs of their customers. (See [this link](https://opengovernance.odpi.org/coco-pharmaceuticals/) for the background to this transformation).
#
# Part of the changes needed to the IT systems that support the business is the roll out of a distributed open metadata and governance capability that is provided by Egeria.
#
# [Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at Coco Pharmaceuticals.
#
# In this hands-on lab Gary is configuring the servers that support this open ecosystem. These servers are collectively called Open Metadata and Governance (OMAG) Servers.
#
# Gary's userId is `garygeeke`.
# +
import requests
adminUserId = "garygeeke"
# -
# He needs to define the OMAG servers for Coco Pharmaceuticals.
organizationName = "Coco Pharmaceuticals"
# ## Open Metadata and Governance (OMAG) management landscape
#
# At the heart of an open metadata and governance landscape are the servers that store and exchange metadata in a peer-to-peer exchange called the
# [open metadata repository cohort](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/cohort-member.html).
# These servers are collectively called **cohort members**. There are three types of cohort member that Gary needs to consider:
#
# * A [Metadata Server](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/metadata-server.html) that uses
# a native Egeria repository to store open metadata. There should be at least one of these servers in a cohort. It is used either to support
# a community of users that are using the Egeria functions directly or to fill in any gaps in the metadata support provided by the
# third party tools that are connected to the cohort.
#
# * A [Metadata Access Point](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/metadata-access-point.html) that
# has no metadata repository of its own and uses federated queries to retrieve and store metadata in the other repositories connected to the cohort.
#
# * A [Repository Proxy](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/repository-proxy.html) that connects
# in a third party metadata server.
#
# Gary has decided to deploy a separate cohort member server for each part of the organization that owns
# [assets](https://egeria.odpi.org/open-metadata-implementation/access-services/docs/concepts/assets/).
# You can think of each of these servers as supporting a community of users within Coco Pharmaceuticals. The servers are as follows:
#
# * cocoMDS1 - Data Lake Operations - a **metadata server** used to manage the data in the data lake.
# * cocoMDS2 - Governance - a **metadata server** used by all of the governance teams to operate the governance programs.
# * cocoMDS3 - Research - a **metadata server** used by the research teams who are developing new treatments.
# * cocoMDS4 - Data Lake Users - a **metadata access point** used by general business users and the executive team to access data
# from the data lake.
# * cocoMDS5 - Business Systems - a **repository proxy** used to connect to the existing ETL tool that manages data movement amongst the
# business systems. It has a metadata record of the operational business systems such as procurement, sales, human resources and
# finance and the movement of data between them. This tool is also loading data from the business systems into the data lake. Its metadata
# is critical for providing lineage for the data used to run the business.
# * cocoMDS6 - Manufacturing - a **metadata server** used by the supplies warehouse, manufacturing and distribution teams.
# * cocoMDSx - Development - a **metadata server** used by the software development teams building new IT capability.
# * cocoEDGEi - Manufacturing sensors edge node servers (many of them) - these **metadata servers** catalog the collected sensor data.
#
# In addition, Coco Pharmaceuticals needs additional servers to support Egeria's user interface and automated metadata processing:
#
# * cocoView1 - a [View Server](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/view-server.html)
# that runs the services for the user interface.
# * exchangeDL01 - an [Integration Daemon](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/integration-daemon.html)
# server that supports the automatic exchange of metadata with third party technologies.
# * governDL01 - an [Engine Host](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/engine-host.html)
# server that runs governance functions that monitor, validate, correct and enrich metadata for use by all of the technologies in the connected
# open metadata ecosystem.
#
# These servers will each be configured in later parts of this hands-on lab, but first there are decisions to be made about the platform that the servers will run on and how they will be connected together.
# ### Open Metadata and Governance (OMAG) Server Platforms
#
# Coco Pharmaceuticals' servers must be hosted on at least one OMAG Server Platform.
# This is a single executable (application) that can be started from the command line, from a script,
# or as part of a pre-built container environment such as Kubernetes.
#
# If you are running this notebook as part of an Egeria hands on lab then the server platforms you need are already started. Run the following command to check that the platforms are running.
#
# %run common/environment-check.ipynb
# ----
# If one of the platforms is not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). Once the platforms are running you are ready to proceed.
#
# ----
# Most of the servers support a fairly stable environment and can share an OMAG Server Platform because the workload they are supporting is predictable.
# The data lake however requires a lot of active governance and is evolving rapidly.
# To isolate this churn, Gary chooses to put all of the metadata and governance servers for the data lake on to their own platform.
# The development team requested that their infrastructure is completely separate from the operational systems,
# so they are given their own server platform.
# Finally each of the edge servers will run their own OMAG Server Platform to support their own metadata server.
#
# Figure 1 shows which servers will sit in each platform. The cohort members are shown in white,
# governance servers in orange and the view server (that supports the UI) is in green.
#
# 
# > **Figure 1:** Coco Pharmaceuticals' OMAG Server Platforms
#
#
# The sensor edge node servers used to monitor the warehouse operation and manufacturing process each have their own platform and are not yet included in this notebook.
# ### Open Metadata Repository Cohorts
#
# A metadata server, metadata access point and repository proxy can become a member of none, one or many cohorts.
# Once a server has joined a cohort it can exchange metadata with the other members of that cohort.
# So the cohorts define scopes of sharing.
#
# Gary decides to begin with three open metadata repository cohorts:
#
# * **cocoCohort** - The production cohort contains all of the servers that are used to run, coordinate and govern the business.
# * **devCohort** - The development cohort where the development teams are building and testing new capability. Much of their metadata describes the software components under construction and the governance of the software development lifecycle.
# * **iotCohort** - The IoT cohort used to manage the sensors and robots in the manufacturing systems. The metadata produced by the sensors and robots is only of interest to the manufacturing and governance teams.
#
# Figure 2 shows which servers belong to each cohort.
#
# 
# > **Figure 2:** Membership of Coco Pharmaceuticals' cohorts
#
# Below are the names of the three cohorts.
cocoCohort = "cocoCohort"
devCohort = "devCohort"
iotCohort = "iotCohort"
# At the heart of each cohort is an event topic. By default, Egeria uses [Apache Kafka](https://kafka.apache.org/) topics.
# The servers that join a cohort need to be configured with the host name and port where Kafka is running.
# The command below pulls the value from an environment variable called `eventBusURLroot`, with a default value of
# `localhost:9092`. It is used in all of the server configuration documents to connect the server to Kafka.
# +
import os

eventBusURLroot = os.environ.get('eventBusURLroot', 'localhost:9092')
jsonContentHeader = {'content-type':'application/json'}
eventBusBody = {
"producer": {
"bootstrap.servers": eventBusURLroot
},
"consumer":{
"bootstrap.servers": eventBusURLroot
}
}
# -
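# The producer/consumer body above can also be built with a small helper so the bootstrap
# address is defined in one place (a sketch; the helper name is ours, and the key name
# mirrors Kafka's `bootstrap.servers` property):

```python
import json

def make_event_bus_body(bootstrap_servers):
    # Both the producer and consumer sides of a cohort topic use the same
    # Kafka bootstrap address
    return {
        "producer": {"bootstrap.servers": bootstrap_servers},
        "consumer": {"bootstrap.servers": bootstrap_servers},
    }

print(json.dumps(make_event_bus_body("localhost:9092"), indent=2))
```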
# ## Access services
#
# [The Open Metadata Access Services (OMAS)](https://egeria.odpi.org/open-metadata-implementation/access-services/) provide domain-specific services for data tools, engines and platforms to integrate with open metadata. These are the different types of access service.
# +
getAccessServices(cocoMDS1PlatformName, cocoMDS1PlatformURL)
# -
# The table below shows which access services are needed by each server.
#
#
# | Access Service | cocoMDS1 | cocoMDS2 | cocoMDS3 | cocoMDS4 | cocoMDS5 | cocoMDS6 | cocoMDSx | cocoEDGE*i* |
# | :------------------- | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :---------: |
# | asset-catalog | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
# | asset-consumer | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
# | asset-owner | Yes | Yes | Yes | No | No | Yes | Yes | No |
# | community-profile | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
# | glossary-view | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
# | ------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | data-science | No | No | Yes | Yes | No | Yes | Yes | No |
# | ------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | subject-area | No | Yes | Yes | No | No | Yes | Yes | No |
# | ------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | governance-program | No | Yes | No | No | No | No | No | No |
# | data-privacy | No | Yes | No | No | No | No | No | No |
# | security-officer | No | Yes | No | No | No | No | No | No |
# | asset-lineage | No | Yes | No | No | No | No | No | No |
# | -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | discovery-engine | Yes | No | Yes | No | No | Yes | Yes | No |
# | governance-engine | Yes | Yes | Yes | No | No | Yes | Yes | No |
# | asset-manager | Yes | No | Yes | No | No | Yes | Yes | No |
# | -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | data-engine | Yes | No | No | No | No | Yes | No | Yes |
# | data-manager | Yes | No | No | No | No | Yes | No | Yes |
# | -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | it-infrastructure | No | Yes | No | No | No | Yes | Yes | No |
# | project-management | No | Yes | Yes | No | No | Yes | Yes | No |
# | -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# | software-developer | No | No | No | No | No | No | Yes | No |
# | devops | No | No | No | No | No | No | Yes | No |
# | digital-architecture | Yes | Yes | No | No | No | No | Yes | No |
# | design-model | No | No | No | No | No | No | Yes | No |
# | -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
# ## Egeria Server Configuration Overview
#
# Open metadata servers are configured using REST API calls to an OMAG Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.
#
# As each configuration call is made, the OMAG Server Platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.
#
# The configuration document will then be deployed with the OMAG Server Platform that is to host the server. When a request is made to this OMAG Server Platform to start the server, it reads the configuration document and initializes the server with the appropriate services.
#
# ## Configuration Set Up
#
# A server can be configured by any OMAG Server Platform - it does not have to be the same platform where the server will run. For this hands on lab we will use the development team's OMAG Server Platform to create the servers' configuration documents and then deploy them to the platforms where they will run.
adminPlatformURL = devPlatformURL
# The URLs for the configuration REST APIs have a common structure and begin with the following root:
adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/'
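# As an illustration of this URL structure only, a tiny helper can assemble the endpoint for
# a configuration command. The `configure*` helpers actually used below are defined by the
# environment-check notebook run earlier; the command name in the example is hypothetical.

```python
def build_admin_command_url(platform_url, admin_user_id, server_name, command):
    # Assemble the admin-services URL for one configuration command; a JSON body
    # (if the command needs one) would then be POSTed to this URL with requests
    return (platform_url + '/open-metadata/admin-services/users/'
            + admin_user_id + '/servers/' + server_name + '/' + command)

print(build_admin_command_url('https://localhost:9443', 'garygeeke',
                              'cocoMDS1', 'server-url-root'))
```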
# Many of Coco Pharmaceuticals' metadata servers need a local repository to store metadata about the data and processing occurring in the data lake.
#
# Egeria includes two types of repositories natively. One is an **in-memory repository** that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the **local graph repository**.
#
# The choice of local repository is made by specifying the local repository mode. The variables below show the two options. The `metadataRepositoryType` identifies which one is going to be used in the configuration.
# +
inMemoryRepositoryOption = "in-memory-repository"
graphRepositoryOption = "local-graph-repository"
# Pick up which repository type to use from the environment if set, otherwise default to in-memory
metadataRepositoryType = os.environ.get('repositoryType', inMemoryRepositoryOption)
# -
# Egeria supports instance based security. These checks can be customized through an
# [Open Metadata Security Connector](https://egeria.odpi.org/open-metadata-implementation/common-services/metadata-security/).
# Coco Pharmaceuticals have written their own connector to support the specific rules of their industry.
# The Connection definition below tells a server how to load this connector. It needs to be included in each server's configuration document.
serverSecurityConnectionBody = {
"class": "Connection",
"connectorType": {
"class": "ConnectorType",
"connectorProviderClassName": "org.odpi.openmetadata.metadatasecurity.samples.CocoPharmaServerSecurityProvider"
}
}
# Finally, to ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The variable below defines the value that will be added to the configuration document for each server.
maxPageSize = '600'
# ## Configuring cocoMDS1 - Data Lake Operations metadata server
#
# This section configures the `cocoMDS1` server. The server name is passed on every configuration call to identify which configuration document to update with the new configuration. The configuration document is created automatically on first use.
# +
mdrServerName = "cocoMDS1"
mdrServerUserId = "cocoMDS1npa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = dataLakePlatformURL
metadataCollectionName = "Data Lake Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType)
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)
print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")
accessServiceOptions = {
"SupportedZones": ["quarantine", "clinical-trials", "research", "data-lake", "trash-can"]
}
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions)
accessServiceOptions["DefaultZones"] = [ "quarantine" ]
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'digital-architecture', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions)
print("\nDone.")
# -
# ----
#
# ## Configuring cocoMDS2 - Governance metadata server
#
# This section configures the `cocoMDS2` server. This server is configured in a similar way to cocoMDS1 except that it has different Open Metadata Access Services (OMASs) enabled and it joins all of the cohorts.
#
# The code below covers the basic set up of the server properties, security, event bus and local repository.
# +
mdrServerName = "cocoMDS2"
mdrServerUserId = "cocoMDS2npa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = corePlatformURL
metadataCollectionName = "Governance Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType)
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
# Note: cohort membership is configured for all of the cohorts here
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, devCohort)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, iotCohort)
print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-program', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-privacy', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'digital-architecture', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'security-officer', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-lineage', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', {})
print("\nDone.")
# -
# ----
#
# ## Configuring cocoMDS3 - Research
#
# Server cocoMDS3 is used by the research teams who are developing new treatments.
# These teams are working with their own assets as well as assets coming from the data lake.
# So they have their own repository and a connection to the core cohort to access all of the
# operational metadata.
#
# This is one of the big changes brought by Coco Pharmaceuticals' business transformation.
# In their old business model, the research teams were completely separate from the operational
# part of the organization. Now they need to be an active member of the day to day running of
# the organization, supporting the development of personalized medicines and their use in
# treating patients.
# +
mdrServerName = "cocoMDS3"
mdrServerUserId = "cocoMDS3npa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = corePlatformURL
metadataCollectionName = "Research Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType)
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)
print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")
accessServiceOptions = {
"SupportedZones": ["personal-files", "clinical-trials", "research", "data-lake", "trash-can"]
}
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions)
accessServiceOptions["DefaultZones"] = [ "personal-files" ]
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions)
print("\nDone.")
# -
# ----
# ## Configuring cocoMDS4 - Data Lake Users
#
# Server cocoMDS4 is used by general business users and the executive team to access data from the data lake.
# It does not have a repository of its own. Instead it issues federated queries to the other repositories in the `cocoCohort`.
# +
mdrServerName = "cocoMDS4"
mdrServerUserId = "cocoMDS4npa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = dataLakePlatformURL
metadataCollectionName = "Data Lake Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
# Note: no metadata repository or collection configuration here
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)
print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")
accessServiceOptions = {
"SupportedZones": [ "data-lake" ]
}
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions)
print("\nDone.")
# -
# ----
#
# ## Configuring cocoMDS5 - Business Systems
#
# Server cocoMDS5 is a repository proxy to an ETL tool called `iisCore01`. This ETL tool is well established in Coco Pharmaceuticals and has a built-in metadata repository that contains information about their operational business systems such as procurement, sales, human resources and finance.
#
# This ETL tool has its own user interface and services so the OMASs are not enabled.
# +
mdrServerName = "cocoMDS5"
mdrServerUserId = "cocoMDS5npa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = corePlatformURL
metadataCollectionName = "Business Systems Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureRepositoryProxyDetails(adminPlatformURL, adminUserId, mdrServerName, "org.odpi.openmetadata.adapters.repositoryservices.readonly.repositoryconnector.ReadOnlyOMRSRepositoryConnectorProvider")
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)
# Note: no access service configuration here
# Still need to add startup Archive
print("\nDone.")
# -
# ----
#
# ## Configuring cocoMDS6 - Manufacturing
#
# Server cocoMDS6 is the repository server used by the warehouse, manufacturing and distribution teams. It supports the systems for this part of the organization and acts as a hub for monitoring the IoT environment.
# +
mdrServerName = "cocoMDS6"
mdrServerUserId = "cocoMDS6npa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = corePlatformURL
metadataCollectionName = "Manufacturing Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType)
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, iotCohort)
print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")
accessServiceOptions = {
"SupportedZones": [ "manufacturing" ],
"DefaultZones" : [ "manufacturing"]
}
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions)
print("\nDone.")
# -
# ----
#
# ## Configuring cocoMDSx - Development
#
# Server cocoMDSx is used by the development teams building new IT capability. It will hold all of the software component assets and servers used for development and DevOps. The development teams have their own OMAG Server Platform and cohort called `devCohort`.
# +
mdrServerName = "cocoMDSx"
mdrServerUserId = "cocoMDSxnpa"
mdrServerPassword = "<PASSWORD>"
mdrServerPlatform = devPlatformURL
metadataCollectionName = "Development Catalog"
print("Configuring " + mdrServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType)
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, devCohort)
print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")
accessServiceOptions = {
"SupportedZones": [ "sdlc" ],
"DefaultZones": [ "sdlc" ]
}
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'software-developer', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'devops', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'digital-architecture', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'design-model', accessServiceOptions)
print("\nDone.")
# -
# ----
# ## Configuring the exchangeDL01 Integration Daemon
#
# The **exchangeDL01** integration daemon server supports the automatic exchange of metadata with third party technologies.
# It runs [integration connectors](https://egeria.odpi.org/open-metadata-implementation/governance-servers/integration-daemon-services/docs/integration-connector.html)
# that each connect to a particular third party technology to exchange metadata.
#
# Egeria offers the following Open Metadata Integration Services (OMIS), or integration services for short. These integration services provide specialist services for an integration connector. The command below lists the different types of integration services.
# +
getIntegrationServices(exchangeDL01PlatformName, exchangeDL01PlatformURL)
# -
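# Under the covers, `getIntegrationServices` is a thin wrapper around a platform-services
# REST call. A hedged sketch of the URL it is assumed to build (the exact path may differ
# between Egeria releases; the platform URL and user below are placeholders):

```python
def integration_services_url(platform_url, user_id):
    """Build the (assumed) platform-services URL that lists the
    registered integration services on an OMAG Server Platform."""
    return (platform_url
            + "/open-metadata/platform-services/users/" + user_id
            + "/server-platform/registered-services/integration-services")

# Hypothetical platform URL and user id:
url = integration_services_url("https://localhost:9443", "garygeeke")
print(url)
```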
# An integration connector depends on a single integration service.
# Gary plans to use two types of integration connector supplied by Egeria:
#
# The **DataFilesMonitorIntegrationConnector** maintains a DataFile asset for each file in the directory (or any subdirectory).
# When a new file is created, a new DataFile asset is created. If a file is modified, the lastModified property
# of the corresponding DataFile asset is updated. When a file is deleted, its corresponding DataFile asset is also deleted.
#
# The **DataFolderMonitorIntegrationConnector** maintains a DataFolder asset for the directory. The files and directories
# underneath it are assumed to be elements/records in the DataFolder asset and so each time there is a change to the
# files and directories under the monitored directory, it results in an update to the lastModified property
# of the corresponding DataFolder asset.
#
# They will be used to automatically catalog data files provided by the different partner hospitals and move them from the
# landing area to the data lake once the cataloguing is complete.
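# The difference between the two connector types can be illustrated with a small
# pure-Python sketch (no Egeria code involved): per-file assets that come and go,
# versus a single folder asset whose lastModified simply moves forward.

```python
# Sketch: how the two monitoring styles map file events to catalog updates.
def data_files_view(events):
    """One DataFile asset per path; deletes remove the asset."""
    assets = {}
    for action, path, ts in events:
        if action in ("create", "modify"):
            assets[path] = ts          # create the asset or refresh lastModified
        elif action == "delete":
            assets.pop(path, None)     # the asset disappears with the file
    return assets

def data_folder_view(events):
    """A single DataFolder asset; any event bumps its lastModified."""
    last_modified = None
    for _action, _path, ts in events:
        last_modified = ts
    return last_modified

events = [("create", "week1.csv", 1), ("modify", "week1.csv", 2),
          ("create", "week2.csv", 3), ("delete", "week1.csv", 4)]
print(data_files_view(events))   # {'week2.csv': 3}
print(data_folder_view(events))  # 4
```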
#
# Figure 3 shows the integration daemon with its two connectors.
# It uses cocoMDS1 to store and retrieve metadata, since that is where the assets for the data lake are catalogued.
#
# 
# > **Figure 3:** exchangeDL01 with its partner metadata server
#
#
# ### Configuring the server
#
# The commands below configure the integration daemon with the Files Integrator OMIS and the two connectors.
# +
daemonServerName = "exchangeDL01"
daemonServerPlatform = dataLakePlatformURL
daemonServerUserId = "exchangeDL01npa"
daemonServerPassword = "<PASSWORD>"
mdrServerName = "cocoMDS1"
mdrServerPlatform = dataLakePlatformURL
OakDeneConnectorName = "OakDeneLandingAreaFilesMonitor"
OakDeneConnectorUserId = "onboardDL01npa"
OakDeneConnectorSourceName = "HospitalLandingArea"
OakDeneConnectorFolder = fileSystemRoot + '/landing-area/hospitals/oak-dene/clinical-trials/drop-foot'
OakDeneConnectorConnection = {
"class" : "Connection",
"connectorType" :
{
"class" : "ConnectorType",
"connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider"
},
"endpoint" :
{
"class" : "Endpoint",
"address" : OakDeneConnectorFolder
}
}
OldMarketConnectorName = "OldMarketLandingAreaFilesMonitor"
OldMarketConnectorUserId = "onboardDL01npa"
OldMarketConnectorSourceName = "HospitalLandingArea"
OldMarketConnectorFolder = fileSystemRoot + '/landing-area/hospitals/old-market/clinical-trials/drop-foot'
OldMarketConnectorConnection = {
"class" : "Connection",
"connectorType" :
{
"class" : "ConnectorType",
"connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider"
},
"endpoint" :
{
"class" : "Endpoint",
"address" : OldMarketConnectorFolder
}
}
folderConnectorName = "DropFootClinicalTrialResultsFolderMonitor"
folderConnectorUserId = "monitorDL01npa"
folderConnectorSourceName = "DropFootClinicalTrialResults"
folderConnectorFolder = fileSystemRoot + '/data-lake/research/clinical-trials/drop-foot/weekly-measurements'
folderConnectorConnection = {
"class" : "Connection",
"connectorType" :
{
"class" : "ConnectorType",
"connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFolderMonitorIntegrationProvider"
},
"endpoint" :
{
"class" : "Endpoint",
"address" : folderConnectorFolder
}
}
print("Configuring " + daemonServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, daemonServerName, daemonServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, daemonServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, daemonServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, daemonServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, daemonServerName, daemonServerUserId)
configurePassword(adminPlatformURL, adminUserId, daemonServerName, daemonServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, daemonServerName, serverSecurityConnectionBody)
configureDefaultAuditLog(adminPlatformURL, adminUserId, daemonServerName)
print("\nConfiguring " + daemonServerName + " integration connectors ...")
connectorConfigs = [
{
"class" : "IntegrationConnectorConfig",
"connectorName" : OakDeneConnectorName,
"connectorUserId" : OakDeneConnectorUserId,
"connection" : OakDeneConnectorConnection,
"metadataSourceQualifiedName" : OakDeneConnectorSourceName,
"refreshTimeInterval" : 10,
"usesBlockingCalls" : "false"
},
{
"class" : "IntegrationConnectorConfig",
"connectorName" : OldMarketConnectorName,
"connectorUserId" : OldMarketConnectorUserId,
"connection" : OldMarketConnectorConnection,
"metadataSourceQualifiedName" : OldMarketConnectorSourceName,
"refreshTimeInterval" : 10,
"usesBlockingCalls" : "false"
},
{
"class" : "IntegrationConnectorConfig",
"connectorName" : folderConnectorName,
"connectorUserId" : folderConnectorUserId,
"connection" : folderConnectorConnection,
"metadataSourceQualifiedName" : folderConnectorSourceName,
"refreshTimeInterval" : 10,
"usesBlockingCalls" : "false"
}]
configureIntegrationService(adminPlatformURL, adminUserId, daemonServerName, mdrServerName, mdrServerPlatform, "files-integrator", {}, connectorConfigs)
print ("\nDone.")
# -
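# The three connection bodies above differ only in their folder path and connector
# provider class. A helper like the one below (hypothetical — not one of the Egeria
# client functions used in this notebook) could build them:

```python
def file_monitor_connection(folder, provider_class):
    """Build an OCF Connection body for a files/folder monitor connector.
    Only the endpoint address and connector provider class vary."""
    return {
        "class": "Connection",
        "connectorType": {
            "class": "ConnectorType",
            "connectorProviderClassName": provider_class
        },
        "endpoint": {
            "class": "Endpoint",
            "address": folder
        }
    }

# Hypothetical usage mirroring the Oak Dene connector above:
conn = file_monitor_connection(
    "/landing-area/hospitals/oak-dene/clinical-trials/drop-foot",
    "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider")
```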
# ----
# ## Configuring governDL01 Governance Engine Hosting Server
#
# The Engine Host OMAG server is a special kind of governance server that hosts one or more governance engines.
#
# A governance engine is a set of specialized services that perform specific functions to manage the digital landscape and the metadata that describes it.
#
# ### Automated metadata discovery
#
# One example of a type of governance engine is a discovery engine. The discovery engine runs discovery services. Discovery services analyze the content of a real-world artifact or resource. For example, a discovery service may open up a data set and assess the quality of the data inside.
#
# The result of a discovery service's analysis is stored in a metadata server as a discovery analysis report that is chained off of the asset's definition. This report can be retrieved either through the engine host server's API or through the metadata server's APIs, specifically the Discovery Engine OMAS and the Asset Owner OMAS.
#
# The interfaces used by discovery services are defined in the [Open Discovery Framework (ODF)](https://egeria.odpi.org/open-metadata-implementation/frameworks/open-discovery-framework/). This framework enables new implementations of discovery services to be deployed to the discovery engines.
#
#
#
# ### Automated governance
#
# Another type of governance engine is a governance action engine. The governance action engine runs governance action services.
# Governance action services monitor the asset metadata and verify that it is set up correctly, determine how to fix anomalies, errors and omissions,
# make the necessary changes, and provision real-world artifacts and resources based on the resulting metadata.
#
#
#
# ### Understanding the engine services
#
# Coco Pharmaceuticals runs one engine host server for its data lake. It is called `governDL01` and it runs on the data lake platform.
# Within the engine host server there are engine services. Each engine service supports a specific type of governance engine.
# The command below shows you the different types of engine services.
#
# +
getEngineServices(governDL01PlatformName, governDL01PlatformURL)
# -
# ----
# The governDL01 server is the Engine Host server that runs governance functions that monitor, validate, correct and enrich metadata for use by all of the technologies in the connected open metadata ecosystem.
#
# The **Asset Analysis** Open Metadata Engine Service (OMES) is responsible for running discovery engines from the
# [Open Discovery Framework (ODF)](https://egeria.odpi.org/open-metadata-implementation/frameworks/open-discovery-framework/docs/).
# Coco Pharmaceuticals has two discovery engines:
#
# * **AssetDiscovery** - extracts metadata about different types of assets on request.
# * **AssetQuality** - assesses the quality of the content of assets on request.
#
# The **Governance Action** Open Metadata Engine Service (OMES) is responsible for running governance action engines from the
# [Governance Action Framework (GAF)](https://egeria.odpi.org/open-metadata-implementation/frameworks/governance-action-framework/).
# Coco Pharmaceuticals has one governance action engine:
#
# * **AssetGovernance** - monitors for new assets in the landing areas, automatically curates them and provisions them in the data lake.
#
# Figure 4 shows the engine host server with its partner metadata servers.
# It uses cocoMDS1 to store and retrieve metadata, since that is where the assets for the data lake are catalogued.
#
# 
# > **Figure 4:** Metadata servers for governDL01
#
# ### Configuring the server
#
# The commands below configure the engine host server with the Asset Analysis OMES and Governance Action OMES.
# The definitions of the named governance engines and their services are retrieved from the `cocoMDS1` metadata server through its Governance Engine OMAS.
#
# +
engineServerName = "governDL01"
engineServerPlatform = dataLakePlatformURL
engineServerUserId = "governDL01npa"
engineServerPassword = "<PASSWORD>"
engineServerMDRName = "cocoMDS2"
engineServerMDRPlatform = corePlatformURL
mdrServerName = "cocoMDS1"
mdrServerPlatform = dataLakePlatformURL
print("Configuring " + engineServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, engineServerName, engineServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, engineServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, engineServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, engineServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, engineServerName, engineServerUserId)
configurePassword(adminPlatformURL, adminUserId, engineServerName, engineServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, engineServerName, serverSecurityConnectionBody)
configureDefaultAuditLog(adminPlatformURL, adminUserId, engineServerName)
print("\nConfiguring " + engineServerName + " engines ...")
configureEngineDefinitionServices(adminPlatformURL, adminUserId, engineServerName, engineServerMDRName, engineServerMDRPlatform)
discoveryEngines = [
{
"class" : "EngineConfig",
"engineQualifiedName" : "AssetDiscovery",
"engineUserId" : "findItDL01npa"
},
{
"class" : "EngineConfig",
"engineQualifiedName" : "AssetQuality",
"engineUserId" : "findItDL01npa"
}]
governanceActionEngines = [
{
"class" : "EngineConfig",
"engineQualifiedName" : "AssetGovernance",
"engineUserId" : "findItDL01npa"
}]
configureGovernanceEngineService(adminPlatformURL, adminUserId, engineServerName, mdrServerName, mdrServerPlatform, "asset-analysis", discoveryEngines)
configureGovernanceEngineService(adminPlatformURL, adminUserId, engineServerName, mdrServerName, mdrServerPlatform, "governance-action", governanceActionEngines)
print ("\nDone.")
# -
# ----
# # Configuring the View Server and View Services
# Egeria's UI allows Coco Pharmaceutical's employees to understand more
# about their metadata environment.
# This UI uses special services, called view services, that run in an Egeria View Server.
# +
getViewServices(cocoView1PlatformName, cocoView1PlatformURL)
# -
# This is an initial version of an example to configure the view services.
# Since this area is still in development the configuration is likely to change, and so all of the
# functions are in this section of the notebook rather than consolidated with our common functions.
#
# The new UI is deployed in the k8s environment.
#
# The tenant (`coco` in this case) must be explicitly provided in the URL, as must navigation to the login page.
# For example, if the UI is on port 18091, log in at https://localhost:18091/coco/login
#
# Further docs will be added in future releases. Please use http://slack.lfai.foundation to get further help.
# +
# Common functions
def configureGovernanceSolutionViewService(adminPlatformURL, adminUserId, viewServerName, viewService, remotePlatformURL,remoteServerName):
adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/'
print (" ... configuring the " + viewService + " Governance Solution View Service for this server...")
url = adminCommandURLRoot + viewServerName + '/view-services/' + viewService
jsonContentHeader = {'content-type':'application/json'}
viewBody = {
"class": "ViewServiceConfig",
"omagserverPlatformRootURL": remotePlatformURL,
"omagserverName" : remoteServerName
}
postAndPrintResult(url, json=viewBody, headers=jsonContentHeader)
def configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, configBody):
adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/'
print (" ... configuring the " + viewService + " Integration View Service for this server...")
url = adminCommandURLRoot + viewServerName + '/view-services/' + viewService
jsonContentHeader = {'content-type':'application/json'}
postAndPrintResult(url, json=configBody, headers=jsonContentHeader)
# A view server supports the presentation server UI (a Node.js-based app). Here we run it on the data lake platform.
viewServerName = "cocoView1"
viewServerUserId = "cocoView1npa"
viewServerPassword = "<PASSWORD>"
viewServerPlatform = dataLakePlatformURL
viewServerType = "View Server"
# Configuration is similar to most servers
print("Configuring " + viewServerName + "...")
configurePlatformURL(adminPlatformURL, adminUserId, viewServerName, viewServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, viewServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, viewServerName)
configureServerType(adminPlatformURL,adminUserId,viewServerName,viewServerType)
configureOwningOrganization(adminPlatformURL, adminUserId, viewServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, viewServerName, viewServerUserId)
configurePassword(adminPlatformURL, adminUserId, viewServerName, viewServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, viewServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, viewServerName, eventBusBody)
configureDefaultAuditLog(adminPlatformURL, adminUserId, viewServerName)
# The governance solution view services currently consist only of the glossary author service
print ("Configuring the Governance Solution View Services")
remotePlatformURL=corePlatformURL
remoteServerName="cocoMDS2"
viewService="glossary-author"
configureGovernanceSolutionViewService(adminPlatformURL, adminUserId, viewServerName, viewService, remotePlatformURL,remoteServerName)
print ("Configuring the Integration View Services")
# repository explorer integration view service
viewService="rex"
rexConfigBody = {
"class":"IntegrationViewServiceConfig",
"viewServiceAdminClass":"org.odpi.openmetadata.viewservices.rex.admin.RexViewAdmin",
"viewServiceFullName":"Repository Explorer",
"viewServiceOperationalStatus":"ENABLED",
"omagserverPlatformRootURL": "UNUSED",
"omagserverName" : "UNUSED",
"resourceEndpoints" : [
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "Core Platform",
"platformName" : "Core",
"platformRootURL" : corePlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "DataLake Platform",
"platformName" : "DataLake",
"platformRootURL" : dataLakePlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "Development Platform",
"platformName" : "Development",
"platformRootURL" : devPlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS1",
"description" : "Data Lake Operations",
"platformName" : "DataLake",
"serverName" : "cocoMDS1"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS2",
"description" : "Governance",
"platformName" : "Core",
"serverName" : "cocoMDS2"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS3",
"description" : "Research",
"platformName" : "Core",
"serverName" : "cocoMDS3"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS5",
"description" : "Business Systems",
"platformName" : "Core",
"serverName" : "cocoMDS5"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS6",
"description" : "Manufacturing",
"platformName" : "Core",
"serverName" : "cocoMDS6"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDSx",
"description" : "Development",
"platformName" : "Development",
"serverName" : "cocoMDSx"
},
]
}
configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, rexConfigBody)
# The type explorer (tex) integration view service is configured with the same resource endpoints
viewService="tex"
texConfigBody = {
"class":"IntegrationViewServiceConfig",
"viewServiceAdminClass":"org.odpi.openmetadata.viewservices.tex.admin.TexViewAdmin",
"viewServiceFullName":"Type Explorer",
"viewServiceOperationalStatus":"ENABLED",
"omagserverPlatformRootURL": "UNUSED",
"omagserverName" : "UNUSED",
"resourceEndpoints" : [
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "Core Platform",
"platformName" : "Core",
"platformRootURL" : corePlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "DataLake Platform",
"platformName" : "DataLake",
"platformRootURL" : dataLakePlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "Development Platform",
"platformName" : "Development",
"platformRootURL" : devPlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS1",
"description" : "Data Lake Operations",
"platformName" : "DataLake",
"serverName" : "cocoMDS1"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS2",
"description" : "Governance",
"platformName" : "Core",
"serverName" : "cocoMDS2"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS3",
"description" : "Research",
"platformName" : "Core",
"serverName" : "cocoMDS3"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS5",
"description" : "Business Systems",
"platformName" : "Core",
"serverName" : "cocoMDS5"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS6",
"description" : "Manufacturing",
"platformName" : "Core",
"serverName" : "cocoMDS6"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDSx",
"description" : "Development",
"platformName" : "Development",
"serverName" : "cocoMDSx"
},
]
}
configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, texConfigBody)
# Dino provides insight into the operational environment of Egeria. This config body allows Coco's platforms and servers to be accessed.
viewService="dino"
DinoConfigBody = {
"class":"IntegrationViewServiceConfig",
"viewServiceAdminClass":"org.odpi.openmetadata.viewservices.dino.admin.DinoViewAdmin",
"viewServiceFullName":"Dino",
"viewServiceOperationalStatus":"ENABLED",
"omagserverPlatformRootURL": "UNUSED",
"omagserverName" : "UNUSED",
"resourceEndpoints" : [
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "Core Platform",
"platformName" : "Core",
"platformRootURL" : corePlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "DataLake Platform",
"platformName" : "DataLake",
"platformRootURL" : dataLakePlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Platform",
"description" : "Development Platform",
"platformName" : "Development",
"platformRootURL" : devPlatformURL
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS1",
"description" : "Data Lake Operations",
"platformName" : "DataLake",
"serverName" : "cocoMDS1"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS2",
"description" : "Governance",
"platformName" : "Core",
"serverName" : "cocoMDS2"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS3",
"description" : "Research",
"platformName" : "Core",
"serverName" : "cocoMDS3"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS4",
"description" : "Data Lake Users",
"platformName" : "DataLake",
"serverName" : "cocoMDS4"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS5",
"description" : "Business Systems",
"platformName" : "Core",
"serverName" : "cocoMDS5"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDS6",
"description" : "Manufacturing",
"platformName" : "Core",
"serverName" : "cocoMDS6"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoMDSx",
"description" : "Development",
"platformName" : "Development",
"serverName" : "cocoMDSx"
},
{
"class" : "ResourceEndpointConfig",
"resourceCategory" : "Server",
"serverInstanceName" : "cocoView1",
"description" : "View Server",
"platformName" : "DataLake",
"serverName" : "cocoView1"
},
]
}
configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, DinoConfigBody)
print ("\nDone.")
# -
# # Deploying server configuration
# The commands that have been issued so far have created a configuration document for each server.
# These configuration documents are currently local to the Development OMAG Server Platform where the
# administration commands were issued (figure 3).
#
# 
# > **Figure 3:** Creating configuration documents using administration commands
#
# If servers are to be started on the other server platforms then their configuration documents
# need to be deployed (copied) to these platforms (figure 4).
#
# 
# > **Figure 4:** Deploying configuration documents
#
# However, before deploying the configuration documents, the receiving OMAG Server Platforms
# need to be running.
#
# The code below checks the Core and Data Lake OMAG Server Platforms are running.
# +
print("\nChecking OMAG Server Platform availability...")
checkServerPlatform("Data Lake Platform", dataLakePlatformURL)
checkServerPlatform("Core Platform", corePlatformURL)
checkServerPlatform("Dev Platform", devPlatformURL)
print ("\nDone.")
# -
# ----
# Make sure that each of the platforms is running.
#
# ----
# The commands below deploy the server configuration documents to the server platforms where the
# servers will run.
# +
print("\nDeploying server configuration documents to appropriate platforms...")
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS1", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS2", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS3", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS4", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS5", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS6", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDSx", devPlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "exchangeDL01", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "governDL01", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoView1", dataLakePlatformURL)
print("\nDone.")
# -
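# Under the covers, a helper like `deployServerToPlatform` issues a REST call against the
# admin platform's admin-services API. The sketch below shows how such a helper could be
# built with `requests`; both the URL path and the request-body class name are assumptions
# based on the usual Egeria admin-services pattern, so verify them against your release's docs.

```python
import requests

def build_deploy_url(admin_platform_url, admin_user_id, server_name):
    # Assumed admin-services path; check it against your Egeria release's API docs.
    return (f"{admin_platform_url}/open-metadata/admin-services/users/"
            f"{admin_user_id}/servers/{server_name}/configuration/deploy")

def deploy_server(admin_platform_url, admin_user_id, server_name, target_platform_url):
    """Copy a server's configuration document to the platform where it will run."""
    url = build_deploy_url(admin_platform_url, admin_user_id, server_name)
    # The destination platform root URL travels in the request body (assumed shape).
    body = {"class": "URLRequestBody", "urlRoot": target_platform_url}
    return requests.post(url, json=body, verify=False)
```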
# ----
| open-metadata-resources/open-metadata-labs/egeria-server-config.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 32-bit
# language: python
# name: python3
# ---
# # Traffic Flow Model
# ## Goal
# The goal of this model is to create a digital twin of vehicles moving through a stretch of road. I want to measure how long it takes a vehicle to travel from the beginning to the end of the road. I want to measure how other factors affect the speed of movement such as:
# - Driving behaviors
# - Road conditions
# - Number of onramps and offramps
# - Accidents
# - Lanes of traffic
# - Number of vehicles on the road
# - Types of vehicles on the road (emergency vehicles, state troopers, semis, construction vehicles)
# - Lane closures
# - Speed limit changes
#
# I believe that the flow of traffic is a very complex system. If it can be modeled, then policy decisions to improve traffic flow, or to assess proposed changes, can be made with better information.
#
# ## Model Abstract
# Imagine you could hover high in the sky and observe a section of road to better understand its traffic conditions. What if you could spot repeated patterns that, once rectified, would reduce congestion and the associated slowdowns?
#
# This model is going to focus on a single vehicle on a road of many different road conditions. The vehicle will begin at the starting point and the model will end when the vehicle reaches the end. Each timestep will be one movement of the focus vehicle along with all other vehicles on the road. The focus vehicle will have a desired speed, which will determine how far it makes it each time step, but this will be affected by the differing road conditions.
#
# ## Background
# There are many times when driving that I get stuck in traffic, and when I am finally able to get back to my desired speed, it isn't always clear what caused the slowdown. Sometimes it is obvious: there is an accident, a lane is closed for construction, or it is a heavy-traffic area during 'rush hour'. But other times traffic is slow and then, without explanation, it isn't. One would need to measure what is happening to rectify these recurring problems.
#
# ## Simple Use Case
# The first version of this model will measure the focus vehicle with no other vehicles on the road, in a single lane of traffic. The length of the road will start short and can be expanded once the basic model is working. Once the model represents a single vehicle, a second, slower vehicle will be added in front of the focus vehicle to verify that its impact can be measured.
#
# ## Model Rules
# - No two vehicles can be in the same space, though this rule may be relaxed later to model accidents.
# - A vehicle cannot travel through another vehicle; it must go around.
# - A vehicle behind another vehicle can go no faster than the front vehicle.
# - There will be a speed limit set for the road, or for sections of the road. The driver’s personality will dictate how they react to the speed limit.
# - All vehicles on the road desire to travel from their starting point to the end point unless there are extenuating circumstances. No vehicle will treat the road as a parking lot unless an accident or breakdown of their vehicle causes this action.
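# The rules above are enough to sketch Experiments 1 and 2 in code. The snippet below is a
# minimal illustration (all names and numbers are mine, not from an existing implementation):
# a focus vehicle crosses a one-mile, single-lane road in one-second timesteps, optionally
# behind a slower lead vehicle that it cannot pass.

```python
def simulate(road_length_miles=1.0, focus_mph=60.0, lead=None, dt_s=1.0):
    """Seconds for the focus vehicle to clear the road.

    `lead` is an optional (start_position_miles, speed_mph) slower vehicle;
    the follower closes the gap but never passes (per the model rules).
    """
    pos = 0.0
    lead_pos, lead_mph = lead if lead else (None, None)
    t = 0.0
    while pos < road_length_miles:
        step = focus_mph * dt_s / 3600.0                # miles covered this tick
        if lead_pos is not None and lead_pos < road_length_miles:
            lead_pos += lead_mph * dt_s / 3600.0        # lead vehicle moves first
            step = min(step, max(lead_pos - pos, 0.0))  # cannot overtake the lead
        pos += step
        t += dt_s
    return t
```

# With no other traffic the vehicle clears the mile in about 60 seconds; a 30 mph lead
# vehicle starting 0.1 miles ahead stretches that to roughly 108 seconds.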
#
# ## Assumptions
# N/A, all the variables of this model are known.
#
# ## Random Variables
# - The driver behavior will be randomly assigned.
# - The type of a new vehicle will be randomly determined.
#
# ## Agents
# Each vehicle will be its own agent with its own driving behavior for traveling through the road. Some agents will have behaviors that are specific to the section of road, such as an emergency vehicle trying to clear an accident, a state trooper trying to pull over dangerous drivers or a construction vehicle trying to enter or leave a construction site on the roadway.
#
# ## Initial Conditions
# - Number of Lanes: 1+ lanes for vehicles to travel on.
# - Number of onramps: 0+ lanes for vehicles to enter the roadway.
# - Number of offramps: 0+ lanes for vehicles to exit the roadway.
# - Number of existing vehicles: How many vehicles start at a location on the road.
# - Number of lanes obstructed: 0+ lanes that will be blocked. This could be due to an accident or construction.
# - Number of state troopers: 0+ police vehicles monitoring for dangerous behavior.
# - Miles or Kilometers: Determine if the model will use miles or kilometers.
# - Length of Road: How long is the road (in miles or kilometers).
#
# ## KPIs
# - How long did it take the focus vehicle to travel the road vs how long it would have taken with no road conditions at the speed limit.
# - What was the average travel speed of all vehicles compared to the speed limit.
#   - Also average speed by section of roadway. For example, was there more slowdown around an onramp compared to other sections of road?
# - Track the number of specific incidents that occurred during the simulation:
#   - Number of accidents
#   - Number of semis
#   - Number of construction vehicles
#   - Number of state patrol
#   - Number of drivers assigned a dangerous driving behavior
#
# ## Model Coding Considerations
# The biggest challenge I will face in programming this model is learning how to handle the spatial element of different agents on a plane. Once I learn how to do this, I think the rest of the logic will be easier to program.
#
# ## Model Elements
# TBD
#
# ## Model Flow Chart
# TBD
#
# ## Experiments
#
# |Experiment Number|Experiment Name|Length of Road|Number of Lanes|Number of Onramps|Number of Offramps|Num Additional Vehicles|Vehicle Types|Number of lanes obstructed|Num State Troopers|Description & Hypothesis|
# |---|---|---|---|---|---|---|---|---|---|---|
# |1|POC|1 Mile|1|0|0|0|N/A|0|0|Create a model that can track the location of a vehicle along an area going a set speed for a set distance.|
# |2|One Other Vehicle|1 Mile|1|0|0|1|Car|0|0|Create the interaction between another vehicle traveling at a slower speed.|
# |3|New Lane|1 Mile|2|0|0|1|Car|0|0|Add the ability for the focus car to determine that it wants to pass and move into the next lane.|
# |4|More Vehicles|1 Mile|2|0|0|4|Car|0|0|Create multiple interactions as the focus vehicle travels down the road.|
#
# ## Adding Complexity
# There are a lot of areas for adding complexity to this model. After the simple version is done, a single new driving behavior, road condition or vehicle type will be added like you are leveling up in a video game.
#
# Elements of complexity:
# - Driving behaviors
# - Road conditions
# - Number of onramps and offramps
# - Accidents
# - Lanes of traffic
# - Number of vehicles on the road
# - Types of vehicles on the road (emergency vehicles, state troopers, semis, construction vehicles)
# - Lane closures
# - Speed limit changes
#
#
| .ipynb_checkpoints/Traffic_Flow_Model-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + executionInfo={"elapsed": 8976, "status": "ok", "timestamp": 1617340917499, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="T6qHImS99p3j"
import os
import pandas as pd
from sklearn.model_selection import GroupKFold
# -
data_path = './data'
# + colab={"base_uri": "https://localhost:8080/", "height": 221} executionInfo={"elapsed": 1102, "status": "ok", "timestamp": 1617341015733, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="MIW27Mnb_hMS" outputId="28404f58-8e48-403a-b43f-b987f55b9304"
train = pd.read_csv(os.path.join(data_path,'train_set.csv')).sort_values(by=['user_id','checkin'])
test = pd.read_csv(os.path.join(data_path,'test_set.csv')).sort_values(by=['user_id','checkin'])
print(train.shape, test.shape)
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 289} executionInfo={"elapsed": 1791, "status": "ok", "timestamp": 1617341054349, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="dIY6E4DhCSZF" outputId="163244b3-cde1-4b42-e401-c92d63596edc"
train['istest'] = 0
test['istest'] = 1
raw = pd.concat([train,test], sort=False )
raw = raw.sort_values( ['user_id','checkin'], ascending=True )
raw.head()
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3636, "status": "ok", "timestamp": 1617341154183, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="ODTQb-viCfT_" outputId="414e4532-9250-4ff2-aa19-6980566e66c1"
raw['fold'] = 0
group_kfold = GroupKFold(n_splits=5)
for fold, (train_index, test_index) in enumerate(group_kfold.split(X=raw, y=raw, groups=raw['utrip_id'])):
    # Look up the 'fold' column's position instead of hard-coding index 10,
    # which silently breaks if the column order changes.
    raw.iloc[test_index, raw.columns.get_loc('fold')] = fold
raw.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 609} executionInfo={"elapsed": 6085, "status": "ok", "timestamp": 1617341277517, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="4g8QYhAdC3TG" outputId="b59d9c19-eb3d-49d3-fde5-4cecedab8e3a"
# This flag tells which rows must be part of the submission file.
raw['submission'] = 0
raw.loc[ (raw.city_id==0)&(raw.istest) ,'submission'] = 1
raw.loc[raw.submission==1].head()
# -
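# GroupKFold guarantees that all rows sharing a group key (here `utrip_id`) land in the same
# validation fold, which is what makes the split above leak-free. A self-contained check on
# toy data (column names are illustrative):

```python
import pandas as pd
from sklearn.model_selection import GroupKFold

toy = pd.DataFrame({
    'utrip_id': ['a', 'a', 'b', 'b', 'c', 'c', 'd', 'd', 'e', 'e'],
    'checkin':  range(10),
})

gkf = GroupKFold(n_splits=5)
fold_of_group = {}
for fold, (_, test_idx) in enumerate(gkf.split(toy, groups=toy['utrip_id'])):
    for g in toy.iloc[test_idx]['utrip_id'].unique():
        # setdefault + compare: a group split across two folds would fail here
        assert fold_of_group.setdefault(g, fold) == fold
# every trip appears in exactly one validation fold
```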
def add_features(data):
    # number of places visited in each trip
    aggs = data.groupby('utrip_id', as_index=False)['user_id'].count()
    aggs.columns = ['utrip_id', 'N']
    data = data.merge(aggs, on=['utrip_id'], how='inner')
    data['utrip_id_'], mp = data['utrip_id'].factorize()
    data['dcount'] = data.groupby(['utrip_id_']).cumcount()
    data['icount'] = data['N'] - data['dcount'] - 1
    return data
# + executionInfo={"elapsed": 980, "status": "ok", "timestamp": 1617341330378, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="31xdALs3DU0Y"
add_features(raw[raw.utrip_id.isin(['29_1','65_1'])])
# + executionInfo={"elapsed": 932, "status": "ok", "timestamp": 1617341348066, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13037694610922482904"}, "user_tz": -330} id="idCN<KEY>"
# + id="cmd1fi3QEfbC"
| cases/Booking.com/reco-session-booking-02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="IvHLIW5VrjFz" outputId="3e982fe4-ae18-4f5d-c51e-152d6f5532d4"
18+24
# + colab={"base_uri": "https://localhost:8080/"} id="iQzmO98Ct_RT" outputId="314145a8-8b6c-4b24-84f2-4501dd5fed6a"
25/5
# + colab={"base_uri": "https://localhost:8080/"} id="jWBNuH07uGir" outputId="1baad804-2e31-4ae8-8dfb-3057540e017e"
12-5
# + colab={"base_uri": "https://localhost:8080/"} id="YdXAUrVOuLPm" outputId="9c2ca86b-49eb-439e-d81f-b3afc9635f8f"
12*7
# + id="rQCScvRyuOTe"
x=10
# + colab={"base_uri": "https://localhost:8080/"} id="-5cKaHVyux5L" outputId="1ff8593f-c06a-4df9-b1ec-d7a241767952"
x
# + id="ZJWgeiIFu3_N"
x=24
# + colab={"base_uri": "https://localhost:8080/"} id="xQXSMZ_0vHUF" outputId="b257ac9f-1b3d-4344-9d5a-2280c913bf3a"
x
# + colab={"base_uri": "https://localhost:8080/"} id="5uidnJowvI9V" outputId="375ad5f1-2f95-4388-cd7c-902aeda02222"
x=10
y=20
x+y
# + colab={"base_uri": "https://localhost:8080/"} id="O4FJHNi6wM7z" outputId="f5d7f219-b925-4de7-a8af-5b53f2acc63d"
x-y
# + colab={"base_uri": "https://localhost:8080/"} id="bpy9LsV-xNRM" outputId="d54d34c6-bae1-4907-8c97-74efa625cbad"
x*y
# + colab={"base_uri": "https://localhost:8080/"} id="R7rxLQ83xRrk" outputId="2142f961-b879-4a09-824e-11094ebe00b2"
x/y
# + id="XWyjvYIJxV88"
x=10
y=20
z=30
# + colab={"base_uri": "https://localhost:8080/"} id="AIQ2SaLKyNls" outputId="3eb81d34-55d6-4cd1-fd8b-149481aabb95"
x*y-z
# + id="-ddrOjxfycvb"
p=10000
# + id="HFSt0lJKzws7"
n=5
# + id="DohEcGUFz1aD"
r=9
# + id="-BOj59Wrz6GD"
i=p*n*r/100
# + colab={"base_uri": "https://localhost:8080/"} id="xBpUSuMXz-wj" outputId="ad95c27c-4fda-44cb-abbd-1f42dc657755"
i
# + colab={"base_uri": "https://localhost:8080/"} id="RqIRYh0e11wp" outputId="6cd80763-2695-42f2-852b-9c922997382f"
p+i
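# The simple-interest calculation above (i = p*n*r/100) can be wrapped in a small reusable
# function, which avoids re-typing the formula when the inputs change:

```python
def simple_interest(principal, years, rate_percent):
    """Simple interest: principal * years * rate / 100."""
    return principal * years * rate_percent / 100

interest = simple_interest(10000, 5, 9)
total = 10000 + interest
# interest is 4500.0 and total is 14500.0, matching the cells above
```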
# + id="tbksC9OJ2R-R"
x="india"
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="9KlHoWYs9lkT" outputId="c33d82b3-bd72-4462-995a-9d0e94336060"
x
# + id="QH3x3rQ099tF"
x=10
y=10.5
Z="india"
# + colab={"base_uri": "https://localhost:8080/"} id="7ZUzxBjP-7d9" outputId="f1a090d4-07d8-4336-afce-bd611473fef3"
type(x)
# + colab={"base_uri": "https://localhost:8080/"} id="eSUL0tIa_Bxk" outputId="d4141e4e-cdb6-409b-912c-0c047bc3a2e4"
type(y)
# + colab={"base_uri": "https://localhost:8080/"} id="fiFq0ILl__oU" outputId="3cf8e46f-58e5-4323-b0b5-ab3ce5926b5b"
type(Z)
# + colab={"base_uri": "https://localhost:8080/"} id="tZDgMq9UAG2s" outputId="35327c6d-8032-442b-d494-c3e25bcf2925"
x=input("enter a name")
print(x)
# + id="LsuA_WI7As2s" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="efcc814d-7b0e-4325-a303-e4448a305a9a"
x
# + id="0XlYgHVUEtYX"
| 01_python_basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import keras
from build_model import build_model
data_x = pd.read_csv('./data/weather/v3/2013/merged/merged_2013_h0.csv', index_col = 'Time')
data_x.columns
x = data_x[['temperature-460', 'cloudiness-460',
            'temperature-481', 'cloudiness-481',
            'temperature-645', 'cloudiness-645']]
data_y = pd.read_csv('./data/energy/v5/2013/Consumption_2013.csv', index_col = 'Time')
data = x.copy()
data['cons'] = data_y['Auvergne-Rhone-Alpes']
data.drop(data.index[0], inplace = True)
print(data.isnull().values.any())
print(data.isnull().sum())
# -
data_train = data.values
x_train = data_train[:, 0:6]
y_train = data_train[:, 6]
x_train[:5, :]
y_train[:5]
# +
#data.isnull().values.any()
#data.isnull().sum()
# +
# Before
x_min = np.min(x_train)
print(x_min)
x_max = np.max(x_train)
print(x_max)
x_train = (x_train - np.min(x_train)) / (np.max(x_train) - np.min(x_train))
# After
x_min = np.min(x_train)
print(x_min)
x_max = np.max(x_train)
print(x_max)
# -
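# Note that the cell above takes a single min and max over the entire array, so all six
# features share one scale; because temperature and cloudiness live in different ranges,
# per-feature scaling is usually preferable. A sketch of the column-wise alternative:

```python
import numpy as np

def minmax_per_feature(x):
    """Scale each column of a 2-D array to [0, 1] independently."""
    col_min = x.min(axis=0)
    col_max = x.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return (x - col_min) / span

demo = np.array([[0.0, 100.0],
                 [5.0, 300.0],
                 [10.0, 500.0]])
scaled = minmax_per_feature(demo)
# each column now runs from 0.0 to 1.0 independently
```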
# y_train = (y_train - np.min(y_train)) / (np.max(y_train) - np.min(y_train))
# # Test data
# +
data_x = pd.read_csv('./data/weather/v3/2014/merged/merged_2014_h0.csv', index_col = 'Time')
data_x.columns
x = data_x[['temperature-460', 'cloudiness-460',
            'temperature-481', 'cloudiness-481',
            'temperature-645', 'cloudiness-645']]
data_y = pd.read_csv('./data/energy/v5/2014/Consumption_2014.csv', index_col = 'Time')
data = x.copy()
data['cons'] = data_y['Auvergne-Rhone-Alpes']
#data.drop(data.index[0], inplace = True)
print(data.isnull().values.any())
print(data.isnull().sum())
data_test = data.values
x_test = data_test[:, 0:6]
y_test = data_test[:, 6]
# -
seq_len = 100
def data_to_sequence(data, n_prev = seq_len):
    docX, docY = [], []
    for i in range(len(data) - n_prev):
        docX.append(data[i:i+n_prev, 0:6])  # n_prev timesteps of the 6 features
        docY.append(data[i+n_prev, 6])      # consumption at the next timestep
    alsX = np.array(docX)
    alsY = np.array(docY)
    return alsX, alsY
x_train, y_train = data_to_sequence(data_train)
x_test, y_test = data_to_sequence(data_test)
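# A quick sanity check of this windowing on synthetic data (re-implemented inline so it is
# self-contained; assumes the 7-column layout above, i.e. 6 features plus consumption):

```python
import numpy as np

def to_sequences(data, n_prev):
    xs = np.array([data[i:i+n_prev, 0:6] for i in range(len(data) - n_prev)])
    ys = np.array([data[i+n_prev, 6] for i in range(len(data) - n_prev)])
    return xs, ys

fake = np.arange(200 * 7, dtype=float).reshape(200, 7)
xs, ys = to_sequences(fake, n_prev=100)
# 100 windows of 100 timesteps x 6 features, and one target per window
```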
# +
model = build_model(features = 6, seq_len = seq_len, out = 1)
model.summary()
from keras.callbacks import TensorBoard , ModelCheckpoint, ReduceLROnPlateau
tbCallBack = TensorBoard(log_dir = './logs/',
                         histogram_freq = 0,
                         write_graph = True)
filepath = "best_model.hdf5"
best_model = ModelCheckpoint(filepath = filepath,
                             monitor = 'val_loss',
                             verbose = 1,
                             save_best_only = True,
                             save_weights_only = False,
                             mode = 'auto', period = 1)
reduce_lr = ReduceLROnPlateau(monitor = 'val_loss', factor = 0.5, patience = 2)
# -
model.fit(x_train,
          y_train,
          batch_size = 100,
          epochs = 100,
          validation_data = (x_test, y_test),
          callbacks = [best_model, tbCallBack, reduce_lr])
| 4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="86lINDr_mQD1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="ad6d0173-f918-4f5c-9bc6-6c8d6079456c"
# #!pip install sentinelsat
# #!pip install geopandas
# #!pip install folium
# #!pip install shapely
# #!pip install rasterio
# + id="V1wJ1rbpmeOy" colab_type="code" colab={}
from sentinelsat import SentinelAPI
user =''
password = ''
api = SentinelAPI(user,password,'https://scihub.copernicus.eu/dhus')
# + id="NvmvmPdKnnON" colab_type="code" colab={}
import geopandas as gpd
import folium
# + id="w7bvFcw8oWi-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 464} outputId="8d93be06-5e2c-4f40-9031-071056092450"
# !wget https://www.dropbox.com/s/ymxuxpcnj88mlz2/NReserve.zip
# + id="G4wTg0HlobKe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 159} outputId="45aec58c-7858-4a74-89cd-14129d25ff82"
# !unzip 'NReserve.zip'
# + id="xIupDXVHoeo5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="75e7973b-ebf3-4e61-eecf-1bd9d711c4db"
nReserve = gpd.read_file('/content/NReserve/NaturalReserve_Polygon.shp')
m = folium.Map([41.7023292727353, 12.34697305914639], zoom_start=12)
folium.GeoJson(nReserve).add_to(m)
# + id="KTbgyT7mor3Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 669} outputId="c65d5fc4-1fe0-4e66-95bb-e776f0c1a804"
m
# + id="VOX6b1Gqo5-E" colab_type="code" colab={}
from shapely.geometry import MultiPolygon,Polygon
footp = None
for i in nReserve['geometry']:
    footp = i
# + id="5dYSLOdLpant" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="0b9fb939-6db6-4d44-c6bd-fdf81d34bb28"
# by not providing date parameter, the api queries recent most image by default.
products = api.query(footp,
platformname = 'Sentinel-2',
processinglevel = 'Level-2A',
cloudcoverpercentage = (0,10)
)
# + id="juhHmNzsrlbG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="1804a7fb-7d88-47db-d122-18b953602041"
#converting product dictionary into a geodataframe/dataframe
product_gdf = api.to_geodataframe(products)
product_gdf_sorted = product_gdf.sort_values(['cloudcoverpercentage'],ascending =[True])
# + id="MaqB8Du0sdl1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="586681b7-7367-4a3e-e421-457eb4bb9fe1"
product_gdf_sorted
# + id="tVoso1Its6aj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 459} outputId="009ce8f3-cbd6-473a-eba3-1c1cd53fe746"
#download a image:
api.download_all("1b0ee27e-c51a-49d9-b2a4-78db128a49f9")
# + id="Od4PZTMNtNX7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="608b7b4c-58e1-43d9-8dbb-bfa6a1fc3a9b"
# !unzip /content/S2A_MSIL2A_20200215T100111_N0214_R122_T33TTG_20200215T125505.zip
# + id="DYFUr_3u3LwI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="f95b6afc-382d-418d-b59d-4c8ba10b77b9"
import rasterio as rio
# + id="_zbVTsoDwIXe" colab_type="code" colab={}
#create RGB image
R10 = '/content/S2A_MSIL2A_20200215T100111_N0214_R122_T33TTG_20200215T125505.SAFE/GRANULE/L2A_T33TTG_A024286_20200215T100547/IMG_DATA/R10m'
b4 = rio.open(R10+'/T33TTG_20200215T100111_B04_10m.jp2')
b3 = rio.open(R10+'/T33TTG_20200215T100111_B03_10m.jp2')
b2 = rio.open(R10+'/T33TTG_20200215T100111_B02_10m.jp2')
# Create an RGB image
with rio.open('RGB.tiff', 'w', driver='Gtiff', width=b4.width, height=b4.height,
              count=3, crs=b4.crs, transform=b4.transform, dtype=b4.dtypes[0]) as rgb:
    rgb.write(b2.read(1), 1)
    rgb.write(b3.read(1), 2)
    rgb.write(b4.read(1), 3)
# + id="utkpSdVe3RBE" colab_type="code" colab={}
import rasterio.mask  # the mask submodule is not loaded by `import rasterio as rio` alone

nReserve_proj = nReserve.to_crs({'init': 'epsg:32633'})
with rio.open("RGB.tiff") as src:
    out_image, out_transform = rio.mask.mask(src, nReserve_proj.geometry, crop=True)
    out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
                 "height": out_image.shape[1],
                 "width": out_image.shape[2],
                 "transform": out_transform})
with rio.open("RGB_masked.tif", "w", **out_meta) as dest:
    dest.write(out_image)
# + id="V-v1VNsr3vzl" colab_type="code" colab={}
b4 = rio.open(R10+'/T33TTG_20200215T100111_B04_10m.jp2')
b8 = rio.open(R10+'/T33TTG_20200215T100111_B08_10m.jp2')  # NIR is band 8, not band 4
# read Red(b4) and NIR(b8) as arrays
red = b4.read()
nir = b8.read()
# Calculate ndvi
ndvi = (nir.astype(float) - red.astype(float)) / (nir + red)
# Write the NDVI image
meta = b4.meta
meta.update(driver='GTiff')
meta.update(dtype=rio.float32)
with rio.open('NDVI.tif', 'w', **meta) as dst:
    dst.write(ndvi.astype(rio.float32))
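# NDVI = (NIR - Red) / (NIR + Red) always lies in [-1, 1], but the cell above divides
# without guarding against pixels where NIR + Red == 0 (which yields NaN or inf).
# A small self-contained version with that guard (the demo values are made up):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI with zero-denominator pixels mapped to 0."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Substitute 1.0 in the denominator where it is zero, then mask those pixels to 0.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

nir_demo = np.array([[0.8, 0.0], [0.5, 0.3]])
red_demo = np.array([[0.2, 0.0], [0.5, 0.1]])
result = ndvi(nir_demo, red_demo)
# dense vegetation -> high NDVI; the 0/0 pixel stays at 0 instead of NaN
```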
| Notebooks/1) Accessing_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
from time import time
from tqdm import tqdm_notebook as tqdm
from collections import Counter
from scipy import stats
import lightgbm as lgb
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import StratifiedKFold, KFold, RepeatedKFold, GroupKFold, GridSearchCV, train_test_split, TimeSeriesSplit, RepeatedStratifiedKFold
from sklearn.base import BaseEstimator, TransformerMixin
from scipy.stats import kurtosis, skew
import matplotlib.pyplot as plt
import gc
import json
import copy
import time
pd.set_option('display.max_columns', 1000)
pd.set_option('display.max_rows', 500)  # assumption: the duplicated max_columns call was meant to set max_rows
pd.set_option('display.width', 1000)
import seaborn as sns
from pathlib import Path
import sys
import re
from scripts import feature_engineering
# -
path=Path('/kaggle/data_science_bowl')
path
def read_data():
    train_df = pd.read_csv(path/'train.csv')
    test_df = pd.read_csv(path/'test.csv')
    train_labels_df = pd.read_csv(path/'train_labels.csv')
    specs_df = pd.read_csv(path/'specs.csv')
    return train_df, test_df, train_labels_df, specs_df
# %%time
train_df, test_df, train_labels_df, specs_df = read_data()
train_df = feature_engineering.remove_wrong_event_codes(train_df)
test_df = feature_engineering.remove_wrong_event_codes(test_df)
train_df = feature_engineering.remove_ids_with_no_assessment(train_df)
list_of_user_activities, activities_labels, activities_map, win_code, assess_titles, list_of_event_code, \
list_of_event_id, list_of_worlds, list_of_title, list_of_event_code_world, list_of_event_code_title, list_of_event_id_world = \
feature_engineering.create_structs(train_df, test_df)
train_df = train_df.rename({'event_code_title':'title_event_code'}, axis='columns')
test_df = test_df.rename({'event_code_title':'title_event_code'}, axis='columns')
train_samples = [(installation_id, user_sample) for (installation_id, user_sample) in train_df.groupby('installation_id')]
test_samples = [(installation_id, user_sample) for (installation_id, user_sample) in test_df.groupby('installation_id')]
comp_train_df = feature_engineering.feature_generation_2(train_samples, False, assess_titles=assess_titles,
list_of_event_code=list_of_event_code, list_of_event_id=list_of_event_id,
activities_labels=activities_labels, all_title_event_code=list_of_event_code_title,
win_code=win_code,
activities_map=activities_map)
extra_training = []
comp_test_df = feature_engineering.feature_generation_2(test_samples, True, assess_titles=assess_titles,
list_of_event_code=list_of_event_code, list_of_event_id=list_of_event_id,
activities_labels=activities_labels, all_title_event_code=list_of_event_code_title,
win_code=win_code,
activities_map=activities_map,
extra_training=extra_training,
include_all=False)
comp_train_df
comp_test_df
comp_train_df, comp_test_df = feature_engineering.preprocess(comp_train_df, comp_test_df)
comp_test_df = comp_test_df.groupby(['installation_id']).last().reset_index()
plt.hist(comp_train_df['Clip_diff_mean'])
plt.hist(comp_test_df['Clip_diff_mean'])
# ### Remove zero columns
numeric_cols = comp_train_df.select_dtypes(['number']).columns
all_zeros_df = (np.sum(comp_train_df[numeric_cols], axis=0) == 0).reset_index()
for zero_col in all_zeros_df[all_zeros_df[0] == True]['index']:
    del comp_train_df[zero_col]
    del comp_test_df[zero_col]
comp_train_df
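# The all-zero-column removal above can be verified on a toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'zeros': [0, 0, 0], 'b': [0.5, 0.0, 1.5]})

numeric_cols = df.select_dtypes(['number']).columns
all_zeros = (np.sum(df[numeric_cols], axis=0) == 0)
for col in all_zeros[all_zeros].index:
    del df[col]
# only the 'zeros' column is dropped; 'b' survives because its sum is non-zero
```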
# ## Normalize Column Titles
# +
import re
def normalize_cols(df):
    df.columns = [c if type(c) != tuple else '_'.join(c) for c in df.columns]
    df.columns = [re.sub(r'\W', '_', str(s)) for s in df.columns]
normalize_cols(comp_train_df)
normalize_cols(comp_test_df)
# -
# ## Training
params = {'n_estimators': 2000,
          'boosting_type': 'gbdt',
          'objective': 'regression',
          'metric': 'rmse',
          'subsample': 0.75,
          'subsample_freq': 1,
          'learning_rate': 0.04,
          'feature_fraction': 0.9,
          'max_depth': 15,
          'lambda_l1': 1,
          'lambda_l2': 1,
          'verbose': 100,
          'early_stopping_rounds': 100,
          'eval_metric': 'cappa',
          'cat_cols': ['session_title']
         }
y = comp_train_df['accuracy_group']
n_fold = 5
cols_to_drop = ['game_session', 'installation_id', 'timestamp', 'accuracy_group', 'accuracy']
# +
from functools import partial
import scipy as sp
default_coef = [0.5, 1.5, 2.25]
class OptimizedRounder(object):
    """
    An optimizer for rounding thresholds
    to maximize Quadratic Weighted Kappa (QWK) score
    # https://www.kaggle.com/naveenasaithambi/optimizedrounder-improved
    """
    def __init__(self, initial_coef = default_coef):
        self.coef_ = 0
        self.initial_coef = initial_coef

    def _kappa_loss(self, coef, X, y):
        """
        Get the loss for the current coefficients
        :param coef: A list of coefficients that will be used for rounding
        :param X: The raw predictions
        :param y: The ground truth labels
        """
        X_p = pd.cut(X, [-np.inf] + list(np.sort(coef)) + [np.inf], labels = [0, 1, 2, 3])
        # qwk (quadratic weighted kappa) is assumed to be defined elsewhere in the notebook
        return -qwk(y, X_p)

    def fit(self, X, y):
        """
        Optimize rounding thresholds
        :param X: The raw predictions
        :param y: The ground truth labels
        """
        loss_partial = partial(self._kappa_loss, X=X, y=y)
        self.coef_ = sp.optimize.minimize(loss_partial, self.initial_coef, method='nelder-mead')

    def predict(self, X, coef):
        """
        Make predictions with specified thresholds
        :param X: The raw predictions
        :param coef: A list of coefficients that will be used for rounding
        """
        return pd.cut(X, [-np.inf] + list(np.sort(coef)) + [np.inf], labels = [0, 1, 2, 3])

    def coefficients(self):
        """
        Return the optimized coefficients
        """
        return self.coef_['x']
# -
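# The thresholding OptimizedRounder performs is just `pd.cut` with the learned coefficients
# as bin edges; a self-contained illustration with fixed example thresholds:

```python
import numpy as np
import pandas as pd

coef = [0.5, 1.5, 2.25]
raw_preds = np.array([0.2, 0.9, 1.7, 3.0])
# -inf and +inf close the outer bins, so every prediction gets a class
classes = pd.cut(raw_preds, [-np.inf] + sorted(coef) + [np.inf], labels=[0, 1, 2, 3])
# raw regression outputs become ordinal classes 0..3
```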
def get_class_bounds(y, y_pred, N=4, class0_fraction=-1):
    """
    Find boundary values for y_pred to match the known y class percentiles.
    Returns N-1 boundaries in y_pred values that separate y_pred
    into N classes (0, 1, 2, ..., N-1) with same percentiles as y has.
    Can adjust the fraction in Class 0 by the given factor (>=0), if desired.
    """
    ysort = np.sort(y)
    predsort = np.sort(y_pred)
    bounds = []
    for ibound in range(N-1):
        iy = len(ysort[ysort <= ibound])
        # adjust the number of class 0 predictions?
        if (ibound == 0) and (class0_fraction >= 0.0):
            iy = int(class0_fraction * iy)
        bounds.append(predsort[iy])
    return bounds
# +
## Added by <NAME>
calculated_coeff = None
calculated_coeffs = []
## End
class RegressorModel(object):
"""
A wrapper class for classification models.
It can be used for training and prediction.
Can plot feature importance and training progress (if relevant for model).
"""
def __init__(self, columns: list = None, model_wrapper=None):
"""
:param original_columns:
:param model_wrapper:
"""
self.columns = columns
self.model_wrapper = model_wrapper
self.result_dict = {}
self.train_one_fold = False
def fit(self, X: pd.DataFrame, y,
X_holdout: pd.DataFrame = None, y_holdout=None,
folds=None,
params: dict = None,
eval_metric='rmse',
cols_to_drop: list = None,
adversarial: bool = False,
plot: bool = True):
"""
Training the model.
:param X: training data
:param y: training target
:param X_holdout: holdout data
:param y_holdout: holdout target
:param folds: folds to split the data. If not defined, the model will be trained on the whole of X
:param params: training parameters
:param eval_metric: metric for validation
:param cols_to_drop: list of columns to drop (for example ID)
:param adversarial: whether to train an adversarial-validation model instead
:return:
"""
if folds is None:
folds = KFold(n_splits=3, shuffle=True, random_state=42)  # random_state requires shuffle=True
self.train_one_fold = True
self.columns = X.columns if self.columns is None else self.columns
self.feature_importances = pd.DataFrame(columns=['feature', 'importance'])
self.models = []
self.folds_dict = {}
self.eval_metric = eval_metric
n_target = 1
self.oof = np.zeros((len(X), n_target))
self.n_target = n_target
X = X[self.columns]
if X_holdout is not None:
X_holdout = X_holdout[self.columns]
self.columns = X.columns.tolist()
for fold_n, (train_index, valid_index) in enumerate(folds.split(X, y, X['installation_id'])):
if X_holdout is not None:
X_hold = X_holdout.copy()
else:
X_hold = None
self.folds_dict[fold_n] = {}
if params['verbose']:
print(f'Fold {fold_n + 1} started at {time.ctime()}')
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
if self.train_one_fold:
X_train = X[self.columns]
y_train = y
X_valid = None
y_valid = None
datasets = {'X_train': X_train, 'X_valid': X_valid, 'X_holdout': X_hold, 'y_train': y_train}
X_train, X_valid, X_hold = self.transform_(datasets, cols_to_drop)
self.folds_dict[fold_n]['columns'] = X_train.columns.tolist()
model = copy.deepcopy(self.model_wrapper)
if adversarial:
X_new1 = X_train.copy()
if X_valid is not None:
X_new2 = X_valid.copy()
elif X_holdout is not None:
X_new2 = X_holdout.copy()
X_new = pd.concat([X_new1, X_new2], axis=0)
y_new = np.hstack((np.zeros((X_new1.shape[0])), np.ones((X_new2.shape[0]))))
X_train, X_valid, y_train, y_valid = train_test_split(X_new, y_new)
model.fit(X_train, y_train, X_valid, y_valid, X_hold, y_holdout, params=params)
## Added by <NAME>
global calculated_coeff, calculated_coeffs
coeff_pred = model.predict(X_train)
calculated_coeff = get_class_bounds(y_train, coeff_pred)
optR = OptimizedRounder(calculated_coeff)
optR.fit(coeff_pred, y_train)
calculated_coeffs.append(optR.coefficients())
print('calculated_coeffs', calculated_coeffs)
print('get_class_bounds', get_class_bounds(y_train, coeff_pred))
## End
self.folds_dict[fold_n]['scores'] = model.best_score_
if self.oof.shape[0] != len(X):
self.oof = np.zeros((X.shape[0], self.oof.shape[1]))
if not adversarial:
self.oof[valid_index] = model.predict(X_valid).reshape(-1, n_target)
fold_importance = pd.DataFrame(list(zip(X_train.columns, model.feature_importances_)),
columns=['feature', 'importance'])
self.feature_importances = pd.concat([self.feature_importances, fold_importance], ignore_index=True)
self.models.append(model)
self.feature_importances['importance'] = self.feature_importances['importance'].astype(int)
# if params['verbose']:
self.calc_scores_()
if plot:
# print(classification_report(y, self.oof.argmax(1)))
fig, ax = plt.subplots(figsize=(16, 12))
plt.subplot(2, 2, 1)
self.plot_feature_importance(top_n=25)
plt.subplot(2, 2, 2)
self.plot_metric()
plt.subplot(2, 2, 3)
plt.hist(y.values.reshape(-1, 1) - self.oof)
plt.title('Distribution of errors')
plt.subplot(2, 2, 4)
plt.hist(self.oof)
plt.title('Distribution of oof predictions');
def transform_(self, datasets, cols_to_drop):
if cols_to_drop is not None:
cols_to_drop = [col for col in cols_to_drop if col in datasets['X_train'].columns]
datasets['X_train'] = datasets['X_train'].drop(cols_to_drop, axis=1)
if datasets['X_valid'] is not None:
datasets['X_valid'] = datasets['X_valid'].drop(cols_to_drop, axis=1)
if datasets['X_holdout'] is not None:
datasets['X_holdout'] = datasets['X_holdout'].drop(cols_to_drop, axis=1)
self.cols_to_drop = cols_to_drop
return datasets['X_train'], datasets['X_valid'], datasets['X_holdout']
def calc_scores_(self):
print()
datasets = [k for k, v in [v['scores'] for k, v in self.folds_dict.items()][0].items() if len(v) > 0]
self.scores = {}
for d in datasets:
scores = [v['scores'][d][self.eval_metric] for k, v in self.folds_dict.items()]
print(f"CV mean score on {d}: {np.mean(scores):.4f} +/- {np.std(scores):.4f} std.")
self.scores[d] = np.mean(scores)
def predict(self, X_test, averaging: str = 'usual'):
"""
Make prediction
:param X_test:
:param averaging: method of averaging
:return:
"""
full_prediction = np.zeros((X_test.shape[0], self.oof.shape[1]))
for i in range(len(self.models)):
X_t = X_test.copy()
if self.cols_to_drop is not None:
cols_to_drop = [col for col in self.cols_to_drop if col in X_t.columns]
X_t = X_t.drop(cols_to_drop, axis=1)
y_pred = self.models[i].predict(X_t[self.folds_dict[i]['columns']]).reshape(-1, full_prediction.shape[1])
# in case a transformation changes the number of rows
if full_prediction.shape[0] != len(y_pred):
full_prediction = np.zeros((y_pred.shape[0], self.oof.shape[1]))
if averaging == 'usual':
full_prediction += y_pred
elif averaging == 'rank':
# y_pred is 2-D here, so rank over the flattened values and restore the shape
full_prediction += pd.Series(y_pred.reshape(-1)).rank().values.reshape(y_pred.shape)
return full_prediction / len(self.models)
def plot_feature_importance(self, drop_null_importance: bool = True, top_n: int = 10):
"""
Plot default feature importance.
:param drop_null_importance: drop columns with null feature importance
:param top_n: show top n columns
:return:
"""
top_feats = self.get_top_features(drop_null_importance, top_n)
feature_importances = self.feature_importances.loc[self.feature_importances['feature'].isin(top_feats)]
feature_importances['feature'] = feature_importances['feature'].astype(str)
top_feats = [str(i) for i in top_feats]
sns.barplot(data=feature_importances, x='importance', y='feature', orient='h', order=top_feats)
plt.title('Feature importances')
def get_top_features(self, drop_null_importance: bool = True, top_n: int = 10):
"""
Get top features by importance.
:param drop_null_importance:
:param top_n:
:return:
"""
grouped_feats = self.feature_importances.groupby(['feature'])['importance'].mean()
if drop_null_importance:
grouped_feats = grouped_feats[grouped_feats != 0]
return list(grouped_feats.sort_values(ascending=False).index)[:top_n]
def plot_metric(self):
"""
Plot training progress.
Inspired by `plot_metric` from https://lightgbm.readthedocs.io/en/latest/_modules/lightgbm/plotting.html
:return:
"""
full_evals_results = pd.DataFrame()
for model in self.models:
evals_result = pd.DataFrame()
for k in model.model.evals_result_.keys():
evals_result[k] = model.model.evals_result_[k][self.eval_metric]
evals_result = evals_result.reset_index().rename(columns={'index': 'iteration'})
full_evals_results = pd.concat([full_evals_results, evals_result], ignore_index=True)
full_evals_results = full_evals_results.melt(id_vars=['iteration']).rename(columns={'value': self.eval_metric,
'variable': 'dataset'})
sns.lineplot(data=full_evals_results, x='iteration', y=self.eval_metric, hue='dataset')
plt.title('Training progress')
# -
class LGBWrapper_regr(object):
"""
A wrapper for lightgbm model so that we will have a single api for various models.
"""
def __init__(self):
self.model = lgb.LGBMRegressor()
def fit(self, X_train, y_train, X_valid=None, y_valid=None, X_holdout=None, y_holdout=None, params=None):
if params['objective'] == 'regression':
eval_metric = eval_qwk_lgb_regr
else:
eval_metric = 'auc'
eval_set = [(X_train, y_train)]
eval_names = ['train']
self.model = self.model.set_params(**params)
if X_valid is not None:
eval_set.append((X_valid, y_valid))
eval_names.append('valid')
if X_holdout is not None:
eval_set.append((X_holdout, y_holdout))
eval_names.append('holdout')
if 'cat_cols' in params.keys():
# keep only the categorical columns that are still present after any drops
cat_cols = [col for col in params['cat_cols'] if col in X_train.columns]
if len(cat_cols) > 0:
categorical_columns = cat_cols
else:
categorical_columns = 'auto'
else:
categorical_columns = 'auto'
self.model.fit(X=X_train, y=y_train,
eval_set=eval_set, eval_names=eval_names, eval_metric=eval_metric,
verbose=params['verbose'], early_stopping_rounds=params['early_stopping_rounds'],
categorical_feature=categorical_columns)
self.best_score_ = self.model.best_score_
self.feature_importances_ = self.model.feature_importances_
def predict(self, X_test):
return self.model.predict(X_test, num_iteration=self.model.best_iteration_)
# +
def convert_regr_to_cat(y_pred, zero_threshhold = 1.12232214, one_threshhold = 1.73925866, two_threshhold = 2.22506454):
y_pred[y_pred <= zero_threshhold] = 0
y_pred[np.where(np.logical_and(y_pred > zero_threshhold, y_pred <= one_threshhold))] = 1
y_pred[np.where(np.logical_and(y_pred > one_threshhold, y_pred <= two_threshhold))] = 2
y_pred[y_pred > two_threshhold] = 3
def eval_qwk_lgb_regr(y_true, y_pred):
"""
Fast cappa eval function for lgb.
"""
convert_regr_to_cat(y_pred)
return 'cappa', qwk(y_true, y_pred), True
# -
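# To see what the bucketing above does on a toy array (using the notebook's
# default thresholds), here is an equivalent, non-mutating formulation with
# `np.digitize(..., right=True)`, which maps values `<=` each threshold to the
# corresponding class:

```python
import numpy as np

thresholds = [1.12232214, 1.73925866, 2.22506454]  # the defaults above
y_pred = np.array([0.5, 1.5, 2.0, 3.0])

# bins[i-1] < x <= bins[i] maps to class i, matching convert_regr_to_cat
classes = np.digitize(y_pred, thresholds, right=True)
print(classes)  # [0 1 2 3]
```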
def qwk(a1, a2):
"""
Source: https://www.kaggle.com/c/data-science-bowl-2019/discussion/114133#latest-660168
:param a1:
:param a2:
:param max_rat:
:return:
"""
max_rat = 3
a1 = np.asarray(a1, dtype=int)
a2 = np.asarray(a2, dtype=int)
hist1 = np.zeros((max_rat + 1, ))
hist2 = np.zeros((max_rat + 1, ))
o = 0
for k in range(a1.shape[0]):
i, j = a1[k], a2[k]
hist1[i] += 1
hist2[j] += 1
o += (i - j) * (i - j)
e = 0
for i in range(max_rat + 1):
for j in range(max_rat + 1):
e += hist1[i] * hist2[j] * (i - j) * (i - j)
e = e / a1.shape[0]
return 1 - o / e
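# A hand-check of the kappa arithmetic on tiny inputs (the function below is a
# compact, vectorized restatement of `qwk` above so this cell runs standalone):
# for identical ratings the observed disagreement o is 0, so kappa = 1 - o/e = 1.

```python
import numpy as np

def qwk_small(a1, a2, max_rat=3):
    # same quantities as qwk above: histograms, observed and expected disagreement
    a1, a2 = np.asarray(a1, int), np.asarray(a2, int)
    hist1 = np.bincount(a1, minlength=max_rat + 1)
    hist2 = np.bincount(a2, minlength=max_rat + 1)
    o = ((a1 - a2) ** 2).sum()
    e = sum(hist1[i] * hist2[j] * (i - j) ** 2
            for i in range(max_rat + 1) for j in range(max_rat + 1)) / len(a1)
    return 1 - o / e

print(qwk_small([0, 1, 2, 3], [0, 1, 2, 3]))  # 1.0
print(qwk_small([0, 0, 2, 2], [0, 0, 2, 3]))  # one off-by-one error: 10/11
```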
regressor_models = []
for i in range(5, 6):
folds = GroupKFold(n_splits=i)
regressor_model = RegressorModel(model_wrapper=LGBWrapper_regr())
regressor_model.fit(X=comp_train_df, y=y, folds=folds, params=params,
eval_metric='cappa', cols_to_drop=cols_to_drop)
regressor_models.append(regressor_model)
all_models = [(i, regressor_model.scores['valid']) for (i, regressor_model) in enumerate(regressor_models)]
max_model = max(all_models, key=lambda iv : iv[1])
print(f'best model: {max_model}')
regressor_model1 = regressor_models[max_model[0]]
plt.hist(regressor_model1.predict(comp_test_df).reshape(1000))
plt.title('Distribution of predictions');
# ## Inference
# %%time
pr1 = regressor_model1.predict(comp_train_df)
get_class_bounds_coeff = get_class_bounds(y.to_numpy(), pr1.T.reshape(-1))
get_class_bounds_coeff
calculated_coeff = np.array(calculated_coeffs).mean(axis=0)
calculated_coeff
optR = OptimizedRounder(calculated_coeff)
optR.fit(pr1.reshape(-1,), y)
coefficients = optR.coefficients()
print(coefficients)
opt_preds = optR.predict(pr1.reshape(-1, ), coefficients)
qwk(y, opt_preds)
# zero_threshhold = 1.12232214, one_threshhold = 1.73925866, two_threshhold = 2.22506454
pr1 = regressor_model1.predict(comp_test_df)
convert_regr_to_cat(pr1, zero_threshhold = coefficients[0], one_threshhold = coefficients[1], two_threshhold = coefficients[2])
pd.Series(pr1.reshape(1000)).value_counts(normalize=True)
pr2 = regressor_model1.predict(comp_test_df)
convert_regr_to_cat(pr2, zero_threshhold = get_class_bounds_coeff[0], one_threshhold = get_class_bounds_coeff[1], two_threshhold = get_class_bounds_coeff[2])
pd.Series(pr2.reshape(1000)).value_counts(normalize=True)
pr3 = regressor_model1.predict(comp_test_df)
convert_regr_to_cat(pr3)
pd.Series(pr3.reshape(1000)).value_counts(normalize=True)
sample_submission_df = pd.read_csv(path/'sample_submission.csv')
selected_prediction = pr1
sample_submission_df['accuracy_group'] = selected_prediction.astype(int)
sample_submission_df.to_csv('submission.csv', index=False)
# !head submission.csv
# ## Data Checks
valid_idx = [g.iloc[-1].name for i, g in comp_train_df.groupby("installation_id", sort=False)]
valid_ds = comp_train_df[comp_train_df.index.isin(valid_idx)].groupby('installation_id').last()['accuracy']
expected_ratios = valid_ds.apply(lambda x : feature_engineering.convert_to_accuracy_group(x)).value_counts(normalize=True)
expected_ratios
pred_ratios = sample_submission_df['accuracy_group'].value_counts(normalize=True)
pred_ratios
pred_ratios_list = np.array(pred_ratios.sort_index().tolist())
expected_ratios_list = np.array(expected_ratios.sort_index().tolist())
pred_ratios_list, expected_ratios_list
prod = ((pred_ratios_list - pred_ratios_list.mean()) * (expected_ratios_list - expected_ratios_list.mean())).mean() / (pred_ratios_list.std() * expected_ratios_list.std())
prod
plt.scatter(pred_ratios_list, expected_ratios_list);
| nbs_gil/data_science_bowl/data_science_bowl_training_19_regression_lgb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
from sympy import symbols,solve
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from IPython.core.display import HTML
import urllib.request
HTML(urllib.request.urlopen('http://metrprof.xyz/metr4323.css').read().decode())
#HTML( open('metr4323.css').read() ) #or use this, if you have downloaded metr4323.css to your computer
# # Symbolic Math with Python
# v1.311, 26 February 2018, by <NAME>
# $\newcommand{\V}[1]{\vec{\boldsymbol{#1}}}$
# $\newcommand{\I}[1]{\widehat{\boldsymbol{\mathrm{#1}}}}$
# $\newcommand{\pd}[2]{\frac{\partial#1}{\partial#2}}$
# $\newcommand{\pdt}[1]{\frac{\partial#1}{\partial t}}$
# $\newcommand{\ddt}[1]{\frac{\D#1}{\D t}}$
# $\newcommand{\D}{\mathrm{d}}$
# $\newcommand{\Ii}{\I{\imath}}$
# $\newcommand{\Ij}{\I{\jmath}}$
# $\newcommand{\Ik}{\I{k}}$
# $\newcommand{\del}{\boldsymbol{\nabla}}$
# $\newcommand{\dt}{\cdot}$
# $\newcommand{\x}{\times}$
# $\newcommand{\dv}{\del\cdot}$
# $\newcommand{\curl}{\del\times}$
# $\newcommand{\lapl}{\nabla^2}$
#
# Demonstrates using `sympy` to solve for the mysterious coefficients we see in the Adams-Bashforth schemes and the advection schemes.
# ## Simple examples of solving linear equations
# `symbols`: The names on the left will be the Python names of a variable, the symbols on the right will be what is printed. It is a good idea to keep them the same...
# +
z = 7 # traditional python variable assignment
p = symbols('q') # a bad idea: the printed symbol 'q' differs from the Python name 'p'
print("any suprises here?")
print(type(z),z)
print(type(p),p)
# -
# The "= 0" is assumed in the equation that we solve:
solve(p/2 -1)
# There was only one variable, so we didn't see the name. Show name of what we solved for:
solve(p/2 -1, dict=True)
# ### Two independent equations, two unknowns
# Normally, you should make the python variable name the same as the printed name.
x,y = symbols('x,y')
# Let's solve these for $x$ and $y$:
# $$x-y+1=0$$
# $$x+4y-5=0$$
# In `solve`, the equations that we are solving don't need the "= 0". That is assumed.
# Because we are solving for two unknowns, we get the answer as a python dictionary. `dict=True` is the default.
solve( [ x-y+1, x+4*y-5] , [x,y] )
# ### Three dependent equations
# A system of linear equations may have an infinite number of solutions if the equations are not independent.
x,y,z = symbols('x,y,z')
solve( [ x + 2*y + 3*z - 4, 5*x + 6*y +7*z -8 , 9*x + 10*y + 11*z - 12 ], [x,y,z] )
# ### Inconsistent equations
# A system of linear equations may have no solution. For example, equations for two lines that do not intersect.
x,y = symbols('x,y')
solve( [ x-y+1, x-y+2] , [x,y] )
# ## Deriving third-order upwind advection
# In `AdvectionPDE1d.ipynb`, we found that the derivative $\pd{f}{x}$ used in an equation like:
# $$
# \pdt{f} = -u\pd{f}{x}
# $$
# could be estimated in a variety of ways. Those we mentioned were "second-order centered", "first-order upwind" and
# "third-order upwind".
#
# Here we will derive the "third-order upwind" scheme for $\pd{f}{x}$. As for the claim of being "third-order" we will note that the derivative is estimated from a third-order polynomial, fit to 4 discrete points of $f$. It is "upwind" because two points upwind of $\pd{f}{x}$ are used, and one point downwind.
#
# We attempt to fit:
# $$ f(x) = f(0) + a \frac{x}{\delta} +b \frac{x^2}{\delta^2}
# +c \frac{x^3}{\delta^3} $$
#
# If we can find $a$, $b$ and $c$ that fit the three neighboring points, then
# $f'(0) = a/\delta$ may be suitable for the derivative we need in an advection scheme.
#
# $$f(\delta) = f(0) +a +b + c $$
#
# $$f(-\delta) = f(0) - a + b - c $$
#
# $$f(-2\delta) = f(0) - 2a + 4b - 8c $$
f0,fp1,fm1,fm2,a,b,c = symbols('f0,fp1,fm1,fm2,a,b,c')
# fm1 is "f at minus 1 delta", fp1 is "f at plus 1 delta", and so on
# the variable names np1, nm1, nm2 are the names of "expression objects":
np1 = f0 + a + b + c - fp1
nm1 = f0 -a + b - c - fm1
nm2 = f0 -2*a + 4*b - 8*c -fm2
soln = solve([np1,nm1,nm2],[a,b,c]) # "expression objects" are set equal to zero to be "equations"
soln
# So seeing the solution for $a$ above:
# $$ f'(0) = \frac{a}{\delta} = \frac{1}{6\delta} \left[ f(-2\delta) -6f(-\delta) + 3 f(0) + 2 f(\delta) \right] $$
#
# You should now be able to see where this python code for third-order upwind advection comes from:
#
# `dbdx[2:-2] = (b[:-4] - 6*b[1:-3] + 3*b[2:-2] + 2*b[3:-1])/(6*dx)`
#
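# As a quick numerical sanity check of that stencil (the helper name below is
# illustrative): the estimate comes from a cubic fit, so it must recover $f'(0)$
# exactly for any polynomial up to degree 3. With $\delta=1$:

```python
def third_order_upwind(fm2, fm1, f0, fp1, d=1.0):
    # (f(-2d) - 6 f(-d) + 3 f(0) + 2 f(d)) / (6 d), the stencil derived above
    return (fm2 - 6*fm1 + 3*f0 + 2*fp1) / (6*d)

for f in (lambda x: x**2, lambda x: x**3, lambda x: 2*x + 5):
    print(third_order_upwind(f(-2), f(-1), f(0), f(1)))  # f'(0) in each case
```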
# #### Example of the "fit" provided by the polynomial
# What is the fitted polynomial doing for us? Let's do an example with specific values of $f$ at the four points: an upside-down V or "spike".
from collections import OrderedDict # if you want OrderedDict instead of dict
fs = [fm2, fm1, f0, fp1] # list of our symbols
vals = [0,1,2,1] # the values of f showing the spike
spike = OrderedDict(zip(fs,vals))# associate symbols with specific values
#spike= dict(zip(fs,vals)) # this works too
print(spike)
# Now substitute in those specific values of fm2,fm1,f0,fp1 to
# get numbers for a,b,c
coefs={} # initialize empty dict
for key in soln:
coefs[key] = soln[key].subs(spike) # subs means substitute
print(coefs)
# In this example $\delta=1$. For the spike, we find $a=\frac{1}{3}$. So "third-order upwind" estimate is $f'(0)=\frac{1}{3}$
#
# Let's use those coefficients, specific to this "spike" example, to plot the fitted function, and maybe see where this estimate comes from.
xa = np.linspace(-2,1,31) # this is the range of x/delta for the plot
xa
# this is the fitted f(x)
f = spike[f0] + coefs[a]*xa + coefs[b]*xa**2 + coefs[c]*xa**3
f
plt.plot(xa,f,'g-')
plt.plot([-2,-1,0,1],vals,'ro');
# Well, so what?
# You should be able to see by inspection of the above spike that a "second-order centered" scheme would produce $f'(0)=0$,
# and the "first-order upwind" scheme produces $f'(0)=1$. We haven't shown that the above third-order "fit" of $f'(0)=\frac{1}{3}$ is necesarily "better" than other alternatives when used in an advection scheme. In METR 4323, the proof about being "better" is shown by experiments.
#
# <hr/>
# ## Adams-Bashforth time step
# The universal forecast scheme is (trivially):
# $$f(t+\delta) = f(t) + \int_t^{t+\delta} f'(s) ds = f(t) + \delta \frac{1}{\delta}\int_t^{t+\delta} f'(s) ds $$
# On this side of the pond, the $s$ is called a *dummy variable* for $t$.
# Needless to say, knowing $f'(t)$ in the future is problematic, because we don't know the
# future. The simplest scheme is to assume $f'(s)$ will be $f'(t)$. That is the Euler scheme.
#
# It may be helpful to denote the average value of $f'(t)$ over the next time step as:
# $$ \overline{f'(t)} = \frac{1}{\delta}\int_t^{t+\delta} f'(s) ds $$
#
# So our universal forecast scheme is also denoted:
#
# $$f(t+\delta) = f(t) + \delta \overline{f'(t)} $$
#
#
# Let's make a better estimate of $f'(t)$ in the near future. Let's call the current time $t=0$.
# We seek $a$ and $b$ in
# $$ f'(t)=f'(0)+a\frac{t}{\delta}+ b\frac{t^2}{\delta^2}$$
# where $a$ and $b$ are determined by the requirement for $f'(t)$ to also fit the
# values of $f'(t)$ in the previous time steps:
# $$ f'(-\delta) = f'(0) - a + b$$
# $$ f'(-2\delta) = f'(0) - 2a + 4b$$
#
# The average value of $f'(t)$ between $t=0$ and $t=\delta$ is thus anticipated to be:
# $$\overline{f'(t)} =
# \frac{1}{\delta}\int_0^\delta
# \left( f'(0)+ a\frac{s}{\delta}+ b \frac{s^2}{\delta^2} \right)ds
# =\frac{1}{\delta}
# \left[ f'(0)s +
# \frac{1}{2} a\frac{s^2}{\delta}
# + \frac{1}{3} b \frac{s^3}{\delta^2}\right]_0^\delta
# =f'(0)+ \frac{1}{2} a + \frac{1}{3} b$$
#
# We next use `sympy` to find $a$ and $b$ in terms of $f'(0)$, $f'(-\delta)$ and $f'(-2\delta)$.
fp0,fpm1,fpm2,a,b = symbols('fp0,fpm1,fpm2,a,b')
nm1 = fp0 -a + b - fpm1
nm2 = fp0 -2*a + 4*b - fpm2
ab =solve([nm1,nm2],(a,b)) # the solution
ab
# So here is $\overline{f'(t)}$ in terms of $f'(0)$, $f'(-\delta)$ and $f'(-2\delta)$:
fp0+ab[a]/2+ab[b]/3
# You should see the similarity with our Python code for 3rd-order Adams-Bashforth:
#
# `(23*dqdt[0]-16*dqdt[1]+5*dqdt[2])/12.`
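# A quick numerical check of those coefficients: for $f'(t)=t^2$ the quadratic
# fit is exact, so $(23 f'(0) - 16 f'(-\delta) + 5 f'(-2\delta))/12$ must equal
# the true mean of $f'(s)$ over $[0,\delta]$, which is $\delta^2/3$. Take $\delta=1$:

```python
fp = lambda t: t**2                       # the known derivative history
ab3 = (23*fp(0) - 16*fp(-1) + 5*fp(-2)) / 12
print(ab3)                                # 1/3, the exact average of t**2 over [0, 1]
```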
# <hr>
# # Fifth-order upwind advection
# $$ f(x) = f(0) + a X + b X^2 + c X^3 + d X^4 + e X^5 $$
# where $X \equiv x/\delta$.
#
# We have values $f(-3\delta)$, $f(-2\delta)$, $f(-\delta)$,
# $f(\delta)$ and $f(2\delta)$ to fit by finding the appropriate values for $a$, $b$, $c$, $d$ and $e$.
#
# | $\LaTeX\qquad$ |`python` |
# | --- | --- |
# | $f(-3\delta)$ | f0 |
# | $f(-2\delta)$ | f1 |
# | $f(-1\delta)$ | f2 |
# | $f(0)$ | f3 |
# | $f(\delta)$ | f4 |
# | $f(2\delta)$ | f5 |
f0,f1,f2,f3,f4,f5,a,b,c,d,e = symbols('f0,f1,f2,f3,f4,f5,a,b,c,d,e')
np2 = f3 + 2*a + 4*b + 8*c + 16*d + 32*e -f5
np1 = f3 + a + b + c + d + e - f4
nm1 = f3 -a + b - c + d - e - f2
nm2 = f3 -2*a + 4*b - 8*c + 16*d - 32*e - f1
nm3 = f3 -3*a + 9*b - 27*c + 81*d - 243*e - f0
solve([np2,np1,nm1,nm2,nm3],(a,b,c,d,e))
# $\frac{\partial b}{\partial x} = \frac{a}{\delta}$ can be used in an advection scheme. This is what python code might look like for $\frac{\partial b}{\partial x}$ in the 5th order upwind scheme:
#
#
# `dbdx[3:-2] = (-2*b[:-5] + 15*b[1:-4] - 60*b[2:-3] + 20*b[3:-2] + 30*b[4:-1] - 3*b[5:])/(60*dx)`
#
# Note there are 3 points to the left, and 2 points to the right, of the point where we want the derivative to apply. This should be appropriate for flow from the left.
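# A sanity check of the fifth-order stencil (helper name is illustrative): it is
# built from a degree-5 fit, so it must recover $f'(0)$ exactly for polynomials
# up to degree 5. With $\delta=1$:

```python
def fifth_order_upwind(f, d=1.0):
    # (-2 f(-3d) + 15 f(-2d) - 60 f(-d) + 20 f(0) + 30 f(d) - 3 f(2d)) / (60 d)
    return (-2*f(-3*d) + 15*f(-2*d) - 60*f(-d)
            + 20*f(0) + 30*f(d) - 3*f(2*d)) / (60*d)

print(fifth_order_upwind(lambda x: x**2))      # f'(0) = 0
print(fifth_order_upwind(lambda x: x**5 + x))  # f'(0) = 1
```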
# <hr/>
# ## Student Task 1: Fourth-order centered advection
#
# This should be easy. Let's just truncate the above 5th order analysis to 4th order.
# $$ f(x) = f(0) + a X + b X^2 + c X^3 + d X^4 $$
# where $X \equiv x/\delta$.
#
# We have values $f(-2\delta)$, $f(-\delta)$,
# $f(\delta)$ and $f(2\delta)$ to fit by finding the appropriate values for $a$, $b$, $c$ and $d$.
#
# | $\LaTeX\qquad$|`python` |
# | --- | --- |
# | $f(-2\delta)$ | f1 |
# | $f(-1\delta)$ | f2 |
# | $f(0)$ | f3 |
# | $f(\delta)$ | f4 |
# | $f(2\delta)$ | f5 |
#
# **STUDENTS:** finish the sympy stuff for the 4th order scheme:
# <hr/>
# # Student Task 2: Implement the 4th and 5th order advection schemes
#
# Make a copy of your `AdvectionPDE1d.ipynb` into something like `MoreAdvection.ipynb`. Modify the new notebook to include options for `advord=4` and `advord=5`. Do some experiments to make pretty plots comparing the 1 thru 5 schemes. I suggest you use N=101 points.
| Fiedler_revised/n060_sympyschemes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="5D3U6h6e3HBz" colab_type="text"
# Notebook created by [<NAME>](https://www.linkedin.com/in/daniel-fojo/) - [UPC School](https://www.talent.upc.edu/ing/estudis/formacio/curs/310400/postgrau-artificial-intelligence-deep-learning/) **2019**
# + [markdown] id="FFOw5rs6TKSd" colab_type="text"
# # Image Classification
#
#
# ----
#
# The problem we are trying to solve here is to classify grayscale images of handwritten digits (28 pixels by 28 pixels), into their 10 categories (0 to 9). The dataset we will use is the MNIST dataset, a classic dataset in the machine learning community, which has been around for almost as long as the field itself and has been very intensively studied. It's a set of 60,000 training images, plus 10,000 test images, assembled by the National Institute of Standards and Technology (the NIST in MNIST) in the 1980s. You can think of "solving" MNIST as the "Hello World" of deep learning -- it's what you do to verify that your algorithms are working as expected. As you become a machine learning practitioner, you will see MNIST come up over and over again, in scientific papers, blog posts, and so on.
#
# The MNIST dataset comes pre-loaded in Keras, in the form of a set of four Numpy arrays:
#
# + id="xv56sgw4tev2" colab_type="code" colab={}
# + id="ATLrBKlwVMp5" colab_type="code" colab={}
import numpy as np
np.random.seed(123)
import keras
keras.__version__
# + id="Kr2SBUfCVXZB" colab_type="code" colab={}
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + [markdown] id="rR0hkZ3JVsLB" colab_type="text"
#
# train_images and train_labels form the "training set", the data that the model will learn from. The model will then be tested on the "test set", test_images and test_labels. Our images are encoded as Numpy arrays, and the labels are simply an array of digits, ranging from 0 to 9. There is a one-to-one correspondence between the images and the labels.
#
# Let's have a look at the training data:
# + id="2D9goI5SVxJt" colab_type="code" colab={}
train_images.shape
# + [markdown] id="j8AfjoQlWOyc" colab_type="text"
# And now let's check the labels:
#
#
# + id="f18OzfOJWa-W" colab_type="code" colab={}
train_labels.shape
# + [markdown] id="uIWzrI3sW-1g" colab_type="text"
# And now let's look at the kind of images we are dealing with:
# + id="vjJ95WBKXC8E" colab_type="code" colab={}
import matplotlib.pyplot as plt
import random
def plot_samples(X_train,N=5):
'''
Plots N**2 randomly selected images from training data in a NxN grid
'''
ps = random.sample(range(0,X_train.shape[0]), N**2)
f, axarr = plt.subplots(N, N)
p = 0
for i in range(N):
for j in range(N):
if len(X_train.shape) == 3:
axarr[i,j].imshow(X_train[ps[p]],cmap='gray')
else:
im = X_train[ps[p]]
axarr[i,j].imshow(im)
axarr[i,j].axis('off')
p+=1
plt.show()
plot_samples(train_images)
# + [markdown] id="TkkrNZ6KYRBJ" colab_type="text"
# # Training a Multi-Layer Perceptron (MLP)
#
#
#
# Our workflow will be as follow: first we will train our neural network with the training data, train_images and train_labels. The network will then learn to associate images and labels. Finally, we will ask the network to produce predictions for test_images, and we will verify if these predictions match the labels from test_labels.
#
# For the time being, we will use a very simple network:
#
#
# + id="q_RJrYIqlgwC" colab_type="code" colab={}
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
# + [markdown] id="VoE6YJNyltNi" colab_type="text"
#
#
# Here our network consists of a sequence of two Dense layers, which are densely-connected (also called "fully-connected") neural layers. The second (and last) layer is a 10-way "softmax" layer, which means it will return an array of 10 probability scores (summing to 1). Each score will be the probability that the current digit image belongs to one of our 10 digit classes.
#
# To make our network ready for training, we need to pick three more things, as part of "compilation" step:
#
#
# * **A loss function**: this is how the network will be able to measure how good a job it is doing on its training data, and thus how it will be able to steer itself in the right direction.
# * **An optimizer**: this is the mechanism through which the network will update itself based on the data it sees and its loss function.
# * **Metrics to monitor during training and testing**. Here we will only care about accuracy (the fraction of the images that were correctly classified).
#
# Now we can check what the architecture of the network is, and the number of parameters of each layer:
#
#
#
#
#
# + id="wtHlf2B6nqwL" colab_type="code" colab={}
network.summary()
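# The parameter counts in the summary can be reproduced by hand: a Dense layer
# with n_in inputs and n_out units has n_in * n_out weights plus n_out biases.

```python
dense_params = lambda n_in, n_out: n_in * n_out + n_out

print(dense_params(28 * 28, 512))  # 401920 parameters in the first layer
print(dense_params(512, 10))       # 5130 parameters in the softmax layer
```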
# + id="t8F41Fm-me3T" colab_type="code" colab={}
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# + [markdown] id="-1KGwVGNmi-t" colab_type="text"
# Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in the [0, 1] interval. Previously, our training images for instance were stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.
# + id="h31BVqUhnI6a" colab_type="code" colab={}
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
# + [markdown] id="serjqj5ynQqU" colab_type="text"
#
#
# We also need to categorically encode the labels, so that a sample with a label N...
# + id="qXsL2p5Eo1BC" colab_type="code" colab={}
train_labels[0]
# + id="5cXIV1l0pB5U" colab_type="code" colab={}
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# + [markdown] id="p8h1JWrepEDy" colab_type="text"
# Is represented by a vector of all 0s and a 1 in the Nth position
# + id="MWewerBRpK6P" colab_type="code" colab={}
train_labels[0]
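# What `to_categorical` does can be restated with plain numpy (the helper name
# `one_hot` is just for this sketch): label N becomes a vector of zeros with a
# 1 in position N.

```python
import numpy as np

def one_hot(labels, num_classes):
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0  # one 1 per row, at the label index
    return out

print(one_hot([5, 0, 2], 10)[0])  # 1.0 at index 5, zeros elsewhere
```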
# + [markdown] id="6Yw8w_zjpPFa" colab_type="text"
# We are now ready to train our network, which in Keras is done via a call to the fit method of the network: we "fit" the model to its training data.
# + id="wSVQwRpvpctb" colab_type="code" colab={}
network.fit(train_images, train_labels, epochs=5, batch_size=128)
# + [markdown] id="aTLightupnbr" colab_type="text"
# Two quantities are being displayed during training: the "loss" of the network over the training data, and the accuracy of the network over the training data.
#
# We quickly reach an accuracy of 0.989 (i.e. 98.9%) on the training data. Now let's check that our model performs well on the test set too:
#
# + id="mCsCgoyoptyU" colab_type="code" colab={}
test_loss, test_acc = network.evaluate(test_images, test_labels)
# + id="LDw4XXrDp1p-" colab_type="code" colab={}
print('test_acc:', test_acc)
# + [markdown] id="VI-0sEN6p5rI" colab_type="text"
# Our test set accuracy turns out to be around 98% -- that's quite a bit lower than the training set accuracy (take into account that this dataset is really simple - state of the art methods can reach 99.8%). This gap between training accuracy and test accuracy is an example of "overfitting", the fact that machine learning models tend to perform worse on new data than on their training data. Overfitting will be a central topic in the next session.
# + [markdown] id="6mJXKk-YYfGN" colab_type="text"
# # Training a Convolutional Neural Network (CNN)
# + [markdown] id="Fptuotu4qRQZ" colab_type="text"
# We have trained a network using fully connected layers, but in the theory we have learned that when dealing with images, Convolutional Neural Networks (CNNs) are more convenient, so in this second part of the session, we are going to train a convolutional neural network for multiclass classification.
#
# The following lines show what a basic convnet looks like. It's a stack of Conv2D and MaxPooling2D layers. We'll see in a minute what they do concretely. Importantly, a convnet takes as input tensors of shape (image_height, image_width, image_channels) (not including the batch dimension). In our case, we will configure our convnet to process inputs of size (28, 28, 1), which is the format of MNIST images. We do this via passing the argument input_shape=(28, 28, 1) to our first layer.
# + id="3sZmdCC9TKSh" colab_type="code" colab={}
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# + [markdown] id="SC812Uv9TKSq" colab_type="text"
# Let's display the architecture of our convnet so far:
# + id="4U4hnOhVTKSt" colab_type="code" colab={}
model.summary()
# + [markdown] id="L9L1fACeTKS3" colab_type="text"
# You can see above that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width
# and height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to
# the `Conv2D` layers (e.g. 32 or 64).
#
# The next step would be to feed our last output tensor (of shape `(3, 3, 64)`) into a densely-connected classifier network like those you are
# already familiar with: a stack of `Dense` layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor.
# So first, we will have to flatten our 3D outputs to 1D, and then add a few `Dense` layers on top:
# + id="1qLDPFaTTKS5" colab_type="code" colab={}
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
# + [markdown] id="LDJFZh15TKS-" colab_type="text"
# We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network looks like:
# + id="oaZcrdT3TKTB" colab_type="code" colab={}
model.summary()
# + [markdown] id="DpdXjfxkTKTL" colab_type="text"
# As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers.
#
# Now, let's train our convnet on the MNIST digits.
# + id="zQsnmxyITKTP" colab_type="code" colab={}
from keras.datasets import mnist
from keras.utils import to_categorical
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# + id="GJdlY3TyTKTW" colab_type="code" colab={}
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=5, batch_size=64)
# + [markdown] id="CiE-0hojTKTj" colab_type="text"
# Let's evaluate the model on the test data:
# + id="ias6PIT2TKTl" colab_type="code" colab={}
test_loss, test_acc = model.evaluate(test_images, test_labels)
# + id="1p-uPfC5TKTw" colab_type="code" colab={}
test_acc
# + [markdown] id="TAS92nFrTKT4" colab_type="text"
# While our densely-connected network had a test accuracy of 97.8%, our basic convnet has a test accuracy of 99.3%: we decreased our error rate by 68% (relative).
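# That 68% figure follows from comparing error rates rather than accuracies. A quick sanity check, using the two accuracies quoted above:

```python
# Relative error-rate reduction between the dense network (97.8% test
# accuracy) and the convnet (99.3% test accuracy).
dense_err = 1 - 0.978   # ~2.2% error rate
conv_err = 1 - 0.993    # ~0.7% error rate
relative_reduction = (dense_err - conv_err) / dense_err
print(f"{relative_reduction:.0%}")  # roughly 68%
```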
| Upc/Deep_Learning_Days_1-5/aidl2019_dl_lab3_cnn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Custom Model Training
# Use the Azure Machine Learning data collector to log various metrics
from azureml.logging import get_azureml_logger
logger = get_azureml_logger()
# +
# Use Azure Machine Learning history magic to control history collection
# History is off by default, options are "on", "off", or "show"
# # %azureml history on
# -
# Update the value of the `keras_backend` variable to switch between tensorflow or CNTK keras backends. Valid values: `tensorflow`, `cntk`
import os
keras_backend = 'tensorflow' # valid values: cntk, tensorflow
os.environ['KERAS_BACKEND'] = keras_backend
# +
import sys, os
from keras_training import train
# project_folder = %pwd
print("Project folder: ", project_folder)
output_folder = os.path.join(project_folder,'outputs')
if not os.path.exists(output_folder):
os.makedirs(output_folder)
data_folder = os.path.join(project_folder, 'data')
if not os.path.exists(data_folder):
print('Error. Path ', data_folder, ' does not exist.')
train_folder = os.path.join(data_folder, 'train')
if not os.path.exists(train_folder):
print('Error. Path ', train_folder, ' does not exist.')
validation_folder = os.path.join(data_folder, 'validation')
if not os.path.exists(validation_folder):
print('Error. Path ', validation_folder, ' does not exist.')
(history_tl, model) = train(train_folder, validation_folder, output_folder, epochs=16)
logger.log("Accuracy", history_tl.history['acc'])
logger.log("Loss", history_tl.history['loss'])
logger.log("Validation Accuracy", history_tl.history['val_acc'])
logger.log("Validation Loss", history_tl.history['val_loss'])
| workbench/training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# +
# azureml-core of version 1.0.72 or higher is required
# azureml-dataprep[pandas] of version 1.1.34 or higher is required
from azureml.core import Workspace, Dataset
subscription_id = ''
resource_group = ''
workspace_name = ''
workspace = Workspace(subscription_id, resource_group, workspace_name)
dataset = Dataset.get_by_name(workspace, name='Nba-Dataset')
dataset.to_pandas_dataframe()
# -
df = dataset.to_pandas_dataframe()
df.head()
# ## Create Experiment
# #### Create experiment in Workspace
# +
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
import pandas as pd
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-nba-position'
experiment=Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None) # None (rather than -1) means no truncation in current pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
# -
# #### Compute Target
# +
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "auto-ml"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, using it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# -
# #### Run AutoML Config
# +
from azureml.train.automl import AutoMLConfig
import logging # required for the verbosity setting below
automl_settings = {
"experiment_timeout_hours" : 0.3,
"enable_early_stopping" : True,
"iteration_timeout_minutes": 5,
"max_concurrent_iterations": 4,
"max_cores_per_iteration": -1,
#"n_cross_validations": 2,
"primary_metric": 'AUC_weighted',
"featurization": 'auto',
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
experiment_exit_score = 0.9984,
blocked_models = ['KNN','LinearSVM'],
enable_onnx_compatible_models=True,
training_data = dataset,
label_column_name ='POSITION',
**automl_settings
)
# -
remote_run = experiment.submit(automl_config, show_output = False)
| 01. Using Azure Machine Learning/05. The AzureML SDK/Automl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Importing Libraries
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
import numpy as np
import ppscore as pps
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
# %matplotlib inline
plt.style.use('dark_background')
# ### Reading in data into pandas DataFrame
df = pd.read_csv('melb_data.csv')
# #### Converting the date object into int64
df['Day'] = pd.DatetimeIndex(df['Date']).day
df['Month'] = pd.DatetimeIndex(df['Date']).month
df['Year'] = pd.DatetimeIndex(df['Date']).year
# #### Printing dataframe information
df.info()
# #### Dropping null values and irrelevant features
df.dropna(inplace=True)
df.drop(['Lattitude','Longtitude','Address','Date'], axis=1, inplace=True)
df.head()
# #### Calculating the Predictive Power Score and plotting heatmap
def pps_heatmap(df):
'''
Function for calculating the Predictive Power Score and plotting a heatmap
Args:
Pandas DataFrame or Series object
__________
Returns:
figure
'''
nuff = pps.matrix(df)
nuff1 = nuff[['x', 'y', 'ppscore']].pivot(columns='x', index='y', values='ppscore')
plt.figure(figsize = (15, 8))
ax = sb.heatmap(nuff1, vmin=0, vmax=1, cmap="Oranges_r", linewidths=0.5, annot=True)
ax.set_title("PPS matrix")
ax.set_xlabel("feature")
ax.set_ylabel("target")
return ax
pps_heatmap(df)
# #### Computing pairwise correlation
def correlation_matrix(df):
plt.figure(figsize=(10, 8))
sb.heatmap(df.corr(), annot=True, cmap='seismic_r')
plt.show()
return df.corr()
correlation_matrix(df)
df.columns
# #### Data Preprocessing and Transformation
# +
column_trans = make_column_transformer(
(OneHotEncoder(), ['Suburb','Type','Method', 'SellerG','CouncilArea', 'Regionname']), remainder='passthrough'
)
X = df[['Suburb', 'Rooms', 'Type', 'Method', 'SellerG', 'Distance',
'Postcode', 'Bedroom2', 'Bathroom', 'Car', 'Landsize', 'BuildingArea',
'YearBuilt', 'CouncilArea', 'Regionname', 'Propertycount', 'Day',
'Month', 'Year']]
y = df[['Price']]
x = column_trans.fit_transform(X)
# -
X = StandardScaler(with_mean=False).fit_transform(x)
y = StandardScaler().fit_transform(y)
model = LinearRegression()
model.fit(X, y)
model.score(X, y)
# #### Splitting data into training and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model.fit(x_train, y_train)
# #### Performing predictions
y_pred = model.predict(x_test)
# #### Computing the metrics of the model.
r_squared = r2_score(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r_squared
rmse
# #### Residual Analysis to Evaluate Model Performance
## prediction results ##
y_train_pred = model.predict(x_train)
y_test_pred = model.predict(x_test)
## Printing the rMSE and the R² score ##
print('Root Mean Squared Error: Training Data %.2f' % np.sqrt(mean_squared_error(y_train, y_train_pred)))
print('Root Mean Squared Error: Test Data %.2f' % np.sqrt(mean_squared_error(y_test, y_test_pred)))
print('Coefficient of Determination R²: Training Data %.2f' %(r2_score(y_train, y_train_pred)))
print('Coefficient of Determination R²: Test Data %.2f' %(r2_score(y_test, y_test_pred)))
## Residual Analysis ##
plt.figure(figsize = (10, 8))
plt.scatter(y_train_pred, y_train_pred - y_train, c = 'lightgoldenrodyellow', label = 'Training data')
plt.scatter(y_test_pred, y_test_pred - y_test, c = 'tomato', label = 'Test data')
plt.hlines(y = 0, xmin = -10, xmax = 10, lw = 2, color = 'snow')
plt.xlabel('Predicted Values')
plt.ylabel('Residuals')
plt.legend(loc = 'best')
plt.show()
| cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # (baseline development) Silicon per M2 Calculations
# This journal documents the methods and assumptions made to create a baseline material file for silicon.
# ## Mass per M2
# The mass of silicon contained in a PV module depends on the size, thickness and number of cells in an average module. Since there is a range of sizes and numbers of cells per module, we will attempt a weighted average. These weighted averages are based on ITRPV data, which goes back to 2010, and Fraunhofer data, which goes back to 1990.
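# The weighted-average idea used throughout this journal can be sketched in a few lines (the shares and sizes below are illustrative placeholders, not baseline data):

```python
# Market-share-weighted average cell dimension: each size bin's dimension
# is multiplied by its market share, then the products are summed.
shares = {'156': 0.2, '156.75': 0.5, '166': 0.3}        # illustrative shares (sum to 1)
sizes = {'156': 156.0, '156.75': 156.75, '166': 166.0}  # bin dimensions in mm
avg_cell_mm = sum(shares[k] * sizes[k] for k in shares)
print(avg_cell_mm)  # about 159.4 mm
```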
# +
import numpy as np
import pandas as pd
import os,sys
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22})
plt.rcParams['figure.figsize'] = (12, 8)
density_si = 2.3290 #g/cm^3 from Wikipedia of Silicon (https://en.wikipedia.org/wiki/Silicon)
#it might be better to have mono-Si and multi-Si densities, including dopants,
#but that density is not readily available
# -
# A Fraunhofer report indicates that in 1990, wafers were 400 micron thick, decreasing to the more modern 180 micron thickness by 2008. ITRPVs back to 2010 indicate that 156 mm x 156mm was the standard size wafer through 2015.
# +
#now lets try to do this for 2019 through 2030 all at once with dataframes
#taking the average of the ranges specified in ITRPVs
#first we input the market share data for mcSi and monoSi, read in from csv
cwd = os.getcwd() #grabs current working directory
skipcols=['Source']
mrktshr_cellsize = pd.read_csv(cwd+"/../../../PV_ICE/baselines/SupportingMaterial/MarketShare_CellSize.csv",
index_col='Year', usecols=lambda x: x not in skipcols)
mrktshr_cellsize /=100 #turn whole numbers into decimal percentages
#print(mrktshr_cellsize)
#then split them into two dataframes for later computations
dfmarketshare_mcSi = mrktshr_cellsize.filter(regex = 'mcSi')
dfmarketshare_monoSi = mrktshr_cellsize.filter(regex = 'monoSi')
#adjust column names for matching computation later
dfmarketshare_mcSi.columns = ['share156','share156.75','share157.75','share163','share166up']
dfmarketshare_monoSi.columns = ['share156','share156.75','share157.75','share163','share166up']
print(dfmarketshare_mcSi)
print(dfmarketshare_monoSi)
# -
# Interpolate marketshare for missing years in ITRPV 2020 predictions
# ----
# choosing to interpolate the market share of the different sizes rather than the cell size itself, because market share should be more grounded in technology - i.e. crystals only grow to certain sizes. Additionally, it is more helpful for understanding the impact on silicon usage to keep cell size and market share separate.
# +
#interpolate for missing marketshare data
##the interpolate function returns a view of the df, doesn't modify the df itself
##therefore you have to set the old df, or a new one = the df.interpolate function
dfmarketshare_mcSi=dfmarketshare_mcSi.interpolate(method='linear',axis=0,limit=2,limit_area='inside')
dfmarketshare_monoSi=dfmarketshare_monoSi.interpolate(method='linear',axis=0,limit=2,limit_area='inside')
#fill remaining NaN/outside with 0 (i.e., no market share)
dfmarketshare_mcSi=dfmarketshare_mcSi.fillna(0.0)
dfmarketshare_monoSi=dfmarketshare_monoSi.fillna(0.0)
print(dfmarketshare_mcSi)
print(dfmarketshare_monoSi)
# +
#multiply each marketshare dataframe column by it's respective size
#dfmarketshare_mcSi.share156 *=156 #this is a slow way to multiply each column by its respective size
cellsizes = {'share156':156,
'share156.75':156.75,
'share157.75':157.75,
'share163':163.875,
'share166up':166} #dictionary of the average cell dimension for each market share bin (ITRPV 2020)
#multiply cell dimensions by their market share to get a weighted average
##this is where the column names needed to match
df_scalecell_mcSi = dfmarketshare_mcSi.mul(cellsizes,'columns')
df_scalecell_monoSi = dfmarketshare_monoSi.mul(cellsizes,'columns')
print(df_scalecell_mcSi)
print(df_scalecell_monoSi)
# +
#now add the columns together to get the weighted average cell size for each year for each technology
df_avgcell_mcSi = pd.DataFrame(df_scalecell_mcSi.agg("sum", axis="columns"))
df_avgcell_monoSi = pd.DataFrame(df_scalecell_monoSi.agg("sum", axis="columns")) #agg functions return a series not a dictionary
#print(df_avgcell_mcSi)
#join the two dataframes into single one with two columns
df_avgcell = pd.concat([df_avgcell_monoSi,df_avgcell_mcSi], axis=1) #concatenate on the columns axis
df_avgcell.columns = ['monoSi','mcSi'] #name the columns
print(df_avgcell)
# -
# However, we know that it wasn't 156 mm back to 1995, but exact records of average cell size are lacking. A mention of a company's new manufacturing line producing 125 mm mono-Si in 1993 can be found in IEA PVPS documentation, and Martin Green 2000 calls out 100 mm to 150 mm manufacturing. SinoVoltaics' "History of Wafer Sizes" claims 100 mm x 100 mm was standard through 1996, then 125 mm x 125 mm took over, followed by 156 mm (start date not specified), with 156.75 mm starting in 2016. A 2020 Trina Solar presentation claims 100 mm x 100 mm in 1995, only increasing to 125 mm in 2005, with 156 mm taking over around 2010 (which generally agrees with ITRPVs).
#
# WoodMackenzie reports indicate that from 1981 through 2012, 125 mm was the dominant size; 2013-2016 was 156 mm and 156.75 mm; 2017-2018 had many sizes between 158.75 mm and 161.75 mm; and finally 2019 saw the introduction of 166 mm, 182 mm and 210 mm (Sun, Solar PV Module Technology Report 2020). The report notes that these are "shipment" or manufacturing data, and that installed capacity will lag by 3 to 18 months. Finally, the report predicts that 182 mm or 210 mm will become the dominant shipped technology by 2023. This timeline is ahead of what the ITRPVs predict, though it may align when accounting for the 3 to 18 month lag.
#
# Based on these sources, we will say that cell sizes were 100 mm in 1995, 125 mm in 2000, and 156 mm in 2010 (where ITRPV data starts). These will be step functions instead of linear interpolations, to better represent that most size changes require a replacement of the manufacturing line equipment. In reality, there would be some marketshare blending, which could be added if improved data is found in the future.
# +
#turn zeros back into NaN
df_avgcell.replace(0.0, np.NaN, inplace=True)
#write over 1995 and 2000 data
#df_avgcell['monoSi'][1995]=100.00
#df_avgcell['monoSi'][2000]=125.00
#df_avgcell['mcSi'][1995]=100.00
#df_avgcell['mcSi'][2000]=125.00
#set dates for start and end of cell size changes
start_100 = 1995
end_100 = 1999
start_125 = 2000
end_125 = 2009
df_avgcell.loc[start_100:end_100] = 100.00
df_avgcell.loc[start_125:end_125] = 125.00
print(df_avgcell)
# -
# Now we have an average cell dimension for mc-Si and mono-Si for 1995 through 2030.
# ## Marketshare Data Manipulation
# Next, we apply the marketshare of mc-Si vs mono-Si to get the average cell dimension for the year. Market share of mc-Si vs mono-Si is taken from LBNL "Tracking the Sun" report (warning: this is non-utility scale data i.e. <5MW, and is from 2002-2018), from Mints 2019 SPV report, from ITRPVs, and old papers (Costello & Rappaport 1980, Maycock 2003 & 2005).
#read in a csv that was copied from CE Data google sheet
cwd = os.getcwd() #grabs current working directory
techmarketshare = pd.read_csv(cwd+"/../../../PV_ICE/baselines/SupportingMaterial/ModuleType_MarketShare.csv",index_col='Year')
#this file path navigates from current working directory back up 2 folders, and over to the csv
techmarketshare /=100 #turn whole numbers into decimal percentages
print(techmarketshare)
# #### create a harmonization of annual market share, and interpolate
# +
# first, create a single value of tech market share in each year or NaN
#split mcSi and monoSi
mcSi_cols = techmarketshare.filter(regex = 'mcSi')
monoSi_cols = techmarketshare.filter(regex = 'mono')
#show where the data is coming from graphically
#plt.plot(mcSi_cols,'.')
monoSikeys = monoSi_cols.columns
labelnames = [e[7:] for e in monoSikeys] #e is a random variable to iterate through the elements of the array
mcSikeys = mcSi_cols.columns
labelnames_mcSi = [e[5:] for e in mcSikeys]
#print(monoSikeys)
# +
#aggregate all the columns of mono or mcSi into one averaged market share
est_mktshr_mcSi = pd.DataFrame(mcSi_cols.agg("mean", axis="columns"))
#print(est_mktshr_mcSi)
est_mktshr_monoSi = pd.DataFrame(monoSi_cols.agg("mean", axis="columns"))
#print(est_mktshr_monoSi)
#Join the monoSi and mcSi back together as a dataframe
est_mrktshrs = pd.concat([est_mktshr_monoSi,est_mktshr_mcSi], axis=1) #concatenate on the columns axis
est_mrktshrs.columns = ['monoSi','mcSi'] #name the columns
#plot individuals AND average aggregate
plt.rcParams.update({'font.size': 22})
plt.rcParams['figure.figsize'] = (15, 6)
plt.plot(monoSi_cols.index,monoSi_cols[monoSikeys[0]]*100,lw=3,marker='o', ms=10,label=labelnames[0])
plt.plot(monoSi_cols.index,monoSi_cols[monoSikeys[1]]*100,lw=3,marker='D',ms=10,label=labelnames[1])
plt.plot(monoSi_cols.index,monoSi_cols[monoSikeys[2]]*100,lw=3,marker='P',ms=12,label=labelnames[2])
plt.plot(monoSi_cols.index,monoSi_cols[monoSikeys[3]]*100,lw=3,marker='^',ms=10,label=labelnames[3])
plt.plot(monoSi_cols.index,monoSi_cols[monoSikeys[4]]*100,lw=3,marker='v',ms=10,label=labelnames[4])
plt.plot(est_mrktshrs.index,est_mrktshrs['monoSi']*100,'--k', marker='s',ms=5, label='PV ICE')
plt.legend(bbox_to_anchor=(0, -0.2, 1, 0), loc=2, mode="expand", ncol=2)
plt.xlim([1979,2032])
#plt.title('monoSi Market Share by Source')
plt.xlabel('Year')
plt.ylabel('Market Share [%]')
# -
plt.plot(mcSi_cols.index,mcSi_cols[mcSikeys[0]],lw=2,marker='o',label=labelnames_mcSi[0])
plt.plot(mcSi_cols.index,mcSi_cols[mcSikeys[1]],lw=2,marker='D',label=labelnames_mcSi[1])
plt.plot(mcSi_cols.index,mcSi_cols[mcSikeys[2]],lw=2,marker='P',label=labelnames_mcSi[2])
plt.plot(mcSi_cols.index,mcSi_cols[mcSikeys[3]],lw=2,marker='^',label=labelnames_mcSi[3])
plt.plot(est_mrktshrs.index,est_mrktshrs['mcSi'],'--k', marker='s',label='avgd marketshare of mcSi')
plt.legend()
plt.title('mcSi Market Share by Source')
plt.xlabel('Year')
plt.ylabel('Market Share (%)')
# ### Interpolate and Normalize
# +
#Interpolate for marketshare NaN values
est_mrktshrs['mcSi'][1980]=0.0
est_mrktshrs = est_mrktshrs.interpolate(method='linear',axis=0,limit_area='inside')
#sanity check of market share data - does it add up?
est_mrktshrs['Total'] = est_mrktshrs.monoSi+est_mrktshrs.mcSi
plt.plot(est_mrktshrs['monoSi'], label='MonoSi')
plt.plot(est_mrktshrs['mcSi'], label='mcSi')
plt.plot(est_mrktshrs['Total'],label='Total')
plt.legend()
plt.title('Annual Technology Market Share')
plt.xlabel('Year')
plt.ylabel('Market Share (%)')
#print(est_mrktshrs)
#Warning: 2002, 10% of the silicon marketshare was "other", including amorphous, etc.
#del est_mrktshrs['Total']
# +
#normalize all marketshares each year to make sure everything adds to 100%
est_mrktshrs['Scale'] = 1/est_mrktshrs['Total']
est_mrktshrs['monoSi_scaled']= est_mrktshrs['Scale']*est_mrktshrs['monoSi']
est_mrktshrs['mcSi_scaled']= est_mrktshrs['Scale']*est_mrktshrs['mcSi']
scaled_marketshares = est_mrktshrs[['monoSi_scaled','mcSi_scaled']]
scaled_marketshares.columns = ['monoSi','mcSi']
scaled_marketshares.to_csv(cwd+'/../../../PV_ICE/baselines/SupportingMaterial/output_scaledmrktshr_mcSi_mono.csv', index=True)
scaled_marketshares['Total'] = scaled_marketshares['monoSi']+scaled_marketshares['mcSi']
#print(scaled_marketshares)
plt.plot(scaled_marketshares['monoSi'],label='MonoSi')
plt.plot(scaled_marketshares['mcSi'],label='mcSi')
plt.plot(scaled_marketshares['Total'],label='Total')
plt.legend()
plt.title('Annual Technology Market Share - normalized')
plt.xlabel('Year')
plt.ylabel('Market Share (%)')
# -
# Combining Cell Size shares and Market Share Data
# ----------
# Now we have separate mono and mcSi dataframes, which contain the average cell size, based on the market share of the cell size bin as enumerated in ITRPV 2020. The next step is to combine these technology specific (mono vs mc) based on the module technology market share.
# +
#now combine technology market share of mcSi and monoSi with their respective average cell dimensions
#which have already been cell size marketshare weighted
#going to ignore "otherSi" because for the most part less than 2%, except 2002
#trim the techmarketshare data to 1995 through 2030
est_mrktshrs_sub = scaled_marketshares.loc[est_mrktshrs.index>=1995] #could also use a filter function instead
#multiply the share of each tech by the weighted average cell size
mrkt_wtd_cells = est_mrktshrs_sub.mul(df_avgcell,'columns')
#sum across monoSi and mcSi for the total market average cell size (x and y)
market_average_cell_dims = pd.DataFrame(mrkt_wtd_cells.agg("sum", axis="columns"))
market_average_cell_dims.columns = ['avg_cell']
#print(market_average_cell_dims)
plt.plot(market_average_cell_dims, label='annual average cell dimensions in mm')
#plt.legend()
plt.title('Average cell dimensions each year')
plt.xlabel('Year')
plt.ylabel('Average cell dimension (mm)')
# -
# Area of a cell
# -------
# The above weighted averages are 1-axis dimensions of the square cells in a module. Here we create a dataframe of the average area of a cell for each year.
df_cellarea_mm2 = market_average_cell_dims.pow(2,axis='columns') #still in mm^2/cell
#you cannot multiply the df.columnname by itself and get a dataframe back, but df.pow returns a dataframe
df_cellarea_mm2.columns = ['avg_cell']
df_cellarea_m2 = df_cellarea_mm2*0.000001 #mm^2 to m^2
df_cellarea_cm2 = df_cellarea_mm2*0.01 #mm^2 to cm^2
#print(df_cellarea_m2)
# ## Calculate cells/m^2
# While there is technology information of # of cells per module (ex: 60, 72), we are looking for the amount of silicon per m^2 of module. Therefore, it will be easier to figure out how many cells at their respective sizes fit into a m^2 rather than scaling up to the module, only to divide by module size. Additionally, the analysis excludes any spacing efficiency factor (i.e. how close the cells are together), as this type of information is not readily available. Therefore, the assumption is the cells are placed close together, leaving no space, which should slightly overestimate the silicon per m^2.
#
# This # cells/ m^2 of module will be used as a factor in the final calculation of g Si/m^2 module.
# calculate # cells/m^2 at this point, rather than using the # cells per module factor
df_cellperm2 = 1/df_cellarea_m2
#print(df_cellperm2)
df_cellperm2.to_csv(cwd+'/../../../PV_ICE/baselines/SupportingMaterial/output_cell_per_m2.csv', index=True)
# g of Si per cell
# ---------
# In addition to the number of cells that fit into 1m^2 of module, we need the weight of silicon per cell. First, the weighted average of wafer thickness was calculated for each year based on wafer trends and module type market share in the CE Data google spreadsheet. This data is read in here.
#read in a csv that was copied from CE Data google sheet where the marketshare weighting was done
cwd = os.getcwd() #grabs current working directory
wafer_thickness = pd.read_csv(cwd+"/../../../PV_ICE/baselines/SupportingMaterial/Wafer_thickness.csv",
index_col='Year', usecols=lambda x: x not in skipcols)
#this file path navigates from current working directory back up 2 folders, and over to the csv
#convert micron to cm
wafer_thick_cm = wafer_thickness/10000 # microns in a cm
#print(wafer_thick_cm)
#There are missing data, so we will interpolate linearly for missing years
wafer_thick_cm = wafer_thick_cm.interpolate(method='linear',axis=0)
print(wafer_thick_cm)
plt.plot(wafer_thick_cm, label='Wafer Thickness (cm)')
plt.title('Wafer Thickness (cm)')
# Now multiply the thickness of the cell by the area of the cell to get a cell volume for each year
#First, remove 1990 through 1994, to match the size of the cell area df
wafer_thick_cm_1995 = wafer_thick_cm.loc[wafer_thick_cm.index>=1995]
#rename the columns for df.mul operation
wafer_thick_cm_1995.columns = ['avg_cell']
df_cell_volume = df_cellarea_cm2.mul(wafer_thick_cm_1995,'columns')
df_cell_volume.columns = ['cell_volume_cm3']
#print(df_cell_volume)
#plt.plot(df_cell_volume, label='average cell volume (cm^3)')
#plt.legend()
# Now we have the volume of the cell in cm^3 for each year, we can bring in the density of Silicon to get a mass of Silicon per cell for each year.
df_Simass_percell = df_cell_volume.mul(density_si)
df_Simass_percell.columns = ['Si_gpercell']
print(df_Simass_percell)
df_Simass_percell.to_csv(cwd+'/../../../PV_ICE/baselines/SupportingMaterial/output_si_g_per_cell.csv', index=True)
plt.plot(df_Simass_percell, label='Mass Si per cell (g/cell)')
#plt.legend()
plt.title('Mass Silicon per cell annually')
plt.xlabel('Year')
plt.ylabel('Silicon (grams/cell)')
# ## g Si per m^2 of module
# Now take the above mass of silicon per cell and multiply it by the factor of number of cells per m^2 of module
# +
df_Simass_percell.columns = df_cellperm2.columns = ['Si_g'] #rename to a common name
df_Simass_perm2 = df_Simass_percell.mul(df_cellperm2, 'columns') #multiply
#print(df_Simass_perm2)
#print out to a csv
df_Simass_perm2.to_csv(cwd+'/../../../PV_ICE/baselines/SupportingMaterial/output_si_g_per_m2.csv', index=True)
#make a pretty plot
plt.plot(df_Simass_perm2, label='Silicon g/m^2 of module')
plt.legend()
plt.title('Mass of Si per module area')
plt.xlabel('Year')
plt.ylabel('Silicon (grams/m^2)')
# -
# For post-2030, the mass per m^2 of silicon was held constant through 2050 due to the uncertainty about future technology trends. For example, there are at least 3 different cell sizes vying to become the next mainstream technology, the move to all-bifacial might affect silicon use differently, and half-cut and smaller cell technologies will also have an effect. Therefore, we have held it constant from 2030 onward, and this assumption can be modified by the user.
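# A minimal sketch of that hold-constant assumption (toy numbers, not the baseline values), using reindex plus forward-fill:

```python
import pandas as pd

# Toy stand-in for the g/m^2 series: extend the index out to 2050 and
# forward-fill so the last modelled value (2030) is held constant.
toy = pd.DataFrame({'Si_g': [700.0, 650.0, 600.0]}, index=[2028, 2029, 2030])
extended = toy.reindex(range(2028, 2051)).ffill()
print(extended.loc[2050, 'Si_g'])  # 600.0
```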
# +
#understanding what influences the changes in Si mass/module m^2
plt.plot(df_Simass_perm2, label='Silicon g/m^2')
plt.plot(wafer_thick_cm, label='Wafer Thickness (cm)')
plt.plot(market_average_cell_dims, label='annual avg cell dims in mm')
plt.plot(scaled_marketshares['monoSi'],label='MonoSi Marketshare')
plt.legend()
plt.title('Mass of Si per module area by constituent parts')
plt.xlim([1995,2030])
plt.xlabel('Year')
# -
# # Bifacial Trend - 50% by 2030
# Along with glass-glass packaging and reduced aluminum framing for bifacial modules, the silicon cell requirements for bifaciality differ from standard mono-Si monofacial cells. First, we'll calculate silicon mass per m^2 for bifacial modules, then we will market share weight this with other silicon cell technology, such that bifacial is 50% of the cell/module market by 2030, then hold constant through 2050.
| docs/tutorials/baseline development documentation/(baseline development) Silicon per m2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.6 (''env'': venv)'
# name: env
# ---
from pathlib import Path
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm
# # Define autocorrelation function
def auto_corr(series, length):
series = series - np.mean(series)
correlation = np.correlate(series, series, mode="full")
middle_idx = int((len(correlation)-1)/2)
correlation = correlation[middle_idx:]
correlation = correlation/np.dot(series,series)
l = len(correlation)
if l > length:
correlation = correlation[:length]
if l < length:
correlation = np.concatenate([correlation, np.zeros((length-l))])
return correlation
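# Two properties worth checking: the lag-0 autocorrelation of any non-constant series is exactly 1, and series shorter than `length` come back zero-padded. A quick sanity check (the function definition is repeated so the snippet runs standalone):

```python
import numpy as np

def auto_corr(series, length):
    # Same definition as above: normalized autocorrelation, truncated or
    # zero-padded to `length` lags.
    series = series - np.mean(series)
    correlation = np.correlate(series, series, mode="full")
    middle_idx = int((len(correlation) - 1) / 2)
    correlation = correlation[middle_idx:]
    correlation = correlation / np.dot(series, series)
    l = len(correlation)
    if l > length:
        correlation = correlation[:length]
    if l < length:
        correlation = np.concatenate([correlation, np.zeros(length - l)])
    return correlation

rng = np.random.default_rng(0)
corr = auto_corr(rng.normal(size=10), 25)
print(len(corr), round(corr[0], 6))  # 25 1.0
```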
# # Read trace data, compute autocorrelation
# +
root_dir = Path(os.path.abspath('')).parents[1]
experiment_dir = os.path.join(root_dir, "axon_geometry")
brains = ["brain1", "brain2"]
measures = ["curvature", "torsion"]
max_id = 300
corr_length=25
d = []
for brain in brains:
data_dir = os.path.join(experiment_dir, "data", brain)
segments_swc_dir = os.path.join(data_dir, "segments_swc")
trace_data_dir = os.path.join(data_dir, "trace_data", "1", "no_dropout")
print(f"Directory where swcs reside: {segments_swc_dir}")
for i in tqdm(np.arange(0, max_id)):
i = int(i)
trace_data_path = os.path.join(trace_data_dir, f"{i}.npy")
if os.path.exists(trace_data_path) is True:
trace_data = np.load(trace_data_path, allow_pickle=True)
for node in trace_data:
for measure in measures:
_measure = node[measure]
if np.var(_measure) > 0:
autocorr = auto_corr(_measure, corr_length)
for distance, value in zip(np.arange(corr_length), autocorr):
d.append({"brain": brain, "measure": measure, "distance": distance, "value": value})
df = pd.DataFrame(d)
# -
# # Plot autocorrelation as a function of lag
# +
sns.set_theme()
sns.set_context("paper")
g = sns.FacetGrid(df, col="brain", hue="measure")
g.map(sns.lineplot, "distance", "value", err_style="band", ci="sd")
g.set_axis_labels(r"Lag ($\mu m$)", "Autocorrelation")
g.add_legend(title="")
axes = g.axes.flatten()
axes[0].set_title("Brain 1")
axes[1].set_title("Brain 2")
g.savefig(os.path.join(experiment_dir, "figures", "autocorrelation.eps"))
g.savefig(os.path.join(experiment_dir, "figures", "autocorrelation.jpg"))
# -
# +
from scipy import stats
for measurement in ["curvature", "torsion"]:
for lag in range(1,25):
data = df[(df["distance"] == lag) & (df["measure"] == measurement) & (df["brain"] == "brain1")]["value"].to_numpy()
_, p = stats.ttest_1samp(data,0.3,alternative="greater")
if p < 0.05:
print(f"Signicant t-test at lag: {lag} in brain 1 and measurement: {measurement}, p: {p}")
else:
break
for measurement in ["curvature", "torsion"]:
for lag in range(1,25):
data = df[(df["distance"] == lag) & (df["measure"] == measurement) & (df["brain"] == "brain2")]["value"].to_numpy()
_, p = stats.ttest_1samp(data,0.3,alternative="greater")
if p < 0.05:
print(f"Signicant t-test at lag: {lag} in brain 2 and measurement: {measurement}, p: {p}")
else:
break
# -
| experiments/axon_geometry/notebooks/autocorrelation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # First Prototype Without NLP
# ## Import relevant libraries and load data
import numpy as np
import pandas as pd
import random
from haversine import haversine, Unit # NOTE: Needs install: pip install haversine
from IPython import display
# Read in data
resourcesV3_data = pd.read_excel("../data/interim/MasterSyntheticDatabase_v3.xlsx", header=0, usecols="B:O")
# Show first 5 rows
resourcesV3_data.head()
# Drop irrelevant columns for this task
resources_data = resourcesV3_data.drop(["Total Hectares", "Production Hectares", "Volume Kilos", "Total Workers"], axis=1)
resources_data.head()
# ## Denote the borders of countries and generate a point for a hub
# +
# Define a square denoted by an x range and a y range for each country
# Show image of defining the rectangle boundaries
# display.Image("../references/RainforestMapCountryBoundaries.png")
def getBoundaries(country):
if country == "Brazil":
        return [-70.084311, -46.969077], [-11.214025, 2.352550]
elif country == "Colombia":
return [-75.577474, -70.084311], [-2.891532, 4.047229]
elif country == "Ecuador":
return [-79.137045, -75.577474], [-3.725094, 0.886501]
elif country == "Peru":
return [-79.137045, -70.172201], [-12.504216, -3.725094]
elif country == "Suriname":
return [-57.911460, -54.258506], [2.352550, 5.842207]
else:
raise Exception("Country not defined")
# +
def randomHubLocation(country):
# Seed random number generator
random.seed(1)
# For a hub in Colombia, generate random coordinates
bx, by = getBoundaries(country)
hub_coord = [random.uniform(bx[0], bx[1]), random.uniform(by[0], by[1])]
return hub_coord
hub_coord = randomHubLocation("Colombia")
# +
# Distance between crop and hub
def distanceCropsHub(hub_coord, df):
    ''' Calculate the Haversine distance between each crop and the hub. '''
for i, crop in df.iterrows():
x_crop_str, y_crop_str = crop["Location"].split(',', 1)
crop_coord = [float(x_crop_str), float(y_crop_str)]
        distance = haversine(crop_coord, hub_coord) / 10  # in km; divided by 10 to make distances closer to what they would be in the rainforest
df.at[i, "Haversine Distance"] = distance
distanceCropsHub(hub_coord, resources_data)
resources_data.to_excel("../data/interim/MasterSyntheticDatabase_v4.xlsx")
resources_data.head()
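# For reference, the great-circle formula behind the `haversine` call above can be sketched from scratch (the `haversine` package expects `(latitude, longitude)` pairs in degrees; the Earth radius below is the usual mean value):

```python
import math

def haversine_km(p1, p2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Paris to London is roughly 344 km along the great circle
d = haversine_km((48.8566, 2.3522), (51.5074, -0.1278))
print(round(d, 1))
```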
# +
#----------NLP INPUT
def recommendationAlgorithm(type_given, val):
# Initiating variables
n_workers, money_needed = 0, 0
conversion_USD_Real = 5.16 # As of 27/02/22
answers = []
# Depending on the input
if type_given == "Workers":
# For the amount of workers available, find maximum amount of profit that can be made
# from nearby resources
n_workers = val
for i, crop in resources_data.iterrows():
resources_data.at[i, "Tot Money"] = np.nan # reset values in database
tot_volume = min(crop["Volume Tonnes"], n_workers/crop["Worker Per Tonne"]) # find the minimum value for volume of produce
potential_money = crop["Price With Tax And Subsidies Per Tonne"] * tot_volume * conversion_USD_Real/12
# Update columns
resources_data.at[i, "Tot Volume For Team"] = tot_volume
resources_data.at[i, "Tot Money"] = potential_money
# Find best crop that are close to the Hub
closest = resources_data.nsmallest(10, 'Haversine Distance')
best = closest.nlargest(3, 'Tot Money')
best.head()
        # Array: name of crop, location and the monthly income in Brazilian Real for producing it given the number of people in the team
for i, crop in best.iterrows():
answers.append([crop["Crop"], str(np.round(float(crop["Tot Money"]), -3))])
elif type_given == "Money":
# For the amount of money needed to be made in a month, find minimum amount of workers needed
# to profit from nearby resources
money_needed = val
for i, crop in resources_data.iterrows():
resources_data.at[i, "Workers Needed"] = np.nan # reset values in database
# Rearrange previous equation to find workers needed to make that much money in a month
workers_needed = money_needed * 12 * crop["Worker Per Tonne"] / (crop["Price With Tax And Subsidies Per Tonne"] * conversion_USD_Real)
if workers_needed/crop["Worker Per Tonne"] < crop["Volume Tonnes"]:
# Update column
resources_data.at[i, "Workers Needed"] = workers_needed
closest = resources_data.nsmallest(10, 'Haversine Distance')
best = closest.nsmallest(3, 'Workers Needed')
# Array: name of crop, the number of people needed to make the money requested in a month
for i, crop in best.iterrows():
answers.append([crop["Crop"], str(np.ceil(float(crop["Workers Needed"])))])
return answers # returns array with 3 recommendations
#----------NLP OUTPUT
#print(recommendationAlgorithm("Workers", 7)) # test like in the spoken demo
| notebooks/4.0-cgb-firstprototypewithoutnlp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.1 64-bit
# metadata:
# interpreter:
# hash: 5f51ddfaf8b2bda7e9920d2fe19bd5c12735de5dbebc79f1545cb9fd80f99b2c
# name: python3
# ---
# Object-Oriented Programming
# (OOP)
# Object - Class
a = 5
type(a)
# A pen (Kalem): its color, type, shape, length;
# writing, pictures, drawing
# To use an object,
# first create a template
# for that object.
# Using this template, we can create more than one object.
# This template is called a class.
# A class is a plan that contains the object's attributes.
# Objects carry data inside them
# Creating a class: encapsulation
#
# instantiation - creating an instance of a class (creating an object).
#
# kalem1 = Kalem()
#
# instance - an object created from a class
# constructor - the initializer method - runs when the object is created.
from ornek_sinif import Kalem
kalem1 = Kalem()
dir(kalem1)
# Variable definitions:
# 1- Class variable: the same for every object created from the class.
# 2- Instance variable: different for each object created from the class.
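# The `ornek_sinif` module is not shown in the notebook; below is a purely hypothetical sketch of what its `Kalem` ("pen") class might look like, illustrating the class-variable vs. instance-variable distinction described above.

```python
# Hypothetical sketch of the imported Kalem ("pen") class -- the real
# ornek_sinif module is not shown in the notebook.
class Kalem:
    # class variable: the same for every object created from the class
    tur = "ballpoint"

    def __init__(self, renk="blue", boy_cm=14.0):
        # instance variables: different for each object
        self.renk = renk      # color
        self.boy_cm = boy_cm  # length in cm

kalem1 = Kalem()
kalem2 = Kalem(renk="red")
print(kalem1.renk, kalem2.renk, kalem1.tur, kalem2.tur)
```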
| HAFTA-6/DERS-16/OOP/oop1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: finanls
# language: python
# name: finanls
# ---
# # Process news information
import requests
import re
import pprint
# ## 1. Get web page
company = "阿里巴巴"
url = f"https://search.sina.com.cn/?country=usstock&q={company}&name={company}&t=&c=news&k={company}&range=all&col=1_7&from=channel&ie=utf-8"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/92.0.4515.159 Safari/537.36 "
}
page = requests.get(url, headers=headers).text
# ## 2. Get source and date
p_info = '<span class="fgray_time">(.*?)</span>'
info = re.findall(p_info, page, re.S)
info
# ## 3. Get links and titles
p_href = '<h2><a href="(.*?)" target="_blank">'
hrefs = re.findall(p_href, page, re.S)
p_title = '<h2><a href=.*?target="_blank">(.*?)</a>'
titles = re.findall(p_title, page, re.S)
hrefs, titles
# ## 4. Clean the data
for i, title in enumerate(titles):
    # strip surrounding whitespace and remove any HTML tags in one pass
    titles[i] = re.sub('<.*?>', '', title).strip()
# +
source, date = [], []
for i, data in enumerate(info):
source.append(data.split(" ")[0])
temp = data.split(" ")[1:]
date_item = ""
for string in temp:
date_item += string
date_item += " "
date.append(date_item.strip())
source, date
# -
# ## 5. Show the information
content = []
for i in range(len(titles)):
temp = f"{titles[i]} ({date[i]}, {source[i]})\n{hrefs[i]}\n"
content.append(temp)
print(temp)
# # Multiple company news
# +
def news(company: str, filename="log.txt"):
append_to_file(company, filename)
url = f"https://search.sina.com.cn/?country=usstock&q={company}&name={company}&t=&c=news&k={company}&range=all&col=1_7&from=channel&ie=utf-8"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/92.0.4515.159 Safari/537.36 "
}
page = requests.get(url, headers=headers).text
p_info = '<span class="fgray_time">(.*?)</span>'
info = re.findall(p_info, page, re.S)
source, date = [], []
for i, data in enumerate(info):
source.append(data.split(" ")[0])
temp = data.split(" ")[1:]
date_item = ""
for string in temp:
date_item += string
date_item += " "
date.append(date_item.strip())
p_href = '<h2><a href="(.*?)" target="_blank">'
hrefs = re.findall(p_href, page, re.S)
p_title = '<h2><a href=.*?target="_blank">(.*?)</a>'
titles = re.findall(p_title, page, re.S)
    for i, title in enumerate(titles):
        # strip surrounding whitespace and remove any HTML tags in one pass
        titles[i] = re.sub('<.*?>', '', title).strip()
content = []
for i in range(len(titles)):
temp = f"\t{titles[i]} ({date[i]}, {source[i]})\n\t{hrefs[i]}"
append_to_file(temp, filename)
content.append(temp)
return content
def append_to_file(content: str, filename: str="log.txt"):
with open(filename, mode="a+", encoding="utf-8") as file:
file.write(content)
file.write("\n")
# +
import pprint
import time
companys = ["腾讯", "阿里巴巴", "华为"]
# while True:
for company in companys:
print(f"{company}: ")
try:
for item in news(company, "SinaFin.txt"):
print(item)
    except Exception:
        print(f"{company}: failed to fetch news!")
# time.sleep(10)
# -
| SinaFin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.DataFrame(["male", "MALE", "mle", "I am male", "femail", "female", "enby"], columns = ["gender"])
df
gender_list = pd.read_csv('/Users/mpacer/jupyter/gendercoder/inst/extdata/GenderDictionary_List.csv')
gender_typos = pd.read_csv('/Users/mpacer/jupyter/gendercoder/inst/extdata/GenderDictionary.csv')
df['Typos'] = df.gender.apply(lambda x: x.lower())
df.join(gender_typos.set_index("Typos"), on="Typos", lsuffix="anything", rsuffix="hi")
# +
gender_list = pd.read_csv('/Users/mpacer/jupyter/gendercoder/inst/extdata/GenderDictionary_List.csv')
gender_typos = pd.read_csv('/Users/mpacer/jupyter/gendercoder/inst/extdata/GenderDictionary.csv')
def genderRecoder(in_df, mode="narrow"):
    temp_df = in_df.copy()
    temp_df['Typos'] = in_df.gender.apply(lambda x: x.lower())
    if mode == "narrow":
        out_df = in_df.copy()
        out_df['encoded'] = temp_df.join(gender_typos.set_index("Typos"), on="Typos", lsuffix="anything", rsuffix="hi")["ThreeOptions"]
        return out_df
# -
df
genderRecoder(df)
| hearts_full_of_pride/Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python385jvsc74a57bd031f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6
# ---
# +
# Step 0
# imports for fast api query parameters tutorial
from fastapi import FastAPI
from typing import Optional
import uvicorn
import nest_asyncio
import requests
nest_asyncio.apply()
app = FastAPI()
# +
# Step: 1.1 initialize a db as list of dict,
# 1.2 Decorate the instance of the class app with function get("/items/")
# 1.3 define the async function read_item with skip and limit parameters
# that return the items in the db based on what was passed.
fake_items_db = [{"item_name": "Foo"}, {"item_name": "Bar"}, {"item_name": "Baz"}]
@app.get("/items/")
async def read_item(skip: int = 0, limit: int = 10):
return fake_items_db[skip : skip + limit]
# -
# now initialize a localhost server
uvicorn.run(app, port=8001, host='localhost')
# get a response from the server using the url
response = requests.get('http://127.0.0.1:8001/items/?skip=0&limit=10')  # port must match uvicorn.run above
print(response)
print(response.url)
| test_nb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["module-nm"]
# (nm_linear_algebra_intro)=
# # Linear algebra introduction
#
# ## Linear (matrix) systems
#
# We can re-write a system of simultaneous (linear) equations in a matrix form. For example, let's consider:
#
# \\[\begin{eqnarray*}
# 2x + 3y &=& 7 \\\\\\
# x - 4y &=& 3
# \end{eqnarray*}\\]
#
# It can be rewritten in a matrix form:
#
# \\[
# \left(
# \begin{array}{rr}
# 2 & 3 \\\\\\
# 1 & -4 \\\\\\
# \end{array}
# \right)\left(
# \begin{array}{c}
# x \\\\\\
# y \\\\\\
# \end{array}
# \right) = \left(
# \begin{array}{c}
# 7 \\\\\\
# 3 \\\\\\
# \end{array}
# \right)
# \\]
#
# We understand that this system always has the form of
#
# \\[
# \left(
# \begin{array}{rr}
# a & b \\\\\\
# c & d \\\\\\
# \end{array}
# \right)\left(
# \begin{array}{c}
# x \\\\\\
# y \\\\\\
# \end{array}
# \right) = \left(
# \begin{array}{c}
# e \\\\\\
# f \\\\\\
# \end{array}
# \right),
# \\]
#
# where \\(a,b,c,d,e,f\\) are arbitrary constants.
#
# Let's call the matrix which stores the coefficients of our system of linear equations to be \\(A\\)
#
# \\[
# A=
# \left(
# \begin{array}{rr}
# a & b \\\\\\
# c & d \\\\\\
# \end{array}
# \right)
# \\]
#
# and the matrix that contains our variables to be \\(\mathbf{x}\\)
#
# \\[
# \mathbf{x}=
# \left(
# \begin{array}{c}
# x \\\\\\
# y \\\\\\
# \end{array}
# \right).
# \\]
#
# The matrix that contains the results of our system of linear equation will be called \\(\mathbf{b}\\)
#
# \\[
# \mathbf{b}=
# \left(
# \begin{array}{c}
# e \\\\\\
# f \\\\\\
# \end{array}
# \right).
# \\]
#
# This system of equations can be represented as the matrix equation
# \\[A\pmb{x}=\pmb{b}.\\]
#
# More generally, consider an arbitrary system of \\(n\\) linear equations for \\(n\\) unknowns
#
# \\[
# \begin{eqnarray*}
# A_{11}x_1 + A_{12}x_2 + \dots + A_{1n}x_n &=& b_1 \\\\\\
# A_{21}x_1 + A_{22}x_2 + \dots + A_{2n}x_n &=& b_2 \\\\\\
# \vdots &=& \vdots \\\\\\
# A_{n1}x_1 + A_{n2}x_2 + \dots + A_{nn}x_n &=& b_n
# \end{eqnarray*}
# \\]
#
# where \\(A_{ij}\\) are the constant coefficients of the linear system, \\(x_j\\) are the unknown variables, and \\(b_i\\)
# are the terms on the right hand side (RHS). Here the index \\(i\\) is referring to the equation number
# (the row in the matrix below), with the index \\(j\\) referring to the component of the unknown
# vector \\(\pmb{x}\\) (the column of the matrix).
#
# This system of equations can be represented as the matrix equation \\(A\pmb{x}=\pmb{b}\\):
#
# \\[
# \left(
# \begin{array}{cccc}
# A_{11} & A_{12} & \dots & A_{1n} \\\\\\
# A_{21} & A_{22} & \dots & A_{2n} \\\\\\
# \vdots & \vdots & \ddots & \vdots \\\\\\
# A_{n1} & A_{n2} & \dots & A_{nn} \\\\\\
# \end{array}
# \right)\left(
# \begin{array}{c}
# x_1 \\\\\\
# x_2 \\\\\\
# \vdots \\\\\\
# x_n \\\\\\
# \end{array}
# \right) = \left(
# \begin{array}{c}
# b_1 \\\\\\
# b_2 \\\\\\
# \vdots \\\\\\
# b_n \\\\\\
# \end{array}
# \right)
# \\]
#
#
# We can easily solve the above \\(2 \times 2\\) example of two equations and two unknowns using substitution (e.g. multiply the second equation by 2 and subtract the first equation from the resulting equation to eliminate \\(x\\) and hence allowing us to find \\(y\\), then we could compute \\(x\\) from the first equation). We find:
#
# \\[ x=\frac{37}{11}, \quad y=\frac{1}{11}.\\]
#
# ```{margin} Note
# Cases where the matrix is non-square, i.e. of shape \\(m \times n\\) where \\(m\ne n\\) correspond to the over- or under-determined systems where you have more or less equations than unknowns.
# ```
#
# Example systems of \\(3\times 3\\) are a little more complicated but doable. In this notebook, we consider the case of \\(n\times n\\), where \\(n\\) could be billions (e.g. in AI or machine learning).
#
# ## Matrices in Python
#
# We can use `numpy.arrays` to store matrices. The convention for one-dimensional vectors is to call them column vectors and have shape \\(n \times 1\\). We can extend to higher dimensions through the introduction of matrices as two-dimensional arrays (more generally vectors and matrices are just two examples of {ref}`tensors <tensor_review>`).
#
# We use subscript indices to identify each component of the array or matrix, i.e. we can identify each component of the vector \\(\pmb{v}\\) by \\(v_i\\), and each component of the matrix \\(A\\) by \\(A_{ij}\\).
#
# The dimension or shape of a vector/matrix is the number of rows and columns it possesses, i.e. \\(n \times 1\\) and \\(m \times n\\) for the examples above. Here is an example of how we can extend our use of the `numpy.array` to two dimensions in order to define a matrix \\(A\\).
# +
import numpy as np
A = np.array([[10., 2., 1.],
[6., 5., 4.],
[1., 4., 7.]])
print(A)
# -
# Check total size of the array storing matrix \\(A\\). It will be \\(3\times3=9\\):
print(np.size(A))
# Check the number of dimensions of matrix \\(A\\):
print(np.ndim(A))
# Check the shape of the matrix \\(A\\):
print(np.shape(A))
# Transpose matrix \\(A\\):
print(A.T)
# Get the inverse of matrix \\(A\\):
# +
import scipy.linalg as sl
print(sl.inv(A))
# -
# Get the determinant of matrix \\(A\\):
print(sl.det(A))
# ````{margin}
# Normal `*` operator does operations element-wise, which we do not want!!!
# ```python
#
# print(A*sl.inv(A))
# ```
# [[ 1.42857143 -0.15037594 0.02255639]
# [-1.71428571 2.59398496 -1.02255639]
# [ 0.14285714 -1.14285714 2. ]]
#
# ````
#
# Multiply \\(A\\) with its inverse using the `@` matrix multiplication operator. Note that due to roundoff errors the off diagonal values are not exactly zero:
print(A @ sl.inv(A))
# Another way of multiplying matrices is to use `np.dot` function:
print(np.dot(A, sl.inv(A)))
print("\n")
print(A.dot(sl.inv(A)))
# Initialise vector and matrix of zeros:
print(np.zeros(3))
print("\n")
print(np.zeros((3,3)))
# Initialise identity matrix:
print(np.eye(3))
# ### Matrix objects
#
# Note that NumPy has a matrix object. We can cast the above two-dimensional arrays into matrix objects and then the star operator does yield the expected matrix product:
# +
A = np.array([[10., 2., 1.],
[6., 5., 4.],
[1., 4., 7.]])
print(type(A))
print(type(np.mat(A)))
# -
print(np.mat(A)*np.mat(sl.inv(A)))
# ### Slicing
# We can use slicing to extract components of matrices:
# +
# Single entry, first row, second column
print(A[0,1])
# First row
print(A[0,:])
# last row
print(A[-1,:])
# Second column
print(A[:,1])
# Extract a 2x2 sub-matrix
print(A[1:3,1:3])
# -
# ## Exercises
#
# ### Solving a linear system
#
# Let's quickly consider the \\(2 \times 2\\) case from the beginning of the notebook that we claimed the solution for to be
#
# \\[x=\frac{37}{11} \quad\text{and}\quad y=\frac{1}{11}.\\]
#
# To solve the matrix equation
#
# \\[ A\pmb{x}=\pmb{b}\\]
#
# we can simply multiply both sides by the inverse of the matrix \\(A\\) (if \\(A\\) is [invertible](https://en.wikipedia.org/wiki/Invertible_matrix)):
#
# \\[
# \begin{align}
# A\pmb{x} & = \pmb{b}\\\\\\
# \implies A^{-1}A\pmb{x} & = A^{-1}\pmb{b}\\\\\\
# \implies I\pmb{x} & = A^{-1}\pmb{b}\\\\\\
# \implies \pmb{x} & = A^{-1}\pmb{b}
# \end{align}
# \\]
#
# so we can find the solution \\(\pmb{x}\\) by multiplying the inverse of \\(A\\) with the RHS vector \\(\pmb{b}\\).
# +
A = np.array([[2., 3.],
[1., -4.]])
# Check first whether the determinant of A is non-zero
print("Det A = ", sl.det(A))
b = np.array([7., 3.])
# Compute A inverse and multiply by b
print("A^-1 @ b =", sl.inv(A) @ b)
# -
# We can solve the system using `scipy.linalg.solve`:
print("A^-1 @ b =", sl.solve(A,b))
# Check if the solutions match:
print(np.allclose(np.array([37./11., 1./11.]), sl.solve(A,b)))
# ### Matrix multiplication
#
#
# Let
# \\[
# A = \left(
# \begin{array}{ccc}
# 1 & 2 & 3 \\\\\\
# 4 & 5 & 6 \\\\\\
# 7 & 8 & 9 \\\\\\
# \end{array}
# \right)
# \mathrm{\quad\quad and \quad\quad}
# b = \left(
# \begin{array}{c}
# 2 \\\\\\
# 4 \\\\\\
# 6 \\\\\\
# \end{array}
# \right)
# \\]
#
# We will store \\(A\\) and \\(b\\) in NumPy arrays. We will create a NumPy array \\(I\\) containing the identity matrix \\(I_3\\) and perform \\(A = A+I\\). Then we will substitute the third column of \\(A\\) with \\(b\\) and solve \\(Ax=b\\).
# +
A = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
b = np.array([2, 4, 6])
print("A =", A)
print("b = ",b)
print("Size of A: ", A.size," and shape of A: ",A.shape)
print("Size of b: ", b.size," and shape of b: ",b.shape)
I = np.eye(3)
print("I = ",I)
A = A + I
print("A = ",A)
A[:, 2] = b
print("A = ",A)
x = sl.solve(A,b)
print("x = ", x)
# -
# ## Matrix properties
#
# Consider \\(N\\) linear equations in \\(N\\) unknowns, \\(A\pmb{x}=\pmb{b}\\).
#
# This system has a unique solution provided that the determinant of \\(A\\), \\(\det(A)\\), is non-zero. In this case the matrix is said to be non-singular.
#
# If \\(\det(A)=0\\) (with \\(A\\) then termed a singular matrix), then the linear system does not have a unique solution, it may have either infinite or no solutions.
#
# For example, consider
#
# \\[
# \left(
# \begin{array}{rr}
# 2 & 3 \\\\\\
# 4 & 6 \\\\\\
# \end{array}
# \right)\left(
# \begin{array}{c}
# x \\\\\\
# y \\\\\\
# \end{array}
# \right) = \left(
# \begin{array}{c}
# 4 \\\\\\
# 8 \\\\\\
# \end{array}
# \right).
# \\]
#
# The second equation is simply twice the first, and hence a solution to the first equation is also automatically a solution to the second equation.
#
# We only have one linearly-independent equation, and our problem is under-constrained: we effectively have one equation for two unknowns, with infinitely many possible solutions.
#
# If we replaced the RHS vector with \\((4,7)^T\\), then the two equations would be contradictory - in this case we have no solutions.
#
# Note that a set of vectors where one can be written as a linear sum of the others is termed linearly-dependent. When this is not the case the vectors are termed linearly-independent.
#
# ```{admonition} The following properties of a square \\(n\times n\\) matrix are equivalent:
#
# * \\(\det(A)\ne 0\implies\\) A is non-singular
# * the columns of \\(A\\) are linearly independent
# * the rows of \\(A\\) are linearly independent
# * the columns of \\(A\\) span \\(n\\)-dimensional space (we can reach any point in \\(\mathbb{R}^N\\) through a linear combination of these vectors)
# * \\(A\\) is invertible, i.e. there exists a matrix \\(A^{-1}\\) such that \\(A^{-1}A = A A^{-1}=I\\)
# * the matrix system \\(A\pmb{x}=\pmb{b}\\) has a unique solution for every vector \\(b\\)
#
# ```
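# The singular \\(2\times 2\\) example above can be checked numerically (using `numpy.linalg` rather than `scipy.linalg` here): its determinant vanishes and its rank is 1, confirming the rows are linearly dependent.

```python
import numpy as np

# The singular example: the second row is twice the first
A = np.array([[2., 3.],
              [4., 6.]])

print(np.linalg.det(A))          # 0 (up to roundoff)
print(np.linalg.matrix_rank(A))  # 1: only one linearly independent row
```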
| notebooks/c_mathematics/numerical_methods/11_linear_algebra_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
import imp
import yaml
import csv
import pandas as pd
import re
from rf import *
from svm import *
#from nnet import *
modl = imp.load_source('read_model_yaml', 'read_model_yaml.py')
# Import argument
#inp_yaml = "model/spec/SS/SS_RF_1.yaml"
inp_yaml = sys.argv[1]
# +
# Open test and train sets
df_test = pd.read_csv("data/output/model_clean_data/test.tar.gz",compression='gzip', index_col = None)
df_train = pd.read_csv("data/output/model_clean_data/train.tar.gz",compression='gzip', index_col = None)
# Define test/training set
X_test = np.array(df_test.drop(['labels'], axis = 1))
Y_test = np.array(df_test[['labels']])[:,0]
X_train = np.array(df_train.drop(['labels'], axis = 1))
Y_train = np.array(df_train[['labels']])[:,0]
# -
import inspect
print(inspect.getsource(rf))
# +
def write_results_txt(filename, result):
"""
Write results into csv file.
Parameters
----------
filename : string
filename to output the result
results : list or numpy array
results of some simulation
labels : list
labels for the results, i.e. names of parameters and metrics
"""
with open(filename, "w") as fp:
for item in result:
fp.write("%s\n\n" % item)
def run_model(inp_yaml,X_train,Y_train,X_test,Y_test):
"""Apply trees in the forest to X, return leaf indices.
Parameters
----------
inp_yaml : A yaml file with model specifications
Returns
-------
parameters_dict : A python dictionary with the model specifications
to be used to encode metadata for the model
and pass into specific model functions e.g. random
forest
"""
# Define output file name based on input
    folder_name = re.split("/", inp_yaml)[2]
    file_name = re.split("/", inp_yaml)[3][:-5]
output = 'data/output/'+folder_name+'/'+file_name+'.txt'
# Read in and parse all parameters from the YAML file
yaml_params = modl.read_model_yaml(inp_yaml)
#-------------------------------------------------
# Run RF (RANDOM FOREST)
#-------------------------------------------------
if yaml_params["model_type"] == "RF":
# Extract the RF model variables from the YAML file
n_estimators = yaml_params["parameters"]["n_estimators"]
criterion = yaml_params["parameters"]["criterion"]
max_features = yaml_params["parameters"]["max_features"]
max_depth = yaml_params["parameters"]["max_depth"]
n_jobs = yaml_params["parameters"]["n_jobs"]
# Run many simulations in parallel using as many cores as necessary
if yaml_params["simulations"]:
print("running RF WITH simulation...")
# Run simulation
result = rf_simulation(X_train = X_train
, Y_train = Y_train
, X_test = X_test
, Y_test = Y_test
, n_estimators = n_estimators
, criterion = criterion
, max_features = max_features
, max_depth = max_depth)
print("finished - RF WITH simulation")
            # Write results to the text file
write_results_txt(output, result)
# Run a single simulation
else:
print("running RF WITHOUT simulation...")
# Run simulation
result = rf(X_train = X_train
, Y_train = Y_train
, X_test = X_test
, Y_test = Y_test
, n_estimators = n_estimators
, criterion = criterion
, max_features = max_features
, max_depth = max_depth)
print("finished - rf without simulation")
            # Write results to the text file
write_results_txt(output, result)
#-------------------------------------------------
# Run SVM (SUPPORT VECTOR MACHINE)
#-------------------------------------------------
# Extract the SVM model variables from the YAML file
if yaml_params["model_type"] == "SVM":
kernel = yaml_params["parameters"]["kernel"]
degree = yaml_params["parameters"]["degree"]
gamma = yaml_params["parameters"]["gamma"]
tol = yaml_params["parameters"]["tol"]
# Define labels of output
labels = ["logloss"
, "miss_err"
, "prec"
, "recall"
, "f1"
, "C"
, "kernel"
, "degree"
, "gamma"
, "tol"
, "decision_function_shape"]
# Run many simulations in parallel using as many cores as necessary
if yaml_params["simulations"]:
# Run simulation
result = svm_simulation(X_train = X_train
, Y_train = Y_train
, X_test = X_test
, Y_test = Y_test
, kernel = kernel
, C = 1.0
, degree = degree
, gamma = gamma
, tol = tol
, decision_function_shape ='ovr')
            # Write results to the text file
            write_results_txt(output, result)
# Run a single simulation
else:
# Run simulation
result = svm(X_train = X_train
, Y_train = Y_train
, X_test = X_test
, Y_test = Y_test
, kernel = kernel
, C = 1.0
, degree = degree
, gamma = gamma
, tol = tol
, decision_function_shape='ovr')
            # Write results to the text file
            write_results_txt(output, result)
# -
# Run the model
run_model(inp_yaml,X_train,Y_train,X_test,Y_test)
| lobpredictrst/run_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import transforms, datasets
# +
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
EPOCHS = 50
BATCH_SIZE = 64
# +
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data',
train=True,
download=True,
transform=transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=BATCH_SIZE,
shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data',
train=False,
download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=BATCH_SIZE,
shuffle=True)
# -
class Net(nn.Module):
def __init__(self, dropout_p=0.2):
super(Net, self).__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 10)
self.dropout_p = dropout_p
def forward(self, x):
x = x.view(-1, 784)
x = F.relu(self.fc1(x))
# ADD dropout
x = F.dropout(x,
training=self.training,
p=self.dropout_p
)
x = F.relu(self.fc2(x))
# ADD dropout
x = F.dropout(x,
training=self.training,
p=self.dropout_p
)
x = self.fc3(x)
return x
model = Net(dropout_p=0.2).to(DEVICE)
optimizer = optim.SGD(model.parameters(), lr=0.01)
def train(model, train_loader, optimizer):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(DEVICE), target.to(DEVICE)
optimizer.zero_grad()
output = model(data)
loss = F.cross_entropy(output, target)
loss.backward()
optimizer.step()
def evaluate(model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(DEVICE), target.to(DEVICE)
output = model(data)
test_loss += F.cross_entropy(output, target,
reduction='sum').item()
# sum correct samples
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_accuracy = 100. * correct / len(test_loader.dataset)
return test_loss, test_accuracy
for epoch in range(1, EPOCHS+1):
train(model, train_loader, optimizer)
test_loss, test_accuracy = evaluate(model, test_loader)
print("[{}] Test Loss: {:.4f}, Accuracy: {:.2f}%".format(
epoch, test_loss, test_accuracy))
iter_tl = iter(train_loader)
i = next(iter_tl)
i[0].shape
| ml/pytorch/init_env/overfitting_contorl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TASK 5 - MILESTONE 2
# ---
# # Packages used for analysis
# ---
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# # Initial Hypothesis
# ---
# **Health care remains a very important issue in today's world, where the majority of costs fall on patients. In the United States, health insurance is a major concern given that public health insurance is not an option. This leaves millions of individuals without any health insurance if they cannot afford coverage from the private sector. Could different medical expenses be incurred based on geographic location, sex, or age? Could individuals with different demographics be incurring different costs? These are all questions worth investigating and what we hope to answer through our analysis!**
# # DATASET
# ---
df1 = pd.read_csv('https://gist.githubusercontent.com/meperezcuello/82a9f1c1c473d6585e750ad2e3c05a41/raw/d42d226d0dd64e7f5395a0eec1b9190a10edbc03/Medical_Cost.csv')
df1
from project_functions import load_and_process
df2 = load_and_process('https://gist.githubusercontent.com/meperezcuello/82a9f1c1c473d6585e750ad2e3c05a41/raw/d42d226d0dd64e7f5395a0eec1b9190a10edbc03/Medical_Cost.csv')
# ---
# # QUESTIONS AND INQUIRIES
# ---
# # What is the average medical cost of males and females in this set?
sns.barplot(x='sex', y='charges', data=df1, estimator=np.mean, palette='plasma')
plt.title('Medical Charges for Males and Females')
# **What this graph shows us is that the average amount paid for medical care is greater for men than for women.**
# # What is the correlation between variables in this data set?
plt.figure(figsize=(8,6))
sns.set_context('paper', font_scale=1.4)
df_hm = df1.corr()
sns.heatmap(df_hm, annot=True, cmap='plasma', linecolor='white')
plt.title('Correlations Between Age, BMI, Number of Children, and Medical Charges')
# **This heatmap did not help us much, because the two variables we were most concerned with, sex and smoking status, are non-numeric and therefore do not appear in the correlation matrix.**
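# A hedged illustration of why those columns were missing: `DataFrame.corr` only works with numeric columns (older pandas silently drops the rest, and pandas ≥ 2.0 raises unless `numeric_only=True`), so `sex` and `smoker` must be encoded before they can appear in the heatmap. The mini DataFrame below is a made-up stand-in for `df1`, used only so the snippet runs on its own.

```python
import pandas as pd

# Tiny made-up stand-in for df1 (the real notebook loads Medical_Cost.csv).
df = pd.DataFrame({
    "age": [19, 18, 28, 33],
    "sex": ["female", "male", "male", "male"],
    "smoker": ["yes", "no", "no", "no"],
    "charges": [16884.92, 1725.55, 4449.46, 21984.47],
})

# Map the categorical columns to 0/1 so they survive into the correlation matrix.
encoded = df.assign(
    sex=df["sex"].map({"male": 0, "female": 1}),
    smoker=df["smoker"].map({"no": 0, "yes": 1}),
)
corr = encoded.corr()
print(corr.loc["smoker", "charges"])  # smoker now appears in the matrix
```

On the real `df1`, passing the encoded frame to `sns.heatmap` would then show sex and smoking status alongside the numeric variables.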
# # Is there a correlation between BMI, SMOKING & MEDICAL CHARGES INCURRED?
sns.set_context('paper', font_scale=1.4)
lm = sns.lmplot(x='bmi', y='charges', data=df1, col='sex', hue='smoker',
height=8, aspect=0.6, palette='plasma')
fig = lm.fig
fig.suptitle("Charges for Each Sex Based on BMI", fontsize=13)
# **<center>From the graph above we can deduce that being a smoker leads to greater medical charges at any given BMI, for both men and women</center>**
# # Is there a relationship between age and medical charges?
sns.lineplot(data=df1, x='age', y='charges', hue='smoker')
plt.ylabel('Medical Charges')
plt.xlabel('Age')
plt.title('Relationship Between Medical Charges and Age for Smokers and Non-Smokers')
# **<center>Charges increase gradually with age, so there is a roughly linear relationship between age and charges. The plot also shows quite a large difference between smokers and non-smokers: at most ages, a smoker incurs almost double the charges of a non-smoker of the same age.</center>**
# # Is there a substantial difference in medical charges between males and females?
sns.catplot(data=df1, kind="bar", x="smoker", y="charges", estimator=np.mean, hue="sex", aspect=1.0)
plt.ylabel('Medical Charges')
plt.xlabel('Smoker')
plt.title('Medical Charges for Smokers Based on Sex')
# **<center>From the data, there isn't a substantial difference in medical charges between males and females. Among the small differences that exist, male smokers have higher medical expenses than female smokers, while female non-smokers have higher expenses than male non-smokers. There is, however, quite a large difference between smokers and non-smokers.</center>**
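# The bar chart above can be backed with exact numbers using a pivot table. The values below are invented for illustration only; on the real data the identical call is `df1.pivot_table(values='charges', index='smoker', columns='sex')`.

```python
import pandas as pd

# Invented mini-sample standing in for df1; the real call is identical on df1.
df = pd.DataFrame({
    "sex": ["male", "male", "female", "female"],
    "smoker": ["yes", "no", "yes", "no"],
    "charges": [32050.0, 8087.0, 30185.0, 8766.0],
})
# pivot_table defaults to the mean, giving average charges per smoker/sex cell
table = df.pivot_table(values="charges", index="smoker", columns="sex")
print(table)
```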
# # Is there an increase in medical charges, with an increase in children?
ax = sns.boxplot(x="children", y="charges", data=df1)
plt.title('Medical Charges Based on Number of Children Each Person Has')
# **The boxplot shows there is not a direct correlation between medical charges and number of children; in fact, the lowest typical charge is for those with five children.**
# # Is there a correlation between BMI and Medical Charges?
sns.lmplot(x="bmi", y="charges", data=df1,
ci=None, palette="muted", height=4,
scatter_kws={"s": 50, "alpha": 1})
plt.title('Medical Charges Based on BMI')
# **Yes, as BMI increases, so too do the medical charges, at a near linear rate. Additionally, from the graph, the amount of lower medical charges appears fairly even for all BMI, but as the BMI increases, the higher medical charges become even more pronounced. For example, at a BMI of 20 the highest charges appear to be just over 30,000, while above a BMI of 30 there are a few over 50,000.**
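# The "near linear rate" claim can be quantified with a Pearson correlation coefficient. The arrays below are synthetic stand-ins for `df1['bmi']` and `df1['charges']`; on the real data you would pass those two columns to `np.corrcoef` directly.

```python
import numpy as np

# Synthetic stand-in data: on the real df1, use df1['bmi'] and df1['charges'].
rng = np.random.default_rng(0)
bmi = rng.uniform(18, 40, 200)
charges = 500 * bmi + rng.normal(0, 2000, 200)  # noisy linear relationship

r = np.corrcoef(bmi, charges)[0, 1]  # Pearson correlation coefficient
print(f"Pearson r = {r:.3f}")
```

A value of r close to 1 supports the visual impression of a linear trend; the scatter of high charges at high BMI would show up as heteroscedasticity rather than in r itself.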
# # Conclusion
# **From the visualizations of the dataset, several conclusions can be made regarding our initial inquiries. Firstly, the biggest factors that increase medical costs are smoking, age, and BMI, with smoking being the worst of the three. Secondly, there appears to be little to no correlation between medical charges and sex or number of children: males tend to have slightly higher medical charges, as do those with 2 children, but not substantially more. While not directly related to the hypothesis, there also appears to be a correlation between smoking and BMI, and those that smoke and have a high BMI tend to have much larger medical charges.**
| Milestone 2/.ipynb_checkpoints/TASK 5 - MILESTONE 2-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/abhilb/Cheatsheets/blob/master/part2/MRL_Dataset_CNN_Classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="R4j_2KTYdVQA"
from pathlib import Path
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Softmax, InputLayer, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.losses import BinaryCrossentropy, SparseCategoricalCrossentropy
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/"} id="jxAuKAsud6Hx" outputId="4170debb-fc5e-4274-efaa-6b1d1c211fab"
# !gdown https://drive.google.com/uc?id=1JkdFa4fj0DMrDHju7QqC4EiUfJD9Cjws
# + id="n8VtN-S0yOOg"
dataset_path = Path('mrl_dataset.npz').absolute().resolve()
data = np.load(str(dataset_path))
X = data['data']
y = data['labels']
data.close()
# + colab={"base_uri": "https://localhost:8080/"} id="l6fcS2Y5yR5C" outputId="ac42fc35-3fab-4fbc-a96a-ba9c12a6ef1d"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print(f"Number of samples in training dataset: {X_train.shape[0]}")
print(f"Number of samples in testing dataset : {X_test.shape[0]}")
# + id="izz4Ujiud_Au"
model = Sequential()
model.add(InputLayer(input_shape=(32, 32, 1)))
model.add(Conv2D(filters=32, kernel_size=3, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(rate=0.5))
model.add(Conv2D(32, 3, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Conv2D(32, 3, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(2))  # linear logits; a ReLU here would zero out negative logits before the Softmax below
model.add(Softmax())
# + id="03FwPOU8xY8S"
opt = Adam(learning_rate=0.0001)
loss = SparseCategoricalCrossentropy()
metrics = ['accuracy']
callback = EarlyStopping(monitor='loss', patience=3)
model.compile(optimizer=opt, loss=loss, metrics=metrics)
# + colab={"base_uri": "https://localhost:8080/"} id="RHgikDYFx64X" outputId="9ef4794e-0a63-4608-86d7-6b464deb91ef"
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="aOVU1pP1x-LJ" outputId="7addcff6-fa4c-4a22-dd06-0047c47a6c4f"
plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
# + id="uezK_NFwLKe3"
from tensorflow.keras.utils import Sequence
from typing import Tuple
class DataGenerator(Sequence):
    def __init__(self, batch_size: int,
                 dims: Tuple[int, int],
                 data, labels):
        self.batch_size = batch_size
        self.dims = dims
        # labels are appended as the final column (expects labels shaped (n, 1))
        self.data = np.hstack((data, labels))
        self.nb_samples = self.data.shape[0]
        self.indexes = np.arange(self.nb_samples)
        self.on_epoch_end()
    def __len__(self):
        return self.nb_samples // self.batch_size
    def __getitem__(self, index):
        start_idx = index * self.batch_size
        end_idx = (index + 1) * self.batch_size
        data_slice = self.data[start_idx:end_idx, :]
        X = data_slice[:, 0:-1].reshape(-1, *self.dims)
        y = data_slice[:, -1]
        return X, y
    def on_epoch_end(self):
        # shuffle rows (features and label together) after each epoch
        np.random.shuffle(self.data)
# + colab={"base_uri": "https://localhost:8080/"} id="3WSS0qe5yB-g" outputId="8c15d1e0-060f-4ed0-a89c-239d444eb71b"
train_generator = DataGenerator(64, (32, 32, 1), X_train[:40000,:], y_train[:40000])
valid_generator = DataGenerator(32, (32, 32, 1), X_train[40000:,:], y_train[40000:])
history = model.fit(train_generator, validation_data=valid_generator, epochs=100)
# + id="D2OUncJKUPD5" colab={"base_uri": "https://localhost:8080/", "height": 336} outputId="8a8bff0b-266f-47f6-ca7b-2a9dbaea00f5"
figure, axes = plt.subplots(1, 2)
figure.set_size_inches((10, 5))
epochs = np.arange(len(history.history['loss']))
axes[0].plot(epochs, history.history['loss'])
axes[0].plot(epochs, history.history['val_loss'])
axes[0].set_title("LOSS")
axes[1].plot(epochs, history.history['accuracy'])
axes[1].plot(epochs, history.history['val_accuracy'])
axes[1].set_title("ACCURACY")
plt.show()
# + id="fJG_CbeUbIRH"
test_generator = DataGenerator(32, (32, 32, 1), X_test, y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="N8krzQy6bo6x" outputId="90dcb981-5786-47b5-d1ac-855c07d73626"
print("Evaluate on test data")
results = model.evaluate(test_generator)
print("test loss, test acc:", results)
# + colab={"base_uri": "https://localhost:8080/"} id="bMvT7t8PmRwd" outputId="0d9ac0d9-e1b7-4107-9630-2bad1301c30d"
print(f"ACCURACY: {results[1] * 100:.2f} %")
# + id="FQhQH-rOccuI"
index = np.random.choice(X_test.shape[0], 9, replace=False)
test_samples = X_test[index].reshape(-1, 32, 32, 1)
# + id="z_YqAE1Eji5F"
test_samples_preds = model.predict(test_samples)
test_samples_preds = np.argmax(test_samples_preds, axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 591} id="lwzuub4zkpme" outputId="6d304bc1-fa76-4ecf-b88e-1a628f2ccb96"
fig, axes = plt.subplots(3, 3)
fig.set_size_inches(15, 10)
axes = axes.flatten()
for idx, ax in enumerate(axes):
    ax.imshow(test_samples[idx, :].reshape(32, 32), cmap='gray')
    ax.axis('off')
    ax.set_title('OPEN' if test_samples_preds[idx] else 'CLOSED')
plt.show()
# + id="A5AYr4LplVI_"
| part2/MRL_Dataset_CNN_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # $I_{Na,t}$ model (a template to fill in)
# %pylab inline
from __future__ import division
# **If you use the Anaconda Python distribution, run the following command in the terminal:**
#
# ```conda install bokeh```
# +
import bokeh
import bokeh.plotting as bp
from bokeh.models import ColumnDataSource, Range1d
from ipywidgets import interact
# -
bp.output_notebook()
# **Run the following command to get the `PyDSTool` installed:**
#
# ```pip install PyDSTool```
# +
import PyDSTool as dst
from PyDSTool.Toolbox import phaseplane as pp
# -
# ## The $I_{Na,t}$ model
#
# \begin{equation}
# C\dot{V} = I - (\bar{g}_{Na}m_\infty(V)^3h(V-E_{Na}) + gl(V-El))
# \end{equation}
#
# \begin{equation}
# \tau_h \dot{h} = (h_\infty(V) - h)
# \end{equation}
#
# \begin{equation}
# x_\infty = \frac{1}{1 + \exp(\frac{V_{1/2}-V}{k})}
# \end{equation}
nat_pset = dict(
I = 0.0,
El = -70.0,
Ena = 60.0,
gl = 1.0,
gna = 15.0,
minf_vhalf = -40.,
minfk = 15.,
hinf_vhalf = -62.,
hinfk = -7,
htau = 5.0
)
pset_str = ';\n'.join(['{k}=dst.Par({v},"{k}")'.format(k=k,v=v) for k,v in nat_pset.items()])
print pset_str
exec(pset_str)
V = dst.Var('V')
h = dst.Var('h')
boltzman = dst.Fun(1./(1. + dst.Exp(('Vhalf'- V)/'bk')), ['Vhalf','bk'], 'boltzman')
minf = boltzman(minf_vhalf,minfk)
hinf = boltzman(hinf_vhalf,hinfk)
vtest = linspace(-89, 45, 250)
plot(vtest, eval(str(minf.eval(V='vtest', Exp='np.exp', **nat_pset))))
plot(vtest, eval(str(hinf.eval(V='vtest', Exp='np.exp',**nat_pset))))
xlabel(u'V, mV'); legend(('$m_\infty$', '$h_\infty$'),loc='best')
# ### System equations
# Na-current inactivation
dh = (hinf-h)/htau
# +
# Na current and the dV/dt equation
iNa = gna*(V-Ena)*minf**3*h
ileak = gl*(V-El)
dV = I - (ileak + iNa )
print dV
# -
# Steady-state current at different potentials
Iinf = gna*(V-Ena)*minf**3*hinf + ileak
# vnull = ... # Fill in
Nat_model = dst.args(
name = 'nat',
pars = nat_pset,
varspecs = {'V':dV, 'h':dh},
tdomain=[0,250],
xdomain=dict(V=[-150, 60], h=[0,1]),
ics = {'V':-70,'h':0})
# System of equations (the model)
odeset = dst.Generator.Vode_ODEsystem(Nat_model)
# Trajectory (system dynamics)
traj = odeset.compute('test')
pts = traj.sample(dt=0.1)
# Plot
plot(pts['t'], pts['V'])
| I_Na,t-stub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regulome Explorer Notebook
#
# This notebook computes significant association scores between pairwise data types available in the PanCancer Atlas dataset of ISB-CGC. The specific statistical tests implemented are described ['here'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/RegulomeExplorerNotebooks.html#standard-pairwise-statistics), and a description of the original Regulome Explorer is available ['here'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/RegulomeExplorerNotebooks.html#id5).
#
# The output of the notebook is a table of significant associations specified by correlations and p-values. This notebook also performs a more detailed analysis of a user-specified pair of feature names, generating figures and additional statistics.
# ### Authentication
# The first step is to authorize access to BigQuery and the Google Cloud. For more information see ['Quick Start Guide to ISB-CGC'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html) and alternative authentication methods can be found [here](https://googleapis.github.io/google-cloud-python/latest/core/auth.html).
# ### Import Python libraries
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
from google.cloud import bigquery
import pandas as pd
import re_module.bq_functions as regulome
# ### Specify Parameters
# The parameters for this experiment are the cancer type (study), a list of genes, a pair of molecular features, the significance level, and the minimum number of samples required for the statistical analysis.
[study, feature1, feature2, gene_names, size, cohortlist, significance] = regulome.makeWidgets()
# ### Build the query
# The BigQuery query to compute associations between features 1 and 2 is created using functions in the 'regulome' module. Please refer to our GitHub repository to access the notebooks describing the methods used for each possible combination of features available in TCGA: https://github.com/isb-cgc/Community-Notebooks/tree/master/RegulomeExplorer.
# +
SampleList, PatientList = regulome.readcohort( cohortlist )
LabelList = [ x.strip() for x in gene_names.value.split(',') ]
funct1 = regulome.approx_significant_level( )
table1, table2 = regulome.get_feature_tables(study.value,feature1.value,feature2.value,SampleList,PatientList,LabelList)
str_summarized = regulome.get_summarized_pancanatlas( feature1.value, feature2.value )
str_stats = regulome.get_stat_pancanatlas(feature1.value, feature2.value, size.value, significance.value )
sql = (funct1 + 'WITH' + table1 + ',' + table2 + ',' + str_summarized + str_stats)
print(sql)
# -
# ### Run the Bigquery
bqclient = bigquery.Client()
df_results = regulome.runQuery ( bqclient, sql, LabelList, SampleList, PatientList, dryRun=False )
regulome.pvalues_dataframe( df_results )
df_results
# ## Analyze a pair of labels
# From the table above, please select a pair of feature names to perform a statistical analysis and display the data. You can print the variable 'pair_query' to obtain the query used to retrieve the data.
# **pair_query** is the query used to retrieve the necessary data for the statistical test.
[name1 , name2 ] = regulome.makeWidgetsPair()
pair_query = regulome.get_query_pair(name1.value,name2.value,study.value,SampleList,feature1.value,feature2.value)
#print(pair_query)
df_pair = regulome.runQuery( bqclient, pair_query, LabelList, SampleList, PatientList, dryRun=False )
regulome.plot_statistics_pair ( df_pair, feature2.value, name1.value, name2.value, size.value )
| RegulomeExplorer/RegulomeExplorer-notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: batch
# language: python
# name: batch
# ---
from hyppo.ksample import KSample
from hyppo.independence import Dcorr
from combat import combat
import pandas as pd
import glob
import os
import graspy as gp
import numpy as np
from dask.distributed import Client, progress
import dask.dataframe as ddf
from scipy.stats import zscore, rankdata, mannwhitneyu
import copy
import math
import networkx as nx
from graspy.models import SIEMEstimator as siem
import re
# +
def get_sub_pheno_dat(subid, scan, pheno_dat):
    matches = pheno_dat.index[pheno_dat["SUBID"] == int(subid)].tolist()
    match = np.min(matches)
    return(int(pheno_dat.iloc[match]["SEX"]))

def get_age_pheno_dat(subid, scan, pheno_dat):
    matches = pheno_dat.index[pheno_dat["SUBID"] == int(subid)].tolist()
    match = np.min(matches)
    return(float(pheno_dat.iloc[match]["AGE_AT_SCAN_1"]))

def apply_along_dataset(scs, dsets, fn):
    scs_xfmd = np.zeros(scs.shape)
    for dset in np.unique(dsets):
        scs_xfmd[dsets == dset,:] = np.apply_along_axis(fn, 0, scs[dsets == dset,:])
    return(scs_xfmd)

def apply_along_individual(scs, fn):
    # apply fn to each scan (row) individually; the original body was left
    # unfinished, so it is completed here to mirror apply_along_dataset
    scs_xfmd = np.apply_along_axis(fn, 1, scs)
    return(scs_xfmd)

def zsc(x):
    x_ch = copy.deepcopy(x)
    if (np.var(x_ch) > 0):
        x_ch = (x_ch - np.mean(x_ch))/np.std(x_ch)
        return x_ch
    else:
        return np.zeros(x_ch.shape)

def ptr(x):
    x_ch = copy.deepcopy(x)
    nz = x[x != 0]
    x_rank = rankdata(nz)*2/(len(nz) + 1)
    x_ch[x_ch != 0] = x_rank
    if (np.min(x_ch) != np.max(x_ch)):
        x_ch = (x_ch - np.min(x_ch))/(np.max(x_ch) - np.min(x_ch))
    return(x_ch)
# +
# path to directory produced by download_aws.sh
basepath = '/mnt/nfs2/MR/corr/corr_m2g/graphs/m2g/fmri/'
# path to directory containing phenotypic annotations for download_aws.sh script
pheno_basepath = '/mnt/nfs2/MR/corr/corr_m2g/phenotypic/CoRR_AggregatedPhenotypicData.csv'
pheno_dat = pd.read_csv(pheno_basepath)
datasets = os.listdir(basepath)
print(datasets)
# -
# +
fmri_dict = {}
for i, dataset in enumerate(datasets):
    try:
        dset_dir = os.path.join('{}{}'.format(basepath, dataset), '*.csv')
        files_ds = glob.glob(dset_dir)
        successes = len(files_ds)
        scans = []
        sexs = []
        ages = []
        ds_lab = []
        subjects = []
        subids = []
        sessions = []
        for f in files_ds:
            # obtain graph for this subject
            try:
                gr_dat = gp.utils.import_edgelist(f).flatten()
                scansub = re.split('-|_', os.path.basename(f))
                sex = get_sub_pheno_dat(scansub[1], scansub[3], pheno_dat)
                age = get_age_pheno_dat(scansub[1], scansub[3], pheno_dat)
                subid = "dataset-{}_sub-{}_ses-{}".format(dataset, scansub[1], scansub[3])
                scans.append(gr_dat)
                sexs.append(sex)
                ages.append(age)
                subjects.append(scansub[1])
                ds_lab.append(dataset)
                subids.append(subid)
                sessions.append(scansub[3])
            except Exception as e:
                successes -= 1
        if (successes < 5):
            raise ValueError("Dataset: {} does not have enough successes.".format(dataset))
        # add it in assuming there are enough unique files with metadata annotation
        scans = np.vstack(scans)
        fmri_dict[dataset] = {"Data": scans, "Subject": subjects, "Session": sessions, "Subid": subids,
                              "Sex": sexs, "Age": ages, "Dataset": ds_lab}
    except Exception as e:
        print("Error in {} Dataset.".format(dataset))
        print(e)
# -
ncores = 99
client = Client(threads_per_worker=1, n_workers=ncores)
# ## Preservation of Network Statistics
# +
def diag_edges(n):
    """
    A function for generating diagonal SIEM edge communities.
    """
    m = int(n/2)
    edge_comm = np.zeros((n,n))
    for i in range(n):
        for j in range(n):
            if (i == j + m) or (j == i + m):
                edge_comm[i,j] = 1
            else:
                edge_comm[i,j] = 2
    np.fill_diagonal(edge_comm, 0)
    return edge_comm

def modular_edges(n):
    """
    A function for generating modular SBM edge communities.
    """
    m = int(n/2)
    edge_comm = np.zeros((n,n))
    for i in range(n):
        for j in range(n):
            if ((i < m) & (j < m)) or ((i >= m) & (j >= m)):
                edge_comm[i,j] = 1
            else:
                edge_comm[i,j] = 2
    np.fill_diagonal(edge_comm, 0)
    return edge_comm

des_diag = diag_edges(70)
des_mod = modular_edges(70)

def mww(G, C):
    A = G[C == 1]
    B = G[C == 2]
    test_res = list(mannwhitneyu(A, B, alternative='greater'))
    test_res.append(np.mean(A))
    test_res.append(np.mean(B))
    return(test_res)
# -
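# As a quick sanity check, the diagonal edge-community design above can be printed for a toy graph: community 1 marks homotopic edges (same region, opposite hemisphere), community 2 the rest, and self-loops are excluded. The function is repeated here in compact form so the snippet runs on its own.

```python
import numpy as np

# Repeated (compactly) from the definition above so this check is standalone.
def diag_edges(n):
    m = n // 2
    comm = np.full((n, n), 2)       # default: community 2
    for i in range(n):
        for j in range(n):
            if i == j + m or j == i + m:
                comm[i, j] = 1      # homotopic edge
    np.fill_diagonal(comm, 0)       # self-loops excluded
    return comm

print(diag_edges(4))
# [[0 2 1 2]
#  [2 0 2 1]
#  [1 2 0 2]
#  [2 1 2 0]]
```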
dset_ls = [fmri_dict[ds]["Data"] for ds in fmri_dict.keys()]
raw_dat = np.vstack(dset_ls)
datasets = np.array([j for ds in fmri_dict.keys() for j in fmri_dict[ds]["Dataset"]])
# get the subject ids and dataset ids as a big list
subjects = np.array([j for ds in fmri_dict.keys() for j in fmri_dict[ds]["Subject"]])
sessions = np.array([j for ds in fmri_dict.keys() for j in fmri_dict[ds]["Session"]])
subids = np.array([j for ds in fmri_dict.keys() for j in fmri_dict[ds]["Subid"]])
sexs = np.array([j for ds in fmri_dict.keys() for j in fmri_dict[ds]["Sex"]])
ages = np.array([j for ds in fmri_dict.keys() for j in fmri_dict[ds]["Age"]])
raw_dat.shape
# +
def prepare_aggregate_data(scans, datasets):
    newdat = {}
    newdat["raw"] = copy.deepcopy(scans)
    # copy the raw data over
    newdat["zscore"] = copy.deepcopy(scans)
    newdat["ptr"] = copy.deepcopy(scans)
    newdat["combat"] = copy.deepcopy(scans)
    # remove stationary edges for combat
    combat_rem_edges = ~np.all(newdat["combat"] == 0, axis=0)
    # apply relevant transforms en-masse
    newdat["zscore"] = apply_along_dataset(newdat["zscore"], datasets, zscore)
    # replace nans with zeros
    newdat["zscore"][np.isnan(newdat["zscore"])] = 0
    newdat["ptr"] = apply_along_dataset(newdat["ptr"], datasets, ptr)
    newdat["combat"][:,combat_rem_edges] = np.array(combat(pd.DataFrame(newdat["combat"][:,combat_rem_edges].T), datasets, model=None, numerical_covariates=None)).T
    return(newdat)
data_preproc = {}
data_preproc["raw"] = prepare_aggregate_data(raw_dat, datasets)
data_preproc["ptr"] = prepare_aggregate_data(np.apply_along_axis(ptr, 1, raw_dat), datasets)
# -
# +
exps = []
for i, sub in enumerate(subjects):
    for sxfm in ["raw", "ptr"]:
        for dxfm in ["raw", "zscore", "ptr", "combat"]:
            exps.append([datasets[i], subjects[i], sessions[i], sexs[i], ages[i], i, sub, sxfm, dxfm])
sim_exps = pd.DataFrame(exps, columns=["Dataset", "Subject", "Retest", "Sex", "Age",
"Ix", "Fullname", "Sxfm", "Dxfm"])
print(sim_exps.head(n=20))
# -
def singlegraph_exp(row):
    # grab data, and reshape it to nv x nv matrix
    flat_gr = data_preproc[row[7]][row[8]][row[5],:]
    nv = int(np.sqrt(np.max(flat_gr.shape)))
    exp_gr = flat_gr.reshape((nv, nv))
    G = nx.from_numpy_matrix(exp_gr)
    cc = nx.average_clustering(G, weight="weight")
    deg = np.array(list(dict(G.degree(weight="weight")).values())).mean()
    homophilic = mww(exp_gr, des_mod)
    homotopic = mww(exp_gr, des_diag)
    return(row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7],
           row[8], cc, deg, homophilic[2], homotopic[2], homophilic[3], homotopic[3],
           homophilic[1], homotopic[1], homophilic[0], homotopic[0])
sim_exps = ddf.from_pandas(sim_exps, npartitions=ncores)
sim_results = sim_exps.apply(lambda x: singlegraph_exp(x), axis=1, result_type='expand',
meta={0: str, 1: str, 2: str, 3:str, 4:str, 5:str, 6:str, 7:str, 8:str,
9: float, 10: float, 11: float, 12: float, 13: float, 14: float,
15: float, 16: float, 17: float, 18: float})
sim_results
sim_results = sim_results.compute(scheduler="multiprocessing")
sim_results = sim_results.rename(columns={0: "Dataset", 1: "Subject", 2: "Retest", 3: "Sex", 4: "Age", 5: "Ix",
6: "Fullname", 7: "Sxfm", 8: "Dxfm", 9: "Clustering",
10: "Degree", 11: "Homophilic_mean", 12: "Homotopic_mean",
13: "Heterophilic_mean", 14: "Heterotopic_mean",
15: "Homophilic_pvalue", 16: "Homotopic_pvalue",
17: "Homophilic_stat", 18: "Homotopic_stat"})
sim_results.to_csv('../data/summary/batch_statistics.csv')
sim_results.head(n=30)
# ## Save Example Connectome for each option type
# +
refsub = "0025864"; refses = "1"
nv = 70
row_ix = np.zeros((nv, nv))
col_ix = np.zeros((nv, nv))
for i in range(70):
    for j in range(70):
        col_ix[i,j] = j
        row_ix[i,j] = i
row_ix = row_ix.flatten()
col_ix = col_ix.flatten()
data = []
data_avg = []
for sxfm, data_preproc_sxfm in data_preproc.items():
    for dxfm, data_preproc_sxfm_dxfm in data_preproc_sxfm.items():
        gr_dat = data_preproc_sxfm_dxfm[np.logical_and(subjects == refsub, sessions==refses),:][0,:].reshape((nv, nv)).flatten()
        for i in range(nv**2):
            data.append([sxfm, dxfm, int(col_ix[i] + 1), int(row_ix[i] + 1), gr_dat[i]])
        for dsi in np.unique(datasets):
            for dsj in np.unique(datasets):
                dsids = np.array([ds in [dsi, dsj] for ds in datasets])
                data_avg.append([sxfm, dxfm, dsi, dsj, data_preproc_sxfm_dxfm[dsids,].mean()])
dat_df = pd.DataFrame(data, columns=["Sxfm", "Dxfm", "Column", "Row", "Value"])
dat_avg_df = pd.DataFrame(data_avg, columns=["Sxfm", "Dxfm", "Dataset1", "Dataset2", "Average"])
print(dat_df.head(n=20))
print(dat_avg_df.head(n=20))
dat_df.to_csv('../data/summary/proc_graph.csv')
dat_avg_df.to_csv('../data/summary/avg_gr_weights.csv')
# -
| summary/summary_statistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import keras
from keras.datasets import fashion_mnist #importing fashion mnist from keras
(train_data,train_labels),(test_data,test_labels) = fashion_mnist.load_data()
# ### Checking the data
train_data.shape
train_labels[:10]
train_labels.shape
label_index = [ "T-shirt/top", # index 0
"Trouser", # index 1
"Pullover", # index 2
"Dress", # index 3
"Coat", # index 4
"Sandal", # index 5
"Shirt", # index 6
"Sneaker", # index 7
"Bag", # index 8
"Ankle boot"] # index 9
# +
import matplotlib.pyplot as plt
# %matplotlib inline
fig = plt.figure()
for i in range(9):
    plt.subplot(3,3,i+1)
    plt.tight_layout()
    plt.imshow(train_data[i], interpolation='none')
    plt.title("Label: {}".format(label_index[train_labels[i]]))
    plt.xticks([])
    plt.yticks([])
plt.savefig('Labels example')
# -
# ### Preparing the data for processing
x_train = train_data.reshape(train_data.shape[0],28,28,1) # reshaping the data to create a third dimension for colour
x_test = test_data.reshape(test_data.shape[0],28,28,1)
y_train =keras.utils.to_categorical(train_labels, 10) #one hot encoding label data
y_test =keras.utils.to_categorical(test_labels, 10)
import numpy as np
x_train = x_train.astype('float32')/np.max(train_data) #normalizing data
x_test = x_test.astype('float32')/np.max(train_data)
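# For reference, `to_categorical` above just builds one-hot rows from the integer labels. The function below is a minimal numpy sketch of that encoding (illustrative only, not the keras implementation):

```python
import numpy as np

# Minimal numpy sketch of what keras.utils.to_categorical produces.
def one_hot(labels, num_classes):
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0  # set one column per row
    return out

print(one_hot(np.array([0, 2, 1]), 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```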
from keras import models,layers
model = models.Sequential()
# +
model.add(layers.Conv2D(32,(3,3),activation='relu',input_shape=(28,28,1)))
model.add(layers.Dropout(0.2))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.Dropout(0.3))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(128,activation='relu'))
model.add(layers.Dense(10,activation='softmax'))
model.summary()
# -
model.compile(optimizer='rmsprop',
metrics= ['accuracy'] ,
loss = 'categorical_crossentropy' )
history = model.fit(x_train,y_train,
epochs=10,
batch_size = 512,
validation_data=(x_test,y_test),
verbose=0)
val_loss = history.history['val_loss']
train_loss = history.history['loss']
epochs = np.arange(1,11)
plt.plot(epochs,val_loss,'r',label='Validation loss')
plt.plot(epochs,train_loss,'bo',label='Training loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
val_acc = history.history['val_accuracy']
train_acc = history.history['accuracy']
epochs = np.arange(1,11)
plt.plot(epochs,val_acc,'m',label='Validation accuracy')
plt.plot(epochs,train_acc,'cD',label='Training accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('Accuracy graph')
| CNNs on Fashion mnist.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0
# language: julia
# name: julia-1.7
# ---
# # MATH50003 Numerical Analysis: Problem 3
#
# This problem sheet explores the implementation of triangular solves,
# support for a matrix with two super-diagonals, and
# permutation matrices and Householder reflections that can be applied to a vector in
# $O(n)$ complexity.
# +
using LinearAlgebra, Test
# We will override these functions below
import Base: getindex, setindex!, size, *, \
# -
# ## 1. Dense Matrices
#
# **Problem 1.1** Show that `A*x` is not
# implemented as `mul(A, x)` from the lecture notes
# by finding a `Float64` example where the bits do not match.
#
#
#
#
# ## 2. Triangular Matrices
#
# **Problem 2.1** Complete the following functions for solving linear systems with
# triangular matrices by implementing back- and forward-substitution:
# +
function ldiv(U::UpperTriangular, b)
    n = size(U,1)
    if length(b) != n
        error("The system is not compatible")
    end
    x = zeros(n)  # the solution vector
    ## TODO: populate x using back-substitution
    x
end

function ldiv(U::LowerTriangular, b)
    n = size(U,1)
    if length(b) != n
        error("The system is not compatible")
    end
    x = zeros(n)  # the solution vector
    ## TODO: populate x using forward-substitution
    x
end
# -
# **Problem 2.2⋆** Given $𝐱 ∈ ℝ^n$, find a lower triangular matrix of the form
# $$
# L = I - 2 𝐯 𝐞_1^⊤
# $$
# such that:
# $$
# L 𝐱 = x_1 𝐞_1.
# $$
# What does $L𝐲$ equal if $𝐲 ∈ ℝ^n$ satisfies $y_1 = 𝐞_1^⊤ 𝐲 = 0$?
#
# ## 3. Banded matrices
#
# **Problem 3.1** Complete the implementation of `UpperTridiagonal` which represents a banded matrix with
# bandwidths $(l,u) = (0,2)$:
# +
struct UpperTridiagonal{T} <: AbstractMatrix{T}
    d::Vector{T}   # diagonal entries
    du::Vector{T}  # super-diagonal entries
    du2::Vector{T} # second-super-diagonal entries
end

size(U::UpperTridiagonal) = (length(U.d), length(U.d))

function getindex(U::UpperTridiagonal, k::Int, j::Int)
    d,du,du2 = U.d,U.du,U.du2
    # TODO: return U[k,j]
end

function setindex!(U::UpperTridiagonal, v, k::Int, j::Int)
    d,du,du2 = U.d,U.du,U.du2
    if j > k+2
        error("Cannot modify off-band")
    end
    # TODO: modify d,du,du2 so that U[k,j] == v
    U # by convention we return the matrix
end
# -
# **Problem 3.2** Complete the following implementations of `*` and `\` for `UpperTridiagonal` so that
# they take only $O(n)$ operations.
# +
function *(U::UpperTridiagonal, x::AbstractVector)
    T = promote_type(eltype(U), eltype(x)) # make a type that contains both the element type of U and x
    b = zeros(T, size(U,1)) # returned vector
    # TODO: populate b so that U*x == b (up to rounding)
end

function \(U::UpperTridiagonal, b::AbstractVector)
    T = promote_type(eltype(U), eltype(b)) # make a type that contains both the element type of U and b
    x = zeros(T, size(U,2)) # returned vector
    # TODO: populate x so that U*x == b (up to rounding)
end
# -
# ## 4. Permutations
#
# **Problem 4.1⋆** What are the permutation matrices corresponding to the following permutations?
# $$
# \begin{pmatrix}
# 1 & 2 & 3 \\
# 3 & 2 & 1
# \end{pmatrix}, \begin{pmatrix}
# 1 & 2 & 3 & 4 & 5 & 6\\
# 2 & 1 & 4 & 3 & 6 & 5
# \end{pmatrix}.
# $$
#
#
# **Problem 4.2** Complete the implementation of a type representing
# permutation matrices that supports `P[k,j]` and such that `*` takes $O(n)$ operations.
# +
struct PermutationMatrix <: AbstractMatrix{Int}
    p::Vector{Int} # represents the permutation whose action is v[p]
    function PermutationMatrix(p::Vector)
        sort(p) == 1:length(p) || error("input is not a valid permutation")
        new(p)
    end
end

size(P::PermutationMatrix) = (length(P.p), length(P.p))

function getindex(P::PermutationMatrix, k::Int, j::Int)
    # TODO: Return P[k,j]
end

function *(P::PermutationMatrix, x::AbstractVector)
    # TODO: permute the entries of x
end
# If your code is correct, this "unit test" will succeed
p = [1, 4, 2, 5, 3]
P = PermutationMatrix(p)
@test P == I(5)[p,:]
# -
# ## 5. Orthogonal matrices
#
# **Problem 5.1⋆** Show that orthogonal matrices preserve the 2-norm of vectors:
# $$
# \|Q 𝐯\| = \|𝐯\|.
# $$
#
#
# **Problem 5.2⋆** Show that the eigenvalues $λ$ of an orthogonal matrix $Q$ are
# on the unit circle: $|λ| = 1$.
#
#
# **Problem 5.3⋆** Explain why an orthogonal matrix $Q$ must be equal to $I$ if all its eigenvalues are 1.
#
#
# **Problem 5.4** Complete the implementation of a type representing
# reflections that supports `Q[k,j]` and such that `*` takes $O(n)$ operations.
# +
# Represents I - 2v*v'
struct Reflection{T} <: AbstractMatrix{T}
v::Vector{T}
end
Reflection(x::Vector{T}) where T = Reflection{T}(x/norm(x))
size(Q::Reflection) = (length(Q.v),length(Q.v))
function getindex(Q::Reflection, k::Int, j::Int)
# TODO: Return Q[k,j]
end
function *(Q::Reflection, x::AbstractVector)
# TODO: apply the reflection to x
end
# If your code is correct, these "unit tests" will succeed
x = randn(5)
Q = Reflection(x)
v = x/norm(x)
@test Q == I-2v*v'
@test Q*v ≈ -v
@test Q'Q ≈ I
# -
# **Problem 5.5** Complete the following implementation of a Householder reflection so that the
# unit tests pass. Here `s == true` means the Householder reflection is sent to the positive axis and `s == false` is the negative axis.
# +
function householderreflection(s::Bool, x::AbstractVector)
# TODO: implement Householder reflection, returning a Reflection
end
x = randn(5)
Q = householderreflection(true, x)
@test Q isa Reflection
@test Q*x ≈ norm(x)*[1; zeros(length(x)-1)]
Q̃ = householderreflection(false, x)
@test Q̃ isa Reflection
@test Q̃*x ≈ -norm(x)*[1; zeros(length(x)-1)]
# sheets/week3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: analyses
# language: python
# name: analyses
# ---
# # NIPS implementation challenge: "Concentration of Multilinear Functions of the Ising Model with Applications to Network Data"
#
# This notebook provides example code implementing the algorithm described in [<a href="">Daskalakis et al. 2017</a>] for testing whether a synthetic sample, generated by a process that makes it differ from an Ising model in the high temperature regime, could nevertheless have been sampled from an Ising model in the high temperature regime. The departure of the generated lattice from the high-temperature Ising model is parameterized by a number $\tau \in [0, 1]$, and the results of [<a href="">Daskalakis et al. 2017</a>] are confirmed here for the values of $\tau$ at which the chosen statistic can detect this departure.
# +
import itertools
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# %matplotlib inline
# -
# ## Construct samples
#
# ### Social network
#
# Our departures from the null hypothesis are generated in the following manner, parameterized by some parameter $\tau \in [0, 1]$. The grid is initialized by setting each node independently to be $-1$ or $1$ with equal probability. We then iterate over the nodes in column major order. For the node $x$ at position $v_x = (i, j)$, we select a node $y$ uniformly at random from any of the vertices at most a Manhattan distance of 2 away. Then, with probability $\tau$, we set $y$ to have the same value as $x$. We imagine this construction as a type of social network model, where each individual tries to convert one of his nearby connections in the network to match his signal, and is successful with probability $\tau$.
#
# Constants
theta_critical = np.log(1 + np.sqrt(2)) / 2
print "Critical value of theta:", theta_critical
# +
class Memoize:
def __init__(self, f):
self.f = f
self.memo = {}
def __call__(self, *args):
if args not in self.memo:
self.memo[args] = self.f(*args)
return self.memo[args]
def get_neighbors(u, N, distance):
"""List all neighbors for a given vertex u = (row, col) on an NxN lattice."""
row, col = u[0], u[1]
neighbors = [((row - distance + i) % N, (col - distance + j) % N)
for (i, j) in np.ndindex(2 * distance + 1, 2 * distance + 1)
if sp.spatial.distance.cityblock((i, j), (distance, distance)) <= distance]
neighbors.remove(u)
return neighbors
get_neighbors = Memoize(get_neighbors)
def random_array(N):
"""Return random array {-1, 1}^{N x N}."""
return (np.random.choice([1, -1], size=N ** 2)
.reshape(N, N))
def stochastic_social_network(N, tau):
"""Construct an N x N grid of up and down spins.
:param int N: number of rows / columns
:param float tau: number between 0 and 1 that parameterizes the distance
from a perfectly random grid by introducing correlations between nearby
spins. In particular, it is the probability that a given spin can
'convince' one of its 'friends' (Manhattan distance 2 or less) to copy
its value
"""
social_grid = random_array(N) # initialize social grid as random
# iterate over index of vertices in column major order
for col, row in np.ndindex(N, N):
v_x = (row, col)
neighbors = get_neighbors(v_x, N, 2)
v_y = neighbors[np.random.randint(len(neighbors))]
# Pick a number between 0 and 1 with p(1) = tau
convinced = np.random.choice([True, False], p=[tau, 1 - tau])
if convinced:
social_grid[v_y] = social_grid[v_x]
return social_grid
# -
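# As a quick sanity check of `get_neighbors`: the `% N` arithmetic gives the lattice periodic boundary conditions, so every site, including a corner site, has a full neighbourhood. The snippet below re-checks this with a plain, unmemoized copy of the function (renamed so it does not shadow the memoized version above).

```python
import numpy as np
from scipy.spatial.distance import cityblock

def plain_get_neighbors(u, N, distance):
    """List all neighbors of u = (row, col) on an NxN lattice with periodic boundaries."""
    row, col = u
    neighbors = [((row - distance + i) % N, (col - distance + j) % N)
                 for (i, j) in np.ndindex(2 * distance + 1, 2 * distance + 1)
                 if cityblock((i, j), (distance, distance)) <= distance]
    neighbors.remove(u)  # the centre (i, j) == (distance, distance) maps back to u itself
    return neighbors

# On the torus every site has 4 neighbours within Manhattan distance 1
# and 12 within distance 2, even a corner site such as (0, 0).
print(len(plain_get_neighbors((0, 0), 5, 1)))  # 4
print(len(plain_get_neighbors((0, 0), 5, 2)))  # 12
```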
N, tau = 40, 0.04
social_grid004 = stochastic_social_network(N, tau)
sns.heatmap(social_grid004, xticklabels=False, yticklabels=False);
plt.title('Social network grid (tau = %.2f)' % tau);
N, tau = 40, 0.4
social_grid04 = stochastic_social_network(N, tau)
sns.heatmap(social_grid04, xticklabels=False, yticklabels=False);
plt.title('Social network grid (tau = %.1f)' % tau);
N, tau = 40, 1
social_grid1 = stochastic_social_network(N, tau)
sns.heatmap(social_grid1, xticklabels=False, yticklabels=False,
cbar=True);
plt.title('Social network grid (tau = %d)' % tau);
# ### Ising model
#
#
# The data will be compared with an Ising model, of which we give example states below (with nearest-neighbour interactions $\theta = 0.04$ and $\theta = 0.4$). It is clearly difficult to tell these states apart just by looking at them.
class Ising_lattice(object):
"""Constructs NxN Ising lattice with nearest-neighbor interaction theta."""
def __init__(self, N, theta):
self.N = N
self.theta = theta
self.mixing_time = self._mixing_time()
self.glauber_transition_probabilities = self._compute_glauber_transition_probabilities()
# Create an Ising state by running Glauber dynamics until mixing occurs
self.random_lattice = self.random_ising_lattice()
self.ising_lattice = self.ising_lattice()
def _compute_glauber_transition_probabilities(self):
return {delta: 1. / (1 + np.exp(self.theta * delta))
for delta in (-8, -4, 0, 4, 8)}
def _mixing_time(self):
"""Estimate mixing time for eta-high-temperature regime."""
n_nodes = self.N ** 2
eta = 1 - np.tanh(self.theta)
mixing_time = int(n_nodes * np.log(n_nodes) / eta)
return mixing_time
def glauber_step(self, lattice):
"""Perform one step in Glauber dynamics."""
# Choose a random spin i indexed by (row_i, col_i)
row_i, col_i = np.random.randint(0, self.N), np.random.randint(0, self.N)
# Find its nearest neighbours (under pbc) and compute energy delta
sum_of_neighboring_spins = sum(
[lattice[v] for v in get_neighbors((row_i, col_i), self.N, 1)])
delta = 2 * lattice[(row_i, col_i)] * sum_of_neighboring_spins
# Look up transition probability p_flip
p_flip = self.glauber_transition_probabilities[delta]
# With probability p_flip, flip spin i
random_number = np.random.uniform()
if random_number < p_flip:
lattice[row_i, col_i] *= -1
return lattice
def random_ising_lattice(self):
return (np.random.choice([1, -1], size=self.N ** 2)
.reshape(self.N, self.N))
def ising_lattice(self):
"""Run the Glauber dynamics long enough to reach mixing."""
# initialize lattice at random
lattice = self.random_lattice
for _ in range(self.mixing_time):
lattice = self.glauber_step(lattice)
return lattice
def sample_ising_states(self, n_samples):
"""Starting from an Ising state, create a collection of n_samples."""
intermediate_state = self.ising_lattice
samples = []
for _ in range(n_samples):
intermediate_state = self.glauber_step(intermediate_state)
samples.append(intermediate_state.copy())  # copy: glauber_step mutates the lattice in place
return samples
ising = Ising_lattice(40, 0.04)
sns.heatmap(np.reshape(ising.ising_lattice, (40, 40)), xticklabels=False, yticklabels=False,
cbar=True);
plt.title('Ising grid (theta = %.2f)' % 0.04);
ising04 = Ising_lattice(40, 0.4)
sns.heatmap(np.reshape(ising04.ising_lattice, (40, 40)), xticklabels=False, yticklabels=False,
cbar=True);
plt.title('Ising grid (theta = %.1f)' % 0.4);
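# The class above decides how long to run the Glauber dynamics via an $O(n\log n)$ mixing-time bound. For reference, the same formula as `Ising_lattice._mixing_time`, as a standalone function (illustrative only):

```python
import numpy as np

def glauber_mixing_time(N, theta):
    """Mixing-time estimate n*log(n)/eta, with n = N**2 nodes and eta = 1 - tanh(theta)."""
    n_nodes = N ** 2
    eta = 1 - np.tanh(theta)
    return int(n_nodes * np.log(n_nodes) / eta)

# Mixing slows down as theta grows, because eta = 1 - tanh(theta) shrinks.
print(glauber_mixing_time(40, 0.04))
print(glauber_mixing_time(40, 0.4))
```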
# ### Tests for Ising model: energy and magnetization
# We can check whether our MCMC algorithm is correctly implemented by computing the magnetization and energy for each step of the Glauber dynamics and ascertaining that the magnetization remains close to 0 and the energy decreases on average.
# +
# Tests
def magnetization(lattice):
return lattice.flatten().sum()
def energy(lattice):
N = len(lattice)
return sum([-0.5 * lattice[row, col] * sum(
[lattice[v] for v in get_neighbors((row, col), N, 1)])
for (row, col) in np.ndindex(N, N)])
# +
lattice = ising.random_lattice
energies, magnetizations = [], []
for _ in range(1000):
energies.append(energy(lattice))
magnetizations.append(magnetization(lattice))
lattice = ising.glauber_step(lattice)
plt.plot(energies)
plt.title("Energy during Glauber dynamics from random state");
# -
plt.plot(magnetizations)
plt.title("Magnetization during Glauber dynamics from random state");
# ## Hypothesis testing
#
# 1. Construct network sample with value of `tau` (where `tau` parameterizes the departure from an Ising model in the high-temperature limit)
# 1. Compute `theta_mple` for the network sample
# 1. If `theta_mple > theta_critical`, reject the null hypothesis (not in high-temperature regime)
# 1. Generate 100 Ising samples with `theta_mple`
# 1. Compute `Z_2` (local partition function) for the network sample
# 1. Compute `Z_2` for 100 Ising samples with `theta = theta_mple` and compute the 95% confidence interval
# 1. If the value of `Z_2` for the network sample falls outside the 95% confidence interval, reject the null hypothesis
#
# ### Ising model probability density
# Given the Ising model on a graph $G = (V, E)$,
# $$f(\{\theta\}, \sigma)=\exp\left(\sum_{v\in V}\theta_vX_v + \sum_{u,v\in V}\theta_{u,v}X_uX_v - F({\beta})\right),$$
# where $\sigma$ is one state consisting of a lattice of spins $\{X_u\},\; u\in V$, and $\theta_v$ and $\theta_{u, v}$ a local magnetic field and magnetic interaction terms, respectively (note that repeated indices imply summation, and note that $\{\theta\}$ indicates all parameters), and $F(\beta)$ proportional to the free energy.
#
# Our null hypothesis is that the sample is generated from an Ising model in the high temperature regime on the grid, with no external field (i.e. $\theta_u = 0$ for all $u$) and a constant nearest-neighbour interaction strength parameterized by $\theta$ (i.e., $\theta_{uv}=\theta$ iff nodes $u$ and $v$ are adjacent in the grid, and $0$ otherwise). For the Ising model on the grid, the critical edge parameter for high-temperature is
# $\theta_c=\ln(1+\sqrt{2})/2$. In other words, we are in high-temperature if and only if $\theta\leq \theta_c$, and we can reject the null hypothesis if the MPLE estimate $\hat{\theta} > \theta_c$.
#
# In summary, we will consider the case of constant nearest-neighbours interaction $\theta$, with no external magnetization:
# $$
# f(\theta,\sigma) = \exp\left(\theta\sum_{u, v: u\sim v}X_uX_v - F(\beta)\right),
# $$
# where $u \sim v$ indicates that $u$ and $v$ are nearest neighbours.
#
# ### Estimate Ising model parameters
#
# Given a single multivariate sample, we first run the maximum pseudo-likelihood estimator (MPLE) to obtain an estimate of the model’s parameters under the null hypothesis that the sample is generated by a high-temperature Ising model.
#
# The pseudo-likelihood is an approximation of the likelihood, where instead of the entire partition function, one needs to compute only a local partition function.
#
# Given a 2d array of spins $\sigma =(X_{(1,1)}, X_{(1,2)}, \ldots , X_{(N, N)})$ whose joint distribution is parametrized by a parameter $\theta \in \mathbb{R}$, the MPLE of $\theta$ is defined as
# $$\hat{\theta}_{MPLE} := \mbox{arg}\,\mbox{max}\; L_p(\theta) = \mbox{arg}\,\mbox{max}\prod_{u\in V^{N\times N}} p(X_u|\theta, X_v: v\sim u).$$
#
# For the Ising model, the function $L_p(\theta)$ can be written as:
# $$
# L_p(\theta) = \prod_{u\in V}p(X_u|\theta, X_v: v\sim u)
# = \prod_{u\in V}\frac{e^{\theta\sum_{v\sim u}X_uX_v}}{e^{-\theta\sum_{v\sim u}X_v} + e^{\theta\sum_{v\sim u}X_v}}.
# $$
# This can be explicitly solved by taking the logarithm and computing the derivative with respect to $\theta$
# $$\ell(\theta) := \frac{\partial}{\partial\theta}\log L_p(\theta) = \frac{\partial}{\partial\theta}\sum_{u\in V}\left(\theta\sum_{v\sim u}X_uX_v - \log \left(2\cosh(\theta\sum_{v\sim u}X_v)\right)\right),$$
# which is equal to
# $$\ell(\theta) = \sum_{u\in V}m_u\left(X_u - \tanh(\theta m_u)\right)$$
#
# with
# $$m_u(\sigma) := \sum_{v\sim u}X_v$$
# for some $\sigma \in S_{N\times N}:=\{−1, 1\}^{N\times N}$ under the assumptions of constant nearest-neighbour interactions $\theta$.
#
# Note that $m_u(\sigma)$ does not depend on $X_u$, but only on its neighboring spins. Interpreting $\tanh(\pm\infty) = \pm 1$, the function $\ell(\theta)$ can be extended to $[0,\infty]$ by defining
# $\ell(\infty):= \sum_{u\in V}\left(m_u(\sigma)X_u-|m_u(\sigma)|\right)$. Then it is easy to verify (see <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2363958">Chatterjee</a>) that $\frac{\partial}{\partial\theta}\log L_p(\theta) = \ell(\theta)$, and the function $\ell(\theta)$ is a decreasing function of $\theta$. Therefore, the MPLE for $\theta$ in the Ising model is
# $$\hat{\theta}_{MPLE}(\sigma) := \inf\{ x \geq 0 : \ell(x) = 0 \}.$$
# +
def m(u, lattice):
N = len(lattice) if isinstance(lattice, np.ndarray) else lattice.N
return sum([lattice[v] for v in get_neighbors(u, N, 1)])
def ell(theta, lattice):
N = len(lattice) if isinstance(lattice, np.ndarray) else lattice.N
return sum([m(u, lattice) * (lattice[u] - np.tanh(theta * m(u, lattice)))
for u in np.ndindex(N, N)])
def maximum_partial_likelihood_estimator(lattice):
"""Maximum pseudo-likelihood estimate (MPLE) of theta, assuming a nearest-neighbours Ising model."""
return sp.optimize.fsolve(ell, 0.5, args=(lattice,))[0]
def local_partition_function(lattice, N, external_field=0, distance=2):
"""Compute the local partition function for an NxN lattice."""
offset = np.tanh(external_field)
return sum([(lattice[u] - offset) * sum([
(lattice[v] - offset) for v in get_neighbors(u, N, distance)])
for u in np.ndindex(N, N)])
def hypothesis_test_high_temp_ising(lattice, number_ising_samples=100):
"""Perform a hypothesis test on a square test lattice."""
reject_null_hypothesis = 0
theta_mple = maximum_partial_likelihood_estimator(lattice)
if theta_mple > theta_critical:
reject_null_hypothesis, reason_code = 1, "mple"
return reject_null_hypothesis, reason_code
N = len(lattice) if isinstance(lattice, np.ndarray) else lattice.N
sampled_values_statistic = sorted([local_partition_function(
Ising_lattice(N, theta=theta_mple).ising_lattice, N, distance=2)
for _ in range(number_ising_samples)])
confidence_interval = sp.stats.norm.interval(
0.95, loc=np.mean(sampled_values_statistic),
scale=np.std(sampled_values_statistic))
test_lattice_statistic = local_partition_function(lattice, N, distance=2)
if (test_lattice_statistic < confidence_interval[0] or
test_lattice_statistic > confidence_interval[1]):
reject_null_hypothesis, reason_code = 1, "p-value"
return reject_null_hypothesis, reason_code
else:
return reject_null_hypothesis, "fail"
# -
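# Since $\ell(\theta)$ is decreasing in $\theta$, the MPLE root can also be bracketed and found by plain bisection instead of `fsolve`. A minimal self-contained sketch on a toy decreasing function (the helper name and tolerance are illustrative, not part of the implementation above):

```python
import numpy as np

def bisect_decreasing(ell, lo=0.0, hi=2.0, tol=1e-10):
    """Root of a decreasing function ell with ell(lo) >= 0 >= ell(hi)."""
    assert ell(lo) >= 0 >= ell(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ell(mid) > 0:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid  # root lies to the left of mid
    return 0.5 * (lo + hi)

# Toy check: ell(x) = tanh(0.3) - tanh(x) is decreasing and vanishes at x = 0.3.
theta_hat = bisect_decreasing(lambda x: np.tanh(0.3) - np.tanh(x))
print(abs(theta_hat - 0.3) < 1e-8)  # True
```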
# ### Tests for Ising model: MPLE
#
# In order to test whether the MPLE is correctly implemented, check whether the MPLE estimate of a known Ising lattice is close enough to the actual value of theta.
def test_mple_ising():
# Construct Ising grid with pre-determined value of theta
# under the assumption that there is no external field
# theta = 0 corresponds to the high-temperature limit of
# the Ising model under 0 external field. In this case, the
# model is random.
epsilon = 5e-2
random_grid = (np.random.choice([1, -1], size=40 ** 2)
.reshape(40, 40))
print "MPLE random grid:", maximum_partial_likelihood_estimator(random_grid)
assert maximum_partial_likelihood_estimator(random_grid) < epsilon
for theta in [0.04, 0.1, 0.4, 0.8, 1]:
ising_lattice = Ising_lattice(40, theta).ising_lattice
mple = maximum_partial_likelihood_estimator(ising_lattice)
print "theta:", theta, "MPLE:", mple
assert mple - theta < epsilon
test_mple_ising()
# The MPLE works less well in the low-temperature regime, but we will only need to have a precise estimate in the high temperature regime.
N, tau = 40, 1
social_grid = stochastic_social_network(N, tau)
theta_mple = maximum_partial_likelihood_estimator(social_grid)
print "MPLE estimate of theta:", theta_mple
print "High-temperature regime:", theta_mple <= theta_critical
# ### MCMC using MPLE parameters
# If the value of $\hat{\theta}$ is lower than the critical value, i.e. we cannot reject the null hypothesis on the grounds of it not being in the high temperature regime, we instead compute a statistic and compare its value on the sample to a range of values computed on samples of high-temperature Ising models with the estimated nearest-neighbour interaction $\hat{\theta}$.
# We use the following local bilinear function as a statistic:
# $$Z_{\mbox{local}} = \sum_{u=(i,j)} \sum_{v=(k,l): d(u,v)\leq 2} X_uX_v,$$
# with $d(u, v)$ the Manhattan distance between two lattice sites $u$ and $v$.
#
# The statistic $Z_{\mbox{local}}$ is bilinear in the spins, so under the Ising model its distribution is very concentrated, which makes it well suited to rejecting the null hypothesis. In order to sample from the Ising distribution in the high temperature limit, we start with a random lattice and run the Glauber algorithm, which gives the transition probability from one state to another, $\sigma_i\rightarrow \sigma_j$, as:
# $$P(\sigma_i \rightarrow \sigma_j) = \frac{1}{1 + e^{\theta\Delta E_{ji}}}.$$
# In the high temperature regime, mixing occurs quickly, and one needs to run these steps only $O(n\log n)$ times.
#
# Finally, given the range of values for the statistic determined by MCMC, we reject the null hypothesis if p ≤ 0.05.
ising_mple = Ising_lattice(N=40, theta=theta_mple)
sns.heatmap(
np.reshape(ising_mple.ising_lattice, (40, 40)),
xticklabels=False, yticklabels=False, cbar=True);
plt.title('Ising grid with theta = MPLE estimate of social network grid (tau = 1)');
N = 40
sampled_values_statistic = sorted([local_partition_function(
Ising_lattice(N, theta=theta_mple).ising_lattice, N, distance=2) for _ in range(100)])
print sp.stats.describe(sampled_values_statistic)
print "95% confidence interval:", sp.stats.norm.interval(
0.95, loc=np.mean(sampled_values_statistic),
scale=np.std(sampled_values_statistic))
sns.distplot(sampled_values_statistic);
plt.title('Distribution of Z_2 statistic on Ising grid with theta = MPLE estimate');
# Compare this to the value of the local partition function for our social network lattice
statistic_social_grid = local_partition_function(social_grid, N, distance=2)
reject_null = ((statistic_social_grid > sampled_values_statistic[95]) or (statistic_social_grid < sampled_values_statistic[5]))
print 'Value of Z_2 statistic on social network grid (tau = 1):', statistic_social_grid
print "Reject null hypothesis (used cutoff p = 0.05)!" if reject_null else "Failed to reject null hypothesis."
# So it looks like the case $\tau = 1$ is quite easy to distinguish from an Ising grid.
# ### Plot probability of rejecting null hypothesis vs tau
#
# In order to test the power of this statistic in rejecting the null hypothesis for cases when the departure from high temperature Ising is less pronounced, we plot the probability of rejecting vs $\tau$.
outcome_data = []
for tau in np.logspace(-3, 0, num=25):
print ".",
reasons, test_outcomes = [], []
for _ in range(100):
social_grid = stochastic_social_network(N, tau)
reject_null, reason = hypothesis_test_high_temp_ising(
social_grid, number_ising_samples=100)
test_outcomes.append(reject_null)
reasons.append(reason)
outcome_data.append(
{'tau': tau,
'reject_null_avg': np.mean(test_outcomes),
'reasons': reasons})
# +
fig, ax = plt.subplots()
ax.set(xscale='log')
plt.plot([data['tau'] for data in outcome_data],
[data["reject_null_avg"] for data in outcome_data]);
plt.title('Rejections of null hypothesis (high-T Ising)')
# Set x-axis label
plt.xlabel('tau (departure from high-T Ising model; log scale)')
# Set y-axis label
plt.ylabel('fraction of rejections');
# -
#
# NIPS_Ising.ipynb
-- -*- coding: utf-8 -*-
-- ---
-- jupyter:
-- jupytext:
-- text_representation:
-- extension: .hs
-- format_name: light
-- format_version: '1.5'
-- jupytext_version: 1.14.4
-- kernelspec:
-- display_name: Haskell
-- language: haskell
-- name: haskell
-- ---
-- + [markdown] hidden=false
-- 
--
-- IHaskell Notebook
-- ===
-- Hello, and welcome to the **IHaskell Notebook**. IHaskell Notebook is similar to an interactive shell along the lines of GHCi. However, it is much more powerful, and provides features such as syntax highlighting, autocompletion, multi-line input cells, integrated documentation, rich output visualization, and more. In this notebook, I'd like to demonstrate many of the awesome features IHaskell provides.
--
-- IHaskell is implemented as a language kernel for the [IPython](http://ipython.org) project, which means that although the entire thing is written only in Haskell, we get a beautiful notebook interface practically for free.
--
-- We can start with very simple Haskell expressions:
-- -
-- First of all, we can evaluate simple expressions.
3 + 5
"Hello, " ++ "World!"
-- + [markdown] hidden=false
-- As you can see, each input cell gets an execution number. The first input cell is labeled `In [1]`. Just like in GHCi, the output of the last executed statement or expression is available via the `it` variable - however, in addition, the output of the $n$th cell is available via the `itN` variable. For example, if we wanted to see what the first cell printed, we can go ahead and output that:
-- -
it1
-- + [markdown] hidden=false
-- In addition to simple code cells such as the ones you see, you can also have other types of cells. All of this inline text, for instance, is written using Markdown cells, which support the majority of Github markdown syntax. This lets you embed images and formatting and arbitrary HTML interspersed with your Haskell code. In addition, you can export these notebooks into HTML or even as presentations using `reveal.js`.
--
-- Alright, back to code. Let's do something slightly fancier:
-- -
-- Unlike in GHCi, we can have multi-line expressions.
concat [
"Hello",
", ",
"World!"
] :: String
-- + [markdown] hidden=false
-- In addition to multi-line expressions, IHaskell supports most things that you could put in a standard Haskell file. For example, we can have function bindings without the `let` that GHCi requires. (As long as you group type signatures and their corresponding declarations together, you can use pattern matching and put signatures on your top-level declarations!)
-- +
thing :: String -> Int -> Int
thing "no" _ = 100
thing str int = int + length str
thing "no" 10
thing "ah" 10
-- + [markdown] hidden=false
-- So far we've just looked at pure functions, but nothing is stopping us from doing IO.
-- -
print "What's going on?"
-- + [markdown] hidden=false
-- IHaskell supports most GHC extensions via the `:extension` directive (or any shorthand thereof).
-- -
-- We can disable extensions.
:ext NoEmptyDataDecls
data Thing
-- And enable extensions.
:ext EmptyDataDecls
data Thing
-- + [markdown] hidden=false
-- Data declarations do pretty much what you expect, and work fine on multiple lines. If a declaration turns out to be not quite what you wanted, you can just go back, edit it, and re-evaluate the code cell.
-- +
-- Various data declarations work fine.
data One
= A String
| B Int
deriving Show
print [A "Hello", B 10]
-- + [markdown] hidden=false
-- Although this doesn't hold everywhere, we've tried to keep IHaskell relatively similar to GHCi in terms of naming. So, just like in GHCi, you can inspect types with `:type` (or shorthands):
-- -
-- We can look at types like in GHCi.
:ty 3 + 3
-- + [markdown] hidden=false
-- The same goes for the `:info` command. However, unlike GHCi, which simply prints info, the IHaskell notebook brings up a separate pane.
-- -
-- What is the Integral typeclass?
:info Integral
-- + [markdown] hidden=false
-- If you're looking at this notebook after it's been exported to HTML, you won't be able to see this interactive pane that pops up after this is evaluated. However, you can disable the interactive pager, and instead just show the output below the cell:
-- -
-- Only takes effect on later cells, so stick it in its own cell.
:opt no-pager
:info Integral
-- + [markdown] hidden=false
-- We can now write slightly more complicated scripts.
-- +
-- Results are printed as we go, even from a single expression.
import Control.Monad
import Control.Concurrent
forM_ [1..5] $ \x -> do
print x
threadDelay $ 200 * 1000
-- + [markdown] hidden=false
-- This is where the similarities with GHCi end, and the particularly shiny features of IHaskell begin.
--
-- Although looking at text outputs is often enough, there are many times where we really want a richer output. Suppose we have a custom data type for color:
-- -
data Color = Red | Green | Blue
-- + [markdown] hidden=false
-- If we were playing around with designing GUI applications, for instance, we might want to actually *see* these colors, instead of just seeing the text "Red", "Green", and "Blue" when we are debugging.
--
-- IHaskell lets you define a custom display mechanism for any data type via its `IHaskellDisplay` typeclass. Since you can use IHaskell in console mode as well as notebook mode, you can provide a list of display outputs for any data type, and the frontend will simply choose the best one. Here's how you would implement a very simple display mechanism for this `Color` data type:
-- +
import IHaskell.Display
instance IHaskellDisplay Color where
display color = return $ Display [html code]
where
code = concat ["<div style='font-weight: bold; color:"
, css color
, "'>Look!</div>"]
css Red = "red"
css Blue = "blue"
css Green = "green"
-- + [markdown] hidden=false
-- Once we define a custom `display :: a -> IO Display` function, we can simply output a `Color`:
-- -
Red
Green
Blue
-- + [markdown] hidden=false
-- The `DisplayData` type has several constructors which let you display your data as plain text, HTML, images (SVG, PNG, JPG), or even as LaTeX code.
--
-- In order to ship an extension for IHaskell, simply create a package named `ihaskell-thing` with a module named `IHaskell.Display.Thing`. As long as `ihaskell-thing` is installed, IHaskell will detect and use it automatically.
--
-- A number of packages already exist, which we can briefly look at.
-- + [markdown] hidden=false
-- The `ihaskell-aeson` package adds a display for [Aeson](http://hackage.haskell.org/package/aeson) JSON `Value` types. These are automatically formatted as JSON, rather than as Haskell values:
-- +
-- Aeson JSON data types are displayed nicely.
:ext OverloadedStrings
import Data.Aeson
data Coord = Coord { x :: Double, y :: Double }
instance ToJSON Coord where
toJSON (Coord x y) = object ["x" .= x, "y" .= y]
Null
Bool True
toJSON (Coord 3 2)
-- + [markdown] hidden=false
-- The `ihaskell-blaze` package lets you play around with HTML straight from within IHaskell using the [Blaze](http://jaspervdj.be/blaze/tutorial.html) library.
-- +
-- Small bits of HTML generated via Blaze are displayed.
import Prelude hiding (div, id)
import Text.Blaze.Html4.Strict hiding (map, style)
import Text.Blaze.Html4.Strict.Attributes
div ! style "color: red" $ do
p "This is an example of BlazeMarkup syntax."
b "Hello"
forM [1..5] $ \size -> do
let s = toValue $ size * 70
img ! src "https://www.google.com/images/srpr/logo11w.png" ! width s
-- + [markdown] hidden=false
-- The `ihaskell-diagrams` package allows you to experiment with the [diagrams](http://projects.haskell.org/diagrams/) package. It requires the Cairo backend.
-- +
-- We can draw diagrams, right in the notebook.
:extension NoMonomorphismRestriction FlexibleContexts TypeFamilies
import Diagrams.Prelude
-- By <NAME>
-- Draw a Sierpinski triangle!
sierpinski 1 = eqTriangle 1
sierpinski n = s
===
(s ||| s) # centerX
where s = sierpinski (n-1)
-- The `diagram` function is used to display them in the notebook.
diagram $ sierpinski 4
# centerXY
# fc black
`atop` square 10
# fc white
-- + [markdown] hidden=false
-- Just like with Diagrams, `ihaskell-charts` allows you to use the [Chart](https://github.com/timbod7/haskell-chart/wiki) library for plotting from within IHaskell. (You will need to install `cairo` as well, which may be a bit of a hassle.)
-- +
-- We can draw small charts in the notebook.
-- This example is taken from the haskell-chart documentation.
import Graphics.Rendering.Chart
import Data.Default.Class
import Control.Lens
let values = [
("Mexico City" , 19.2, 0),
("Mumbai" , 12.9, 10),
("Sydney" , 4.3, 0),
("London" , 8.3, 0),
("New York" , 8.2, 25)]
pitem (s, v, o) = pitem_value .~ v
$ pitem_label .~ s
$ pitem_offset .~ o
$ def
-- Convert to a renderable in order to display it.
toRenderable
$ pie_title .~ "Relative Population"
$ pie_plot . pie_data .~ map pitem values
$ def
-- + [markdown] hidden=false
-- In addition to displaying outputs in a rich format, IHaskell has a bunch of useful features.
--
-- For instance, the popular linting tool `hlint` is integrated and turned on by default. Let's write some ugly code, and see what it tells us:
-- +
-- There is also hlint integration enabled by default.
-- If you write sketchy code, it will tell you:
f :: Int -> Int
f x = x + 1
-- Most warnings are orange...
f $ 3
do
return 3
-- + [markdown] hidden=false
-- If you're an experienced Haskeller, though, and don't want `hlint` telling you what to do, you can easily turn it off:
-- -
-- If hlint annoys you, though, you can turn it off.
-- Note that this only takes effect in the next cell execution.
:opt no-lint
-- You could similarly use `:opt lint` to turn it back on.
f $ 3
-- + [markdown] hidden=false
-- In addition to `hlint` integration, IHaskell also integrates **Hoogle** for documentation searches. IHaskell provides two directives for searching Hoogle. The first of these, `:document` (or shorthands), looks for exact matches.
-- -
:doc filterM
-- + [markdown] hidden=false
-- The other provided command is `:hoogle`. This does a normal Hoogle search, and thus lets you use imperfect matching and searching by type signature. This will show you documentation for things that match the desired type signature, as demonstrated below. It automatically formats inline Haskell code and hyperlinks the identifiers to their respective Haddock documentations.
-- -
:hoogle :: [a] -> [b] -> [(a, b)]
-- + [markdown] hidden=false
-- If you need a refresher on all of the options, you can just use `:help`:
-- -
:help
-- + [markdown] hidden=false
-- All of the code you normally put into IHaskell is (like in GHCi) interpreted. However, sometimes you've perfected a function, and now need it to run faster. In that case, you can go ahead and define a module in a single cell. As long as your module has a module header along the lines of `module Name where`, IHaskell will recognize it as a module. It will create the file `A/B.hs`, compile it, and load it.
-- +
-- If your code isn't running fast enough, you can just put it into a module.
module A.B where
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
-- + [markdown] hidden=false
-- Note that the module is by default imported unqualified, as though you had typed `import A.B`.
-- -
-- The module is automatically imported unqualified.
print $ A.B.fib 20
print $ fib 20
-- + [markdown] hidden=false
-- Note that since a new module is imported, all previously bound identifiers are now unbound. For instance, we no longer have access to the `f` function from before:
-- -
f 3
-- + [markdown] hidden=false
-- However, if you re-import this module with another import statement, the original implicit import goes away.
-- +
import qualified A.B as Fib
Fib.fib 20
fib 20
-- + [markdown] hidden=false
-- Thanks!
-- ---
--
-- That's it for now! I hope you've enjoyed this little demo of **IHaskell**! There are still a few features that I haven't covered, such as the `show-types` and `show-errors` options, as well as the relatively intelligent autocompletion mechanism and inline type info popups.
--
-- I hope you find IHaskell useful, and please report any bugs or features requests [on Github](https://github.com/gibiansky/IHaskell/issues). If you have any comments, want to contribute, or just want to get in touch, don't hesitate to contact me at Andrew dot Gibiansky at Gmail. Contributions are also more than welcome, and I'm happy to help you get started with IHaskell development if you'd like to contribute!
--
-- Thank you to [<NAME>](https://github.com/aavogt), [<NAME>](http://reganmian.net/), and [@edechter](https://github.com/edechter) for their testing, bug reporting, pull requests, and general patience!
| notebooks/IHaskell.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# coding:utf-8
from __future__ import print_function
import math
import numpy as np
#import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import argparse
import time
def snoob(x):
    # next larger integer with the same number of set bits
    # (the "next same-popcount" bit trick from Hacker's Delight)
    next = 0
    if (x > 0):
        smallest = x & -(x)             # isolate the rightmost set bit
        ripple = x + smallest           # propagate the carry through the low block of ones
        ones = x ^ ripple               # bits changed by the carry
        ones = (ones >> 2) // smallest  # right-adjust the leftover ones
        next = ripple | ones
    return next
def binomial(n,r):
return math.factorial(n) // (math.factorial(n - r) * math.factorial(r))
def count_bit(n):
count = 0
while (n):
count += n & 1
n >>= 1
return count
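# A quick self-contained sanity check (the two helpers are re-stated so the snippet
# runs on its own): `snoob` should step through same-popcount integers in increasing order.

```python
def snoob(x):
    # next larger integer with the same number of set bits
    next_ = 0
    if x > 0:
        smallest = x & -x
        ripple = x + smallest
        next_ = ripple | (((x ^ ripple) >> 2) // smallest)
    return next_

def count_bit(n):
    # population count (number of set bits)
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return count

x = 0b0011            # 3: the smallest integer with two set bits
seq = [x]
for _ in range(4):
    x = snoob(x)
    seq.append(x)

print(seq)            # [3, 5, 6, 9, 10]
assert all(count_bit(v) == 2 for v in seq)
```

Every element of `seq` keeps exactly two set bits, which is why the enumeration of fixed-Sz basis states below can start from `iup` and repeatedly apply `snoob`.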
def init_parameters(N,Sz):
Nup = N//2 + Sz
Nhilbert = binomial(N,Nup)
ihfbit = 1 << (N//2)
irght = ihfbit-1
ilft = ((1<<N)-1) ^ irght
iup = (1<<(N-Nup))-1
return Nup, Nhilbert, ihfbit, irght, ilft, iup
def make_list(N,Nup,Nhilbert,ihfbit,irght,ilft,iup):
list_1 = np.zeros(Nhilbert,dtype=int)
list_ja = np.zeros(ihfbit,dtype=int)
list_jb = np.zeros(ihfbit,dtype=int)
ii = iup
ja = 0
jb = 0
ia_old = ii & irght
ib_old = (ii & ilft) // ihfbit
list_1[0] = ii
list_ja[ia_old] = ja
list_jb[ib_old] = jb
ii = snoob(ii)
for i in range(1,Nhilbert):
ia = ii & irght
ib = (ii & ilft) // ihfbit
if (ib == ib_old):
ja += 1
else:
jb += ja+1
ja = 0
list_1[i] = ii
list_ja[ia] = ja
list_jb[ib] = jb
ia_old = ia
ib_old = ib
ii = snoob(ii)
return list_1, list_ja, list_jb
def get_ja_plus_jb(ii,irght,ilft,ihfbit,list_ja,list_jb):
ia = ii & irght
ib = (ii & ilft) // ihfbit
ja = list_ja[ia]
jb = list_jb[ib]
return ja+jb
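# The pair of tables `list_ja`/`list_jb` is the Lin-table trick: a state's position in
# `list_1` is recovered as `ja + jb` from its right-half and left-half bit patterns.
# A self-contained toy sketch for N=4, Nup=2 (sizes and names here are illustrative only):

```python
from itertools import combinations

N, Nup = 4, 2
half = N // 2
irght = (1 << half) - 1          # mask for the right-half bits

# all Nup-particle basis states for N sites, in increasing order
states = sorted(sum(1 << b for b in bits)
                for bits in combinations(range(N), Nup))

ja_tab, jb_tab = {}, {}          # offsets keyed by right/left half patterns
ja = jb = 0
ib_old = states[0] >> half
for i, s in enumerate(states):
    ia, ib = s & irght, s >> half
    if i > 0:
        if ib == ib_old:         # same left half: advance within the block
            ja += 1
        else:                    # new left half: accumulate the block offset
            jb += ja + 1
            ja = 0
    ja_tab[ia], jb_tab[ib] = ja, jb
    ib_old = ib

# the two small tables recover each state's position in the full list
for i, s in enumerate(states):
    assert ja_tab[s & irght] + jb_tab[s >> half] == i
print(ja_tab, jb_tab)
```

The payoff is memory: instead of one lookup table of size 2^N, two tables of size 2^(N/2) suffice to map a configuration back to its index in the Hilbert-space list.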
def make_hamiltonian(J1,D1,N,Nhilbert,irght,ilft,ihfbit,list_1,list_ja,list_jb):
    listki = [i for k in range(N+1) for i in range(Nhilbert)]  # row indices (one stripe per bond, plus the diagonal)
    loc = np.zeros((N+1)*Nhilbert,dtype=int)                   # column indices
    elemnt = np.zeros((N+1)*Nhilbert,dtype=float)              # matrix elements
for k in range(N):
isite1 = k
isite2 = (k+1)%N
is1 = 1<<isite1
is2 = 1<<isite2
is0 = is1 + is2
wght = -2.0*J1[k]
diag = wght*0.5*D1[k]
for i in range(Nhilbert):
ii = list_1[i]
ibit = ii & is0
if (ibit==0 or ibit==is0):
elemnt[N*Nhilbert+i] -= diag
loc[N*Nhilbert+i] = i
else:
elemnt[N*Nhilbert+i] += diag
loc[N*Nhilbert+i] = i
iexchg = ii ^ is0
newcfg = get_ja_plus_jb(iexchg,irght,ilft,ihfbit,list_ja,list_jb)
elemnt[k*Nhilbert+i] = -wght
loc[k*Nhilbert+i] = newcfg
HamCSR = scipy.sparse.csr_matrix((elemnt,(listki,loc)),shape=(Nhilbert,Nhilbert))
return HamCSR
# +
N = 14 # should be N>=4
Sz = 0
Nup, Nhilbert, ihfbit, irght, ilft, iup = init_parameters(N,Sz)
binirght = np.binary_repr(irght,width=N)
binilft = np.binary_repr(ilft,width=N)
biniup = np.binary_repr(iup,width=N)
print("N=",N)
print("Sz=",Sz)
print("Nup=",Nup)
print("Nhilbert=",Nhilbert)
print("ihfbit=",ihfbit)
print("irght,binirght=",irght,binirght)
print("ilft,binilft=",ilft,binilft)
print("iup,biniup=",iup,biniup)
start = time.time()
list_1, list_ja, list_jb = make_list(N,Nup,Nhilbert,ihfbit,irght,ilft,iup)
end = time.time()
print (end - start)
#print("list_1=",list_1)
#print("list_ja=",list_ja)
#print("list_jb=",list_jb)
#print("")
#print("i ii binii ja+jb")
#for i in range(Nhilbert):
# ii = list_1[i]
# binii = np.binary_repr(ii,width=N)
# ind = get_ja_plus_jb(ii,irght,ilft,ihfbit,list_ja,list_jb)
# print(i,ii,binii,ind)
# +
J1 = np.ones(N,dtype=float) # J_{ij}>0: AF
D1 = np.ones(N,dtype=float) # D_{ij}>0: AF
start = time.time()
HamCSR = make_hamiltonian(J1,D1,N,Nhilbert,irght,ilft,ihfbit,list_1,list_ja,list_jb)
end = time.time()
print (end - start)
#print (HamCSR)
start = time.time()
ene,vec = scipy.sparse.linalg.eigsh(HamCSR,k=5)
end = time.time()
print (end - start)
#print ("# GS energy:",ene[0])
print ("# energy:",ene[0],ene[1],ene[2],ene[3],ene[4])
#vec_sgn = np.sign(np.amax(vec[:,0]))
#print ("# GS wave function:")
#for i in range (Nhilbert):
# ii = list_1[i]
# binii = np.binary_repr(ii,width=N)
# print (i,vec[i,0]*vec_sgn,binii)
# -
| ed_sz_conserved_1d_heisenberg_py.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nlp-1 (Python3)
# language: python
# name: nlp-1
# ---
# +
import re
import string
from collections import Counter
import squarify
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import pandas as pd
import numpy as np
import spacy
from spacy.tokenizer import Tokenizer
from bs4 import BeautifulSoup
import html as ihtml
import requests
import sqlite3
# -
df1 = pd.read_csv('https://raw.githubusercontent.com/JimKing100/techsearch/master/data/techsearch_p1.csv')
df1 = df1.drop(df1.columns[0], axis=1)
df2 = pd.read_csv('https://raw.githubusercontent.com/JimKing100/techsearch/master/data/techsearch_p2.csv')
df2 = df2.drop(df2.columns[0], axis=1)
df = pd.concat([df1, df2], ignore_index=True)
# +
def clean_text(text):
    text = text.replace('\n', ' ')  # remove newlines
    text = BeautifulSoup(text, "lxml").get_text()  # remove html
    text = text.replace('/', ' ')  # remove forward slashes
    text = re.sub(r'[^a-zA-Z0-9 ]', '', text)  # keep letters, numbers and spaces only
    text = text.lower()  # lower case
    text = re.sub(r'(x.[0-9])', '', text)  # remove special characters
    return text
df['description'] = df.apply(lambda x: clean_text(x['description']), axis=1)
# -
nlp = spacy.load("en_core_web_lg")
tokenizer = Tokenizer(nlp.vocab)
STOP_WORDS = nlp.Defaults.stop_words.union(['year'])
# +
# Tokenizer pipe removing stop words and blank words and lemmatizing
tokens = []
for doc in tokenizer.pipe(df['description'], batch_size=500):
doc_tokens = []
for token in doc:
if (token.lemma_ not in STOP_WORDS) & (token.text != ' '):
doc_tokens.append(token.lemma_)
tokens.append(doc_tokens)
df['tokens'] = tokens
# -
df.head()
tech_terms = ['python', 'r', 'sql', 'hadoop', 'spark', 'java', 'sas', 'tableau',
'hive', 'scala', 'aws', 'c', 'c++', 'matlab', 'tensorflow', 'excel',
'nosql', 'linux', 'azure', 'scikit', 'machine learning', 'statistic',
'analysis', 'computer science', 'visual', 'ai', 'deep learning',
'nlp', 'natural language processing', 'neural network', 'mathematic',
'database', 'oop', 'blockchain',
'html', 'css', 'javascript', 'jquery', 'git', 'photoshop', 'illustrator',
'word press', 'seo', 'responsive design', 'php', 'mobile', 'design', 'react',
'security', 'ruby', 'fireworks', 'json', 'node', 'express', 'redux', 'ajax',
'java', 'api', 'state management',
'wireframe', 'ui prototype', 'ux writing', 'interactive design',
'metric', 'analytic', 'ux research', 'empathy', 'collaborate', 'mockup',
'prototype', 'test', 'ideate', 'usability', 'high-fidelity design',
'framework',
'swift', 'xcode', 'spatial reasoning', 'human interface', 'core data',
'grand central', 'network', 'objective-c', 'foundation', 'uikit',
'cocoatouch', 'spritekit', 'scenekit', 'opengl', 'metal', 'api', 'iot',
'karma']
df['tokens_filtered'] = df.apply(lambda x: list(set(x['tokens']) & set(tech_terms)), axis=1)
df.head()
# Create a count function
def count(docs):
word_counts = Counter()
appears_in = Counter()
total_docs = len(docs)
for doc in docs:
word_counts.update(doc)
appears_in.update(set(doc))
temp = zip(word_counts.keys(), word_counts.values())
wc = pd.DataFrame(temp, columns = ['word', 'count'])
wc['rank'] = wc['count'].rank(method='first', ascending=False)
total = wc['count'].sum()
wc['pct_total'] = wc['count'].apply(lambda x: x / total)
wc = wc.sort_values(by='rank')
wc['cul_pct_total'] = wc['pct_total'].cumsum()
t2 = zip(appears_in.keys(), appears_in.values())
ac = pd.DataFrame(t2, columns=['word', 'appears_in'])
wc = ac.merge(wc, on='word')
wc['appears_in_pct'] = wc['appears_in'].apply(lambda x: x / total_docs)
return wc.sort_values(by='rank')
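# The heart of `count` is a pair of Counters: total term frequency versus document
# frequency (how many docs a word appears in at least once). A minimal self-contained
# sketch of that idea, without the pandas bookkeeping:

```python
from collections import Counter

docs = [['python', 'sql', 'python'],
        ['sql', 'aws'],
        ['python', 'spark']]

word_counts = Counter()   # total occurrences across all docs
appears_in = Counter()    # number of docs each word appears in
for doc in docs:
    word_counts.update(doc)
    appears_in.update(set(doc))   # set() so a doc counts once per word

print(word_counts['python'], appears_in['python'])   # 3 2
```

The `set(doc)` step is what separates the two statistics: 'python' occurs three times overall but in only two of the three documents.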
def populate_df(title, city):
j_title = df['job'] == title
j_city = df['city'] == city
subset_df = df[j_title & j_city]
subset_df = subset_df.reset_index()
wc = count(subset_df['tokens_filtered'])
skills = wc['word'][:10]
if subset_df.shape[0] > 0:
data = {'job': title,
'city': city,
'counts': subset_df['counts'][0],
'low_salary': subset_df['low_salary'].mean(),
'high_salary': subset_df['high_salary'].mean(),
'skills': list(skills)}
else:
data = {'job': title,
'city': city,
'counts': 0,
'low_salary': 0,
'high_salary': 0,
'skills': []}
return data
# +
final_df = pd.DataFrame(columns=['job', 'city', 'counts', 'low_salary', 'high_salary', 'skills'])

# job titles and the city list scraped for each (two city names were
# elided in the source and are kept as the original placeholders)
job_cities = [
    ('data scientist', ['San Jose', 'San Francisco', 'Seattle', 'Washington',
                        'New York', 'Baltimore', 'Boulder', 'San Diego', 'Denver',
                        'Huntsville', 'Colorado Springs', 'Houston', 'Trenton',
                        'Dallas', 'Columbus', 'Austin', 'Philadelphia', 'Durham',
                        'Raleigh', 'Atlanta']),
    ('web developer', ['San Jose', 'San Francisco', 'Seattle', 'Washington',
                       'New York', 'Baltimore', 'Boulder', 'San Diego', 'Denver',
                       'Huntsville', '<NAME>', 'Houston', 'Trenton', 'Dallas',
                       'Columbus', 'Austin', 'Philadelphia', 'Durham', 'Raleigh',
                       'Atlanta']),
    ('ux designer', ['San Jose', 'San Francisco', 'Seattle', 'Washington',
                     'New York', 'Baltimore', 'Boulder', 'San Diego', 'Denver',
                     'Huntsville', 'Colorado Springs', 'Houston', 'Trenton',
                     'Dallas', 'Columbus', 'Austin', 'Philadelphia', 'Durham',
                     'Raleigh', 'Atlanta']),
    ('ios developer', ['<NAME>', 'San Francisco', 'Seattle', 'Washington',
                       'New York', 'Baltimore', 'Boulder', 'San Diego', 'Denver',
                       'Huntsville', 'Colorado Springs', 'Houston', 'Trenton',
                       'Dallas', 'Columbus', 'Austin', 'Philadelphia', 'Durham',
                       'Raleigh', 'Atlanta']),
]

for title, cities in job_cities:
    for city in cities:
        results = populate_df(title, city)
        final_df = final_df.append(results, ignore_index=True)
# -
final_df['low_salary'] = final_df['low_salary'].fillna(0)
final_df['low_salary'] = final_df['low_salary'].apply(lambda x: int(x))
final_df['low_salary'] = final_df['low_salary'].apply(lambda x: 0 if x < 10000 else x)
final_df['high_salary'] = final_df['high_salary'].fillna(0)
final_df['high_salary'] = final_df['high_salary'].apply(lambda x: int(x))
final_df['high_salary'] = final_df['high_salary'].apply(lambda x: 0 if x < 10000 else x)
final_df
final_df.to_csv('scrape_results.csv')
| main2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv('survey_results_public.csv', index_col = 'Respondent')
# loading the other csv file, schema tells us what questions were asked during the survey
schema_df = pd.read_csv('survey_results_schema.csv', index_col = 'Column')
# to learn about your data frame, you can print the .shape attribute to see how many rows and columns it has
df.shape
# or we can use the .info() method in order to learn more about our data frame
# it prints out the name of each column, its data type, the number of non-null entries, etc
df.info()
# we can set an option if we want to display a desired number of rows or columns, like this
# after raising the limit below, printing the frame shows all of the columns
pd.set_option('display.max_columns',61)
# in order to view some rows from the start or the end we use the .head() and .tail() methods of df, e.g.
# we can specify the number of rows we want to see inside the method
df.head(15)
# if we want to access multiple columns of the df, we can pass a list of the column names
df[['Country','Gender']]
# we can access the rows we want via loc and iloc (with loc you can use labels, i.e. the index and column names)
df.iloc[1]
# we can pass a list of the rows we want: df.iloc[[1,2,3]]
# similarly we can select columns too, e.g. df.iloc[1, 2] # the row at position 1 and the column at position 2
# if we want to get the count of unique values, e.g. how many "Yes" or "No" responses were given
df['Hobbyist'].value_counts()
# with loc you pass labels: rows 1:2 and columns from 'Hobbyist' to 'Employment' (both ends inclusive)
df.loc[1:2, 'Hobbyist':'Employment']
# <font size="5">INDEXING</font>
# if we want a specific column to be set as the index we can do so by
df.set_index('Respondent')
# we can make a column our index with the set_index method, or by specifying index_col in the read_csv method
# after running this we will see the new index printed, but pandas doesn't actually change the dataFrame
# until we pass inplace=True to the set_index function, Wohoooo
# after setting the index_col arg in read_csv of schema we can look up a specific row, e.g. 'Respondent',
# and a specific column in that row, e.g. the 'QuestionText' column
schema_df.loc['Respondent','QuestionText']
# we can also sort the index in ascending or descending order
schema_df.sort_index()
# <font size="5"> Filtering </font>
# if one wants to check whether a specific column has a value equal to something,
# this returns a boolean mask with True and False values, depending on whether our query matched or not
# we can use this mask to filter the data Frame
filt = (df['Country']=='Pakistan')
df[filt]
# we can filter our dataFrame through the .loc method like this (this method is preferable)
df.loc[filt] # as a 2nd arg you can pass the columns you want from the matching rows
# using operators for filtering &, | etc
# using & to narrow down the search
# filt2 = (df['Country']=='Brazil') & (df['Age']>30)
filt2 = (df['Country']=='Pakistan') | (df['Country']=='Brazil')
df.loc[filt2]
# the ~ operator inverts the filter, e.g. ~filt2 will query all the values other than Pakistan and Brazil
# df.loc[~filt2]
# the specific columns you want from the df after applying the filter
df.loc[filt2, ['Age','Country']]
# filter can be applied this way too
countries = ['United States', 'United Kingdom', 'India', 'Canada']
filt3 = df['Country'].isin(countries)
df.loc[filt3, 'Country']
# if we want to match specific string values in the rows of a column we can use the .str methods as follows
filt4 = df['LanguageWorkedWith'].str.contains('Python', na=False)
df.loc[filt4, 'LanguageWorkedWith']
# <font size="5"> Altering the data </font>
# we can rename the columns we want by passing a dictionary to the rename method
df.rename(columns={'Country':'country', 'Age':'age'})
# if you want the changes to take place, inplace=True is a must
# altering the data in the rows
# we can assign a whole row by passing a list of values, but the length of the list must equal the number of columns
# the two values below aren't enough and would raise an error, and listing every column is a mess for wide frames
# df.loc[2] = ['i am not a professional dev', 'Yes']
# we can take out the specific values like this and change
df.loc[2, ['Age', 'Country']] = [19, 'United Kingdom']
# pandas has a dedicated .at indexer for changing a single value, instead of .loc
df.at[2, 'Age'] = 19
# <font size="3">Updating multiple rows and columns </font>
# <br>
# e.g. if we want to convert all the MainBranch values to lowercase we can use the str.lower method
df['MainBranch'] = df['MainBranch'].str.lower()
# There are 4 specific methods used to make changes, and people often get confused; these 4 methods are <br>
# apply, map, applymap, replace
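# A toy comparison of the four on a throwaway frame (separate from the survey data);
# a sketch, with `applymap` falling back to `DataFrame.map` on newer pandas where it was renamed:

```python
import pandas as pd

toy = pd.DataFrame({'lang': ['Python', 'C++'], 'users': [10, 5]})

# apply: call a function on a Series element-wise (or across a DataFrame axis)
upper = toy['lang'].apply(lambda x: x.upper())           # PYTHON, C++

# map: element-wise on a Series; also accepts a dict for substitution
short = toy['lang'].map({'Python': 'py', 'C++': 'cpp'})  # py, cpp

# applymap: element-wise over every cell of a DataFrame
# (renamed to DataFrame.map in pandas >= 2.1)
if hasattr(toy, 'map'):
    lengths = toy[['lang']].map(len)                     # 6, 3
else:
    lengths = toy[['lang']].applymap(len)                # 6, 3

# replace: substitute matching values, leaving everything else untouched
renamed = toy['lang'].replace('C++', 'Cpp')              # Python, Cpp

print(list(upper), list(short), list(renamed))
```

Roughly: apply is the general workhorse, map/applymap are strictly element-wise, and replace only touches values that match.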
# <font size="3"> Apply function on a series </font>
def updateMainBranch(branch):
return branch.upper()
# we apply the function without the parentheses; mostly lambda funcs are used in apply()
# df['MainBranch'].head(10).apply(updateMainBranch)
df['MainBranch'].head(10).apply(lambda x:x.lower())
# <font size="3"> Apply function on a dataFrame </font>
# with axis='columns', len is applied to each row, so this returns the number of columns in each row
df.apply(len, axis='columns') # we can set axis="columns" or axis="index" inside the apply
| Python Library/intro_to_numpy_pandas_matplotlib/Learning_pandas1..ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome to the Awesome-sauce, let me be your guide to radness
# ### Heat transfer model: Extended Surface of An Infinite Rod
#
#
#
# #### 1.) Diagram:
#
# 
# #### 2.) A mathematical expression of the unsolved differential equation, with an explanation of each term:
#
# $$ Q_x = Q_{x+\Delta x} + Q_{convection} $$
#
# | Letter | Meaning | Units |
# |---------|----------------------|-------|
# | T | Temperature | K |
# | m | $\sqrt{\frac{hP}{kA}}$ | $m^{-1}$ |
# | h | convective heat transfer coefficient | $W/(m^2 K)$ |
# | k | thermal conductivity | $W/(m K)$ |
# | P | perimeter | m |
# | A | Cross Sectional Area | $m^2$ |
# | x | distance from origin | m |
#
#
# #### 3.) Explanation of expected behavior
#
# As h/k decreases, the temperature profile should diminish more slowly with respect to x
#
# This can be checked by setting h to zero, giving no convective heat transfer, which means the rod never decreases in temperature along x.
#
# #### 4.) A step-by-step solution of the differential equation using analytical methods
#
#
# Equation 1: $$ Q_x = Q_{x+\Delta x} + Q_{convection} $$
#
# Equation 2: $$ \frac{dQ_x}{dx} = \frac{Q_{x+\Delta x} - Q_x}{\Delta x} $$
#
# Equation 2 (rearranged): $$ Q_{x+\Delta x} = \frac{dQ_x}{dx}\Delta x + Q_x $$
#
# Equation 3: $$ Q_{convection} = hA[T-T_{\infty}] $$
#
# Combining equations 1, 2, and 3, together with Fourier's law $ Q_x = -kA_{cross}\frac{dT}{dx} $, yields:
#
# $$ Q_x = \frac{dQ_x}{dx}\Delta x + Q_x + hA_{shell}[T-T_{\infty}] $$
#
# $$ 0 = \frac{d}{dx}\left(-kA_{cross}\frac{dT}{dx}\right)\Delta x + hP\Delta x[T-T_{\infty}] $$
#
# $$ 0 = kA_{cross} \frac{d^2T}{dx^2} - hP[T-T_{\infty}] $$
#
# Substitute $ m^2 = \frac{hP}{kA_{cross}}:$
#
# $$ 0 = \frac{d^2T}{dx^2} - m^2[T-T_{\infty}] $$
#
# Solving the equation yields:
#
# $$ T(x) = C_1e^{-mx} + C_2e^{mx} + T_{\infty} $$
#
# The $ C_2e^{mx} $ term can be dropped because the temperature of the infinite fin must stay bounded as $ x $ increases
#
# $$ T(x) = C_1e^{-mx} + T_{\infty} $$
#
# Applying the boundary condition of T(0) = T_o:
#
# $$ T_o = C_1e^{-m*0} + T_{\infty} $$
#
# $$ C_1 = T_o - T_{\infty} $$
#
# Therefore the final solution is:
#
# $$ T = (T_o - T_{\infty})e^{-mx} + T_{\infty} $$
# #### 5.) A plot of the solution and an explanation of its behavior
# +
import numpy as np
import math as mt
import matplotlib.pyplot as plt

T_0 = 200
T_infinity = 100
k = 385.0  # for copper
h = [0, 0.005, 2, 5, 10, 100]
r = 5.0/100
A = float(mt.pi*r**2)  # cross-sectional area
P = float(2*mt.pi*r)   # perimeter
x = np.linspace(0, 10, 100)
T = 0*x
for e in range(0, len(h)):
    ratio = h[e]/k
    m = (ratio*P/A)**0.5
    for i in range(0, len(x)):
        T[i] = (T_0 - T_infinity)*mt.exp(-m*x[i]) + T_infinity
    plt.plot(x, T, label='h/k = %.5f' % ratio)
plt.legend()
plt.xlabel('x')
plt.ylabel('T(x)')
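# As a numerical sanity check (self-contained, with arbitrary test values for the
# constants), the closed form $ T(x) = (T_o - T_{\infty})e^{-mx} + T_{\infty} $ should
# satisfy $ \frac{d^2T}{dx^2} = m^2[T-T_{\infty}] $; a central-difference residual confirms it:

```python
import math

T0, Tinf, m = 200.0, 100.0, 0.7   # arbitrary test values
T = lambda x: (T0 - Tinf) * math.exp(-m * x) + Tinf

dx = 1e-4
for x in [0.0, 0.5, 1.0, 3.0]:
    # central-difference estimate of the second derivative
    d2T = (T(x + dx) - 2 * T(x) + T(x - dx)) / dx**2
    residual = d2T - m**2 * (T(x) - Tinf)
    assert abs(residual) < 1e-3, residual
print("ODE residual check passed")
```

The residual stays at round-off level at every sampled point, and the boundary condition holds exactly since T(0) = T0.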
| presentations/10_11_19_Anthony.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sparkify Project Workspace
# This workspace contains a mini subset (128MB) of the full dataset (12GB). Feel free to use this workspace to build your project or to explore the smaller dataset with Spark before deploying your project on the cloud. Instructions for setting up a Spark cluster can be found in the extracurricular Spark course content.
#
# You can follow the steps below for the data analysis and model building parts of the project.
# +
# import pyspark related libraries
from pyspark.sql.functions import avg, col, concat, desc, explode, lit, min,\
max, split, sum, sumDistinct,udf
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions
from pyspark.sql.types import IntegerType
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.types
from pyspark.sql import Window
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Window
from pyspark.sql import SparkSession
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.SparkSession
from pyspark.ml import Pipeline
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml
from pyspark.ml.classification import LogisticRegression, GBTClassifier, LinearSVC,\
RandomForestClassifier
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml.classification
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml.evaluation
from pyspark.ml.feature import StandardScaler, VectorAssembler
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml.feature
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml.tuning
# import pandas, matplotlib, and seaborn for visualization and EDA
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# import datetime for parsing datetime object
import datetime
# https://docs.python.org/3/library/datetime.html
from time import time
# https://docs.python.org/3/library/time.html
# -
# create a Spark session
spark = SparkSession.builder.getOrCreate()
# # Load and Clean Dataset
# In this workspace, the mini dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data, for example records without a userId or sessionId.
# load data(df)
df = spark.read.json('mini_sparkify_event_data.json')
# show the first row of df
df.head()
# https://blog.csdn.net/yisun123456/article/details/90677924
# show structure of df
df.printSchema()
# http://spark.apache.org/docs/latest/sql-getting-started.html
# +
# clean data & save the cleaned dataframe.
## drop the rows in which userId or sessionId is null & save the cleaned dataframe.
df_clean = df.dropna(how = "any", subset = ["userId", "sessionId"])
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
## preview the first 5 distinct userIds (dropDuplicates here only affects this selection, not df_clean)
df_clean.select("userId").dropDuplicates(["userId"]).sort("userId").show(5)
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
# +
# drop empty user
df_clean = df_clean.filter(df_clean["userId"] != "")  # drop rows whose userId is an empty string
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
# comparison of the original dataframe and the cleaned dataframe
df.count(), df_clean.count(), df.count() - df_clean.count()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
# 8346 records with an empty userId were removed
# -
# # Exploratory Data Analysis
# When you're working with the full dataset, perform exploratory data analysis by loading the small dataset and doing basic manipulations within Spark. In this workspace, we've already provided a small dataset you can explore.
#
# ### Define Churn
#
# Once you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.
#
# show the distinct items in page column
df_clean.select("page").distinct().sort("page").show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
# +
# create churn
# add churn_event column in df_clean
churn_label = udf(lambda x: 1 if x == "Cancellation Confirmation" else 0, IntegerType())
df_clean = df_clean.withColumn("churn_event", churn_label("page"))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.udf
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumn
# labelled churn event
# label user who churned according to churn event
window = Window.partitionBy("userId")
# determine which users confirmed cancellation and record it in churn
df_clean = df_clean.withColumn("churn", max("churn_event").over(window))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.Window
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=over#pyspark.sql.Column.over
# -
# check records on userId and churn columns
df_clean.select(["userId", "churn"]).dropDuplicates().show()
# check the number of user
df_clean.select(["userId"]).dropDuplicates().count()
# +
# add time column to df_clean
# convert timestamp into time string
convert = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).strftime("%Y-%m-%d %X"))
# https://www.jb51.net/article/130259.htm
# https://blog.csdn.net/p9bl5bxp/article/details/54945920
# convert ts/event_time
df_clean = df_clean.withColumn('event_time', convert("ts"))
# convert registration / add registration_time
df_clean = df_clean.withColumn("registration_time", convert("registration"))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumn
# added time columns for when the event took place and when the user registered
# -
# check data
df_clean.printSchema()
# check the first row
df_clean.head()
# ### Explore Data
# Once you've defined churn, perform some exploratory data analysis to observe the behavior of users who stayed versus users who churned. You can start by aggregating over these two groups of users, observing how many times a specific action occurred or how many songs were played within a given period.
# show in spark with the columns involved in churn
df_clean.select(['page', 'artist', 'song', 'level', 'userId', 'churn',\
'event_time', 'registration_time']).show()
# +
# visualization
# viz number of songs listened
# define counts of NextSong as number of songs listened
number = df_clean.filter('page == "NextSong"').\
groupby(["userId","churn","gender","page"]).count().toPandas()
# number.head()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=where#pyspark.sql.DataFrame.filter
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=topandas#pyspark.sql.DataFrame.toPandas
# using a pandas dataframe can save a lot of time
number = number.pivot_table(index = ['userId', "churn", 'gender'],\
columns = 'page', values = 'count').reset_index()
# number.head()
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html?highlight=pivot#pandas.DataFrame.pivot
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html?highlight=reset_index#pandas.DataFrame.reset_index
# Draw a combination of boxplot and kernel density estimate
sns.set(style="whitegrid")
ax = sns.violinplot(y = 'churn', x = 'NextSong', data = number,\
hue = 'gender', split=False, palette="Blues_d", orient = 'h')
# http://seaborn.pydata.org/generated/seaborn.violinplot.html#seaborn.violinplot
plt.ylabel('Churned or Not?')
plt.xlabel('Numbers of Songs Listened')
plt.legend(title = 'Gender')
plt.title('Comparison of Numbers of Songs Listened on Churn')
# https://www.jianshu.com/p/5ae17ace7984
sns.despine(ax = ax)
# http://seaborn.pydata.org/generated/seaborn.despine.html?highlight=despine#seaborn.despine
# The number of songs listened to did not vary much between churned and retained customers, apart from a few outliers.
# +
# viz churn on genders
# count user who churned or not on gender
gender_churn= df_clean.dropDuplicates(['userId','gender']).groupby(['churn',\
'gender']).count().toPandas()
gender_churn.head()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=topandas#pyspark.sql.DataFrame.toPandas
sns.set(style="whitegrid")
ax = sns.barplot(y = 'churn', x = 'count', hue = 'gender',\
data = gender_churn, palette="Blues_d", orient = 'h')
# http://seaborn.pydata.org/generated/seaborn.barplot.html?highlight=barplot#seaborn.barplot
plt.ylabel("Churned or Not?")
plt.xlabel("Count")
plt.legend(title = 'Gender', ncol = 2)
plt.title("Comparison of Likeliness of Churn on Gender")
sns.despine(ax = ax)
# Male customers are more likely to churn than their female counterparts
# +
# viz churn on user types
# count churn on user level
level = df_clean.filter('page == "Cancellation Confirmation"').\
groupby('level').count().toPandas()
# level.head()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=where#pyspark.sql.DataFrame.filter
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=topandas#pyspark.sql.DataFrame.toPandas
ax = sns.barplot(x = 'level', y = 'count', data = level, palette="Blues_d")
plt.xlabel('Churn on User Level')
plt.ylabel('Count')
plt.title('Comparison of User Level on Churn')
sns.despine(ax = ax)
# Churn usually happens when a customer is on a paid plan
# +
# viz churn on user lifetime
# create lifetime column for lifetime dataframe
lifetime = df_clean.select('userId', 'registration', 'ts', 'churn', 'gender')\
.withColumn('lifetime', (df_clean.ts-df_clean.registration))
# lifetime.show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumn
# calculate max lifetime of user
lifetime = lifetime.groupBy('userId', 'churn', 'gender').agg({'lifetime':'max'}).withColumnRenamed\
("max(lifetime)",'lifetime')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg#pyspark.sql.DataFrame.agg
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# lifetime.show()
# convert timestamp into days and save lifetime as a pandas dataframe
lifetime = lifetime.select('userId', 'churn', 'gender', (col('lifetime')/1000/3600/24)\
.alias('lifetime')).toPandas()
ax = sns.violinplot(y = 'churn', x = 'lifetime', hue = 'gender', data = lifetime, split = False, palette="Blues_d", orient = 'h')
# http://seaborn.pydata.org/generated/seaborn.violinplot.html#seaborn.violinplot
plt.xlabel('Max Lifetime (days) before Churn')
plt.ylabel('Churned or Not')
plt.title('Comparison of user lifetime on churn and gender')
plt.legend(title = 'Gender', ncol = 2, loc = 'best')
sns.despine(ax=ax)
# Churned customers use the service for a shorter period of time.
# -
# # Feature Engineering
# Once you've familiarized yourself with the data, build out the features you find most promising to train your model on. To work with the full dataset, you can follow these steps:
# - Write a script to extract the necessary features from the smaller subset of data
# - Ensure that your script is scalable to the larger dataset, using the best practices discussed earlier
# - Try your script on the full dataset, debugging as necessary
#
# If you are working in the classroom workspace, you can just extract features from the small subset of data provided there. Be sure to transfer this work over to the larger dataset when you move to your Spark cluster.
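# Several of the count features below (thumbs up, thumbs down, playlist adds, friend adds) follow the same filter -> groupby('userId') -> count() -> rename pattern. A pure-Python sketch of that pattern on made-up events shows the shape of a reusable helper; in Spark the same idea becomes one parameterized function taking the page value and the output column name, which makes the script easier to scale to the full dataset.

```python
from collections import Counter

def page_count_feature(events, page, name):
    """Count occurrences of `page` per user; mirrors the Spark pattern
    df.filter(df.page == page).groupby('userId').count()
      .withColumnRenamed('count', name)."""
    counts = Counter(uid for uid, p in events if p == page)
    return {uid: {name: n} for uid, n in counts.items()}

# hypothetical (userId, page) events for illustration only
events = [
    ("u1", "Thumbs Up"), ("u1", "Thumbs Up"), ("u1", "Thumbs Down"),
    ("u2", "Thumbs Up"),
]
print(page_count_feature(events, "Thumbs Up", "num_thumb_up"))
# {'u1': {'num_thumb_up': 2}, 'u2': {'num_thumb_up': 1}}
```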
# +
# define user lifetime from registration timestamp to cancel timestamp
feature_1 = df_clean.select('userId', 'registration', 'ts').withColumn('lifetime',\
(df_clean.ts-df_clean.registration))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumn
# aggregation with max lifetime as lifetime
feature_1 = feature_1.groupby('userId').agg({'lifetime':"max"})\
.withColumnRenamed('max(lifetime)', 'life_time')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg#pyspark.sql.DataFrame.agg
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# convert lifetime into days
feature_1 = feature_1.select('userId', (col('life_time')\
/1000/3600/24).alias('life_time'))
# summary of feature_1
feature_1.describe(['life_time']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# Total Songs by Users
# count songs by users
feature_2 = df_clean.select('userId', 'song').groupby('userId').count()\
.withColumnRenamed('count', 'num_songs')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of total_songs
feature_2.describe(['num_songs']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# Number of Thumbs Up
# filter thumbs up in page column and count number by user
feature_3 = df_clean.select("userId", 'page').filter(df_clean.page == 'Thumbs Up')\
.groupby("userId").count().withColumnRenamed('count', 'num_thumb_up')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=where#pyspark.sql.DataFrame.filter
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of number of thumbs up
feature_3.describe(['num_thumb_up']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# Number of Thumbs Down
# filter thumbs down in page column and count number by user
feature_4 = df_clean.select('userId', 'page').where(df_clean.page == "Thumbs Down")\
.groupby('userId').count().withColumnRenamed('count','num_thumb_down')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.where
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of number of thumbs down
feature_4.describe(['num_thumb_down']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# number of songs added to playlist
# count number of Add to Playlist in page by user
feature_5 = df_clean.select('userId', 'page').where(df_clean.page == 'Add to Playlist')\
.groupby('userId').count().withColumnRenamed('count', 'add_to_playlist')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.where
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of Add to Playlist
feature_5.describe(['add_to_playlist']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# number of friends added
# count number add friend by users
feature_6 = df_clean.select('userId', 'page').where(df_clean.page == 'Add Friend') \
.groupby("userId").count().withColumnRenamed('count', 'add_friend')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.where
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of add friend
feature_6.describe(['add_friend']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# length of listening by user
feature_7 = df_clean.select('userId', "length")\
.groupby('userId').sum()\
.withColumnRenamed('sum(length)', 'total_listen')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.functions.sum
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of listen time
feature_7.describe(['total_listen']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# number of songs listened per session by users
# count songs listened by userId and sessionId
feature_8 = df_clean.where('page == "NextSong"').groupBy('userId', 'sessionId').count()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.where
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
#feature_8.show()
# calculate average number of songs listened per session by user
feature_8 = feature_8.groupby(['userId']).agg({'count':'avg'})\
.withColumnRenamed('avg(count)', 'avg_songs_played')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg#pyspark.sql.GroupedData.agg
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of average songs played per session by users
feature_8.describe(['avg_songs_played']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# Gender
# drop duplicated records by userId and gender
# replace gender into int
feature_9 = df_clean.select('userId', 'gender').dropDuplicates()\
.replace(['M', 'F'], ['0', '1'], 'gender')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=dropduplicates#pyspark.sql.DataFrame.dropDuplicates
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=replace#pyspark.sql.DataFrame.replace
# cast gender into int
feature_9 = feature_9.select('userId', col('gender').cast('int'))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=cast#pyspark.sql.Column.cast
# summary of gender
feature_9.describe(['gender']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# number of artists listened to
# filter song listened by userId and artist
# drop duplicates records
feature_10 = df_clean.filter(df_clean.page == 'NextSong')\
.select('userId', 'artist').dropDuplicates()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=where#pyspark.sql.DataFrame.filter
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=dropduplicates#pyspark.sql.DataFrame.dropDuplicates
#feature_10.show()
# count artist by users
feature_10 = feature_10.groupby('userId').count()\
.withColumnRenamed('count', 'num_artist')
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=groupby#pyspark.sql.DataFrame.groupby
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=count#pyspark.sql.DataFrame.count
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumnRenamed
# summary of num_artist
feature_10.describe(['num_artist']).show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# churn label
# alias churn as label
# drop duplicated records
label = df_clean.select('userId', col('churn').alias('label'))\
.dropDuplicates()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=alias#pyspark.sql.Column.alias
# summary of label
label.describe().show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# +
# merge all features into one dataset
# join all features in outer
# drop userId
# fill null in empty cells
data = feature_1.join(feature_2, 'userId', 'outer')\
.join(feature_3, 'userId', 'outer' )\
.join(feature_4, 'userId', 'outer' )\
.join(feature_5, 'userId', 'outer' )\
.join(feature_6, 'userId', 'outer' )\
.join(feature_7, 'userId', 'outer' )\
.join(feature_8, 'userId', 'outer' )\
.join(feature_9, 'userId', 'outer' )\
.join(feature_10, 'userId', 'outer' )\
.join(label, 'userId', 'outer' )\
.drop('userId')\
.fillna(0)
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=join#pyspark.sql.DataFrame.join
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=drop#pyspark.sql.DataFrame.drop
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=fillna#pyspark.sql.DataFrame.fillna
# summary of data
data.describe().show()
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=describe#pyspark.sql.DataFrame.describe
# -
# # Modeling
# Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you have learned, evaluate their accuracy, and tune parameters as necessary. Pick the best-performing model based on these metrics and report its results on the test set. Since the churned users form a fairly small subset, I suggest using F1 score as the metric to optimize.
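# A quick sketch of why F1 rather than plain accuracy matters here (the counts are made up for illustration, not notebook results): with roughly one churner in five, an all-negative baseline already reaches about 80% accuracy while its F1 on the churn class is 0. Note that MulticlassClassificationEvaluator's 'f1' metric is a weighted F1 over both classes, so its values differ from this single-class sketch.

```python
def f1_score(tp, fp, fn):
    """F1 for the positive (churn) class from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# hypothetical: 100 users, 20 of them churners
print(f1_score(0, 0, 20))   # all-negative baseline -> 0.0
print(f1_score(15, 5, 5))   # catches 15/20 churners with 5 false alarms -> 0.75
```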
# explore structure of data
data.printSchema()
# +
# Vectorize and Standardize data to fit and transform models
# vector assembler
cols = ['life_time', 'num_songs', 'num_thumb_up', \
'num_thumb_down', 'add_to_playlist','add_friend', 'total_listen',\
'avg_songs_played', 'gender', 'num_artist']
# merges multiple columns into a vector column.
vecAssembler = VectorAssembler(inputCols = cols, outputCol = 'NumFeatures')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=vectorassembler#pyspark.ml.feature.VectorAssembler
data = vecAssembler.transform(data)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=transform#pyspark.ml.feature.VectorAssembler.transform
# show data
data.show()
# -
# standard scaler
# Standardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set
standardScaler = StandardScaler(inputCol = 'NumFeatures', outputCol = 'features', withStd = True)
model = standardScaler.fit(data)
data = model.transform(data)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=standardscaler#pyspark.ml.feature.StandardScaler
data.show()
# vecotrized and standardized data
# data split
# split data 6:2:2
train, validation, test = data.randomSplit([0.6, 0.2, 0.2], seed = 42)
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=randomsplit#pyspark.sql.DataFrame.randomSplit
# train, validation, and test sets split 6:2:2.
# +
# model design
# set up baseline models to establish baseline metric values for further models
'''
Evaluate two baseline models: one with all users labelled churn = 1, the other with all users labelled churn = 0.
Calculate accuracy and F1 score for each,
to establish the baseline metric values that further models must beat.
'''
# baseline model for churn = 1
result_baseline_1 = test.withColumn('prediction', lit(1.0))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumn
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=lit#pyspark.sql.functions.lit
evaluator = MulticlassClassificationEvaluator(predictionCol = 'prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
print("Test Set 1 Metrics: ")
print('Accuracy: {}'.format(evaluator.evaluate(result_baseline_1, {evaluator.metricName: 'accuracy'})))
print('F1: {}'.format(evaluator.evaluate(result_baseline_1, {evaluator.metricName: 'f1'})))
# baseline model for churn = 0
result_baseline_0 = test.withColumn('prediction', lit(0.0))
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=withcolumn#pyspark.sql.DataFrame.withColumn
# http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=lit#pyspark.sql.functions.lit
evaluator = MulticlassClassificationEvaluator(predictionCol = 'prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
print("Test Set 0 Metrics: ")
print('Accuracy: {}'.format(evaluator.evaluate(result_baseline_0, {evaluator.metricName: 'accuracy'})))
print('F1: {}'.format(evaluator.evaluate(result_baseline_0, {evaluator.metricName: 'f1'})))
# +
# Choose the best model among Logistic Regression, Gradient-Boosted Trees, Support Vector Machine, and Random Forest by checking the accuracy and F1 score obtained from training on the training set and evaluating on the validation set
# Logistic Regression
# Initialize Classifier with 10 iterations
lr = LogisticRegression(maxIter = 10)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=logisticregression#pyspark.ml.classification.LogisticRegression
# Set f1 score as evaluator
evaluator = MulticlassClassificationEvaluator(metricName = 'f1')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# builder for a param grid used in grid search-based model selection.
grid = ParamGridBuilder().build()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=paramgrid#pyspark.ml.tuning.ParamGridBuilder
# 3-fold cross validation for LogisticRegression
cv_lr = CrossValidator(estimator = lr, estimatorParamMaps = grid,\
evaluator = evaluator, numFolds = 3)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=crossvalidator#pyspark.ml.tuning.CrossValidator
# start time
start = time()
# fit trainset
model_lr = cv_lr.fit(train)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=fit#pyspark.ml.classification.LogisticRegression.fit
# end time
end = time()
model_lr.avgMetrics
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=avgmetric#pyspark.ml.tuning.CrossValidatorModel.avgMetrics
# print the time span of the process
print('The training process took {} seconds'.format(end - start))
# +
# transform validationset
result_lr = model_lr.transform(validation)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=transform#pyspark.ml.classification.LogisticRegressionModel.transform
# set evaluator for prediction
evaluator = MulticlassClassificationEvaluator(predictionCol ='prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# Print accuracy and F1 score of Logistic Regression
print('Logistic Regression Metrics: ')
print('Accuracy: {}'.format(evaluator.evaluate(result_lr, {evaluator.metricName: 'accuracy'})))
print('F1: {}'.format(evaluator.evaluate(result_lr, {evaluator.metricName: 'f1'})))
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=evaluate#pyspark.ml.evaluation.MulticlassClassificationEvaluator.evaluate
# +
# Gradient-Boosted Trees
# Initialize classifier with 10 iterations and seed 42
gbt = GBTClassifier(maxIter = 10, seed = 42)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=gbtclassifier#pyspark.ml.classification.GBTClassifier
# Set Evaluator
evaluator = MulticlassClassificationEvaluator(metricName = 'f1')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# build paramGrid
grid = ParamGridBuilder().build()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=paramgrid#pyspark.ml.tuning.ParamGridBuilder
# 3-fold cross validation with paramGrid
cv_gbt = CrossValidator(estimator = gbt,\
evaluator = evaluator,\
estimatorParamMaps = grid,\
numFolds = 3)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=crossvalidator#pyspark.ml.tuning.CrossValidator
# +
# Set start time
start = time()
# fit trainset
model_gbt = cv_gbt.fit(train)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=fit#pyspark.ml.classification.LogisticRegression.fit
# end time
end = time()
model_gbt.avgMetrics
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=avgmetric#pyspark.ml.tuning.CrossValidatorModel.avgMetrics
# print the time span of the process
print('The training process took {} seconds'.format(end - start))
# +
# transform validation set
result_gbt = model_gbt.transform(validation)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=transform#pyspark.ml.classification.LogisticRegressionModel.transform
# set evaluator for prediction
evaluator = MulticlassClassificationEvaluator(predictionCol ='prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# print accuracy & F1 score of GBT
print('Gradient Boosted Trees Metrics: ')
print('Accuracy: {}'.format(evaluator.evaluate(result_gbt, {evaluator.metricName: 'accuracy'})))
print('F1: {}'.format(evaluator.evaluate(result_gbt, {evaluator.metricName: 'f1'})))
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=evaluate#pyspark.ml.evaluation.MulticlassClassificationEvaluator.evaluate
# +
# Support Vector Machine
## Initialize Classifier with 10 iterations
svm = LinearSVC(maxIter = 10)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=linearsvc#pyspark.ml.classification.LinearSVC
# Set Evaluator
evaluator = MulticlassClassificationEvaluator(metricName = 'f1')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# build paramGrid
grid = ParamGridBuilder().build()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=paramgrid#pyspark.ml.tuning.ParamGridBuilder
# 3-fold cross validation with paramGrid
cv_svm = CrossValidator(estimator = svm,\
evaluator = evaluator,\
estimatorParamMaps = grid,\
numFolds = 3)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=crossvalidator#pyspark.ml.tuning.CrossValidator
# get start time
start = time()
# fit trainset
model_svm = cv_svm.fit(train)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=fit#pyspark.ml.classification.LogisticRegression.fit
# end time
end = time()
model_svm.avgMetrics
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=avgmetric#pyspark.ml.tuning.CrossValidatorModel.avgMetrics
# Print the time span of the process
print('The training process took {} seconds'.format(end - start))
# transform validation set
result_svm = model_svm.transform(validation)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=transform#pyspark.ml.classification.LogisticRegressionModel.transform
# set evaluator for prediction
evaluator = MulticlassClassificationEvaluator(predictionCol ='prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# print accuracy and f1 score of SVM
print('Support Vector Machine Metrics: ')
print('Accuracy: {}'.format(evaluator.evaluate(result_svm, {evaluator.metricName: 'accuracy'})))
print('F1: {}'.format(evaluator.evaluate(result_svm, {evaluator.metricName: 'f1'})))
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=evaluate#pyspark.ml.evaluation.MulticlassClassificationEvaluator.evaluate
# +
# Random Forest
# Initialize Classifier
rf = RandomForestClassifier()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=randomforest#pyspark.ml.classification.RandomForestClassifier
# Set Evaluator
evaluator = MulticlassClassificationEvaluator(metricName = 'f1')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# build paramGrid
grid = ParamGridBuilder().build()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=paramgrid#pyspark.ml.tuning.ParamGridBuilder
# 3-fold cross validation
cv_rf = CrossValidator(estimator = rf,\
evaluator = evaluator,\
estimatorParamMaps = grid,\
numFolds = 3)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=crossvalidator#pyspark.ml.tuning.CrossValidator
# set start time
start = time()
# fit train set
model_rf = cv_rf.fit(train)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=fit#pyspark.ml.classification.LogisticRegression.fit
# end time
end = time()
model_rf.avgMetrics
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=avgmetric#pyspark.ml.tuning.CrossValidatorModel.avgMetrics
# print the time span of the process
print('The training process took {} seconds'.format(end - start))
# transform validation set
result_rf = model_rf.transform(validation)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=transform#pyspark.ml.classification.LogisticRegressionModel.transform
# set evaluator for prediction
evaluator = MulticlassClassificationEvaluator(predictionCol ='prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# print accuracy and f1 score of RF
print('Random Forest Metrics: ')
print('Accuracy: {}'.format(evaluator.evaluate(result_rf, {evaluator.metricName: 'accuracy'})))
print('F1: {}'.format(evaluator.evaluate(result_rf, {evaluator.metricName: 'f1'})))
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=evaluate#pyspark.ml.evaluation.MulticlassClassificationEvaluator.evaluate
# -
# ### Model Selection
# - Logistic Regression: accuracy 0.7959, F1 score 0.7871, taking 524.71 seconds.
# - Gradient-Boosted Trees: accuracy 0.7143, F1 score 0.7143, taking 1088.89 seconds.
# - Support Vector Machine: accuracy 0.7959, F1 score 0.7055, taking 741.00 seconds.
# - Random Forest: accuracy 0.7959, F1 score 0.7871, taking 587.97 seconds.
#
# Comparing the results above on this small dataset, logistic regression and random forest achieve the highest accuracy and F1 score in nearly the same runtime. The gradient-boosted trees model has identical accuracy and F1, and its F1 beats the support vector machine's, but its training time is the largest of the four, almost double that of random forest. On the full 12 GB dataset, logistic regression is likely to underfit more severely and, like the support vector machine, would greatly increase the time cost. Gradient-boosted trees and random forest are therefore selected for further tuning; both algorithms also expose feature importances directly, which will be of great help in further optimizing feature selection.
#
#
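# The feature-importance argument above can be sketched as a simple ranking step (the importance values here are invented for illustration; the real ones come from the fitted model's featureImportances later in the notebook): keep the smallest set of top features covering most of the total importance.

```python
# hypothetical importances, normalized to sum to 1
importances = {
    "life_time": 0.30, "num_thumb_down": 0.22, "avg_songs_played": 0.18,
    "num_songs": 0.15, "add_friend": 0.12, "gender": 0.03,
}
ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
# keep the smallest prefix of features covering >= 90% of total importance
total, kept, acc = sum(importances.values()), [], 0.0
for name, imp in ranked:
    kept.append(name)
    acc += imp
    if acc / total >= 0.9:
        break
print(kept)
# ['life_time', 'num_thumb_down', 'avg_songs_played', 'num_songs', 'add_friend']
```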
# +
# Hyperparameter tuning for GBT
# initialize GBT classifier
gbt = GBTClassifier()
# build paramGrid over maxDepth in [5, 10] and maxIter in [10, 15]
paramGrid_gbt = ParamGridBuilder()\
.addGrid(gbt.maxDepth, [5,10])\
.addGrid(gbt.maxIter, [10,15])\
.build()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=paramgrid#pyspark.ml.tuning.ParamGridBuilder
# set evaluator
f1_evaluator = MulticlassClassificationEvaluator(metricName = 'f1')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# 3-fold cross validation
crossval_gbt = CrossValidator(estimator = gbt, \
estimatorParamMaps = paramGrid_gbt,\
evaluator = f1_evaluator,\
numFolds = 3)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=crossvalidator#pyspark.ml.tuning.CrossValidator
# -
# fit train set
cvModel_gbt = crossval_gbt.fit(train)
# get the best f1 score
cvModel_gbt.avgMetrics
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=avgmetric#pyspark.ml.tuning.CrossValidatorModel.avgMetrics
# the best f1 scores are higher than the preliminary test
# get the corresponding maxDepth and maxIter of the best f1 score
cvModel_gbt.getEstimatorParamMaps()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=getestimatorparammaps#pyspark.ml.tuning.CrossValidator.getEstimatorParamMaps
# best GBT model with maxDepth 5 and maxIter 10
gbt_best = GBTClassifier( maxDepth = 5, maxIter = 10, seed =42)
# fit train set
gbt_best_model = gbt_best.fit(train)
# transform test set
results_final = gbt_best_model.transform(test)
# +
# final results metrics from GBT
evaluator = MulticlassClassificationEvaluator(predictionCol = 'prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# Print accuracy and f1 score
print("Test Set Metrics: ")
print('Accuracy: {}'.format(evaluator.evaluate(results_final, {evaluator.metricName: 'accuracy'})))
print('F-1 Score: {}'.format(evaluator.evaluate(results_final, {evaluator.metricName: 'f1'})))
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=evaluate#pyspark.ml.evaluation.MulticlassClassificationEvaluator.evaluate
# +
# features importance from GBT model
feat_imp_GBT = gbt_best_model.featureImportances.values
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=featureimportances#pyspark.ml.classification.GBTClassificationModel.featureImportances
# set y axis
cols = ['lifetime', 'num_songs', 'num_thumb_up', \
'num_thumb_down', 'add_to_playlist','add_friend', 'total_listen',\
        'avg_songs_played', 'gender', 'num_artist']
y_pos = np.arange(len(cols))
# draw horizontal barplot to show feature importances
plt.barh(y_pos, feat_imp_GBT, align = 'center')
plt.yticks(y_pos, cols)
plt.xlabel("Importance Score")
plt.title("GBT Feature Importances")
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.barh.html?highlight=barh#matplotlib.pyplot.barh
# +
# Hyperparameter tuning for Random Forest
rf = RandomForestClassifier()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=randomforest#pyspark.ml.classification.RandomForestClassifier
# Set Evaluator
evaluator = MulticlassClassificationEvaluator(metricName = 'f1')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# build paramGrid over maxDepth [5, 50] and numTrees [10, 100]
paramGrid_rf = ParamGridBuilder().addGrid(rf.maxDepth, [5,50])\
.addGrid(rf.numTrees, [10,100]).build()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=paramgrid#pyspark.ml.tuning.ParamGridBuilder
# 3-fold cross validation
cv_rf = CrossValidator(estimator = rf,\
evaluator = evaluator,\
                       estimatorParamMaps = paramGrid_rf,\
numFolds = 3)
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=crossvalidator#pyspark.ml.tuning.CrossValidator
# -
# fit train set
cvModel_rf = cv_rf.fit(train)
# get the best f1
cvModel_rf.avgMetrics
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=avgmetric#pyspark.ml.tuning.CrossValidatorModel.avgMetrics
# the best f1 score is lower than the score obtained in the preliminary test
# extract Param Map of the best f1
cvModel_rf.extractParamMap()
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=extractparammap#pyspark.ml.classification.RandomForestClassifier.extractParamMap
# estimatorParamMaps is [{}]
# best RF model as the default model
rf_best = RandomForestClassifier()
# fit train set
rf_best_model = rf_best.fit(train)
# transform test set
results_final_rf = rf_best_model.transform(test)
# +
# final results metrics from RF
# set evaluator
evaluator = MulticlassClassificationEvaluator(predictionCol = 'prediction')
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=multiclassclassificationevaluator#pyspark.ml.evaluation.MulticlassClassificationEvaluator
# Print accuracy and f1 score
print("Test Set Metrics: ")
print('Accuracy: {}'.format(evaluator.evaluate(results_final_rf, {evaluator.metricName: 'accuracy'})))
print('F-1 Score: {}'.format(evaluator.evaluate(results_final_rf, {evaluator.metricName: 'f1'})))
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=evaluate#pyspark.ml.evaluation.MulticlassClassificationEvaluator.evaluate
# +
# features importance from RF model
# get feature importances
feat_imp_rf= rf_best_model.featureImportances.values
# http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=featureimportances#pyspark.ml.classification.GBTClassificationModel.featureImportances
# set y axis
cols = ['lifetime', 'num_songs', 'num_thumb_up', \
'num_thumb_down', 'add_to_playlist','add_friend', 'total_listen',\
        'avg_songs_played', 'gender', 'num_artist']
y_pos = np.arange(len(cols))
# draw horizontal barplot
plt.barh(y_pos, feat_imp_rf, align = 'center')
plt.yticks(y_pos, cols)
plt.xlabel("Importance Score")
plt.title("RF Feature Importances")
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.barh.html?highlight=barh#matplotlib.pyplot.barh
# -
# # Conclusion
# - Comparing accuracy and F1 after tuning, the random forest results are slightly better than those of the gradient-boosted trees.
# - The feature-importance comparison shows the two models largely agree on which features matter for classification: lifetime (how long the user has been active) is extremely important for predicting churn, followed by num_thumb_down, add_friend, avg_songs_played, and num_artist.
#
# +
# final results metrics
evaluator = MulticlassClassificationEvaluator(predictionCol = 'prediction')
print("Test Set Metrics in GBT: ")
print('Accuracy: {}'.format(evaluator.evaluate(results_final, {evaluator.metricName: 'accuracy'})))
print('F-1 Score: {}'.format(evaluator.evaluate(results_final, {evaluator.metricName: 'f1'})))
print('-----------------------------------------')
print("Test Set Metrics in RF: ")
print('Accuracy: {}'.format(evaluator.evaluate(results_final_rf, {evaluator.metricName: 'accuracy'})))
print('F-1 Score: {}'.format(evaluator.evaluate(results_final_rf, {evaluator.metricName: 'f1'})))
# +
# draw the comparison of feature importances between the two models
# set y axis
cols = ['lifetime', 'num_songs', 'num_thumb_up', \
'num_thumb_down', 'add_to_playlist','add_friend', 'total_listen',\
        'avg_songs_played', 'gender', 'num_artist']
y_pos = np.arange(len(cols))
# set figure size 20*10
plt.figure(figsize = (20, 10))
# left subplot
plt.subplot(1,2,1)
plt.barh(y_pos, feat_imp_rf, align = 'center')
plt.yticks(y_pos, cols)
plt.xlabel("Importance Score")
plt.title("RF Feature Importances")
# right subplot
plt.subplot(1,2,2)
plt.yticks(y_pos, '')
plt.barh(y_pos, feat_imp_GBT, align = 'center')
plt.xlabel("Importance Score")
plt.title("GBT Feature Importances")
# https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html?highlight=figure%20figsize
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplot.html?highlight=subplot#matplotlib.pyplot.subplot
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.barh.html?highlight=barh#matplotlib.pyplot.barh
# -
# # Final step
# Clean up your code, add comments, and rename variables to make the code easier to read and maintain. Refer to the Spark project overview page and the Data Scientist capstone review rubric to make sure your project includes everything required and meets all review criteria. Remember to include comprehensive documentation (a README file) in the GitHub repository, along with a web app or blog post.
| Sparkify-zh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="animated.gif" width=500>
#
# # How to make an animated GIF using a canvas
#
# This notebook demonstrates how to capture image arrays from a jp_doodle canvas
# and save the collected images as an animated GIF.
#
# For the illustration we develop an animation which projects a hypercube into three
# dimensions using a projection matrix.
import numpy as np
# +
def projection_array(t, delta=0.01, gamma=0.0111):
"projection matrix for a time point."
alpha = delta * t
beta = 1.0 + gamma * t
delta = 2.0 + (delta + gamma) * t
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [ca, -sa, 0, cd],
        [sa, ca, cd, 0],
        [sd, 0, cb, sb],
    ]).transpose()
#projection_array(100)
# -
# hypercube vertices
vertices4 = np.array([
[i, j, k, l]
for i in [-1,1]
for j in [-1,1]
for k in [-1,1]
for l in [-1,1]
])
#vertices4
# hypercube edges
from numpy.linalg import norm
edges = []
for (i, v) in enumerate(vertices4):
for (j, w) in enumerate(vertices4):
if i > j and norm(v - w) == 2:
edges.append((i,j))
#edges
# +
def project_vertices(t):
"project 4d vertices into 3d at a time point."
P = projection_array(t)
return vertices4.dot(P)
#project_vertices(555)
# -
# # Create a canvas for the animation
# +
from jp_doodle.nd_frame import swatch3d
s = swatch3d(pixels=700, model_height=6)
def draw_hcube(t=220):
s.reset()
s.frame_rect((-7, -7, -7), 10, 10, color="yellow")
vertices = project_vertices(t)
for (i, j) in edges:
s.line(vertices[i], vertices[j], color="#e94", lineWidth=5)
s.orbit_all(3)
draw_hcube()
# -
# # Draw an animation on the canvas and capture images for each frame
# +
image_arrays = []
def array_callback(array):
image_arrays.append(array)
import time
for t in range(200):
with s.in_canvas.delay_redraw():
draw_hcube(t)
#array = s.in_canvas.pixels_array()
#image_arrays.append(array)
s.in_canvas.pixels_array_async(array_callback)
time.sleep(0.1)
# -
# View one of the saved images
from jp_doodle.array_image import show_array
show_array(image_arrays[5])
# # Save the image sequence as an animated GIF using `imageio`
import imageio
exportname = "animated.gif"
imageio.mimsave(exportname, image_arrays, format='GIF', duration=0.1)
# <img src="animated.gif" width=500>
| notebooks/misc/Making an animated GIF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Types
#
# When reading in a data set, pandas will try to guess the data type of each column, such as float, integer, datetime, or bool. In Pandas, strings are called "object" dtypes.
#
# However, Pandas does not always get this right. That was the issue with the World Bank projects data. Hence, the dtype was specified as a string:
# ```
# df_projects = pd.read_csv('../data/projects_data.csv', dtype=str)
# ```
#
# Run the code cells below to read in the indicator and projects data. Then run the following code cell to see the dtypes of the indicator data frame.
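# The dtype-guessing behavior is easy to see on a small in-memory CSV. The sketch below uses `io.StringIO` with made-up data to compare the default inference against `dtype=str`:

```python
import io
import pandas as pd

# A tiny CSV where one column looks numeric
csv_text = "Country Name,1960\nCanada,17909009\nMexico,38174112\n"

# Default behavior: pandas infers a numeric dtype for the 1960 column
df_inferred = pd.read_csv(io.StringIO(csv_text))

# dtype=str forces every column to be read in as strings (object dtype)
df_strings = pd.read_csv(io.StringIO(csv_text), dtype=str)

print(df_inferred['1960'].dtype)  # int64
print(df_strings['1960'].dtype)   # object
```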
# +
# Run this code cell
import pandas as pd
# read in the population data and drop the final column
df_indicator = pd.read_csv('../data/population_data.csv', skiprows=4)
df_indicator.drop(['Unnamed: 62'], axis=1, inplace=True)
# read in the projects data set with all columns type string
df_projects = pd.read_csv('../data/projects_data.csv', dtype=str)
df_projects.drop(['Unnamed: 56'], axis=1, inplace=True)
# -
# Run this code cell
df_indicator.dtypes
# These results look reasonable. Country Name, Country Code, Indicator Name and Indicator Code were all read in as strings. The year columns, which contain the population data, were read in as floats.
#
# # Exercise 1
#
# Since the population indicator data was read in correctly, you can run calculations on the data. In this first exercise, sum the populations of the United States, Canada, and Mexico by year.
df_indicator
# +
# TODO: Calculate the population sum by year for Canada,
# the United States, and Mexico.
# the keepcol variable makes a list of the column names to keep. You can use this if you'd like
keepcol = ['Country Name']
for i in range(1960, 2018, 1):
keepcol.append(str(i))
# TODO: In the df_nafta variable, store a data frame that only contains the rows for
# Canada, United States, and Mexico.
df_nafta = df_indicator[df_indicator['Country Name'].isin(['Canada', 'United States', 'Mexico'])]
df_pop = df_nafta[keepcol].sum(numeric_only=True)
# TODO: Calculate the sum of the values in each column in order to find the total population by year.
# You can use the keepcol variable if you want to control which columns get outputted
# -
df_nafta
df_pop
# # Exercise 2
#
# Now, run the code cell below to look at the dtypes for the projects data set. They should all be "object" types, ie strings, because that's what was specified in the code when reading in the csv file. As a reminder, this was the code:
# ```
# df_projects = pd.read_csv('../data/projects_data.csv', dtype=str)
# ```
# Run this code cell
df_projects.dtypes
# Many of these columns should be strings, so there's no problem; however, a few columns should be other data types. For example, `boardapprovaldate` should be a datetime and `totalamt` should be an integer. You'll learn about datetime formatting in the next part of the lesson. For this exercise, focus on the 'totalamt' and 'lendprojectcost' columns. Run the code cell below to see what that data looks like.
# Run this code cell
df_projects[['totalamt', 'lendprojectcost']].head()
# Run this code cell to take the sum of the total amount column
df_projects['totalamt'].sum()
# What just happened? Pandas treated the `totalamt` values like strings. In Python, adding strings concatenates the strings together.
#
# There are a few ways to remedy this. When using pd.read_csv(), you could specify the column type for every column in the data set. The pd.read_csv() dtype option can accept a dictionary mapping each column name to its data type. You could also specify the `thousands` option with `thousands=','`. This specifies that thousands are separated by a comma in this data set.
#
# However, this data is somewhat messy, contains missing values, and has a lot of columns. It might be faster to read in the entire data set with string types and then convert individual columns as needed. For this next exercise, convert the `totalamt` column from a string to an integer type.
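# Before tackling the real column, here is a minimal sketch of both remedies on made-up data: the `thousands=','` option at read time, and the replace-then-convert route used below.

```python
import io
import pandas as pd

# Made-up 'totalamt'-style values with comma thousands separators
csv_text = 'project,totalamt\nA,"1,000,000"\nB,"250,000"\n'

# Option 1: parse the commas away while reading
df = pd.read_csv(io.StringIO(csv_text), thousands=',')

# Option 2: read as strings, then strip commas and convert
df_str = pd.read_csv(io.StringIO(csv_text), dtype=str)
df_str['totalamt'] = pd.to_numeric(df_str['totalamt'].str.replace(',', '', regex=False))

print(df['totalamt'].sum())      # 1250000
print(df_str['totalamt'].sum())  # 1250000
```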
# +
# TODO: Convert the totalamt column from a string to an integer and save the results back into the totalamt column
# Step 1: Remove the commas from the 'totalamt' column
# HINT: https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.str.replace.html
# Step 2: Convert the 'totalamt' column from an object data type (ie string) to an integer data type.
# HINT: https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.to_numeric.html
df_projects['totalamt'] = df_projects['totalamt'].str.replace(',','')
df_projects['totalamt'] = pd.to_numeric(df_projects['totalamt'])
# -
df_projects['totalamt']
# # Conclusion
#
# With messy data, you might find it easier to read in everything as a string; however, you'll sometimes have to convert those strings to more appropriate data types. When you output the dtypes of a dataframe, you'll generally see these values in the results:
# * float64
# * int64
# * bool
# * datetime64
# * timedelta
# * object
#
# where timedelta is the difference between two datetimes and object is a string. As you've seen here, you sometimes need to convert data types from one type to another type. Pandas has a few different methods for converting between data types, and here are link to the documentation:
#
# * [astype](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.astype.html#pandas.DataFrame.astype)
# * [to_datetime](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.to_datetime.html#pandas.to_datetime)
# * [to_numeric](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.to_numeric.html#pandas.to_numeric)
# * [to_timedelta](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.to_timedelta.html#pandas.to_timedelta)
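# As a small illustration of two of these converters on made-up values:

```python
import pandas as pd

# to_datetime: object (string) -> datetime64
dates = pd.to_datetime(pd.Series(['2004-06-24', '1999-12-31']))

# astype: convert the extracted years to plain integers
years = dates.dt.year.astype('int64')

print(dates.dtype)
print(list(years))  # [2004, 1999]
```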
| lessons/ETLPipelines/7_datatypes_exercise/7_datatypes_exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# IMPORT LIBRARIES
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# LOAD DATA
train = pd.read_csv('./../data/train.csv')
test = pd.read_csv('./../data/test.csv')
train.sample(5)
print(train.columns.values)
print(train.shape)
train.info()
train.describe()
# CATEGORICAL VARIABLES
train.describe(include=['O'])
train.groupby(['Survived']).count()['PassengerId']
train.groupby(['Survived', 'Sex']).count()['PassengerId']
# +
group_sex = train.groupby(['Survived', 'Sex']).count()['PassengerId']
group_sex.unstack(level=0).plot.bar()
plt.title('Survivors by sex')
plt.xlabel('Sex')
plt.xticks(fontsize=15, rotation=0)
plt.ylabel('Number of survivors')
plt.yticks(fontsize=15)
plt.legend(labels=['Did not survive', 'Survived'])
plt.show()
# +
women = train.loc[train.Sex == 'female']["Survived"]
rate_women = sum(women)/len(women)
print("Percentage of female survivors:", rate_women*100, "%")
# +
men = train.loc[train.Sex == 'male']["Survived"]
rate_men = sum(men)/len(men)
print("Percentage of male survivors:", rate_men*100, "%")
# +
from sklearn.ensemble import RandomForestClassifier
y = train["Survived"]
features = ["Pclass", "Sex", "SibSp", "Parch"]
X = pd.get_dummies(train[features])
X_test = pd.get_dummies(test[features])
model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=1)
model.fit(X, y)
predictions = model.predict(X_test)
output = pd.DataFrame({'PassengerId': test.PassengerId, 'Survived': predictions})
output.to_csv('submission.csv', index=False)
print("Your submission was successfully saved!")
# -
train.info()
print('_'*40)
test.info()
train[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train[['Survived', 'Sex', 'Age', 'Pclass']].head(3)
train[['Survived', 'Sex', 'Age', 'Pclass']].info()
train['Age'].isna()
(train[train['Age'].isna()]
.groupby(['Sex', 'Pclass'])
.count()['PassengerId']
.unstack(level=0))
(train[train['Age'].isna()]
.groupby(['SibSp', 'Parch'])
.count()['PassengerId']
.unstack(level=0))
print(train['Age'].median())
train['Age'] = train['Age'].fillna(28.0)
train[['Survived', 'Sex', 'Age', 'Pclass']].info()
train['Sex'] = train['Sex'].map({'female':1, 'male':0}).astype(int)
train[['Survived', 'Sex', 'Age', 'Pclass']].head(3)
train['FlagSolo'] = np.where(
(train['SibSp'] == 0) & (train['Parch'] == 0), 1, 0)
grouped_flag = train.groupby(['Survived', 'FlagSolo']).count()['PassengerId']
print(grouped_flag)
(grouped_flag.unstack(level=0).plot.bar())
plt.show()
train[['Survived', 'Sex', 'Age', 'Pclass', 'FlagSolo']].head(3)
# +
Y_train = train['Survived']
features = ['Sex', 'Age', 'Pclass', 'FlagSolo']
X_train = train[features]
print(Y_train.shape, X_train.shape)
# +
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X_train, Y_train)
# +
from sklearn.tree import DecisionTreeClassifier
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
# +
from sklearn.metrics import plot_confusion_matrix
def conf_mat_acc(modelo):
disp = plot_confusion_matrix(modelo, X_train, Y_train,
cmap=plt.cm.Blues, values_format="d")
true_pred = disp.confusion_matrix[0,0]+disp.confusion_matrix[1,1]
total_data = np.sum(disp.confusion_matrix)
accuracy = true_pred/total_data
print('accuracy: ', np.round(accuracy, 2))
plt.show()
# -
conf_mat_acc(log_reg)
conf_mat_acc(decision_tree)
print(test.head(3))
test.info()
# +
test['Sex'] = test['Sex'].map({'female':1, 'male':0}).astype(int)
test['Age'] = test['Age'].fillna(28.0)
test['FlagSolo'] = np.where(
(test['SibSp'] == 0) & (test['Parch'] == 0), 1, 0)
# -
print(test.info())
test[features].head(3)
X_test = test[features]
print(X_test.shape)
Y_pred_log = log_reg.predict(X_test)
Y_pred_tree = decision_tree.predict(X_test)
print(Y_pred_log[0:10])
print(Y_pred_log[0:20])
print(Y_pred_tree[0:20])
def download_output(y_pred, name):
output = pd.DataFrame({'PassengerId': test.PassengerId,
'Survived': y_pred})
output.to_csv(name, index=False)
download_output(Y_pred_log, 'pred_log.csv')
download_output(Y_pred_tree, 'pred_tree.csv')
| notebooks/titanic_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ToumaTanaka/Data_Science/blob/main/Simulation/Random_walk.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="jHGmhxG4BxfK"
import numpy as np
from random import random
import matplotlib.pyplot as plt
from math import *
# + [markdown] id="fZjS-YtrP6Mv"
# # Simulating Brownian motion
# + colab={"base_uri": "https://localhost:8080/"} id="Ht5pJrU8CBhN" outputId="394e8c17-a33c-4989-a9df-c3bc950b332f"
#number of steps
N = 10000
#diffusion coefficient
D = 10
DT = 1/N
std = sqrt(2 * D * DT)
#theoretical mean squared displacement
S = 2 * D * 1
#initial position
x,y = 0,0
x_list1 = [0]
y_list1 = [0]
a_list1 = []
for n in range(N):
    x_ran = np.random.normal(0,std) # generate white Gaussian noise
    y_ran = np.random.normal(0,std)
    x = x+x_ran # step in the x direction
    y = y+y_ran # step in the y direction
    r = sqrt(x*x + y*y)
    x_list1.append(x) # store the x values in x_list1
    y_list1.append(y) # store the y values in y_list1
    a = (0 - r)**2
    a_list1.append(a)
S_1 = sum(a_list1) / N
print('Theoretical mean squared displacement')
print(S)
print('Measured mean squared displacement')
print(S_1)
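# As an independent sanity check of the one-dimensional relation ⟨x²⟩ = 2Dt, the sketch below runs many short walks using only the standard library (the step and walker counts are arbitrary choices):

```python
import math
import random

random.seed(42)
D, T = 10.0, 1.0                 # diffusion coefficient and total time
N, M = 200, 2000                 # steps per walk, number of walkers
std = math.sqrt(2 * D * T / N)   # per-step standard deviation

# Average the squared final displacement over many independent walks
msd = 0.0
for _ in range(M):
    x = 0.0
    for _ in range(N):
        x += random.gauss(0.0, std)
    msd += x * x
msd /= M

print(msd)  # should be close to 2*D*T = 20
```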
# + [markdown] id="C8sd9ys7byj7"
# ### When the mean squared displacement is computed over many runs and plotted, it follows a log-normal distribution as shown below
#
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="4G1ny87HYjtQ" outputId="1434daeb-d031-4a16-e484-3a6633e51c2f"
#list of mean squared displacements
SS1 = []
N = 1000
#number of repetitions
J = 10000
for j in range(J):
    x,y = 0,0
    a_list1 = []
    for i in range(N):
        x_ran = np.random.normal(0,std) # generate white Gaussian noise
        y_ran = np.random.normal(0,std)
        x = x+x_ran # step in the x direction
        y = y+y_ran # step in the y direction
        r = sqrt(x*x + y*y)
        a = (0 - r)**2
        a_list1.append(a)
    S_1 = sum(a_list1) / N
    SS1.append(S_1)
SS1 = np.log(SS1)
plt.hist(SS1,bins = 30)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 392} id="DKVLVZGGDDHg" outputId="546b4b58-84e4-47af-f942-6d21a743a6c2"
plt.figure(figsize=(10,6))
plt.plot(x_list1,y_list1,lw=0.5) # plot (x, y)
plt.xlabel('x') # x-axis label
plt.ylabel('y') # y-axis label
plt.xlim([-5,2]) # x-axis range
plt.ylim([-2,6]) # y-axis range
plt.show()
# + [markdown] id="P8nbbCRIVGqx"
# # Simulating Brownian motion in a harmonic potential
# + colab={"base_uri": "https://localhost:8080/"} id="f-k6PGM2VF9j" outputId="276ad78b-978a-4bca-9f1d-d6042adb5f57"
#number of steps
N = 10000
#diffusion coefficient
D = 10
DT = 1/N
#theoretical mean squared displacement
S = 2 * D * 1
#standard deviation
std = sqrt(2 * D * DT)
#potential strength (try various values here)
k = 0.01
#initial position
x,y = 0,0
x_list2 = [0]
y_list2 = [0]
a_list2 = []
for n in range(N):
    x_ran = np.random.normal(0,std) # generate white Gaussian noise
    y_ran = np.random.normal(0,std)
    x = x - (k * x * DT) + x_ran
    y = y - (k * y * DT) + y_ran
    r = sqrt(x*x + y*y)
    a = (0 - r)**2
    x_list2.append(x)
    y_list2.append(y)
    a_list2.append(a)
S_2 = sum(a_list2) / N
print('Theoretical mean squared displacement')
print(S)
print('Measured mean squared displacement')
print(S_2)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="qgpRh90UgYBJ" outputId="faba380e-b3d0-4252-b3d5-f10489b47a0e"
#list of mean squared displacements
SS2 = []
N = 1000
#number of repetitions
J = 10000
for j in range(J):
    x,y = 0,0
    a_list2 = []
    for i in range(N):
        x_ran = np.random.normal(0,std) # generate white Gaussian noise
        y_ran = np.random.normal(0,std)
        x = x - (k * x * DT) + x_ran
        y = y - (k * y * DT) + y_ran
        r = sqrt(x*x + y*y)
        a = (0 - r)**2
        a_list2.append(a)
    S_2 = sum(a_list2) / N
    SS2.append(S_2)
SS2 = np.log(SS2)
plt.hist(SS2,bins = 30)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 392} id="E0Iw6YDkQC8X" outputId="f12cdcfc-51e9-47df-ead2-2842489aa61a"
plt.figure(figsize=(10,6))
plt.plot(x_list2,y_list2,lw=0.5) # plot (x, y)
plt.xlabel('x') # x-axis label
plt.ylabel('y') # y-axis label
plt.xlim([-4,1]) # x-axis range
plt.ylim([-1,4]) # y-axis range
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="vKIla3ziXTL0" outputId="e6d0769d-d315-4fc6-bde8-9d9e5a9b8a51"
N_list = list(range(len(x_list2)))
plt.plot(N_list,x_list2) # plot x against the step index
plt.xlabel('t') # x-axis label
plt.ylabel('x') # y-axis label
plt.xlim([0,len(x_list2)]) # x-axis range
plt.ylim([-10,10]) # y-axis range
plt.show()
# + [markdown] id="wKCMfKfmYnAU"
# # Simulating the Langevin equation
# * Brownian motion in a potential
# + colab={"base_uri": "https://localhost:8080/"} id="nf-mpbGYYmYA" outputId="f8682b24-b756-46f1-c013-3cfd9b439c34"
#number of steps
N = 1000
#diffusion coefficient
D = 10
DT = 1/N
#theoretical mean squared displacement
S = 2 * D * 1
#standard deviation
std = sqrt(2 * D * DT)
#mass
m = 1
#friction coefficient
j = 0.1
#initial velocity
v_x, v_y = 0.0, 0.0
#initial position
x,y = 0,0
x_list3 = [0]
y_list3 = [0]
a_list3 = []
for n in range(N):
    x_ran = np.random.normal(0,std) # generate white Gaussian noise
    y_ran = np.random.normal(0,std)
    v_x = v_x - (j/m * v_x * DT) + x_ran/m
    x = x + v_x
    v_y = v_y - (j/m * v_y * DT) + y_ran/m
    y = y + v_y
    r = sqrt(x*x + y*y)
    a = (0 - r)**2
    x_list3.append(x)
    y_list3.append(y)
    a_list3.append(a)
S_1 = sum(a_list3) / N
print('Theoretical mean squared displacement')
print(S)
print('Measured mean squared displacement')
print(S_1)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="VBXOBJPbg3QL" outputId="9121c9dc-8286-4a73-8754-a0a60a28f780"
#list of mean squared displacements
SS3 = []
N = 1000
#number of repetitions
J = 10000
for trial in range(J): # do not reuse j as the loop variable: it is the friction coefficient
    x,y = 0,0
    v_x, v_y = 0.0, 0.0
    a_list3 = []
    for i in range(N):
        x_ran = np.random.normal(0,std) # generate white Gaussian noise
        y_ran = np.random.normal(0,std)
        v_x = v_x - (j/m * v_x * DT) + x_ran/m
        x = x + v_x
        v_y = v_y - (j/m * v_y * DT) + y_ran/m
        y = y + v_y
        r = sqrt(x*x + y*y)
        a = (0 - r)**2
        a_list3.append(a)
    S_3 = sum(a_list3) / N
    SS3.append(S_3)
SS3 = np.log(SS3)
plt.hist(SS3,bins = 30)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 392} id="HOmKtAh9RL6W" outputId="78205127-3aec-40e3-ff52-b68fe9185aab"
plt.figure(figsize=(10,6))
plt.plot(x_list3,y_list3,lw=0.5) # plot (x, y)
plt.xlabel('x') # x-axis label
plt.ylabel('y') # y-axis label
plt.xlim([-2,3]) # x-axis range
plt.ylim([-3,2]) # y-axis range
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="a-fgyDrRacPI" outputId="88a36a26-996e-4e94-862b-2e154dacbf85"
N_list = list(range(1,N+2))
plt.plot(N_list,x_list3) # plot x against the step index
plt.xlabel('t') # x-axis label
plt.ylabel('x') # y-axis label
plt.xlim([0,N]) # x-axis range
plt.ylim([-30,30]) # y-axis range
plt.show()
| Simulation/Random_walk.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from shapely.geometry import Point, Polygon
import geopandas as gpd
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.preprocessing import OneHotEncoder
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
# %run ../python_files/feature_selection_blocks
# -
# import data
puds = pd.read_csv('../data/final_datasets/master_puds_blocks.csv')
# feature engineering
puds = create_demo_col(puds)
minipuds = agg_puds(puds)
# set up dependent var
outcome = 'eviction-rate'
# +
# does number of PUDs in a census tract work as a predictor for eviction rate?
# set up single linear regression
x_cols = minipuds['pud_count']
X = minipuds['pud_count'].values
y = minipuds[outcome]
# fit model
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + ['pud_count']
result.summary(xname=labels)
# -
# Based on an R-squared of 0.003, pud_count explains almost **none** of the variance in eviction-rate
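# The R-squared reasoning can be sketched on synthetic data: with a strong predictor, R-squared approaches 1, which makes the 0.003 here easy to interpret (NumPy only; the data below are made up):

```python
import numpy as np

# Toy data: outcome strongly driven by the predictor, plus small noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)

# Ordinary least squares fit, then R^2 by hand
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals.var() / y.var()

print(round(r_squared, 3))  # close to 1 for a strong predictor
```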
sns.scatterplot(x = minipuds['pud_count'],
y = minipuds['eviction-rate'],
hue = [0 if el == 0 else 1 for el in minipuds['% Affordable Units']]);
# +
# can you predict eviction rate based on ward?
# set up single linear regression
encoder = OneHotEncoder(handle_unknown="error", drop='first')
X_cat = encoder.fit_transform(np.array(minipuds['ward']).reshape(-1, 1)).toarray()
X = X_cat
y = minipuds[outcome]
# fit model
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + [("ward_"+str(i)) for i in range(0,7)]
result.summary(xname=labels)
# +
# what about looking at more variables?
# set up multiple linear regression
x_cols = ['pct-non-white','poverty-rate', 'pct-renter-occupied','pud_count']
minitest = minipuds[x_cols]
X = minitest.values
encoder = OneHotEncoder(handle_unknown="error", drop='first')
X_cat = encoder.fit_transform(np.array(minipuds['ward']).reshape(-1, 1)).toarray()
X = np.concatenate((X, X_cat), axis = 1)
y = minipuds[outcome]
# fit model01
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + x_cols + [("ward_"+str(i)) for i in range(0,7)]
result.summary(xname=labels)
# +
# set up single linear regression
x_cols = 'pct-non-white'
X = minipuds[x_cols].values
y = minipuds[outcome]
# # fit model03
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + [x_cols]
result.summary(xname=labels)
# +
# set up single linear regression
x_cols = 'poverty-rate'
X = minipuds[x_cols].values
y = minipuds[outcome]
# # fit model04
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + [x_cols]
result.summary(xname=labels)
# +
# set up single linear regression
x_cols = 'pct-renter-occupied'
X = minipuds[x_cols].values
y = minipuds[outcome]
# # fit model05
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + [x_cols]
result.summary(xname=labels)
# +
# looking at top 2 predictor cols
# set up multiple linear regression
x_cols = ['pct-non-white','poverty-rate']
minitest = minipuds[x_cols]
X = minitest.values
y = minipuds[outcome]
# fit model01
X = sm.add_constant(X)
model = sm.OLS(y, X, hasconst=True )
result = model.fit()
labels = ['intercept'] + x_cols
result.summary(xname=labels)
# -
# # Graveyard
# +
# set up co-linearity check
y_vif = minipuds[outcome]
## remove Passenger from predictor list
## prepare data for the linear model
X_vif = minipuds[x_cols]
## add intercept term
X_vif = sm.add_constant(X_vif.values)
## fit model
model_vif = sm.OLS(y_vif, X_vif, hasconst=True)
result_vif = model_vif.fit()
## check the r2-score
result_vif.summary()
## calculate vif score directly from r2-score
passenger_vif = 1/(1 - result_vif.rsquared)
passenger_vif
# +
## standard scaling
# for col in x_cols:
# ## Here we don't have to do this but still it is a good practice
# if (type(minipuds[col]) == int) | (type(minipuds[col]) == float):
# minipuds[col] = (minipuds[col] - minipuds[col].mean())/minipuds[col].std()
# -
test['ward'] = [int(el[-1]) for el in minipuds.ward]
| notebooks/.ipynb_checkpoints/modeling_block0-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2. Interacting with APIs to import data from the web
# **In this chapter, you will gain a deeper understanding of how to import data from the web. You will learn the basics of extracting data from APIs, gain insight on the importance of APIs, and practice extracting data by diving into the OMDB and Library of Congress APIs.**
# ## Introduction to APIs and JSONs
# ### APIs
# - Application Programming Interface
# - Protocols and routines
# - Building and interacting with software applications
# - ex) [OMDb API (the Open Movie Database API)](https://www.omdbapi.com/)
#
# ### JSONs
# - JavaScript Object Notation
# - Devised for real-time server-to-browser communication that wouldn't necessarily rely on Flash or Java
# - Popularized by Douglas Crockford, an American programmer and entrepreneur
# - Human readable
#
# ### Loading JSONs in Python
# ```python
# import json
# with open('data.json', 'r') as json_file:
# json_data = json.load(json_file)
# ```
# ```python
# type(json_data)
# ```
# ```
# dict
# ```
#
# ### Exploring JSONs in Python
# ```python
# for key, value in json_data.items():
#     print(key + ':', value)
# ```
# ## Pop quiz: What exactly is a JSON?
# Which of the following is **NOT** true of the JSON file format?
#
# 1. JSONs consist of key-value pairs.
# 2. JSONs are human-readable.
# 3. The JSON file format arose out of a growing need for real-time server-to-browser communication.
# 4. ~~The function `json.load()` will load the JSON into Python as a `list`.~~
# 5. The function `json.load()` will load the JSON into Python as a `dictionary`.
#
# **Answer: 4**
# ## Loading and exploring a JSON
# Now that you know what a JSON is, you'll load one into your Python environment and explore it yourself. Here, you'll load the JSON `'a_movie.json'` into the variable `json_data`, which will be a dictionary. You'll then explore the JSON contents by printing the key-value pairs of `json_data` to the shell.
# - Load the JSON `'a_movie.json'` into the variable `json_data` *within the context* provided by the `with` statement. To do so, use the function `json.load()` *within the context manager*.
# - Use a `for` loop to print all key-value pairs in the dictionary `json_data`. Recall that you can access a value in a dictionary using the syntax: *dictionary*`[`key`]`.
# +
import json
# Load JSON: json_data
with open('a_movie.json') as json_file:
    json_data = json.load(json_file)
# Print each key-value pair in json_data
for k in json_data.keys():
    print(k + ': ', json_data[k])
# -
# ---
# ## APIs and interacting with the world wide web
# JSONs are everywhere and one of the main motivating reasons for getting to know how to work with them as a Data Scientist is that much of the data that you'll get from APIs are packaged as JSONs.
#
# ### What is an API?
# - Set of protocols and routines
# - Bunch of code
# - Allows two software programs to communicate with each other
#
# ### What was that URL?
# - http - making an HTTP request
# - www.omdbapi.com - querying the OMDb API
# - `?t=hackers`
#     - Query string
#     - Return data for a movie with title (t) 'Hackers'
# - `http://www.omdbapi.com/?t=hackers`
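# A small stdlib sketch of the same idea: rather than concatenating the query string by hand, it can be built with `urllib.parse.urlencode` (the apikey value below is the demo key used later in this chapter).

```python
from urllib.parse import urlencode

base = 'http://www.omdbapi.com/'
params = {'apikey': '72bc447a', 't': 'hackers'}  # t = title to query
url = base + '?' + urlencode(params)
print(url)  # http://www.omdbapi.com/?apikey=72bc447a&t=hackers
```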
# ## Pop quiz: What's an API?
# Which of the following statements about APIs is **NOT** true?
#
# 1. An API is a set of protocols and routines for building and interacting with software applications.
# 2. API is an acronym and is short for Application Program interface.
# 3. It is common to pull data from APIs in the JSON file format.
# 4. ~~All APIs transmit data only in the JSON file format.~~
# 5. An API is a bunch of code that allows two software programs to communicate with each other.
#
# **Answer: 4**
# ## API requests
# Now it's your turn to pull some movie data down from the Open Movie Database (OMDB) using their API. The movie you'll query the API about is *The Social Network*. To query the API about the movie *Hackers*, the query string is `'http://www.omdbapi.com/?t=hackers'` and has a single argument `t=hackers`.
#
# Note: recently, OMDB has changed their API: you now also have to specify an API key. This means you'll have to add another argument to the URL: `apikey=72bc447a`.
# - Import the `requests` package.
# - Assign to the variable `url` the URL of interest in order to query `'http://www.omdbapi.com'` for the data corresponding to the movie *The Social Network*. The *query string* should have two arguments: `apikey=72bc447a` and `t=the+social+network`. You can combine them as follows: `apikey=72bc447a&t=the+social+network`.
# - Print the text of the response object `r` by using its `text` attribute and passing the result to the `print()` function.
# +
# Import requests package
import requests
# Assign URL to variable: url
url = 'http://www.omdbapi.com/?apikey=72bc447a&t=the+social+network'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Print the text of the response
print(r.text)
# -
# ## JSON–from the web to Python
# You've just queried your first API programmatically in Python and printed the text of the response to the shell. However, as you know, your response is actually a JSON, so you can do one step better and decode the JSON. You can then print the key-value pairs of the resulting dictionary.
# - Pass the variable `url` to the `requests.get()` function in order to send the relevant request and catch the response, assigning the resultant response message to the variable `r`.
# - Apply the `json()` method to the response object `r` and store the resulting dictionary in the variable `json_data`.
# - Print the key-value pairs of the dictionary `json_data`.
# +
# Import package
import requests
# Assign URL to variable: url
url = 'http://www.omdbapi.com/?apikey=72bc447a&t=social+network'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Decode the JSON data into a dictionary: json_data
json_data = r.json()
# Print each key-value pair in json_data
for k in json_data.keys():
    print(k + ': ', json_data[k])
# -
# ## Checking out the Wikipedia API
# We're going to look at one more API: the Wikipedia API (documented [here](https://www.mediawiki.org/wiki/API:Main_page)). You'll figure out how to find and extract information from the Wikipedia page for *Pizza*. What gets a bit wild here is that your query will return nested JSONs, that is, JSONs within JSONs, but Python can handle that by translating them into dictionaries within dictionaries.
#
# The URL that requests the relevant query from the Wikipedia API is `https://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=json&exintro=&titles=pizza`
# - Assign the relevant URL to the variable `url`.
# - Apply the `json()` method to the response object `r` and store the resulting dictionary in the variable `json_data`.
# - The variable `pizza_extract` holds the HTML of an extract from Wikipedia's *Pizza* page as a string; use the function `print()` to print this string.
# +
# Import package
import requests
# Assign URL to variable: url
url = 'https://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=json&exintro=&titles=pizza'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Decode the JSON data into a dictionary: json_data
json_data = r.json()
# Print the Wikipedia page extract
pizza_extract = json_data['query']['pages']['24768']['extract']
print(pizza_extract)
# -
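# Note that the page id (`'24768'`) differs from title to title, so hard-coding it is brittle. A hedged sketch of a more robust lookup, shown on a mocked response so it runs offline:

```python
# mocked shape of the Wikipedia response seen above (not a live request)
sample = {'query': {'pages': {'24768': {'extract': '<p>Pizza is a dish...</p>'}}}}

pages = sample['query']['pages']
page_id = next(iter(pages))          # grab whatever id the API returned
pizza_extract = pages[page_id]['extract']
print(pizza_extract)
```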
| Data Analyst with Python/10_Intermediate_Importing_Data_in_Python/10_2_Interacting with APIs to import data from the web.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: projects_env
# language: python
# name: projects_env
# ---
import nltk
#nltk.download('wordnet')
from nltk.corpus import wordnet as wn
wn.synsets('lion')
text = open('input.txt').read()  # read() avoids joining newline-terminated lines with extra spaces
text='''We introduce a stochastic graph-based method for computing relative\nimportance of textual units for Natural Language Processing. We test the\ntechnique on the problem of Text Summarization (TS). Extractive TS relies on\nthe concept of sentence salience to identify the most important sentences in a\ndocument or set of documents. Salience is typically defined in terms of the\npresence of particular important words or in terms of similarity to a centroid\npseudo-sentence. We consider a new approach, LexRank, for computing sentence\nimportance based on the concept of eigenvector centrality in a graph\nrepresentation of sentences. In this model, a connectivity matrix based on\nintra-sentence cosine similarity is used as the adjacency matrix of the graph\nrepresentation of sentences. Our system, based on LexRank ranked in first place\nin more than one task in the recent DUC 2004 evaluation. In this paper we\npresent a detailed analysis of our approach and apply it to a larger data set\nincluding data from earlier DUC evaluations. We discuss several methods to\ncompute centrality using the similarity graph. The results show that\ndegree-based methods (including LexRank) outperform both centroid-based methods\nand other systems participating in DUC in most of the cases. Furthermore, the\nLexRank with threshold method outperforms the other degree-based techniques\nincluding continuous LexRank. We also show that our approach is quite\ninsensitive to the noise in the data that may result from an imperfect topical\nclustering of documents.\n'''
text
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
doc = nlp(str(sents[1]))
displacy.serve(doc, style="ent")
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
text='''We introduce Maximal Marginal Classifier a new language representa-
tion model called BERT, which stands for
Bidirectional Encoder Representations from
Transformers. Unlike recent language repre-
sentation models (Peters et al., 2018a; Rad-
ford et al., 2018), BERT is designed to pre-
train deep bidirectional representations from
unlabeled text by jointly conditioning on both
left and right context in all layers. As a re-
sult, the pre-trained BERT model can be fine-
tuned with just one additional output layer
to create state-of-the-art models for a wide
range of tasks, such as question answering and
language inference, without substantial task-
specific architecture modifications.
BERT is conceptually simple and empirically
powerful. It obtains new state-of-the-art re-
sults on eleven natural language processing
tasks, including pushing the GLUE score to
80.5% (7.7% point absolute improvement),
MultiNLI accuracy to 86.7% (4.6% absolute
improvement), SQuAD v1.1 question answer-
ing Test F1 to 93.2 (1.5 point absolute im-
provement) and SQuAD v2.0 Test F1 to 83.1
(5.1 point absolute improvement).'''
doc = nlp(text)
displacy.serve(doc, style="dep")
# displacy.serve(doc, style="ent")
text
import spacy
nlp=spacy.load('en_core_web_sm')
doc=nlp(text)
sents=list(doc.sents)
doc=nlp(str(sents[1]))
for tok in doc:
    print(tok, '->', tok.dep_)
sents[0]
def get_entities(sent):
    ## chunk 1
    ent1 = ""
    ent2 = ""
    prv_tok_dep = ""   # dependency tag of previous token in the sentence
    prv_tok_text = ""  # previous token in the sentence
    prefix = ""
    modifier = ""
    #############################################################
    for tok in nlp(sent):
        ## chunk 2
        # if token is a punctuation mark then move on to the next token
        if tok.dep_ != "punct":
            # check: token is a compound word or not
            if tok.dep_ == "compound":
                prefix = tok.text
                # if the previous word was also a 'compound' then add the current word to it
                if prv_tok_dep == "compound":
                    prefix = prv_tok_text + " " + tok.text
            # check: token is a modifier or not
            if tok.dep_.endswith("mod"):
                modifier = tok.text
                # if the previous word was also a 'compound' then add the current word to it
                if prv_tok_dep == "compound":
                    modifier = prv_tok_text + " " + tok.text
            ## chunk 3
            # note: str.find() returns an index, so comparing it to True is a bug;
            # test for the substring directly instead
            if "subj" in tok.dep_:
                ent1 = modifier + " " + prefix + " " + tok.text
                prefix = ""
                modifier = ""
                prv_tok_dep = ""
                prv_tok_text = ""
            ## chunk 4
            if "obj" in tok.dep_:
                ent2 = modifier + " " + prefix + " " + tok.text
            ## chunk 5
            # update variables
            prv_tok_dep = tok.dep_
            prv_tok_text = tok.text
    #############################################################
    return [ent1.strip(), ent2.strip()]
get_entities(str(sents[0]))
import pandas as pd
df=pd.read_json('/home/hs/Downloads/arxiv-metadata-oai-snapshot.json', lines=True)#, nrows=200000)
df[df.title.str.contains('leveraging bert', case=False)]['abstract'].iloc[1]
# ### Tf-IDF values
def get_IDF(text):
    # assumes a TfidfVectorizer fitted elsewhere as `tfidf`,
    # with `feature_names = tfidf.get_feature_names()`
    tfidf_matrix = tfidf.transform([text]).todense()
    feature_index = tfidf_matrix[0, :].nonzero()[1]
    tfidf_scores = zip([feature_names[i] for i in feature_index],
                       [tfidf_matrix[0, x] for x in feature_index])
    return dict(tfidf_scores)
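# For reference, a self-contained pure-Python sketch of the tf-idf idea behind `get_IDF` (the function above assumes a fitted sklearn `TfidfVectorizer`); each term is scored by tf * log(N / df), and the documents here are toy examples:

```python
import math

docs = ["graph based ranking", "graph summarization", "neural summarization"]
N = len(docs)
df = {}                                # document frequency per term
for d in docs:
    for term in set(d.split()):
        df[term] = df.get(term, 0) + 1

def tfidf_scores(text):
    tokens = text.split()
    return {t: tokens.count(t) / len(tokens) * math.log(N / df[t])
            for t in set(tokens) if t in df}

scores = tfidf_scores("graph based ranking")
print(scores)  # rarer terms ('based', 'ranking') outscore the common 'graph'
```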
# ### Reading file
import json
from tqdm import notebook
papers=open('/home/hs/Downloads/arxiv-metadata-oai-snapshot.json').readlines()
for idx, paper in enumerate(notebook.tqdm(papers)):
    papers[idx] = json.loads(paper)  # ['abstract']
def search(term, papers):  # parameter renamed from `id`: the body filters on `term`
    for idx, paper in enumerate(notebook.tqdm(papers)):
        title = paper['title']
        if term in title.lower():
            print(idx, title)
search('momentum contrast'.lower(), papers)
papers[0]
papers[1204316]['abstract']
text=papers[286496]['abstract']
papers[345]
# ## Keyword extraction using Gensim
from gensim.summarization import keywords
from gensim.summarization.keywords import get_graph
import networkx as nx
import matplotlib.pyplot as plt
kwords=keywords(text).split('\n')
kwords
def displayGraph(textGraph):
    graph = nx.Graph()
    for edge in textGraph.edges():
        graph.add_node(edge[0])
        graph.add_node(edge[1])
        graph.add_weighted_edges_from([(edge[0], edge[1], textGraph.edge_weight(edge))])
    pos = nx.spring_layout(graph)
    plt.figure()
    nx.draw(graph, pos, edge_color='black', width=1, linewidths=1,
            node_size=500, node_color='seagreen', alpha=0.9,
            labels={node: node for node in graph.nodes()})
    plt.axis('off')
    plt.show()
displayGraph(get_graph(text))
# +
from keybert import KeyBERT
doc = """
We introduce a new language representa-
tion model called BERT, which stands for
Bidirectional Encoder Representations from
Transformers. Unlike recent language repre-
sentation models (Peters et al., 2018a; Rad-
ford et al., 2018), BERT is designed to pre-
train deep bidirectional representations from
unlabeled text by jointly conditioning on both
left and right context in all layers. As a re-
sult, the pre-trained BERT model can be fine-
tuned with just one additional output layer
to create state-of-the-art models for a wide
range of tasks, such as question answering and
language inference, without substantial task-
specific architecture modifications.
BERT is conceptually simple and empirically
powerful. It obtains new state-of-the-art re-
sults on eleven natural language processing
tasks, including pushing the GLUE score to
80.5% (7.7% point absolute improvement),
MultiNLI accuracy to 86.7% (4.6% absolute
improvement), SQuAD v1.1 question answer-
ing Test F1 to 93.2 (1.5 point absolute im-
provement) and SQuAD v2.0 Test F1 to 83.1
(5.1 point absolute improvement).
"""
model = KeyBERT('distilbert-base-nli-mean-tokens')
keywords = model.extract_keywords(doc)
# -
keywords
| notebooks/approaches.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python3
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Foundations of Computational Economics #31
#
# by <NAME>, ANU
#
# <img src="_static/img/dag3logo.png" style="width:256px;">
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Function approximation in Python
#
# <img src="_static/img/lecture.png" style="width:64px;">
# + [markdown] slideshow={"slide_type": "subslide"}
# <img src="_static/img/youtube.png" style="width:65px;">
#
# [https://youtu.be/liNputEfcXQ](https://youtu.be/liNputEfcXQ)
#
# Description: How to approximate functions which are only defined on grid of points. Spline and polynomial interpolation.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Interpolation problem
#
# - $ f(x) $ is function of interest, hard to compute
# - Have data on values of $ f(x) $ in $ n $ points
# $ (x_1,\dots,x_n) $
#
#
# $$
# f(x_1), f(x_2), \dots f(x_n)
# $$
#
# - Need to find the approximate value of the function $ f(x) $ in
# arbitrary points $ x \in [x_1,x_n] $
# + [markdown] slideshow={"slide_type": "slide"}
# #### Approaches
#
# 1. *Piece-wise* approach (connect the dots)
#
#
# - Which functional form to use for connections?
# - What are advantages and disadvantages?
#
#
# 1. Use a *similar* function $ s(x) $ to represent $ f(x) $
# between the data points
#
#
# - Which simpler function?
# - What data should be used?
# - How to control the accuracy of the approximation?
# + [markdown] slideshow={"slide_type": "slide"}
# #### Distinction between function approximation (interpolation) and curve fitting
#
# - Function approximation and interpolation refer to situations
#   when **data** on function values is matched **exactly**
# - The approximation curve passes through the points of the data
# - Curve fitting refers to the statistical problem when the data has
# **noise**, the task is to find an approximation for the central
# tendency in the data
# - Linear and non-linear regression models, econometrics
# - The model is *over-identified* (there is more data than needed to
# exactly identify the regression function)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Extrapolation
#
# Extrapolation is computing the approximated function outside of the
# original data interval
#
# **Should be avoided in general**
#
# - Exact *only* when theoretical properties of the extrapolated function
# are known
# - Can be used with extreme caution and based on the analysis of the model
# - Always try to introduce wider bounds for the grid instead
# + [markdown] slideshow={"slide_type": "slide"}
# ### Spline interpolation
#
# Spline = curve composed of independent pieces
#
# **Definition** A function $ s(x) $ on $ [a,b] $ is a spline of
# order $ n $ ( = degree $ n-1 $) iff
#
# - $ s $ is $ C^{n-2} $ on $ [a,b] $ (has continuous derivatives
# up to order $ n-2 $),
# - given *knot* points $ a=x_0<x_1<\dots<x_m=b $, $ s(x) $ is a
# polynomial of degree $ n-1 $ on each subinterval
# $ [x_i,x_{i+1}] $, $ i=0,\dots,m-1 $
# + [markdown] slideshow={"slide_type": "slide"}
# #### Cubic splines = spline of order 4
#
# - Data set $ \{(x_i,f(x_i)), i=0,\dots,n\} $
# - Functional form $ s(x) = a_i + b_i x + c_i x^2 + d_i x^3 $ on
# $ [x_{i-1},x_i] $ for $ i=1,\dots,n $
# - $ 4n $ unknown coefficients:
#   - $ 2n $ equations to make sure each segment passes through its interval points +
#     $ 2(n-1) $ equations to ensure two continuous derivatives at each interior point
#   - 2 additional equations for the endpoints $ x_0 $ and $ x_n $
# - $ s''(x_0)=s''(x_n)=0 $ (natural spline)
# - $ s'(x_0)=\frac{s(x_1)-s(x_0)}{x_1-x_0} $,
# $ s'(x_n)=\frac{s(x_n)-s(x_{n-1})}{x_n-x_{n-1}} $
# (secant-Hermite)
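# The natural boundary condition above can be checked numerically; a short sketch using `scipy.interpolate.CubicSpline` (the data is illustrative, not part of the lecture code):

```python
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.linspace(0, 2 * np.pi, 8)
cs = CubicSpline(xs, np.sin(xs), bc_type='natural')  # imposes s''(x_0)=s''(x_n)=0
print(cs(xs[0], 2), cs(xs[-1], 2))  # second derivatives at the endpoints
```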
# + hide-output=false slideshow={"slide_type": "slide"}
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(2008) # fix random number sequences
x = np.sort(np.random.uniform(-5,10,12)) # sorted random numbers on [-5,10]
xr = np.linspace(-5,10,12) # regular grid on [-5,10]
func=lambda x: np.exp(-x/4)*np.sin(x) + 1/(1+x**2) # function to interpolate
# + hide-output=false slideshow={"slide_type": "slide"}
def plot1(ifunc,fdata=(x,func(x)),f=func,color='b',label='',extrapolation=False):
    '''helper function to make plots'''
    xd = np.linspace(-5,10,1000)  # for making continuous lines
    plt.figure(num=1, figsize=(10,8))
    plt.scatter(fdata[0],fdata[1],color='r')  # interpolation data
    plt.plot(xd,f(xd),color='grey')  # true function
    if extrapolation:
        xdi = xd
    else:
        # restriction for interpolation only
        xdi = xd[np.logical_and(xd>=fdata[0][0],xd<=fdata[0][-1])]
    if ifunc:
        plt.plot(xdi,ifunc(xdi),color=color,label=label)
        if label:
            plt.legend()
    elif label:
        plt.title(label)
# + hide-output=false slideshow={"slide_type": "slide"}
plot1(None,label='True function')
# + hide-output=false slideshow={"slide_type": "slide"}
from scipy import interpolate # Interpolation routines
fi = interpolate.interp1d(x,func(x)) # returns the interpolation function
plot1(fi,label='interp1d')
# + hide-output=false slideshow={"slide_type": "slide"}
help(interpolate.interp1d)
# + hide-output=false slideshow={"slide_type": "slide"}
fi = interpolate.interp1d(x,func(x),kind='linear')
plot1(fi,label='Linear')
# + hide-output=false slideshow={"slide_type": "slide"}
for knd, clr in ('previous','m'),('next','b'),('nearest','g'):
    fi = interpolate.interp1d(x,func(x),kind=knd)
    plot1(fi,label=knd,color=clr)
plt.show()
# + hide-output=false slideshow={"slide_type": "slide"}
for knd, clr in ('slinear','m'),('quadratic','b'),('cubic','g'):
    fi = interpolate.interp1d(x,func(x),kind=knd)
    plot1(fi,color=clr,label=knd)
# + hide-output=false slideshow={"slide_type": "slide"}
# Approximation errors
# x = np.sort(np.random.uniform(-5,10,11)) # generate new data
for knd, clr in ('slinear','m'),('quadratic','b'),('cubic','g'):
    fi = interpolate.interp1d(x,func(x),kind=knd,bounds_error=False)
    xd = np.linspace(-5,10,1000)
    erd = np.abs(func(xd)-fi(xd))
    plt.plot(xd,erd,color=clr)
    print('Max error with %s splines is %1.5e'%(knd,np.nanmax(erd)))
# + hide-output=false slideshow={"slide_type": "slide"}
# Approximation errors for regular grid
for knd, clr in ('slinear','m'),('quadratic','b'),('cubic','g'):
    fi = interpolate.interp1d(xr,func(xr),kind=knd,bounds_error=False)
    xd = np.linspace(-5,10,1000)
    erd = np.abs(func(xd)-fi(xd))
    plt.plot(xd,erd,color=clr)
    print('Max error with %s splines is %1.5e'%(knd,np.nanmax(erd)))
# + [markdown] slideshow={"slide_type": "slide"}
# #### Accuracy of the interpolation
#
# How to reduce approximation errors?
# + [markdown] slideshow={"slide_type": "fragment"}
# - Number of nodes (more is better)
# - Location of nodes (regular is better)
# - Interpolation type (match function of interest)
#
#
# *In economic models we usually can control all of these*
# + [markdown] slideshow={"slide_type": "slide"}
# ### Polynomial approximation/interpolation
#
# Back to the beginning to explore the idea of replacing original
# $ f(x) $ with simpler $ g(x) $
#
# - Data set $ \{(x_i,f(x_i))\}, i=0,\dots,n $
# - Functional form is polynomial of degree $ n $ such that $ g(x_i)=f(x_i) $
# - If $ x_i $ are distinct, coefficients of the polynomial are uniquely identified
#
#
# Does polynomial $ g(x) $ converge to $ f(x) $ when there are
# more points?
# + hide-output=false slideshow={"slide_type": "slide"}
from numpy.polynomial import polynomial
degree = len(x)-1 # passing through all dots
p = polynomial.polyfit(x,func(x),degree)
fi = lambda x: polynomial.polyval(x,p)
plot1(fi,label='Polynomial of degree %d'%degree,extrapolation=True)
# + hide-output=false slideshow={"slide_type": "slide"}
# now with regular grid
degree = len(x)-1 # passing through all dots
p = polynomial.polyfit(xr,func(xr),degree)
fi = lambda x: polynomial.polyval(x,p)
plot1(fi,fdata=(xr,func(xr)),label='Polynomial of degree %d'%degree,extrapolation=True)
# + hide-output=false slideshow={"slide_type": "slide"}
# how number of points affect the approximation (with degree=n-1)
for n, clr in (5,'m'),(10,'b'),(15,'g'),(25,'r'):
    x2 = np.linspace(-5,10,n)
    p = polynomial.polyfit(x2,func(x2),n-1)
    fi = lambda x: polynomial.polyval(x,p)
    plot1(fi,fdata=(x2,func(x2)),label='%d points'%n,color=clr,extrapolation=True)
plt.show()
# + hide-output=false slideshow={"slide_type": "slide"}
# how locations of points affect the approximation (with degree=n-1)
np.random.seed(2025)
n=8
for clr in 'b','g','c':
    x2 = np.linspace(-4,9,n) + np.random.uniform(-1,1,n)  # perturb points a little
    p = polynomial.polyfit(x2,func(x2),n-1)
    fi = lambda x: polynomial.polyval(x,p)
    plot1(fi,fdata=(x2,func(x2)),label='%d points'%n,color=clr,extrapolation=True)
plt.show()
# + hide-output=false slideshow={"slide_type": "slide"}
# how degree of the polynomial affects the approximation
for degree, clr in (7,'b'),(9,'g'),(11,'m'):
    p = polynomial.polyfit(xr,func(xr),degree)
    fi = lambda x: polynomial.polyval(x,p)
    plot1(fi,fdata=(xr,func(xr)),label='Polynomial of degree %d'%degree,color=clr,extrapolation=True)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Least squares approximation
#
# We could also go back to **function approximation** and fit polynomials
# of lower degree
#
# - Data set $ \{(x_i,f(x_i))\}, i=0,\dots,n $
# - **Any** functional form $ g(x) $ from class $ G $ that best
# approximates $ f(x) $
#
#
# $$
# g = \arg\min_{g \in G} \lVert f-g \rVert ^2
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Orthogonal polynomial approximation/interpolation
#
# - Polynomials over domain $ D $
# - Weighting function $ w(x)>0 $
#
#
# Inner product
#
# $$
# \langle f,g \rangle = \int_D f(x)g(x)w(x)dx
# $$
#
# $ \{\phi_i\} $ is a family of orthogonal polynomials w.r.t.
# $ w(x) $ iff
#
# $$
# \langle \phi_i,\phi_j \rangle = 0, i\ne j
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# #### Best polynomial approximation in L2-norm
#
# Let $ \mathcal{P}_n $ denote the space of all polynomials of degree $ n $ over $ D $
#
# $$
# \lVert f - p \rVert_2 = \inf_{q \in \mathcal{P}_n} \lVert f - q \rVert_2
# = \inf_{q \in \mathcal{P}_n} \left[ \int_D ( f(x)-q(x) )^2 dx \right]^{\tfrac{1}{2}}
# $$
#
# if and only if
#
# $$
# \langle f-p,q \rangle = 0, \text{ for all } q \in \mathcal{P}_n
# $$
#
# *Orthogonal projection is the best approximating polynomial in L2-norm*
# + [markdown] slideshow={"slide_type": "slide"}
# #### Uniform (infinity, sup-) norm
#
# $$
# \lVert f(x) - g(x) \rVert_{\infty} = \sup_{x \in D} | f(x) - g(x) |
# = \lim_{n \rightarrow \infty} \left[ \int_D ( f(x)-g(x) )^n dx \right]^{\tfrac{1}{n}}
# $$
#
# Measures the absolute difference over the whole domain $ D $
# + [markdown] slideshow={"slide_type": "slide"}
# #### Chebyshev (minmax) approximation
#
# What is the best polynomial approximation in the uniform (infinity, sup) norm?
#
# $$
# \lVert f - p \rVert_{\infty} = \inf_{q \in \mathcal{P}_n} \lVert f - q \rVert_{\infty}
# = \inf_{q \in \mathcal{P}_n} \sup_{x \in D} | f(x) - q(x) |
# $$
#
# Chebyshev proved existence and uniqueness of the best approximating polynomial in uniform norm.
# + [markdown] slideshow={"slide_type": "slide"}
# #### Chebyshev polynomials
#
# - $ [a,b] = [-1,1] $ and $ w(x)=(1-x^2)^{(-1/2)} $
# - $ T_n(x)=\cos\big(n\cos^{-1}(x)\big) $
# - Recursive formulas:
#
#
# $$
# \begin{eqnarray}
# T_0(x)=1,\\
# T_1(x)=x,\\
# T_{n+1}(x)=2x T_n(x) - T_{n-1}(x)
# \end{eqnarray}
# $$
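# A quick numerical sanity check (not part of the lecture code) that the recursive formulas reproduce the closed form $ T_n(x)=\cos\big(n\cos^{-1}(x)\big) $:

```python
import numpy as np

x = np.linspace(-1, 1, 101)
T = [np.ones_like(x), x.copy()]            # T_0, T_1
for n in range(1, 8):
    T.append(2 * x * T[n] - T[n - 1])      # T_{n+1} = 2x T_n - T_{n-1}
for n, Tn in enumerate(T):
    assert np.allclose(Tn, np.cos(n * np.arccos(x)))
print("recurrence matches the closed form up to degree", len(T) - 1)
```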
# + [markdown] slideshow={"slide_type": "slide"}
# #### Accuracy of Chebyshev approximation
#
# Suppose $ f: [-1,1]\rightarrow R $ is $ C^k $ function for some
# $ k\ge 1 $, and let $ I_n $ be the degree $ n $ polynomial
# interpolation of $ f $ with nodes at zeros of $ T_{n+1}(x) $.
# Then
#
# $$
# \lVert f - I_n \rVert_{\infty} \le \left( \frac{2}{\pi} \log(n+1) +1 \right) \frac{(n-k)!}{n!}\left(\frac{\pi}{2}\right)^k \lVert f^{(k)}\rVert_{\infty}
# $$
#
# 📖 Judd (1988) Numerical Methods in Economics
#
# - achieves *best polynomial approximation in uniform norm*
# - works for smooth functions
# - easy to compute
# - but *does not* approximate $ f'(x) $ well
# + [markdown] slideshow={"slide_type": "slide"}
# #### General interval
#
# - Not hard to adapt the polynomials for the general interval
# $ [a,b] $ through linear change of variable
#
#
# $$
# y = 2\frac{x-a}{b-a}-1
# $$
#
# - Orthogonality holds with weights function with the same change of
# variable
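# A short sketch (illustrative, not from the lecture) inverting this change of variable to place Chebyshev nodes on a general interval $ [a,b] $:

```python
import numpy as np

def cheb_nodes(n, a, b):
    """n zeros of T_n mapped from [-1,1] to [a,b] by x = (y+1)(b-a)/2 + a."""
    y = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
    return (y + 1) * (b - a) / 2 + a

print(cheb_nodes(5, -5, 10))  # nodes cluster towards the interval endpoints
```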
# + [markdown] slideshow={"slide_type": "slide"}
# #### Chebyshev approximation algorithm
#
# 1. Given $ f(x) $ and $ [a,b] $
# 1. Compute Chebyshev interpolation nodes on $ [-1,1] $
# 1. Adjust nodes to $ [a,b] $ by change of variable, $ x_i $
# 1. Evaluate $ f $ at the nodes, $ f(x_i) $
# 1. Compute Chebyshev coefficients $ a_i = g\big(f(x_i)\big) $
# 1. Arrive at approximation
#
#
# $$
# f(x) = \sum_{i=0}^n a_i T_i(x)
# $$
# + hide-output=false slideshow={"slide_type": "slide"}
import numpy.polynomial.chebyshev as cheb
for degree, clr in (7,'b'),(9,'g'),(11,'m'):
    fi = cheb.Chebyshev.interpolate(func,degree,[-5,10])
    plot1(fi,fdata=(None,None),color=clr,label='Chebyshev with n=%d'%degree,extrapolation=True)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Multidimensional interpolation
#
# - there are multidimensional generalizations of all the methods
# - curse of dimensionality in the number of interpolation points as the number of dimensions increases
# - sparse Smolyak grids and adaptive sparse grids
# - irregular grids require computationally expensive triangulation in the general case
# - good application for machine learning!
#
#
# **Generally much harder!**
# + [markdown] slideshow={"slide_type": "slide"}
# ### Further learning resources
#
# - [https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html)
# - [https://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html](https://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html)
# - <NAME>’s thesis on Chebyshev approximation [http://fse.studenttheses.ub.rug.nl/15406/1/Marieke_Mudde_2017_EC.pdf](http://fse.studenttheses.ub.rug.nl/15406/1/Marieke_Mudde_2017_EC.pdf)
| 31_approximation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generate Random Networks Based on Delaunay Triangulations, Voronoi Tessellations, or Both
#
# A random network offers several advantages over the traditional Cubic arrangement: the topology is more 'natural' looking, and a wider pore size distribution can be achieved since pores are not constrained by the lattice spacing. Random networks can be tricky to generate however, since the connectivity between pores is difficult to determine or define. One surprisingly simple option is to use Delaunay triangulation to connect base points (which become pore centers) that are randomly distributed in space. The Voronoi tessellation is a complementary graph that arises directly from the Delaunay graph which also connects essentially randomly distributed points in space into a network. OpenPNM offers both of these types of network, plus the ability to create a network containing *both* including interconnections between Delaunay and Voronoi networks via the ```DelaunayVoronoiDual``` class. In fact, creating the dual, interconnected network is the starting point, and any unwanted elements can be easily trimmed.
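# To see the underlying idea without OpenPNM, a hedged sketch using plain SciPy: the Delaunay triangulation of random base points directly yields a pore connectivity (the point count here is arbitrary).

```python
import numpy as np
from scipy.spatial import Delaunay

pts = np.random.default_rng(0).uniform(size=(20, 2))  # random base points
tri = Delaunay(pts)
# every simplex (triangle) contributes three edges; dedupe shared ones
edges = {tuple(sorted(e)) for s in tri.simplices for e in zip(s, np.roll(s, 1))}
print(len(edges), "unique throats connecting", len(pts), "pores")
```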
import openpnm as op
import matplotlib.pyplot as plt
pn = op.network.DelaunayVoronoiDual(num_points=100, shape=[1, 1, 1])
print(pn)
# The above line of code is deceptively simple. The returned network (```pn```) contains a fully connected Delaunay network, its complementary Voronoi network, and interconnecting throats (or bonds) between each Delaunay pore (node) and its neighboring Voronoi pores. Such a highly complex network would be useful for modeling pore phase transport (e.g. diffusion) on one network (the Delaunay), solid phase transport (e.g. heat transfer) on the other (the Voronoi), and exchange of a species (e.g. heat) between the solid and void phases via the interconnecting bonds. Each pore and throat is labelled accordingly (e.g. 'pore.delaunay', 'throat.voronoi'), and the interconnecting throats are labelled 'throat.interconnect'. Moreover, pores and throats lying on the surface of the network are labelled 'surface'.
#
# A quick visualization of this network can be accomplished using OpenPNM's built-in graphing tool. The following shows only the Voronoi connections that lie on the surface of the cube:
Ts = pn.throats(['voronoi', 'boundary'], mode='and')
op.topotools.plot_connections(network=pn, throats=Ts)
# One central feature of these networks is the flat boundaries, which are essential when performing transport calculations since they provide well-defined control surfaces for calculating flux. These flat surfaces are achieved by reflecting the base points across each face prior to performing the tessellations.
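# The reflection trick just described can be sketched with plain NumPy (illustrative only; OpenPNM handles this internally): mirroring the base points across a face, here the x=0 plane, before tessellating forces the tessellation to be flat on that face.

```python
import numpy as np

pts = np.random.default_rng(1).uniform(0, 1, size=(5, 3))  # base points in the unit cube
reflected = pts * np.array([-1, 1, 1])                     # mirror image across the x = 0 face
both = np.vstack([pts, reflected])                         # points fed to the tessellation
print(both.shape)  # (10, 3)
```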
# Plotting the internal Voronoi throats with a different color gives a good idea of the topology:
fig, ax = plt.subplots()
Ts = pn.throats(['voronoi', 'boundary'], mode='and')
op.topotools.plot_connections(network=pn, throats=Ts, alpha=0.5, ax=ax)
Ts = pn.throats(['voronoi', 'internal'], mode='and')
op.topotools.plot_connections(network=pn, throats=Ts, c='g', ax=ax)
Ts = pn.throats(['voronoi', 'surface'], mode='and')
op.topotools.plot_connections(network=pn, throats=Ts, c='r', ax=ax)
# The green lines are internal connections, and red lines are connections between internal nodes and boundary nodes.
# ## Delaunay Network
#
# As the name suggests, the ```DelaunayVoronoiDual``` network contains both the Delaunay triangulation and the Voronoi tessellation within the same topology. It is simple to keep one network or the other by trimming away all of the unwanted network's pores, which also removes all connected throats, including the interconnections:
Ps = pn.pores(['voronoi'])
op.topotools.trim(network=pn, pores=Ps)
Ts = pn.throats(['surface'])
op.topotools.trim(network=pn, throats=Ts)
# NBVAL_IGNORE_OUTPUT
op.topotools.plot_connections(network=pn)
# ## Create Random Networks of Spherical or Cylindrical Shape
#
# Many porous materials come in spherical or cylindrical shapes, such as catalyst pellets. The ```DelaunayVoronoiDual``` network class can produce these geometries by specifying the ```shape``` argument in cylindrical [r, z] or spherical [r] coordinates:
cyl = op.network.DelaunayVoronoiDual(num_points=200, shape=[2, 5])
op.topotools.plot_connections(network=cyl, throats=cyl.throats('boundary'))
sph = op.network.DelaunayVoronoiDual(num_points=500, shape=[2])
op.topotools.plot_connections(network=sph, throats=sph.throats('surface'))
# Note that the cylindrical and spherical networks don't look very nice when too few points are used, so at least about 200 is recommended.
# ## Assign Pore Sizes to the Random Network
#
# With pore centers randomly distributed in space it becomes challenging to know what pore size to assign to each location. Assigning pores that are too large results in overlaps, which makes it impossible to properly account for porosity and transport lengths. OpenPNM includes a Geometry model called ```largest_sphere``` that solves this problem. Let's assign the largest possible pore size to each Voronoi node in the ```sph``` network just created:
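The idea can be sketched in plain Python. As a simplification (not OpenPNM's actual model, which also accounts for sizes already assigned to neighbors), take each pore's diameter to be the distance to its nearest neighbor, so that spheres of radius d/2 can never overlap:

```python
import math

def largest_sphere_diameters(points):
    """For each point, return the distance to its nearest neighbor.

    Using this as the pore diameter guarantees that spheres of radius
    d/2 centered on any two points touch at most, but never overlap.
    """
    diams = []
    for i, p in enumerate(points):
        nearest = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        diams.append(nearest)
    return diams

points = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
print(largest_sphere_diameters(points))  # [1.0, 1.0, 2.0]
```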
Ps = sph.pores('voronoi')
Ts = sph.throats('voronoi')
geom = op.geometry.GenericGeometry(network=sph, pores=Ps, throats=Ts)
mod = op.models.geometry.pore_size.largest_sphere
geom.add_model(propname='pore.diameter', model=mod)
mod = op.models.geometry.throat_length.ctc
geom.add_model(propname='throat.length', model=mod)
mod = op.models.geometry.throat_size.from_neighbor_pores
geom.add_model(propname='throat.diameter', model=mod)
geom.show_hist(['pore.diameter', 'throat.length', 'throat.diameter'])
# The resulting geometrical properties can be viewed with ```geom.show_hist(...)``` as above (note that each realization will differ slightly).
| examples/notebooks/networks/generation/random_networks_based_on_delaunay_and_voronoi_tessellations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### How to run pulsequantum from ipython notebook
# +
# %matplotlib qt
import qcodes as qc
from pulsequantum.mainwindow import pulsetable
#To use "Show gateplot" pulsequantum needs to know your QCoDeS db Location
#qc.config["core"]["db_location"] = 'C:\\Users\\rbcma\\repos\\PulsedMeasurementCode\\Amber.db'
# +
#mypulsetable = pulsetable()
# -
| docs/example_notebooks/How_to_run_pulsequantum_from_ipython.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from IPython.display import display
feature_dict = {i:label for i,label in zip(range(4),["sepal length in cm",
"sepal width in cm",
"petal length in cm",
"petal width in cm"])}
display(feature_dict)
from sklearn.datasets import load_iris
iris = load_iris()
df = pd.DataFrame(np.concatenate((iris["data"], iris["target"].reshape(-1, 1)), axis=1), columns=iris.feature_names + ["class label"])
df.tail()
df["class label"] = df["class label"].where(df["class label"] != 0.0, other="iris-setosa").where(df["class label"] != 1.0, other="iris-versicolor").where(df["class label"] != 2.0, other="iris-virginica")
df.dtypes
from sklearn.preprocessing import LabelEncoder
# +
X = df.iloc[:, [0,1,2,3]].values
y = df["class label"].values
lenc = LabelEncoder()
y = lenc.fit_transform(y)
# -
label_dict = {0:"Setosa", 1:"Versicolor", 2:"Virginica"}
attr_dict = {0:"sepal length", 1:"sepal width", 2:"petal length", 3:"petal width"}
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# +
plt.pie([X[y==i].shape[0] for i in range(3)],labels=[label_dict[i] for i
in range(3)], shadow=True, startangle=90, autopct="%1.1f%%")
plt.title("Class distribution of the 3 different flower species")
plt.legend(loc="lower right")
plt.axis("equal")
plt.tight_layout()
# -
| other/pattern_classification/matplotlib_viz_gallery.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Seldon Kafka Integration Example with CIFAR10 Model
#
# In this example we will run SeldonDeployments for a CIFAR10 Tensorflow model which take their inputs from a Kafka topic and push their outputs to a Kafka topic. We will experiment with both REST and gRPC Seldon graphs. For REST we will load our input topic with Tensorflow JSON requests and for gRPC we will load Tensorflow PredictRequest protoBuffers.
# ## Requirements
#
# * [Install gsutil](https://cloud.google.com/storage/docs/gsutil_install)
#
# !pip install -r requirements.txt
# ## Setup Kafka
# Install Strimzi on cluster
# !helm repo add strimzi https://strimzi.io/charts/
# !helm install my-release strimzi/strimzi-kafka-operator
# Set the following depending on whether you are running a local Kind cluster or a cloud-based cluster.
clusterType="kind"
#clusterType="cloud"
if clusterType == "kind":
# !kubectl apply -f cluster-kind.yaml
else:
# !kubectl apply -f cluster-cloud.yaml
# Get broker endpoint.
if clusterType == "kind":
# res=!kubectl get service my-cluster-kafka-external-bootstrap -n default -o=jsonpath='{.spec.ports[0].nodePort}'
port=res[0]
# %env BROKER=172.17.0.2:$port
else:
# res=!kubectl get service my-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
if len(res) == 1:
hostname=res[0]
# %env BROKER=$hostname:9094
else:
# res=!kubectl get service my-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
ip=res[0]
# %env BROKER=$ip:9094
# %%writefile topics.yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
name: cifar10-rest-input
labels:
strimzi.io/cluster: "my-cluster"
spec:
partitions: 2
replicas: 1
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
name: cifar10-rest-output
labels:
strimzi.io/cluster: "my-cluster"
spec:
partitions: 2
replicas: 1
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
name: cifar10-grpc-input
labels:
strimzi.io/cluster: "my-cluster"
spec:
partitions: 2
replicas: 1
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
name: cifar10-grpc-output
labels:
strimzi.io/cluster: "my-cluster"
spec:
partitions: 2
replicas: 1
# !kubectl apply -f topics.yaml
# ## Install Seldon
#
# * [Install Seldon](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html)
# * [Follow our docs to install the Grafana analytics](https://docs.seldon.io/projects/seldon-core/en/latest/analytics/analytics.html).
# ## Download Test Request Data
# We have two example datasets containing 50,000 requests in tensorflow serving format for CIFAR10: one in JSON format and one as length-encoded protobuffers.
# !gsutil cp gs://seldon-datasets/cifar10/requests/tensorflow/cifar10_tensorflow.json.gz cifar10_tensorflow.json.gz
# !gunzip cifar10_tensorflow.json.gz
# !gsutil cp gs://seldon-datasets/cifar10/requests/tensorflow/cifar10_tensorflow.proto cifar10_tensorflow.proto
# ## Test CIFAR10 REST Model
# Upload tensorflow serving rest requests to kafka. This may take some time depending on your network connection.
# !python ../../../util/kafka/test-client.py produce $BROKER cifar10-rest-input --file cifar10_tensorflow.json
# res=!kubectl get service my-cluster-kafka-external-bootstrap -o=jsonpath='{.spec.clusterIP}'
ip=res[0]
# %env BROKER_CIP=$ip
# %%writefile cifar10_rest.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
spec:
protocol: tensorflow
transport: rest
serverType: kafka
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
- --enable_batching
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
svcOrchSpec:
env:
- name: KAFKA_BROKER
value: BROKER_IP
- name: KAFKA_INPUT_TOPIC
value: cifar10-rest-input
- name: KAFKA_OUTPUT_TOPIC
value: cifar10-rest-output
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
name: model
replicas: 1
# !cat cifar10_rest.yaml | sed s/BROKER_IP/$BROKER_CIP:9094/ | kubectl apply -f -
# Looking at the metrics dashboard for Seldon you should see the throughput we are getting. For a single replica on GKE with n1-standard-4 nodes we see roughly 150 requests per second being processed.
#
# 
# !kubectl delete -f cifar10_rest.yaml
# ## Test CIFAR10 gRPC Model
# Upload tensorflow serving gRPC requests to kafka. This is a file of `tensorflow.serving.PredictRequest` protobuffers ([defn](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/predict.proto)). Each binary protobuffer is prefixed by the number of bytes. Our test-client python script reads them and sends them to our topic. This may take some time depending on your network connection.
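The length-prefixed framing can be illustrated in plain Python. This sketch assumes a 4-byte big-endian byte count before each record (the exact prefix format used by the test-client may differ) and demonstrates only the framing, not the protobuf parsing:

```python
import io
import struct

def read_length_prefixed(stream):
    """Yield raw payloads from a stream where each record is preceded
    by a 4-byte big-endian byte count (an assumed framing)."""
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return
        (size,) = struct.unpack(">I", header)
        yield stream.read(size)

# Build a two-record buffer and read it back
buf = io.BytesIO()
for payload in (b"hello", b"proto-bytes"):
    buf.write(struct.pack(">I", len(payload)))
    buf.write(payload)
buf.seek(0)
print(list(read_length_prefixed(buf)))  # [b'hello', b'proto-bytes']
```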
# !python ../../../util/kafka/test-client.py produce $BROKER cifar10-grpc-input --file cifar10_tensorflow.proto --proto_name tensorflow.serving.PredictRequest
# res=!kubectl get service my-cluster-kafka-external-bootstrap -o=jsonpath='{.spec.clusterIP}'
ip=res[0]
# %env BROKER_CIP=$ip
# %%writefile cifar10_grpc.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
spec:
protocol: tensorflow
transport: grpc
serverType: kafka
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
- --enable_batching
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8500
name: http
svcOrchSpec:
env:
- name: KAFKA_BROKER
value: BROKER_IP
- name: KAFKA_INPUT_TOPIC
value: cifar10-grpc-input
- name: KAFKA_OUTPUT_TOPIC
value: cifar10-grpc-output
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8500
name: model
replicas: 2
# !cat cifar10_grpc.yaml | sed s/BROKER_IP/$BROKER_CIP:9094/ | kubectl apply -f -
# Looking at the metrics dashboard for Seldon you should see the throughput we are getting. For a single replica on GKE with n1-standard-4 nodes we see around 220 requests per second being processed.
#
# 
# !kubectl delete -f cifar10_grpc.yaml
| examples/kafka/cifar10/cifar10_kafka.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Strings
# Strings are used in Python to record text information, such as a name. Strings in Python are actually a *sequence*, which basically means Python keeps track of every element in the string as a sequence. For example, Python understands the string 'hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter).
#
# This idea of a sequence is an important one in Python and we will touch upon it later on in the future.
#
# In this lecture we'll learn about the following:
#
# 1.) Creating Strings
# 2.) Printing Strings
# 3.) Differences in Printing in Python 2 vs 3
# 4.) String Indexing and Slicing
# 5.) String Properties
# 6.) String Methods
# 7.) Print Formatting
# ## Creating a String
# To create a string in Python you need to use either single quotes or double quotes. For example:
# Single word
'hello'
# Entire phrase
'This is also a string'
# We can also use double quote
"String built with double quotes"
# Be careful with quotes!
' I'm using single quotes, but will create an error'
# The reason for the error above is that the single quote in I'm ended the string. You can use combinations of double and single quotes to get the complete statement.
"Now I'm ready to use the single quotes inside a string!"
# Now let's learn about printing strings!
# ## Printing a String
#
# In a Jupyter notebook, a cell containing just a string will automatically output it, but the correct way to display strings in your output is by using a print statement.
# We can simply declare a string
'Hello World'
# note that we can't output multiple strings this way
'Hello World 1'
'Hello World 2'
# We can use a print statement to print a string.
print 'Hello World 1'
print 'Hello World 2'
print 'Use \n to print a new line'
print '\n'
print 'See what I mean?'
# ### <font color='red'>Python 3 Alert!</font>
# Something to note. In Python 3, print is a function, not a statement. So you would print statements like this:
# print('Hello World')
#
# If you want to use this functionality in Python 2, you can import it from the __future__ module.
#
# **A word of caution, after importing this you won't be able to choose the print statement method anymore. So pick whichever one you prefer depending on your Python installation and continue on with it.**
# +
# To use print function from Python 3 in Python 2
from __future__ import print_function
print('Hello World')
# -
# ## String Basics
# We can also use a function called len() to check the length of a string!
len('Hello World')
# ## String Indexing
# We know strings are a sequence, which means Python can use indexes to call parts of the sequence. Let's learn how this works.
#
# In Python, we use brackets [] after an object to call its index. We should also note that indexing starts at 0 for Python. Let's create a new object called s and then walk through a few examples of indexing.
# Assign s as a string
s = 'Hello World'
#Check
s
# Print the object
print(s)
# Let's start indexing!
# Show first element (in this case a letter)
s[0]
s[1]
s[2]
# We can use a : to perform *slicing* which grabs everything up to a designated point. For example:
# Grab everything past the first term all the way to the length of s which is len(s)
s[1:]
# Note that there is no change to the original s
s
# Grab everything UP TO the 3rd index
s[:3]
# Note the above slicing. Here we're telling Python to grab everything from 0 up to 3. It doesn't include the 3rd index. You'll notice this a lot in Python, where statements are usually in the context of "up to, but not including".
#Everything
s[:]
# We can also use negative indexing to go backwards.
# Last letter (one index behind 0 so it loops back around)
s[-1]
# Grab everything but the last letter
s[:-1]
# We can also use index and slice notation to grab elements of a sequence by a specified step size (the default is 1). For instance we can use two colons in a row and then a number specifying the frequency to grab elements. For example:
# Grab everything, but go in steps size of 1
s[::1]
# Grab everything, but go in step sizes of 2
s[::2]
# We can use this to print a string backwards
s[::-1]
# ## String Properties
# It's important to note that strings have an important property known as immutability. This means that once a string is created, the elements within it cannot be changed or replaced. For example:
s
# Let's try to change the first letter to 'x'
s[0] = 'x'
# Notice how the error tells us directly what we can't do, change the item assignment!
#
# Something we can do is concatenate strings!
s
# Concatenate strings!
s + ' concatenate me!'
# We can reassign s completely though!
s = s + ' concatenate me!'
print(s)
s
# We can use the multiplication symbol to create repetition!
letter = 'z'
letter*10
# ## Basic Built-in String methods
#
# Objects in Python usually have built-in methods. These methods are functions inside the object (we will learn about these in much more depth later) that can perform actions or commands on the object itself.
#
# We call methods with a period and then the method name. Methods are in the form:
#
# object.method(parameters)
#
# Where parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. Later on we will be creating our own objects and functions!
#
# Here are some examples of built-in methods in strings:
s
# Upper Case a string
s.upper()
# Lower case
s.lower()
# Split a string by blank space (this is the default)
s.split()
# Split by a specific element (doesn't include the element that was split on)
s.split('W')
# There are many more methods than the ones covered here. Visit the advanced String section to find out more!
# ## Print Formatting
#
# We can use the .format() method to add formatted objects to printed string statements.
#
# The easiest way to show this is through an example:
'Insert another string with curly brackets: {}'.format('The inserted string')
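Placeholders can also be numbered or named, which lets you reuse or reorder the inserted objects:

```python
# Numbered placeholders can repeat the same argument
print('{0} {1} {0}'.format('fox', 'jumps'))  # fox jumps fox

# Named placeholders make long format strings easier to read
print('{name} is {age} years old'.format(name='Sam', age=30))  # Sam is 30 years old
```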
# We will revisit this string formatting topic in later sections when we are building our projects!
# ## Next up: Lists!
| notebooks/Complete-Python-Bootcamp-master/Strings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/wel51x/DS-Sprint-01-Dealing-With-Data/blob/master/module3-basicdatavisualizations/LS_DS_113_Plotting_Playground.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="IG1v46jnGkax" colab_type="code" outputId="1b61dcea-3109-4824-a2d6-a136b1571720" colab={"base_uri": "https://localhost:8080/", "height": 472}
# https://matplotlib.org/gallery/lines_bars_and_markers/barh.html#sphx-glr-gallery-lines-bars-and-markers-barh-py
import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
plt.rcdefaults()
fig, ax = plt.subplots()
# Example data
people = ('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
y_pos = np.arange(len(people))
performance = 3 + 10 * np.random.rand(len(people))
error = np.random.rand(len(people))
ax.barh(y_pos, performance, xerr=error, align='center',
color='green', ecolor='black')
ax.set_yticks(y_pos)
ax.set_yticklabels(people)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Performance')
ax.set_title('How fast do you want to go today?')
plt.show()
# + id="DWcnKAt4H9PT" colab_type="code" outputId="d14b5d40-bacf-485e-c7e3-927b0919c3ed" colab={"base_uri": "https://localhost:8080/", "height": 432}
# Adapted to piechart
# https://matplotlib.org/gallery/pie_and_polar_charts/pie_features.html#sphx-glr-gallery-pie-and-polar-charts-pie-features-py
import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
plt.rcdefaults()
fig, ax = plt.subplots()
# Example data
people = ('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
performance = 3 + 10 * np.random.rand(len(people))
error = np.random.rand(len(people))
ax.pie(performance, labels=people)
ax.set_title('How fast do you want to go today?')
plt.show()
# + id="Y26IktTfIZmO" colab_type="code" outputId="2344c79d-474a-49bf-f5be-7b55601f14ac" colab={"base_uri": "https://localhost:8080/", "height": 487}
# https://matplotlib.org/gallery/lines_bars_and_markers/scatter_demo2.html#sphx-glr-gallery-lines-bars-and-markers-scatter-demo2-py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
# Load a numpy record array from yahoo csv data with fields date, open, close,
# volume, adj_close from the mpl-data/example directory. The record array
# stores the date as an np.datetime64 with a day unit ('D') in the date column.
with cbook.get_sample_data('goog.npz') as datafile:
price_data = np.load(datafile)['price_data'].view(np.recarray)
price_data = price_data[-250:] # get the most recent 250 trading days
delta1 = np.diff(price_data.adj_close) / price_data.adj_close[:-1]
# Marker size in units of points^2
volume = (15 * price_data.volume[:-2] / price_data.volume[0])**2
close = 0.003 * price_data.close[:-2] / 0.003 * price_data.open[:-2]
fig, ax = plt.subplots()
ax.scatter(delta1[:-1], delta1[1:], c=close, s=volume, alpha=0.5)
ax.set_xlabel(r'$\Delta_i$', fontsize=15)
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=15)
ax.set_title('Volume and percent change')
ax.grid(True)
fig.tight_layout()
plt.show()
# + id="DaEiVQD2K0T1" colab_type="code" outputId="094a3011-9db7-4135-ab17-446fd4505cc6" colab={"base_uri": "https://localhost:8080/", "height": 413}
# https://matplotlib.org/gallery/mplot3d/scatter3d.html#sphx-glr-gallery-mplot3d-scatter3d-py
# This import registers the 3D projection, but is otherwise unused.
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
def randrange(n, vmin, vmax):
'''
Helper function to make an array of random numbers having shape (n, )
with each number distributed Uniform(vmin, vmax).
'''
return (vmax - vmin)*np.random.rand(n) + vmin
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
n = 100
# For each set of style and range settings, plot n random points in the box
# defined by x in [23, 32], y in [0, 100], z in [zlow, zhigh].
for c, m, zlow, zhigh in [('r', 'o', -50, -25), ('b', '^', -30, -5)]:
xs = randrange(n, 23, 32)
ys = randrange(n, 0, 100)
zs = randrange(n, zlow, zhigh)
ax.scatter(xs, ys, zs, c=c, marker=m)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
# + id="02mt_9taK6FT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 361} outputId="59d68327-55d2-47e9-a19d-e978aa677dc6"
# Next three plots are for assignment
# 1 Bar Chart
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data_url = ('https://archive.ics.uci.edu/ml/machine-learning-databases/'
'haberman/haberman.data')
cols = [
    'Age',
    'Year of Operation',
    'Positive Nodes',
    'Survived 5 Years'
]
df = pd.read_csv(data_url, header=None, names=cols)
df['Year of Operation'] += 1900
df['Survived 5 Years'] -= 1
#print(df)
#df['Survived 5 Years'] == 1
plt.xlabel('Year of Operation')
plt.ylabel('Age (plus 55)')
plt.bar(df['Year of Operation'], df['Age'] - 55, color='orange')
plt.show()
# + id="uO79j3IO_j3I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 401} outputId="7370dc2c-4eea-4835-b700-b18c1f3a2370"
# 2 Scatter Chart
fig, ax = plt.subplots()
ax.set_xlabel('Age')
ax.set_ylabel('Number of Positive Nodes')
ax.scatter(df['Age'], df['Positive Nodes'], alpha=0.5)
ax.grid(True)
fig.tight_layout()
plt.show()
# + id="_l9lAUs1_7Ka" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1037} outputId="44575d83-b0a3-42dc-944e-4d3672f8af5d"
# 3 Pie Chart (yes!)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 18.5)
ax.pie(df['Survived 5 Years'], labels=df['Age'])
plt.show()
| module3-basicdatavisualizations/LS_DS_113_Plotting_Playground.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plan for Python work
#
# 1) My _suggestion_ is to proceed as before and work through the notebook, switching every 5 or so minutes, discussing in each cell what the lines mean. You may do something differently if you both agree.
#
# 2) I provide a number of commands below (and prompts); some of the things you need to figure out yourself, and some of the tasks I have completed for you.
#
# 3) Make sure you understand how much time you have (ask me if you don't know) and plan accordingly. There is a lot of information in here!
#
# 4) Plenty of suggestions at the bottom for more things to try - you should take a look and make sure you can do all of these things...
#
# #### You may need to `conda install` some stuff, e.g.
# `conda install xlrd`
# +
#A bunch of libraries and packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import xlrd
from sklearn import linear_model
from pandas.plotting import scatter_matrix
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import Ridge
# %matplotlib inline
# +
# UCI ML database - energy efficiency
# Database of many ML data available here: https://archive.ics.uci.edu/ml/
UCI_energy = pd.read_excel('https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx')
# definition of dataframes
# X1 Relative Compactness
# X2 Surface Area
# X3 Wall Area
# X4 Roof Area
# X5 Overall Height
# X6 Orientation
# X7 Glazing Area
# X8 Glazing Area Distribution
# y1 Heating Load
# y2 Cooling Load
# -
# ### Take a moment to rename the columns in the data frame
#
# Also, head, tail, describe and scatterplot the data as a means of basic data exploration.
UCI_energy = UCI_energy.rename(columns={'X1':'Relative Compactness', 'X2':'Surface Area', 'X3':'Wall Area', 'X4':'Roof Area', 'X5':'Overall Height',
'X6':'Orientation', 'X7':'Glazing Area', 'X8':'Glazing Area Distribution', 'Y1':'Heating Load',
'Y2':'Cooling Load'})
UCI_energy.head()
# #### Now make a test-train split
# +
# This is a naive validation set approach. Please understand and briefly discuss this is just for teaching
# What would you do in the real world based on our bootstrap/resampling lessons?
train,test = train_test_split(UCI_energy, test_size=0.05, random_state=1010)
# have you had a look at the data yet? quickly do so before moving on...
# -
# # Part 1: Multiple linear regression on X1-X8 predicting Y1
# If you took the time to rename your columns, good for you. Now you need to fix up the code below. That will make you read it carefully!
#
# BTW, when you do something like rename a variable across all your code, that is called refactoring. It is dangerous and it often is good to use something like PyCharm's `refactor` tool.
# +
# train linear model
MLR=linear_model.LinearRegression()
MLR.fit(train[train.columns.values[0:8]],train[train.columns.values[8]])
# make predictions on test and train set
trainpred=MLR.predict(train[train.columns.values[0:8]])
testpred=MLR.predict(test[train.columns.values[0:8]])
#make parity plot
plt.figure(figsize=(4,4))
plt.xlim([0,50]);
plt.ylim([0,50]);
plt.scatter(train[train.columns.values[8]],trainpred)
plt.scatter(test[train.columns.values[8]],testpred,color='r')
plt.plot([0,50],[0,50],lw=4,color='black')
#calculate the test and train error
print("Train error",mean_squared_error(train[train.columns.values[8]],trainpred))
print("Test error",mean_squared_error(test[train.columns.values[8]],testpred))
# -
# # Part 2: Ridge Regression (same data as Part 1)
#
# * The Ridge coefficients minimize $RSS + \lambda \sum_{j=1}^{p}\beta_j^2$
# * There is an additional **penalty** in error for having nonzero coefficients!
# * Note: Eq 6.5 in ISLR shows the tuning parameter as $\lambda$, it is $\alpha$ in SKLearn
# * Goal here: train models as a function of the regularization parameter
# * The X's should be normalized as in Eq 6.6, there is a normalization feature, but we will do it manually using $x_{ij}=\frac{x_{ij}}{s_j}$
# * I suggest on your own you test out what normalization in Ridge does
# * Some methods in sklearn also do automatic selection of shrinkage coefficient! Cool!
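For intuition about the shrinkage, consider a single centered feature with no intercept: the ridge coefficient then has the closed form beta = sum(x*y) / (sum(x^2) + lambda), so it shrinks smoothly toward zero as lambda grows. A pure-Python sketch:

```python
def ridge_coef_1d(x, y, lam):
    """Closed-form ridge coefficient for one centered feature (no intercept):
    beta = sum(x*y) / (sum(x^2) + lambda)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [-2.0, -1.0, 1.0, 2.0]
y = [-4.0, -2.0, 2.0, 4.0]       # y = 2x exactly
print(ridge_coef_1d(x, y, 0.0))   # 2.0 (the OLS solution)
print(ridge_coef_1d(x, y, 10.0))  # 1.0 (shrunk toward zero)
```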
#normalized data for Ridge / LASSO
train_normalized=train/train.std()
test_normalized=test/test.std()
# ## 2-1 Example of single instance of RR
#
heat_ridge=Ridge()
a=1e0
heat_ridge.set_params(alpha=a)
heat_ridge.fit(train_normalized[train.columns.values[0:8]],train_normalized[train.columns.values[8]])
# +
print (mean_squared_error(train_normalized[train.columns.values[8]],heat_ridge.predict(
train_normalized[train.columns.values[0:8]])))
print (mean_squared_error(test_normalized[train.columns.values[8]],heat_ridge.predict(
test_normalized[train.columns.values[0:8]])))
# -
# ## 2-2 Example of searching the $\alpha$ space in RR
# +
# RR vs lambda (based on sklearn tutorial)
coefs = []
trainerror = []
testerror = []
# do you know what is happening here?
lambdas = np.logspace(-6,6,200)
model=Ridge()
# loop over lambda values (strength of regularization)
for l in lambdas:
model.set_params(alpha=l)
model.fit(train_normalized[train.columns.values[0:8]],train_normalized[train.columns.values[8]])
coefs.append(model.coef_)
trainerror.append(mean_squared_error(train_normalized[train.columns.values[8]],model.predict(
train_normalized[train.columns.values[0:8]])))
testerror.append(mean_squared_error(test_normalized[train.columns.values[8]],model.predict(
test_normalized[train.columns.values[0:8]])))
# -
# what is being plotted here?
plt.figure(figsize=(10,3))
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel(r'$\lambda$')
plt.ylabel('coefs')
plt.title(r'RR coefs vs $\lambda$')
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.plot(lambdas,testerror,label='test error')
plt.xscale('log')
plt.xlabel(r'$\lambda$')
plt.ylabel('error')
plt.legend(loc=1)
plt.title(r'error vs $\lambda$')
# ### RR questions
#
# 1) Explain to each other what is happening in these two plots
# 2) Why does the blue curve have a minimum at the smallest $\lambda$ value?
# # Part 3: LASSO regression (same data as Part 1)
#
# * The lasso improves over ridge by also providing a variable selection tool!
# * The lasso minimizer is $RSS + \lambda \sum_{j=1}^{p}\lvert\beta_j\rvert$
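The variable selection comes from the absolute-value penalty: for a single orthonormal feature, the lasso solution is the soft-thresholding of the OLS coefficient, which becomes exactly zero once lambda is large enough (ridge shrinks but never reaches zero). A small illustration:

```python
def soft_threshold(beta_ols, lam):
    """Soft-thresholding operator: the 1-D lasso solution for an
    orthonormal feature. Shrinks by lam and clips to exactly zero."""
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0

print(soft_threshold(2.5, 1.0))   # 1.5  (shrunk)
print(soft_threshold(0.4, 1.0))   # 0.0  (selected out of the model)
print(soft_threshold(-3.0, 1.0))  # -2.0
```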
# +
# also based on sklearn tutorials
# what the hell is happening in this cell?
coefs = []
trainerror = []
testerror = []
lambdas = np.logspace(-6,6,200)
model=linear_model.Lasso()
# loop over lambda values (strength of regularization)
for l in lambdas:
model.set_params(alpha=l,max_iter=1e6)
model.fit(train_normalized[train.columns.values[0:8]],train_normalized[train.columns.values[8]])
coefs.append(model.coef_)
trainerror.append(mean_squared_error(train_normalized[train.columns.values[8]],model.predict(
train_normalized[train.columns.values[0:8]])))
testerror.append(mean_squared_error(test_normalized[train.columns.values[8]],model.predict(
test_normalized[train.columns.values[0:8]])))
# +
plt.figure(figsize=(10,3))
#plt.locator_params(nbins=5)
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel(r'$\lambda$')
plt.ylabel('coefs')
plt.title(r'LASSO coefs vs $\lambda$')
#plt.xlim(1e-4,1e0)
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.plot(lambdas,testerror,label='test error')
plt.xscale('log')
plt.xlabel(r'$\lambda$')
plt.ylabel('error')
#plt.xlim(1e-4,1e0)
#plt.ylim(0,0.5)
plt.legend(loc=1)
plt.title(r'error vs $\lambda$')
# -
# ### Other things to consider if you have more time
#
# * Note we did not scale the features in the MLR; try it out and verify the final error doesn't change!
# * Make sure you understand how to make _predictions_ with supervised learning models that are trained on scaled/normalized data
# * Plot the residuals and verify if errors are distributed normally
# * Make a parity plot including the predictions from ridge and LASSO
# * Compare errors between all three
# * Explore the effect of training/testing split
# * Look at the shrinkage/regularization situation when predicting Y2 vs Y1...
#
| Wi19_content/DSMCER/L8_SubsetRegularization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/horizontal-primary-light.png" alt="he-black-box" width="600"/>
#
#
# # Homomorphic Encryption using Duet: Data Owner
# ## Tutorial 2: Encrypted image evaluation
#
#
# Welcome!
# This tutorial will show you how to evaluate encrypted images using Duet and TenSEAL. This notebook illustrates the Data Owner's view of the operations.
#
# We recommend going through Tutorial 0 and 1 before trying this one.
# ### Setup
#
# All modules are imported here, make sure everything is installed by running the cell below.
# +
import os
import requests
import syft as sy
import tenseal as ts
from torchvision import transforms
from random import randint
import numpy as np
from PIL import Image
from matplotlib.pyplot import imshow
import torch
from syft.grid.client.client import connect
from syft.grid.client.grid_connection import GridHTTPConnection
from syft.core.node.domain.client import DomainClient
sy.load_lib("tenseal")
# -
# ## Connect to PyGrid
#
# Connect to PyGrid Domain server.
client = connect(
url="http://localhost:5000", # Domain Address
credentials={"email":"<EMAIL>", "password":"<PASSWORD>"},
conn_type= GridHTTPConnection, # HTTP Connection Protocol
client_type=DomainClient) # Domain Client type
# ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 1 : Now STOP and run the Data Scientist notebook until the same checkpoint.
# ### Data Owner helpers
# +
# Create the TenSEAL security context
def create_ctx():
    """Helper for creating the CKKS context.
    CKKS params:
    - Polynomial degree: 8192.
    - Coefficient modulus size: [40, 21, 21, 21, 21, 21, 21, 40].
    - Scale: 2 ** 21.
    - The setup requires the Galois keys for evaluating the convolutions.
    """
    poly_mod_degree = 8192
    coeff_mod_bit_sizes = [40, 21, 21, 21, 21, 21, 21, 40]
    ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_mod_degree, -1, coeff_mod_bit_sizes)
    ctx.global_scale = 2 ** 21
    ctx.generate_galois_keys()
    return ctx
def download_images():
    os.makedirs("data/mnist-samples", exist_ok=True)
    url = "https://raw.githubusercontent.com/OpenMined/TenSEAL/master/tutorials/data/mnist-samples/img_{}.jpg"
    path = "data/mnist-samples/img_{}.jpg"
    for idx in range(6):
        img_url = url.format(idx)
        img_path = path.format(idx)
        r = requests.get(img_url)
        with open(img_path, 'wb') as f:
            f.write(r.content)
# Sample an image
def load_input():
    download_images()
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
    )
    idx = randint(1, 5)
    img_name = "data/mnist-samples/img_{}.jpg".format(idx)
    img = Image.open(img_name)
    return transform(img).view(28, 28).tolist(), img
# Helper for encoding the image
def prepare_input(ctx, plain_input):
    enc_input, windows_nb = ts.im2col_encoding(ctx, plain_input, 7, 7, 3)
    assert windows_nb == 64
    return enc_input
# -
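# The `im2col` encoding used in `prepare_input` packs each sliding convolution window of the image into one row, so that a convolution becomes a matrix operation on the ciphertext. A plain-NumPy sketch of the same windowing (no encryption involved) shows where the `windows_nb == 64` assertion comes from: a 28x28 image with 7x7 windows at stride 3 yields 8x8 = 64 windows.

```python
import numpy as np

def im2col(img, kh, kw, stride):
    """Stack each kh-by-kw sliding window (at the given stride) as a row."""
    h, w = img.shape
    rows = (h - kh) // stride + 1
    cols = (w - kw) // stride + 1
    windows = [
        img[i * stride:i * stride + kh, j * stride:j * stride + kw].ravel()
        for i in range(rows)
        for j in range(cols)
    ]
    return np.stack(windows)

img = np.arange(28 * 28, dtype=float).reshape(28, 28)
cols = im2col(img, 7, 7, 3)
print(cols.shape)  # (64, 49)
```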
# ### Prepare the context
context = create_ctx()
# ### Sample and encrypt an image
# +
image, orig = load_input()
encrypted_image = prepare_input(context, image)
print("Encrypted image ", encrypted_image)
print("Original image ")
imshow(np.asarray(orig))
# -
ctx_ptr = context.send(client, searchable=True, tags=["context"])
enc_image_ptr = encrypted_image.send(client, searchable=True, tags=["enc_image"])
client.store.pandas
# ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 2 : Now STOP and run the Data Scientist notebook until the same checkpoint.
# ### Approve the requests
client.requests.pandas
# accept both requests; the queue re-indexes after each accept, so
# requests[0] is first the context request, then the image request
client.requests[0].accept()
client.requests[0].accept()
client.requests.pandas
# ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 3 : Now STOP and run the Data Scientist notebook until the same checkpoint.
# ### Retrieve and decrypt the evaluation result
# +
result = client.store["result"].get(delete_obj=False)
result.link_context(context)
result = result.decrypt()
# -
# ### Run the activation and retrieve the label
# +
probs = torch.softmax(torch.tensor(result), 0)
label_max = torch.argmax(probs)
print("Maximum probability for label {}".format(label_max))
# -
# ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 4 : Well done!
# # Congratulations!!! - Time to Join the Community!
#
# Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
#
# ### Star PySyft and TenSEAL on GitHub
#
# The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
#
# - [Star PySyft](https://github.com/OpenMined/PySyft)
# - [Star TenSEAL](https://github.com/OpenMined/TenSEAL)
#
# ### Join our Slack!
#
# The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org). #lib_tenseal and #code_tenseal are the main channels for the TenSEAL project.
#
# ### Donate
#
# If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
#
# [OpenMined's Open Collective Page](https://opencollective.com/openmined)
| examples/pygrid/homomorphic-encryption/Tutorial_2_TenSEAL_Syft_Data_Owner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## View movie rating data
import pandas
import webbrowser
import os
# Read the dataset into a data table using Pandas
data_table = pandas.read_csv("movie_ratings_data_set.csv")
# Create a web page view of the data for easy viewing
html = data_table[0:100].to_html()
# Save the html to a temporary file
with open("data.html", "w") as f:
    f.write(html)
# Open the web page in our web browser
full_filename = os.path.abspath("data.html")
webbrowser.open("file://{}".format(full_filename))
# ## Movie List
# +
# Read the dataset into a data table using Pandas
data_table = pandas.read_csv("movies.csv", index_col="movie_id")
# Create a web page view of the data for easy viewing
html = data_table.to_html()
# Save the html to a temporary file
with open("movie_list.html", "w") as f:
    f.write(html)
# Open the web page in our web browser
full_filename = os.path.abspath("movie_list.html")
webbrowser.open("file://{}".format(full_filename))
| Codes/Machine Learning and AI Foundations - Recommendations/2. View Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Xarray Tutorial
#
# <div class="alert-info">
#
# ### Overview
#
# * **teaching:** 30 minutes
# * **exercises:** 0
# * **questions:**
# * What is xarray designed to do?
# * How do I create an xarray Dataset?
# * How does indexing work with xarray?
# * How do I run computations on xarray datasets?
# * What are ways to plot xarray datasets?
# </div>
#
# ### Table of contents
# 1. [**Xarray primer**](#Xarray-primer)
# 1. [**Creating data**](#Creating-data)
# 1. [**Loading data**](#Loading-data)
# 1. [**Selecting data**](#Selecting-data)
# 1. [**Basic computations**](#Basic-Computations)
# 1. [**Advanced computations**](#Advanced-Computations)
# 1. [**ENSO exercise**](#ENSO-exercise)
# ## Xarray primer
# We've seen that [Pandas](https://pandas.pydata.org/pandas-docs/stable/) and [Geopandas](http://geopandas.org) are excellent libraries for analyzing tabular "labeled data". [Xarray](http://xarray.pydata.org/en/stable/) is designed to make it easier to work with _labeled multidimensional data_. By _multidimensional data_ (also often called _N-dimensional_), we mean data with many independent dimensions or axes. For example, we might represent Earth's surface temperature $T$ as a three dimensional variable
#
# $$ T(x, y, t) $$
#
# where $x$ and $y$ are spatial dimensions and $t$ is time. By _labeled_, we mean data that has metadata associated with it describing the names and relationships between the variables. The cartoon below shows a "data cube" schematic dataset with temperature and precipitation sharing the same three dimensions, plus longitude and latitude as auxiliary coordinates.
#
# 
#
# ### Xarray data structures
#
# Like Pandas, xarray has two fundamental data structures:
# * a `DataArray`, which holds a single multi-dimensional variable and its coordinates
# * a `Dataset`, which holds multiple variables that potentially share the same coordinates
#
# #### DataArray
#
# A `DataArray` has four essential attributes:
# * `values`: a `numpy.ndarray` holding the array’s values
# * `dims`: dimension names for each axis (e.g., `('x', 'y', 'z')`)
# * `coords`: a dict-like container of arrays (coordinates) that label each point (e.g., 1-dimensional arrays of numbers, datetime objects or strings)
# * `attrs`: an `OrderedDict` to hold arbitrary metadata (attributes)
#
# #### DataSet
#
# A dataset is simply an object containing multiple DataArrays indexed by variable name
#
#
# ## Creating data
#
# Let's start by constructing some DataArrays manually
import numpy as np
import xarray as xr
print('Numpy version: ', np.__version__)
print('Xarray version: ', xr.__version__)
# Here we model the simple function
#
# $$f(x) = \sin(x)$$
#
# on the interval $-\pi$ to $\pi$. We start by creating the data as numpy arrays.
x = np.linspace(-np.pi, np.pi, 19)
f = np.sin(x)
# Now we are going to put this into an xarray DataArray.
#
# A simple DataArray without dimensions or coordinates isn't much use.
da_f = xr.DataArray(f)
da_f
# We can add a dimension name...
da_f = xr.DataArray(f, dims=['x'])
da_f
# But things get most interesting when we add a coordinate:
da_f = xr.DataArray(f, dims=['x'], coords={'x': x})
da_f
# Xarray has built-in plotting, like pandas.
da_f.plot(marker='o')
# ### Selecting Data
#
# We can always use regular numpy indexing and slicing on DataArrays to get the data back out.
# get the 10th item
da_f[10]
# get the first 10 items
da_f[:10]
# However, it is often much more powerful to use xarray's `.sel()` method to use label-based indexing. This allows us to fetch values based on the value of the coordinate, not the numerical index.
da_f.sel(x=0)
da_f.sel(x=slice(0, np.pi)).plot()
# ### Basic Computations
#
# When we perform mathematical manipulations of xarray DataArrays, the coordinates come along for the ride.
# Imagine we want to calculate
#
# $$ g = f^2 + 1 $$
#
# We can apply familiar numpy operations to xarray objects.
#
da_g = da_f**2 + 1
da_g
da_g.plot()
# ### Exercise
#
# - Multiply the DataArrays `da_f` and `da_g` together.
# - Select the range $-1 < x < 1$
# - Plot the result
(da_f * da_g).sel(x=slice(-1, 1)).plot(marker='o')
# ## Multidimensional Data
#
# If we are just dealing with 1D data, Pandas and Xarray have very similar capabilities. Xarray's real potential comes with multidimensional data.
#
# At this point we will load data from a netCDF file into an xarray dataset.
# + language="bash"
#
# git clone https://github.com/pangeo-data/tutorial-data.git
# -
ds = xr.open_dataset('./tutorial-data/sst/NOAA_NCDC_ERSST_v3b_SST-1960.nc')
ds
## Xarray > v0.14.1 has a new HTML output type!
xr.set_options(display_style="html")
ds
# +
# both do the exact same thing
# dictionary syntax
sst = ds['sst']
# attribute syntax
sst = ds.sst
sst
# -
# ### Multidimensional Indexing
#
# In this example, we take advantage of the fact that xarray understands time to select a particular date
sst.sel(time='1960-06-15').plot(vmin=-2, vmax=30)
# But we can select along any axis
sst.sel(lon=180).transpose().plot()
sst.sel(lon=180, lat=40).plot()
# ### Label-Based Reduction Operations
#
# Usually the process of data analysis involves going from a big, multidimensional dataset to a few concise figures.
# Inevitably, the data must be "reduced" somehow. Examples of simple reduction operations include:
#
# - Mean
# - Standard Deviation
# - Minimum
# - Maximum
#
# etc. Xarray supports all of these and more, via a familiar numpy-like syntax. But with xarray, you can specify the reductions by dimension.
#
# First we start with the default, reduction over all dimensions:
sst.mean()
sst_time_mean = sst.mean(dim='time')
sst_time_mean.plot(vmin=-2, vmax=30)
sst_zonal_mean = sst.mean(dim='lon')
sst_zonal_mean.transpose().plot()
sst_time_and_zonal_mean = sst.mean(dim=('time', 'lon'))
sst_time_and_zonal_mean.plot()
# some might prefer to have lat on the y axis
sst_time_and_zonal_mean.plot(y='lat')
# ### More Complicated Example: Weighted Mean
#
# The means we calculated above were "naive"; they were straightforward numerical means over the different dimensions of the dataset. They did not account, for example, for spherical geometry of the globe and the necessary weighting factors. Although xarray is very useful for geospatial analysis, **it has no built-in understanding of geography**.
#
# Below we show how to create a proper weighted mean by using the formula for the area element in spherical coordinates. This is a good illustration of several xarray concepts.
#
# The [area element for lat-lon coordinates](https://en.wikipedia.org/wiki/Spherical_coordinate_system#Integration_and_differentiation_in_spherical_coordinates) is
#
# $$ \delta A = R^2 \delta \phi \delta \lambda \cos(\phi) $$
#
# where $\phi$ is latitude, $\delta \phi$ is the spacing of the points in latitude, $\delta \lambda$ is the spacing of the points in longitude, and $R$ is Earth's radius. (In this formula, $\phi$ and $\lambda$ are measured in radians.) Let's use xarray to create the weight factor.
R = 6.37e6
# we know already that the spacing of the points is one degree latitude
dϕ = np.deg2rad(1.)
dλ = np.deg2rad(1.)
dA = R**2 * dϕ * dλ * np.cos(np.deg2rad(ds.lat))
dA.plot()
dA.where(sst[0].notnull())
pixel_area = dA.where(sst[0].notnull())
pixel_area.plot()
total_ocean_area = pixel_area.sum(dim=('lon', 'lat'))
sst_weighted_mean = (sst * pixel_area).sum(dim=('lon', 'lat')) / total_ocean_area
sst_weighted_mean.plot()
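# The effect of the $\cos(\phi)$ weighting can be seen even without xarray. In this plain-NumPy sketch (synthetic data, not the SST dataset), a field that increases toward the poles gets a smaller weighted mean than its naive mean, because high-latitude cells cover less area:

```python
import numpy as np

lat = np.arange(-89.5, 90.0, 1.0)       # 1-degree grid, cell centres
weights = np.cos(np.deg2rad(lat))       # proportional to the area element dA
field = np.abs(lat)                     # a field that grows toward the poles

naive = field.mean()
weighted = (field * weights).sum() / weights.sum()
print(naive, weighted)  # the weighted mean is pulled toward equatorial values
```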
# ### Maps
#
# Xarray integrates with cartopy to enable you to plot your data on a map
# +
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
plt.figure(figsize=(12, 8))
ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine())
ax.coastlines()
sst[0].plot(transform=ccrs.PlateCarree(), vmin=-2, vmax=30,
cbar_kwargs={'shrink': 0.4})
# -
# ## Opening Many Files
#
# One of the killer features of xarray is its ability to open many files into a single dataset. We do this with the `open_mfdataset` function.
ds_all = xr.open_mfdataset('./tutorial-data/sst/*nc', combine='by_coords')
ds_all
# Now we have 57 years of data instead of one!
# ## Groupby
#
# Now that we have a bigger dataset, this is a good time to check out xarray's groupby capabilities.
sst_clim = ds_all.sst.groupby('time.month').mean(dim='time')
sst_clim
# Now the data has dimension `month` instead of time!
# Each value represents the average among all of the Januaries, Februaries, etc. in the dataset.
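# The grouping itself can be sketched in plain NumPy on a synthetic monthly series (illustrative data, not the SST files): for a regular monthly record, stacking the series into a `(year, month)` array and averaging over the year axis is exactly what `groupby('time.month').mean(dim='time')` computes.

```python
import numpy as np

years, months = 5, 12
rng = np.random.default_rng(0)
seasonal = 10 + 5 * np.sin(2 * np.pi * np.arange(months) / 12)  # true cycle
data = seasonal + rng.normal(0, 0.1, size=(years, months))      # (year, month)

climatology = data.mean(axis=0)  # one average per calendar month
print(climatology.shape)  # (12,)
```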
(sst_clim[6] - sst_clim[0]).plot()
plt.title('July minus January SST Climatology')
# ## Resample and Rolling
#
# Resample is meant specifically to work with time data (data with a `datetime64` variable as a dimension).
# It allows you to change the time-sampling frequency of your data.
#
# Let's illustrate by selecting a single point.
sst_ts = ds_all.sst.sel(lon=300, lat=10)
sst_ts_annual = sst_ts.resample(time='A').mean(dim='time')
sst_ts_annual
sst_ts.plot()
sst_ts_annual.plot()
# An alternative approach is a "running mean" over the time dimension.
# This can be accomplished with xarray's `.rolling` operation.
sst_ts_rolling = sst_ts.rolling(time=24, center=True).mean()
sst_ts_annual.plot(marker='o')
sst_ts_rolling.plot()
# ## Finale: Calculate the ENSO Index
#
# [This page from NOAA](https://www.ncdc.noaa.gov/teleconnections/enso/indicators/sst) explains how the El Niño Southern Oscillation index is calculated.
#
#
# - The Nino 3.4 region is defined as the region between +/- 5 deg. lat, 170 W - 120 W lon.
# - Warm or cold phases of the Oceanic Niño Index are defined by five consecutive overlapping 3-month running means of sea surface temperature (SST) anomalies in the Niño 3.4 region that are above (below) the threshold of +0.5°C (-0.5°C). This is known as the Oceanic Niño Index (ONI).
#
# (Note that "anomaly" means that the seasonal cycle is removed.)
#
# _Try working on this on your own for 5 minutes._
# Once you're done, try comparing the ENSO Index you calculated with the NINO3.4 index published by [NOAA](https://www.ncdc.noaa.gov/teleconnections/enso/indicators/sst/). The pandas snippet below will load the official time series for comparison.
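# If you get stuck, the two core steps (subtracting the monthly climatology, then smoothing with a 3-month running mean) can be sketched in plain NumPy on a synthetic monthly series (illustrative only, not the real index):

```python
import numpy as np

n_years = 10
rng = np.random.default_rng(0)
seasonal = 5 * np.sin(2 * np.pi * np.arange(12) / 12)
sst = np.tile(seasonal, n_years) + rng.normal(0, 0.3, 12 * n_years)

clim = sst.reshape(n_years, 12).mean(axis=0)           # monthly climatology
anom = sst - np.tile(clim, n_years)                    # remove the seasonal cycle
oni = np.convolve(anom, np.ones(3) / 3, mode='valid')  # 3-month running mean
print(oni.shape)  # (118,)
```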
import pandas as pd
noaa_nino34 = pd.read_csv('https://www.cpc.ncep.noaa.gov/data/indices/sstoi.indices',
sep=r" ", skipinitialspace=True,
parse_dates={'time': ['YR','MON']},
index_col='time')['NINO3.4']
noaa_nino34.head()
# ## Getting Help with Xarray
#
# Here are some important resources for learning more about xarray and getting help.
#
# - [Xarray Documentation](http://xarray.pydata.org/en/latest/)
# - [Xarray GitHub Issue Tracker](https://github.com/pydata/xarray/issues)
# - [Xarray questions on StackOverflow](https://stackoverflow.com/questions/tagged/python-xarray)
| xarray.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 3. Quantum mechanics
#
#
# Quantum mechanics is a theory which describes physical phenomena. The ideas behind quantum mechanics often appear counterintuitive and in conflict with our everyday experience of the world, but they are particularly useful for understanding the behavior of particles at small scales, such as atoms and molecules. In fact, it was at these rather small scales that the breakdown of the classical laws of physics was first encountered. Nonetheless, the realm of quantum mechanics is not confined to the atomic world but extends to almost all known scales (some problems arise in the interior of black holes and other exotic places, for example). When dealing with objects of macroscopic size, the predictions of quantum mechanics agree with those made using classical laws of physics such as Newtonian mechanics and Maxwell's equations of the electromagnetic field.
#
# In the previous chapter, we encountered almost all the mathematical machinery needed to do calculations in quantum mechanics. In fact, in quantum mechanics physical systems and the interactions between them can be described using the matrix representation of the Hilbert space formalism. Here, we will make a connection between the mathematical concepts introduced and their relation to the physical world. The mathematics works extremely well; it has been tested countless times over the years without ever failing to describe experiments. What has turned out to be very difficult is to attach an interpretation to these mathematical abstractions. We will give here the standard and most widespread interpretation of the connection between the mathematical framework of quantum mechanics and our physical reality.
#
# ## 3.1 Postulates of quantum mechanics
#
# Here, we present quantum mechanics as a tool for a new paradigm of computing. Thus, we will introduce it in an axiomatic way and quickly put it to use. First, we introduce the axioms of the theory, then we describe the connection to the physical interpretation of these abstract concepts.
# Let us briefly go over the axioms, or postulates, of quantum mechanics. These make up the mathematical foundation of the theory.
#
# <ol>
# <li> All the possible states of a system form a Hilbert space.</li>
# <li> The wave function $\lvert \psi \rangle$, which is a ket vector in the Hilbert space, encodes the state of the system. Complex linear combinations of the states of the system are still valid states of the system. </li>
# <li> Physical properties of a system, called observables, are encoded into the real eigenvalues of Hermitian operators on a Hilbert space.</li>
# <li> The evolution of the system in time is generated by a particular operator, the Hamiltonian. The Hamiltonian is the operator associated with the total energy of the system. </li>
# <li> The expected value of a physical property $\hat{F}$ of the system in the state $\lvert \psi \rangle $ is found by calculating $ \langle \psi \rvert \hat{F} \lvert \psi \rangle$.</li>
# </ol>
#
# Using the rules given above one can make accurate predictions about properties of interest in any given setting.
#
# #### Example
# For example, consider a two-level system, a qubit, let us see how the postulates are used in this simple case.
# The possible state of the qubit are either high/low (on/off, 1/0, up/down,..), so the Hilbert space is just made up by these two vectors $\lvert 0 \rangle$, $\lvert 1 \rangle$. One usually says that these form a "basis" for the Hilbert space, as any vector in the space can be found as a complex linear combination of these two.
#
# The state of the qubit is represented by a vector in this Hilbert space. The most general vector is $\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle $.
#
# Consider the Hamiltonian $\hat{H} = E_0 \hat{\sigma}^{+}\hat{\sigma}^{-}$ for the system, where $\hat{\sigma}^{+} \lvert 0 \rangle = \lvert 1 \rangle $, $\hat{\sigma}^{+} \lvert 1 \rangle = 0 $ and $\hat{\sigma}^{-} \lvert 1 \rangle = \lvert 0 \rangle $, $\hat{\sigma}^{-} \lvert 0 \rangle = 0 $.
#
# The Hamiltonian encodes the energy of the system. Therefore, we can calculate the energy of the qubit in the state $\lvert 0 \rangle$: $E = \langle 0 \rvert \hat{H} \lvert 0 \rangle = 0$.
# On the other hand, if the qubit is in the "on" state $\lvert 1 \rangle$, its energy is: $E = \langle 1 \rvert \hat{H} \lvert 1 \rangle = E_0$.
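# The example can be checked numerically. With the standard column-vector representation $\lvert 0 \rangle = (1, 0)^T$, $\lvert 1 \rangle = (0, 1)^T$, the operators become 2x2 matrices and the expectation values reduce to matrix products (the value of $E_0$ below is arbitrary):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
sigma_plus = np.array([[0.0, 0.0], [1.0, 0.0]])  # sigma+ |0> = |1>, sigma+ |1> = 0
sigma_minus = sigma_plus.T                       # sigma- |1> = |0>, sigma- |0> = 0

E0 = 2.5                                         # arbitrary energy scale
H = E0 * sigma_plus @ sigma_minus

print(ket0 @ H @ ket0)  # energy of |0>: 0.0
print(ket1 @ H @ ket1)  # energy of |1>: E0
```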
# ## 3.2 The wave function
#
# The state of a system is described by a ket vector in Hilbert space, called wave function $\vert \psi\left( \vec{r} , t \right)\rangle $, where the vector $\vec{r}$ specifies the position at time $t$. The wave function has the following properties:
# <ol>
# <li> It is a complex function of $\vec{r}$ and $t$. </li>
# <li> It is a continuous function.</li>
# <li> Generally its derivative is also continuous function.</li>
# <li> Its modulus squared, $\left\vert \psi \right\vert ^{2}$, is integrable and has physical meaning as the probability of observing the system in the state represented by $\lvert \psi \rangle$.</li>
# </ol>
#
# The wave function, being a ket vector, inherits the superposition principle. Therefore, it is possible to build any wave function as a weighted combination of other wave functions.
# <ol>
# <li> If the system can be both in the state $\vert \psi_1 \rangle $ and in state $\vert \psi_2 \rangle $, then the total state of the system can be written as: $\vert \psi \rangle = \alpha_1 \lvert \psi_1 \rangle + \alpha_2 \lvert \psi_2 \rangle $,
# where $\alpha_1$, $\alpha_2$ are complex numbers whose moduli squared represent the probabilities of finding the system in the corresponding states. </li>
# <li> If the wave function is multiplied by any number $\alpha \neq 0$, the state described by the corresponding wave function will not change: $\vert \psi \rangle \leftrightarrow \alpha \lvert \psi \rangle$ </li>
# </ol>
#
# The fact that the superposition principle applies to the states of a system does not have any analog in classical mechanics. A similar idea appears in the description of wave phenomena, but it is difficult to understand it in terms of physical states. This is where the intuition we have about the physical world starts failing us. For instance, if one interprets $\vert \psi_1 \rangle $ to mean that the system is located at site $1$ and $\vert \psi_2 \rangle$ that the system is at site $2$, then the superposition state $\vert \psi \rangle = \alpha_1 \lvert \psi_1 \rangle + \alpha_2 \lvert \psi_2 \rangle $ would mean that the system is located at two different sites at the same time. The role of observation, or measurement, comes into play here to save us from the weirdness of the quantum world. Even though the system might be described as being in a superposition of several states, the system's properties won't be an average between the properties of the different states of the superposition. Whenever a measurement is performed, the system "chooses" one of the possible states in a non-deterministic way. This means that when the system is measured, it will be found in a particular state of the superposition with a certain probability, given by the modulus squared of the complex coefficient of that state. Therefore, when the system is actually measured, it will be found with probability $\lvert \alpha_1 \rvert^2$ at site $1$ and with probability $\lvert \alpha_2 \rvert^2$ at site $2$. To summarize, at any given time a system may be in a superposition of several different states. However, when a measurement is done on the system, the system will show the properties of one of the states of the superposition with a certain probability.
#
# A consequence of the superposition principle is that the wave function, $\vert \psi \rangle$, must be the solution of a linear equation. Thus, the equations connecting the state of the system at one time to the state of the system at another time must be linear equations.
# ## 3.3 The operators
#
# To each physical quantity $F$ corresponds an operator $\hat{F}$. Operators specify a mathematical prescription that is carried out on the state. In the matrix representation, we say that operators "act" on states, meaning that we have to multiply the vector which represent the state by the matrix which represents the operator. The outcome is a new vector, which represents a new state of the system. The operation of finding the value of a quantity $F$ corresponding to a certain operator $\hat{F}$ for the system in a state $\vert \psi \rangle$ is denoted by the symbol $\langle F \rangle $ which is a short-hand notation for
#
#
#
# \begin{equation}
# \langle \hat{F} \rangle \equiv \langle \psi \vert \hat{F} \vert \psi \rangle .
# \end{equation}
#
# To obtain the expected value of the property $F$ for the system in the state $\lvert \psi \rangle$, one needs to decompose the state $\lvert \psi \rangle$ in terms of the eigenstates $\lvert \phi_i \rangle$ of the operator $\hat{F}$. Those are the vectors that satisfy:
#
# \begin{equation}
# \hat{F} \lvert \phi_i \rangle = F_i \lvert \phi_i \rangle .
# \end{equation}
#
# The decomposition of $\lvert \psi \rangle$ in terms of the $\lvert \phi_i \rangle$ then reads
#
# \begin{equation}
# \lvert \psi \rangle = \sum_i a_i \lvert \phi_i \rangle,
# \end{equation}
#
#
# where $a_i = \langle \phi_i \vert \psi \rangle$. Once we have found $\lvert \psi \rangle = \sum_i a_i \lvert \phi_i \rangle$, we can then find the expected value of $\hat{F}$ for the system as
#
# \begin{equation}
# \langle \hat{F} \rangle = \sum_i \vert a_i \vert^2 F_i .
# \end{equation}
#
#
#
# Operators must have the following properties:
#
# <ol>
# <li> The principle of superposition must be satisfied
# $\hat{F} \left( \alpha_1 \lvert \psi_1 \rangle + \alpha_2 \lvert \psi_2 \rangle \right) = \alpha_1 \hat{F} \lvert \psi_1 \rangle + \alpha_2 \hat{F} \lvert \psi_2 \rangle $.
#
# Thus, we require the operators to be linear. </li>
# <li> The value of quantities represented by operators must be real
#
# $ \langle \hat{F} \rangle = \langle \hat{F} \rangle^{ *}$,
#
# which means
#
# $ \langle \psi \vert \hat{F} \vert \psi \rangle = \left( \langle \psi \vert \hat{F} \vert \psi \rangle \right)^{*} = \langle \psi \vert \hat{F}^{\dagger} \vert \psi \rangle $,
#
# Therefore,
#
# $ \hat{F} = \hat{F}^{\dagger} $.
#
# Operators are Hermitian. </li>
# </ol>
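# The two routes to $\langle \hat{F} \rangle$ given above (the direct sandwich $\langle \psi \rvert \hat{F} \lvert \psi \rangle$ and the eigendecomposition sum $\sum_i \vert a_i \vert^2 F_i$) can be compared numerically. A sketch for $\hat{F} = \hat{X}$ (the Pauli-X / NOT operator) and $\lvert \psi \rangle = \lvert 0 \rangle$:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([1.0, 0.0])               # |0>

direct = psi @ X @ psi                   # <psi| X |psi>

F_i, phi = np.linalg.eigh(X)             # eigenvalues F_i, eigenvectors as columns
a = phi.T @ psi                          # a_i = <phi_i|psi> (real vectors here)
via_eigs = np.sum(np.abs(a) ** 2 * F_i)  # sum_i |a_i|^2 F_i

print(direct, via_eigs)  # both 0: |0> is an equal mix of the +1/-1 eigenstates
```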
# ## 3.4 The time evolution
#
# The evolution in time of the state $\vert \psi\left( \vec{r} , t \right)\rangle$ of the system is determined by its Hamiltonian operator. One way to mathematically describe the time evolution of the state of the system is with the Schrodinger equation
#
# \begin{equation}
# i \hbar \frac{\partial\vert \psi\left( \vec{r} , t \right)\rangle}{\partial t} = \hat{H} \vert \psi\left( \vec{r} , t \right)\rangle .
# \end{equation}
#
# The equation above can be solved by specifying the Hamiltonian $\hat{H}$ operator of the system and the initial state of the system $\vert \psi\left( \vec{r} , t = 0 \right)\rangle$. The Hamiltonian operator is the operator associated with the total energy of the system i.e. it is given by the sum of its potential and kinetic energy.
#
# The Schrodinger equation is one of the most important equations of modern physics. It provides an accurate description of all quantum systems which move at non-relativistic speed. In particular, a quantum computer is built by using the Schrodinger equation to understand the behavior of each of its quantum components.
#
# However, in quantum computing the picture is simpler and there is no need to use the Schrodinger equation to find the time evolution of the system. The same happens in a classical computer, which is described not with Maxwell's equations but with the rules of Boolean logic for its electrical circuits. So, in a quantum computation time evolves in steps, where at each step some quantum gate is applied to the qubits. Therefore, the state of the qubits at some time $t$, after $k$ steps of the computation, is completely specified by the quantum circuit used in the computation.
#
# We can reconcile the two descriptions by introducing the "time-evolution operator" $\hat{U}$, which is defined by its action on the state of the system at $t=0$:
#
# \begin{equation}
# \lvert \psi \left( \vec{r} , t \right)\rangle = \hat{U} \vert \psi\left( \vec{r} , t=0 \right)\rangle
# \end{equation}
#
# In general, $\hat{U}$ can be obtained by integrating the Schrodinger equation. For the case of a time-independent Hamiltonian one finds $\hat{U}=e^{-i\hat{H}t/\hbar}$.
#
# In quantum computing, $\hat{U}$ will be the product of all successive unitary operations encoded in the quantum gates of a circuit. Therefore, the time-evolution operator at time $t_k$, after $k$ steps of the computation, is $\hat{U}(t=t_k) \equiv \hat{U}_k $ and the state of the system evolves as
#
# \begin{equation}
# \lvert \psi\left( t_k \right)\rangle = \hat{U}_k \vert \psi \left( t_0 \right)\rangle
# \end{equation}
#
#
# #### Example
#
# Consider the following quantum circuit
#
# <img src="figures/3/example1.jpeg" width="300">
# $$\text{1. Quantum circuit.}$$
#
# Let's calculate the time-evolution operators at each time step in order to give the evolution of the state in time
# $$\hat{U}_1 = \hat{I} $$
# $$\hat{U}_2 = \hat{X} \hat{I} $$
# $$\hat{U}_3 = \hat{X} \hat{X} \hat{I} $$
#
# where $\hat{X}$ is the quantum NOT gate, which flips the state of a qubit: $ \lvert 0 \rangle \rightarrow \lvert 1 \rangle$, $ \lvert 1 \rangle \rightarrow \lvert 0 \rangle$.
# We initialize the system to the state $\lvert 0 \rangle$ at the beginning of the computation, thus at time $t_0$ we have
# $$\lvert \psi \left( t_0 \right)\rangle = \lvert 0 \rangle$$
#
# At following times we find
# $$\lvert \psi \left( t_1 \right)\rangle = \hat{U}_1 \lvert 0 \rangle = \hat{I} \lvert 0 \rangle = \lvert 0 \rangle $$
# $$\lvert \psi \left( t_2 \right)\rangle = \hat{U}_2 \lvert 0 \rangle = \hat{X} \hat{I} \lvert 0 \rangle = \hat{X} \lvert 0 \rangle = \lvert 1 \rangle $$
# $$\lvert \psi \left( t_3 \right)\rangle = \hat{U}_3 \lvert 0 \rangle = \hat{X} \hat{X} \hat{I} \lvert 0 \rangle = \hat{X} \hat{X} \lvert 0 \rangle = \hat{X} \lvert 1 \rangle = \lvert 0 \rangle $$
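# The gate sequence above can also be checked numerically. Below is a minimal sketch (not part of the original text) using NumPy, representing the basis states $\lvert 0 \rangle$, $\lvert 1 \rangle$ as vectors and the gates as matrices:

```python
import numpy as np

# Computational basis states |0> and |1> as vectors
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Identity and the quantum NOT (Pauli-X) gate as 2x2 matrices
I = np.eye(2)
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Time-evolution operators after each step of the circuit
U1 = I
U2 = X @ I
U3 = X @ X @ I

# Apply them to the initial state |0>
psi_t1 = U1 @ ket0
psi_t2 = U2 @ ket0
psi_t3 = U3 @ ket0
print(psi_t1, psi_t2, psi_t3)
```

# The printed states reproduce the hand calculation: the qubit stays in $\lvert 0 \rangle$, flips to $\lvert 1 \rangle$, then flips back to $\lvert 0 \rangle$.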
# ## Exercises
#
#
# <ol>
#
# <li>
# Given the following wavefunctions describing the state of a system
#
# <ol>
# <li>
# $ \lvert \psi \left( \vec{r} , t \right)\rangle = \frac{1}{\sqrt{2}} e^{i3t} \lvert \psi \rangle_1 + \frac{1}{\sqrt{2}} e^{i3t} \lvert \psi \rangle_2 $
# </li>
#
# <li>
# $ \lvert \psi \left( \vec{r} , t \right)\rangle = \frac{1}{\sqrt{8}} e^{i5t} \lvert \psi \rangle_1 + \frac{\sqrt{5}}{\sqrt{8}} e^{i5t} \lvert \psi \rangle_2 + \frac{\sqrt{2}}{\sqrt{8}} e^{i2t} \lvert \psi \rangle_3 $
# </li>
#
# <li>
# $ \lvert \psi \left( \vec{r} , t \right)\rangle = \frac{1}{\sqrt{3}} e^{i2t} \lvert \psi \rangle_1 + \frac{\sqrt{2}}{\sqrt{3}} e^{i7t} \lvert \psi \rangle_2 $
# </li>
#
# What is the probability that the system is in the state $\lvert \psi \rangle_1$? And what is the probability of $\lvert \psi \rangle_2$?
# </ol>
#
#
# </li>
#
#
# <li>
# Find the expectation value of the operator $\hat{X}$ for a qubit in the following states:
#
# <ol>
# <li>
# $ \lvert \psi \rangle = \lvert 0 \rangle$
# </li>
#
# <li>
# $ \lvert \psi \rangle = \lvert 1 \rangle$
# </li>
#
# <li>
# $ \lvert \psi \rangle = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle + \lvert 1 \rangle \right)$
# </li>
# </ol>
#
# </li>
#
# <li>
# A system is in the initial state $\lvert \psi \left( t_0 \right)\rangle = \lvert 0 \rangle$. Find the state of the system at the following times, given the time-evolution operators
#
#
# <ol>
# <li>
# $ \hat{U}_1 = \hat{X}$
# </li>
#
# <li>
# $ \hat{U}_2 = \hat{X}$
# </li>
#
# <li>
# $ \hat{U}_3 = \hat{X}$
# </li>
# </ol>
#
#
# </li>
#
#
#
#
#
#
# </ol>
# ## References
#
# [1] <NAME>, The Principles of Quantum Mechanics (1947 Clarendon Press, Oxford).
#
# [2] <NAME>, Mathematische Grundlagen der Quanten-Mechanik, Springer-Verlag, Berlin, 1932.
| community/awards/teach_me_quantum_2018/intro2qc/3.Quantum mechanics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark3
# name: pyspark3kernel
# ---
# # Configuring a Spark session using %%configure -f
# Refer to [Spark Configurations](https://spark.apache.org/docs/latest/configuration.html) for specific parameters
# %%configure -f
{"conf": {
"spark.executor.memory": "4g",
"spark.driver.memory": "4g",
"spark.executor.cores": 2,
"spark.driver.cores": 1,
"spark.executor.instances": 4
}
}
# +
datafile = "/spark_data/AdultCensusIncome.csv"
df = spark.read.format('csv').options(header='true', inferSchema='true').load(datafile)
df.show(5)
# +
from pyspark import SparkConf
from pyspark.sql import SparkSession
def isConfiguredItem(cfg_key):
    # Keys explicitly set via %%configure -f above
    configured_keys = {'spark.executor.instances', 'spark.executor.memory',
                       'spark.executor.cores', 'spark.driver.memory',
                       'spark.driver.cores'}
    return cfg_key in configured_keys
spark = SparkSession.builder.getOrCreate()
conf = SparkConf().getAll()
for cfg_items in conf:
    if isConfiguredItem(cfg_items[0]):
        print(cfg_items)
| bdc-samples/config-install/configure_spark_session.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### Import Library
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
# ### Introduction to PCA
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal');
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.components_)
print(pca.explained_variance_)
# +
def draw_vector(v0, v1, ax=None):
    ax = ax or plt.gca()
    arrowprops = dict(arrowstyle='->',
                      linewidth=2,
                      color='black',
                      shrinkA=0, shrinkB=0)
    ax.annotate('', v1, v0, arrowprops=arrowprops)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
    v = vector * 3 * np.sqrt(length)
    draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
# -
# ### PCA as dimensionality reduction
pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("Original shape: ", X.shape)
print("Transformed shape:", X_pca.shape)
X_new = pca.inverse_transform(X_pca)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal');
# ### PCA for Visualization
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
pca = PCA(2) # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)
plt.scatter(projected[:, 0], projected[:, 1],
c=digits.target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('inferno', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
# ### Choosing the number of components
pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
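# The elbow plot above can be turned into an automatic rule: keep the smallest number of components whose cumulative explained-variance ratio exceeds a target. Below is a NumPy-only sketch (not part of the original notebook); scikit-learn's PCA computes the same ratios from the singular values of the centered data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 10) @ rng.randn(10, 10)  # synthetic data with correlated features

# Center the data, then take the singular values; the explained-variance
# ratio of component j is s_j**2 / sum(s**2)
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
ratios = (s ** 2) / np.sum(s ** 2)
cumulative = np.cumsum(ratios)

# Smallest number of components capturing at least 95% of the variance
k = int(np.argmax(cumulative >= 0.95)) + 1
print(k, cumulative[k - 1])
```

# The same rule is available directly in scikit-learn as `PCA(n_components=0.95)`.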
# ### PCA as Noise Filtering
def plot_digits(data):
    fig, axes = plt.subplots(4, 10, figsize=(10, 4),
                             subplot_kw={'xticks': [], 'yticks': []},
                             gridspec_kw=dict(hspace=0.1, wspace=0.1))
    for i, ax in enumerate(axes.flat):
        ax.imshow(data[i].reshape(8, 8),
                  cmap='binary', interpolation='nearest',
                  clim=(0, 16))
plot_digits(digits.data)
# Add some random noise to create a noisy dataset and re-plot it
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
pca = PCA(0.50).fit(noisy)
pca.n_components_
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
# ### Testing on the Eigenfaces
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
from sklearn.decomposition import PCA as RandomizedPCA
pca = RandomizedPCA(150)
pca.fit(faces.data)
fig, axes = plt.subplots(3, 8, figsize=(9, 4),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
    ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance');
# Compute the components and projected faces
pca = RandomizedPCA(150).fit(faces.data)
components = pca.transform(faces.data)
projected = pca.inverse_transform(components)
# +
# Plot the results
fig, ax = plt.subplots(2, 10, figsize=(10, 2.5),
                       subplot_kw={'xticks': [], 'yticks': []},
                       gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(10):
    ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')
    ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')
ax[0, 0].set_ylabel('full-dim\ninput')
ax[1, 0].set_ylabel('150-dim\nreconstruction');
| 13 - Principal Component Analysis/PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import glob
import itertools
import os
from pathlib import Path
input_filespecs = [
# '../../data/virat/ground/VIRAT_S_000201_08_001652_001838.mp4',
# '../../data/virat/ground/VIRAT_S_000200_01_000226_000268.mp4'
# '../../data/virat/aerial/*.mpg',
# '../../data/virat/ground/*.mp4',
# '../../data/drone/*.MP4',
# '../../../data/video/meva/2018-03-05/09/*.avi',
'../../../data/video/meva/uav-drop-01/2018-03-13/17/2018-03-13.17-30-58.17-40-14.uav1.mp4',
]
#size = (2048, 1080) # 2K
#size = (1365, 720)
size = (1280, 720) # 720p
#size = None
def transcode(input_filename, quality=50, skip_if_exists=True, size=None):
    output_dir = Path(input_filename).with_suffix('')
    print(f'{input_filename} -> {output_dir}')
    if skip_if_exists and output_dir.exists():
        print('Output directory already exists. Skipping.')
        return
    vidcap = cv2.VideoCapture(input_filename)
    output_dir.mkdir(exist_ok=True)
    frame_number = 0
    while True:
        success, image = vidcap.read()
        if not success:
            break
        if size:
            image = cv2.resize(image, size, interpolation=cv2.INTER_AREA)
        output_filename = output_dir / ('%08d.jpg' % frame_number)
        print(f'{output_filename}', end='\r')
        cv2.imwrite(str(output_filename), image, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        frame_number += 1
    print('')
for input_filespec in input_filespecs:
    for input_filename in sorted(glob.glob(input_filespec)):
        transcode(input_filename, size=size, skip_if_exists=False)
| jupyterhub/notebooks/transcode_video_to_jpeg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="As9tPpRz5_UW"
# # Chapter 01
# # Chapter 02
# # Chapter 03
# # Chapter 04
# # Chapter 05
# # Chapter 06
# # Chapter 07
#
# # Chapter 08
#
# ## Loading Images
# + colab={"base_uri": "https://localhost:8080/", "height": 263} colab_type="code" id="jcYNBf8G45gn" outputId="04654ec2-96b4-471e-aaa8-c83d7c36a9ca"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("img/plane.jpg", cv2.IMREAD_GRAYSCALE)
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 263} colab_type="code" id="rmlMTuU48OdZ" outputId="dab5e1b9-0dd5-4ee5-f446-6ba9da9c9fd7"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image in color (BGR)
image = cv2.imread("plane.jpg", cv2.IMREAD_COLOR)
# Convert to RGB
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image_rgb, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab_type="text" id="zG9sSpAs8zf9"
# ## Saving Images
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="7y35DTaw8Omf" outputId="764022bc-a34f-42f9-9bdd-4b8050fcb0d3"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane.jpg", cv2.IMREAD_GRAYSCALE)
# Save Image
cv2.imwrite('deneme.jpg', image)
# + [markdown] colab_type="text" id="t2l1k4kS9-SS"
# ## Resizing Images
# + colab={"base_uri": "https://localhost:8080/", "height": 267} colab_type="code" id="ifvdDio98wz0" outputId="b471a618-d629-421b-db3a-76d98329189a"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Resize image to 50 pixels by 50 pixels
image_50x50 = cv2.resize(image, (50,50))
# View image
plt.imshow(image_50x50, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab_type="text" id="Ep58U1XK_Spr"
# ## Cropping Images
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="G7ZHz9ph8w2-" outputId="3cd9661e-bcfe-4d61-92f3-fe3825e3b3e9"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Select first half of the columns and all rows
image_cropped = image[:,:128]
# Show image
plt.imshow(image_cropped, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab_type="text" id="2AtAbTyd_wDi"
# ## Blurring Images
#
# Each pixel is transformed into the average value of its neighbors. Larger kernels produce smoother images.
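# This averaging can be seen directly on a tiny array. Below is a NumPy-only sketch (not part of the original text) of a 3x3 mean filter on the interior pixels; `cv2.blur` does the same computation and additionally pads the borders:

```python
import numpy as np

# A small "image" whose values form a linear ramp
img = np.arange(25, dtype=float).reshape(5, 5)

# 3x3 mean filter applied to the interior pixels only (no border handling)
out = np.zeros((3, 3))
for i in range(1, 4):
    for j in range(1, 4):
        out[i - 1, j - 1] = img[i - 1:i + 2, j - 1:j + 2].mean()

print(out)
```

# On a linear ramp the 3x3 average equals the center pixel, so the interior is unchanged; on real images with sharp transitions, the averaging smooths them out.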
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="qavsrlha8w5G" outputId="62a98daf-89e5-426f-ec39-fba35366f7fd"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Blur image
image_blurry = cv2.blur(image, (5,5))
# Show image
plt.imshow(image_blurry, cmap='gray')
plt.axis('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="IJXzqrxd8w7-" outputId="50e1b70c-72f6-4ead-9dab-f96e2f517313"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Blur image
image_blurry = cv2.blur(image, (100,100))
# Show image
plt.imshow(image_blurry, cmap='gray')
plt.axis('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="5kUrA2yx8w-w" outputId="a9084120-6229-4409-b147-f649e99e7dbf"
# Alternatively, filter with a custom averaging kernel
# Create kernel
kernel = np.ones((5,5)) / 25.0
# Apply kernel
image_kernel = cv2.filter2D(image, -1, kernel)
# Show image
plt.imshow(image_kernel, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab_type="text" id="kPGB9dKTByEF"
# ## Sharpening Images
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="Ko1uTy1WByTe" outputId="74539317-9f9f-4e08-c992-e9f50212744d"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Create kernel
kernel = np.array([[0, -1, 0],
[-1, 5, -1],
[0, -1, 0]])
# Sharpen image
image_sharp = cv2.filter2D(image, -1, kernel)
# Show image
plt.imshow(image_sharp, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab_type="text" id="x-vRAnQgCsTh"
# The kernel emphasizes the center pixel itself, which increases the contrast at edges.
#
# ## Enhancing Contrast
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="-tpsxvBTByVp" outputId="cfe6b238-e06b-490a-b1bf-cf6f22136d5f"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Enhance image
image_enhanced = cv2.equalizeHist(image)
# Show image
plt.imshow(image_enhanced, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab_type="text" id="YghV0URzERwg"
# It transforms the image so that it uses a wider range of pixel intensities.
# It can make objects more distinguishable from other objects or backgrounds.
# + [markdown] colab={} colab_type="code" id="yQ-A94lKByXp"
# ## Isolating Colors
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image in color (BGR)
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_COLOR)
# Convert BGR to HSV
image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Define range of blue values in HSV
lower_blue = np.array([50,100,50])
upper_blue = np.array([130,255,255])
# Create mask
mask = cv2.inRange(image_hsv, lower_blue, upper_blue)
# Mask image
image_masked = cv2.bitwise_and(image, image, mask=mask)
# Convert BGR to RGB
image_rgb = cv2.cvtColor(image_masked, cv2.COLOR_BGR2RGB)
# Show image
plt.imshow(image_rgb, cmap='gray')
plt.axis('off')
plt.show()
# -
# Show image
plt.imshow(mask, cmap='gray')
plt.axis('off')
plt.show()
# ## Binarizing Images
# + colab={} colab_type="code" id="joG7TigrByac"
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Apply adaptive thresholding
max_output_value = 255
neighborhood_size = 99
subtract_from_mean = 10
image_binarized = cv2.adaptiveThreshold(image,
max_output_value,
cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY,
neighborhood_size,
subtract_from_mean)
# Show image
plt.imshow(image_binarized, cmap='gray')
plt.axis('off')
plt.show()
# + [markdown] colab={} colab_type="code" id="9ydrYxfh8Ook"
# Thresholding is the process of setting pixels with intensity greater than some value to white and less than the value to be black. A major benefit of thresholding is denoising an image, keeping only the most important elements.
# -
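# The basic operation can be sketched without OpenCV: a global threshold maps every pixel above a cutoff to white (255) and the rest to black (0). (`cv2.adaptiveThreshold`, used above, instead derives a separate cutoff for each pixel from its neighborhood.) A minimal NumPy sketch, not part of the original text:

```python
import numpy as np

# Toy grayscale image with intensities in [0, 255]
image = np.array([[ 10,  60, 200],
                  [120, 250,  30]], dtype=np.uint8)

# Global threshold: pixels above 100 become white, the rest black
threshold = 100
binary = np.where(image > threshold, 255, 0).astype(np.uint8)
print(binary)
```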
# ## Removing Backgrounds
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image in color (BGR)
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Rectangular values: start x, start y, width, height
rectangle = (0, 56, 256, 150)
# Create initial mask
mask = np.zeros(image.shape[:2], np.uint8)
# Create temporary array used by grabCut
bgdModel = np.zeros((1,65), np.float64)
fgdModel = np.zeros((1,65), np.float64)
# Run grabCut
cv2.grabCut(image, # our image
mask,
rectangle,
bgdModel, # temp array for background
fgdModel, # temp array for foreground
5,
cv2.GC_INIT_WITH_RECT)
# Create mask where sure and likely backgrounds set to 0, otherwise 1
mask_2 = np.where((mask==2) | (mask==0), 0, 1).astype('uint8')
image_nobg = image * mask_2[:, :, np.newaxis]
# Show image
plt.imshow(image_nobg, cmap='gray')
plt.axis('off')
plt.show()
# -
# ## Detecting Edges
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
#image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Calculate median intensity
median_intensity = np.median(image)
# Set thresholds 33% below and above the median intensity
lower_threshold = int(max(0, (1.0 - 0.33) * median_intensity))
upper_threshold = int(min(255, (1.0 + 0.33) * median_intensity))
# Apply canny edge detector
image_canny = cv2.Canny(image, lower_threshold, upper_threshold)
# Show image
plt.imshow(image_canny, cmap='gray')
plt.axis('off')
plt.show()
# -
# Canny requires two parameters denoting low and high gradient threshold values. Potential edge pixels between the low and high thresholds are considered weak edge pixels, while those above the high threshold are considered strong edge pixels.
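# A common median-based rule for choosing these two thresholds (similar to the cell above) can be isolated as a small helper. A NumPy-only sketch, not part of the original text; note the upper bound is clamped to the valid intensity range [0, 255]:

```python
import numpy as np

def auto_canny_thresholds(image, sigma=0.33):
    """Return (lower, upper) Canny thresholds spread around the median intensity."""
    median = np.median(image)
    lower = int(max(0, (1.0 - sigma) * median))
    upper = int(min(255, (1.0 + sigma) * median))
    return lower, upper

# Example: a flat image with median intensity 100
img = np.full((4, 4), 100, dtype=np.uint8)
lower, upper = auto_canny_thresholds(img)
print(lower, upper)
```

# The resulting pair can be passed straight to `cv2.Canny(image, lower, upper)`.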
# ## Detecting Corners
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image_bgr = cv2.imread("img/plane_256x256.jpg")
image_gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
image_gray = np.float32(image_gray)
# Set corner detector parameters
block_size = 2
aperture = 29
free_parameter = 0.04
# Detect corners
detector_responses = cv2.cornerHarris(image_gray, block_size, aperture, free_parameter)
# Large corner markers
detector_responses = cv2.dilate(detector_responses, None)
# Only keep detector responses greater than threshold, mark as white
threshold = 0.02
image_bgr[detector_responses > threshold * detector_responses.max()] = [255,255,255]
image_gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
# Show image
plt.imshow(image_gray, cmap='gray')
plt.axis('off')
plt.show()
# -
# The Harris corner detector looks for windows (also called neighborhoods) where small shifts of the window create large changes in the pixel contents inside the window.
# Show image
plt.imshow(detector_responses, cmap='gray')
plt.axis('off')
plt.show()
# ## Creating Features for Machine Learning
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_GRAYSCALE)
# Resize image to 10 pixels by 10 pixels
image_10x10 = cv2.resize(image, (10,10))
# Convert image data to a one-dimensional vector
image_10x10.flatten()
# -
# Show image
plt.imshow(image_10x10, cmap='gray')
plt.axis('off')
plt.show()
# ## Encoding Mean Color as a Feature
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_COLOR)
# Calculate the mean of each channel
channels = cv2.mean(image)
# Swap blue and red values (making it RGB, not BGR)
observation = np.array([(channels[2], channels[1], channels[0])])
observation
# -
# Show image
plt.imshow(observation, cmap='gray')
plt.axis('off')
plt.show()
# ## Encoding Color Histograms as Features
# +
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread("img/plane_256x256.jpg", cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Create a list for feature values
features = []
# Calculate the histogram for each color channel
colors = ("r", "g", "b")
# For each channel: calculate histogram and add to feature value list
for i, channel in enumerate(colors):
    histogram = cv2.calcHist([image],
                             [i],       # index of channel
                             None,      # no mask
                             [256],     # histogram size
                             [0, 256])  # range
    features.extend(histogram)
# Create a vector for an observation's feature values
observation = np.array(features).flatten()
# show the first five
observation[:5]
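# The same 768-dimensional feature vector can be built with NumPy alone. A sketch on a synthetic RGB image (not part of the original text), assuming the same 256 bins per channel:

```python
import numpy as np

rng = np.random.RandomState(0)
image = rng.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)  # fake RGB image

features = []
for channel in range(3):  # R, G, B
    # Count how many pixels fall into each of 256 intensity bins
    hist, _ = np.histogram(image[:, :, channel], bins=256, range=(0, 256))
    features.extend(hist)

observation = np.array(features)
print(observation.shape)
```

# Each channel contributes 256 counts, so the final vector has 3 * 256 = 768 entries, matching the cv2.calcHist version above.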
# +
# Import pandas
import pandas as pd
# Create some data
data = pd.Series([1, 1, 2, 2, 3, 3, 3, 4, 5])
# Show histogram
data.hist(grid=False)
plt.show()
# +
# Calculate the histogram for each color channel
colors = ("r", "g", "b")
# For each channel: calculate histogram, make plot
for i, channel in enumerate(colors):
    histogram = cv2.calcHist([image],
                             [i],       # index of channel
                             None,      # no mask
                             [256],     # histogram size
                             [0, 256])  # range
    # features.extend(histogram)
    plt.plot(histogram, color=channel)
    plt.xlim([0, 256])
# Show plot
plt.show()
| Chapter_08_Handling_Images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="oTii8GU3qNpO"
# "PyTorch Basics 2. Datasets and DataLoaders"
# ===============================================================
# [Original title] DATASETS & DATALOADERS
#
# [Original authors]
# [<NAME>](https://github.com/suraj813), [<NAME>](https://github.com/sethjuarez/), [<NAME>](https://github.com/cassieview/), [<NAME>](https://soshnikov.com/), [<NAME>](https://github.com/aribornstein/)
#
#
# [Original URL] https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
#
# [Translation] Yutaro Ogawa, AI Transformation Center, Information Services International-Dentsu (ISID)
#
# [Date] March 20, 2021
#
# [Tutorial overview]
#
# This tutorial explains Dataset and DataLoader, the basic building blocks for handling sample data in PyTorch.
#
# ---
#
# + [markdown] id="xrrYlOuMoH4p"
#
# Datasets & Dataloaders
# ===================
#
# Code for processing data samples can become complex and hard to maintain.
#
# Ideally, dataset code should be decoupled from the model training code, for better readability and modularity.
#
#
#
# + [markdown] id="iFT48e6ZrDPs"
# PyTorch provides two basic primitives for working with datasets:
#
# ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset``.
#
# They let you use both pre-loaded datasets and your own data.
#
#
# + [markdown] id="hZo4AoqhpKLz"
# A ``Dataset`` stores the samples and their corresponding labels, and a ``DataLoader`` holds data that can be iterated over.
#
# The ``DataLoader`` wraps the ``Dataset`` in an iterable so that the samples are easy to use.
# + [markdown] id="4SFeigADrsK0"
# The PyTorch domain libraries provide a number of pre-loaded datasets (such as FashionMNIST).
#
# These subclass ``torch.utils.data.Dataset`` and implement the functions specific to each domain's data.
#
# They can also be used to benchmark your own models.
#
# See the links below for further details.
#
#
#
# + [markdown] id="hr2mv1wWsVSr"
# - [Image Datasets](https://pytorch.org/docs/stable/torchvision/datasets.html)
#
# - [Text Datasets](https://pytorch.org/text/stable/datasets.html)
#
# - [Audio Datasets](https://pytorch.org/audio/stable/datasets.html)
# + [markdown] id="QrurYlcGr6hi"
# ---
#
#
# + [markdown] id="SKrmwAbwoH4q"
# Loading a Dataset
# -------------------
#
# Here is an example of loading the [Fashion-MNIST](https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/) dataset from TorchVision.
#
# Fashion-MNIST is a dataset of Zalando's article images, consisting of 60,000 training examples and 10,000 test examples.
#
# Each example comprises a 28x28 grayscale image and a label from one of 10 classes.
#
#
#
# + [markdown] id="wj8yhfb_tJsI"
# We load the [FashionMNIST Dataset](https://pytorch.org/docs/stable/torchvision/datasets.html#fashion-mnist) with the following parameters:
#
#
# - ``root``: the path where the train/test data is stored
# - ``train``: specifies the training or the test dataset
# - ``download=True``: download the data from the internet if it is not available at ``root``
# - ``transform`` and ``target_transform``: specify the feature and label transformations
# + id="-C5DwpmZoH4k" executionInfo={"status": "ok", "timestamp": 1616109885937, "user_tz": -540, "elapsed": 819, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}}
# %matplotlib inline
# + id="GULvjXuioH4r" colab={"base_uri": "https://localhost:8080/", "height": 446, "referenced_widgets": ["445e01fd536d437abe59a0593f26a92c", "9ef8ea052a4e419b8d2b29bcb22cd997", "c6ebce0e8f7a4a11b2114d720b084494", "9a9140c2a0c54dc085e8acafa6623670", "28e06b09925a4fcd8e92314dcb6b9ec9", "<KEY>", "<KEY>", "<KEY>", "043433f304bd4079a821b28471192e88", "aac70eee402e47248ac274b29ba50337", "630e197b77104ea097c325de78529645", "<KEY>", "52a32fbd77bb4c3ea18736ffd917ce53", "2950b76dfa1540c2a97e76dda8b3587e", "d22b8d5931084768bb3497420e980011", "<KEY>", "8bd84d34c3854962ae1c95057c4e1894", "<KEY>", "4d87ce5eec5c4f9ab59147726bb844ac", "9ea5cbc94c1e48739b66b5ba52c32a6d", "<KEY>", "2a2131ab65614c10a6a27d6c4a9470d9", "<KEY>", "49c4710cf6b04d9e829fa4d9ba10931f", "<KEY>", "<KEY>", "9c5b5e9e455247ad82cc8f99a0ed5a22", "f3d5281ce2ab4e9d9cde576ee16add86", "2a4229a5bfa042b684c3065efadb9186", "<KEY>", "0b002b5e2e854d5d91f6e069bf1a5d16", "dc9c527adeaa44a1be42c83de205cdb1"]} executionInfo={"status": "ok", "timestamp": 1616109925344, "user_tz": -540, "elapsed": 40204, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}} outputId="996f36f3-2b1b-4f8f-e1b4-20b8541cd3c5"
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
# + [markdown] id="BkDdIkWRr9V-"
# ---
#
#
# + [markdown] id="jyJhWCy4oH4s"
# Iterating and Visualizing the Dataset
# -----------------
#
# We can index a Dataset manually like a list: ``training_data[index]``.
#
# Let's use ``matplotlib`` to visualize some samples from the training data.
#
#
# + id="f2Xi46AToH4s" colab={"base_uri": "https://localhost:8080/", "height": 482} executionInfo={"status": "ok", "timestamp": 1616109930109, "user_tz": -540, "elapsed": 1307, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}} outputId="f8639fe9-d222-494d-def1-fd33229c67ae"
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
    sample_idx = torch.randint(len(training_data), size=(1,)).item()
    img, label = training_data[sample_idx]
    figure.add_subplot(rows, cols, i)
    plt.title(labels_map[label])
    plt.axis("off")
    plt.imshow(img.squeeze(), cmap="gray")
plt.show()
# + [markdown] id="2WRSevQroH4t"
# --------------
#
#
#
# + [markdown] id="6DYdw1peoH4t"
# Creating a Custom Dataset
# ---------------------------------------------------
#
# A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`.
#
# Let's look at an implementation of these functions.
#
# The FashionMNIST images are stored in the ``img_dir`` directory, and the labels are stored separately in a CSV file, ``annotations_file``.
#
# In the next sections we will look at what each of these functions does in detail.
#
#
# + id="o9Jw_x39oH4u" executionInfo={"status": "ok", "timestamp": 1616109972844, "user_tz": -540, "elapsed": 791, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}}
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
    def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
        self.img_labels = pd.read_csv(annotations_file)
        self.img_dir = img_dir
        self.transform = transform
        self.target_transform = target_transform

    def __len__(self):
        return len(self.img_labels)

    def __getitem__(self, idx):
        img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
        image = read_image(img_path)
        label = self.img_labels.iloc[idx, 1]
        if self.transform:
            image = self.transform(image)
        if self.target_transform:
            label = self.target_transform(label)
        sample = {"image": image, "label": label}
        return sample
# + [markdown] id="OkUEyaTKoH4u"
# **__init__**
#
#
# The `__init__` function runs only once, when the Dataset object is instantiated.
#
# It sets up the images, the annotations file, and the transforms applied to them (transforms are covered in the next section).
#
# <br>
#
# Here, the labels.csv file looks like this:
#
# tshirt1.jpg, 0
# tshirt2.jpg, 0
# ......
# ankleboot999.jpg, 9
#
#
# + id="7wo_ipQkoH4v" executionInfo={"status": "ok", "timestamp": 1616110029560, "user_tz": -540, "elapsed": 836, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}}
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
    self.img_labels = pd.read_csv(annotations_file)
    self.img_dir = img_dir
    self.transform = transform
    self.target_transform = target_transform
# + [markdown] id="hYFvaagxoH4v"
# **__len__**
#
#
# The `__len__` function returns the number of samples in the dataset.
#
#
# + id="OCcr5zKVoH4v" executionInfo={"status": "ok", "timestamp": 1616110033063, "user_tz": -540, "elapsed": 769, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}}
def __len__(self):
    return len(self.img_labels)
# + [markdown] id="yz_6ekutoH4v"
# **__getitem__**
#
# The `__getitem__` function loads and returns the sample at the given index ``idx`` from the dataset.
#
# Based on the index, it identifies the path of the image file and converts the image to a tensor with ``read_image``.
#
# It also retrieves the corresponding label from ``self.img_labels``.
#
# It then applies the transform functions to the image and label as needed, and finally returns the image and label in a Python dict.
#
#
# + id="VWAIzFaloH4w" executionInfo={"status": "ok", "timestamp": 1616110108161, "user_tz": -540, "elapsed": 752, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}}
def __getitem__(self, idx):
    img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
    image = read_image(img_path)
    label = self.img_labels.iloc[idx, 1]
    if self.transform:
        image = self.transform(image)
    if self.target_transform:
        label = self.target_transform(label)
    sample = {"image": image, "label": label}
    return sample
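# The map-style protocol itself is plain Python: any object with `__len__` and `__getitem__` behaves this way. A torch-free sketch of the same idea (the class name and toy data are illustrative, not from the original tutorial):

```python
class ToyDataset:
    """Map-style dataset over in-memory (feature, label) pairs."""

    def __init__(self, features, labels):
        assert len(features) == len(labels)
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        # Return the same dict shape as CustomImageDataset above
        return {"image": self.features[idx], "label": self.labels[idx]}

ds = ToyDataset(["img0", "img1", "img2"], [0, 1, 0])
print(len(ds), ds[1])
```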
# + [markdown] id="vchQsP0XoH4w"
# --------------
#
#
#
# + [markdown] id="8K9esMdhoH4x"
# Using the DataLoader
# -------------------------------------------------
#
# A ``Dataset`` lets us retrieve the data and label of a single sample.
#
# During model training, however, we usually want to handle samples in minibatches and reshuffle the data every epoch (to reduce overfitting to the training data).
#
# We would also like to use Python's ``multiprocessing`` to speed up retrieval of multiple samples.
#
# The ``DataLoader`` is an API that makes all of this complex processing easy to run.
#
# + id="EMIQLpQtoH4y" executionInfo={"status": "ok", "timestamp": 1616110176930, "user_tz": -540, "elapsed": 698, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}}
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
# + [markdown] id="klErSZApoH4y"
# Iterating through the DataLoader
# --------------------------
#
# Once the dataset is loaded into the ``DataLoader``, we can iterate through it as needed.
#
# Each iteration below returns a minibatch of ``train_features`` and ``train_labels`` (each minibatch contains 64 samples).
#
# Because we specified ``shuffle=True``, the order of the data is reshuffled after all of it has been iterated over.
#
# <br>
#
# For further details on controlling the data loading order, see [Samplers](https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler).
#
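# The core job of a DataLoader, shuffling and batching, can be sketched in pure Python. This is a simplified illustration only (the function name and toy data are assumptions, and the real DataLoader adds collation, multiprocessing workers, and more):

```python
import random

def simple_loader(dataset, batch_size, shuffle=True, seed=0):
    """Yield minibatches (lists of samples) from a map-style dataset."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)  # randomize the access order
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(10))
batches = list(simple_loader(data, batch_size=4))
print([len(b) for b in batches])
```

# As with the real DataLoader, the last batch may be smaller than batch_size when the dataset size is not a multiple of it.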
# + id="UXzRi_hMoH4y" colab={"base_uri": "https://localhost:8080/", "height": 318} executionInfo={"status": "ok", "timestamp": 1616110234577, "user_tz": -540, "elapsed": 942, "user": {"displayName": "\u5c0f\u5ddd\u96c4\u592a\u90ce", "photoUrl": "", "userId": "06190430902934159529"}} outputId="a4cecd67-7718-4642-e987-35a4065f18e6"
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
# + [markdown] id="Y1aEdIk5oH4z"
# --------------
#
#
#
# + [markdown] id="vXRAreSnoH4z"
# Further reading
# --------------
# The following page may also be helpful:
#
# - [torch.utils.data API](https://pytorch.org/docs/stable/data.html)
#
#
# + [markdown] id="4Y8HTqFi57Ig"
# That's all.
| notebook/0_Learn the Basics/0_2_data_tutorial_jp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Project Repository:** https://github.com/GokulKarthik/deep-learning-projects-pytorch
# +
import os
import sys
import glob
import time
from tqdm.notebook import tqdm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
#from torch.utils.tensorboard import SummaryWriter
#from torchsummary import summary
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
import unicodedata
import string
import copy
# +
#writer = SummaryWriter("runs/word-classification")
# -
# ## 1. Load data
# Download and extract the data zip file from [this link](https://download.pytorch.org/tutorial/data.zip). All the .txt files in the names folder can be used to train this word classification model. I have placed those text files in `data/language-words` directory.
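The download-and-extract step described above can be scripted; below is a minimal sketch. The URL is the one linked above, but the helper function, its name, and the target directory (mirroring the `storage/data/language-words` layout used in the next cell) are assumptions for illustration.

```python
# Hedged sketch: extract the tutorial's data.zip and list the name files.
# The download line is kept commented out so the helper can also be used
# on an already-downloaded archive.
import os
import urllib.request
import zipfile

def extract_names(zip_path, out_dir):
    """Extract every member of zip_path into out_dir; return the .txt files found."""
    os.makedirs(out_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    txt_files = []
    for root, _, files in os.walk(out_dir):
        txt_files += [os.path.join(root, f) for f in files if f.endswith(".txt")]
    return sorted(txt_files)

# Requires network access:
# urllib.request.urlretrieve("https://download.pytorch.org/tutorial/data.zip", "data.zip")
# extract_names("data.zip", os.path.join("storage", "data", "language-words"))
```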
data_folder = os.path.join("storage", "data", "language-words")
file_paths = glob.glob(os.path.join(data_folder, "*.txt"))
print(file_paths)
words_dict = {}
for file_path in file_paths:
    language = file_path.split("/")[3][:-4]
    with open(file_path, "r") as file:
        words = file.readlines()
    words = [word.strip().lower() for word in words]
    words_dict[language] = words
print(words_dict['German'][-10:])
for language, words in words_dict.items():
    print(language, len(words))
# ## 2. Clean text
characters = set()
for language, words in words_dict.items():
    for word in words:
        characters.update(list(word))
characters = sorted(list(characters))
print(characters)
# There are many accented characters in the dataset which have to be transformed into their raw (unaccented) form.
c = list('ñ')
c_normalised = list(unicodedata.normalize("NFD", 'ñ'))
print(c, c_normalised)
characters_normalised = []
for character in characters:
    character_normalised = unicodedata.normalize("NFD", character)[0]
    characters_normalised.append(character_normalised)
print(characters_normalised)
characters_all = list(string.ascii_lowercase + " -',:;")
print(len(characters_all), characters_all)
def clean_word(word):
    cleaned_word = ""
    for character in word:
        for character_raw in unicodedata.normalize('NFD', character):
            if character_raw in characters_all:
                cleaned_word += character_raw
    return cleaned_word
words_dict_cleaned = {}
for language, words in words_dict.items():
    cleaned_words = []
    for word in words:
        cleaned_word = clean_word(word)
        cleaned_words.append(cleaned_word)
    words_dict_cleaned[language] = cleaned_words
print(words_dict['German'][-10:])
print(words_dict_cleaned['German'][-10:])
print(words_dict['Portuguese'][-10:])
print(words_dict_cleaned['Portuguese'][-10:])
print(words_dict['Polish'][-10:])
print(words_dict_cleaned['Polish'][-10:])
words_dict = copy.deepcopy(words_dict_cleaned)
del words_dict_cleaned
# ## 3. Define utilities
num_langs = len(words_dict.keys())
num_chars = len(characters_all)
print(num_langs, num_chars)
max_timesteps = 0
for language, words in words_dict.items():
    for word in words:
        if len(word) > max_timesteps:
            max_timesteps = len(word)
print(max_timesteps)
lang_to_id = {k:v for k, v in zip(sorted(list(words_dict.keys())), range(len(words_dict.keys())))}
print(lang_to_id)
id_to_lang = {v:k for k, v in lang_to_id.items()}
print(id_to_lang)
char_to_id = {k:v for k, v in zip(characters_all, range(len(characters_all)))}
print(char_to_id)
id_to_char = {v:k for k, v in char_to_id.items()}
print(id_to_char)
# ## 4. Split data
words_df = []
for language, words in tqdm(words_dict.items()):
    for word in words:
        words_df.append({"word": word, "language": language})
words_df = pd.DataFrame(words_df)
print(words_df.shape)
words_df.head()
words_df_train, words_df_test = train_test_split(words_df, train_size=0.8, stratify=words_df["language"], random_state=0)
words_df_train = words_df_train.reset_index(drop=True)
words_df_test = words_df_test.reset_index(drop=True)
print(words_df_train.shape)
print(words_df_test.shape)
train_count = words_df_train["language"].value_counts().rename("Train")
test_count = words_df_test["language"].value_counts().rename("Test")
count = pd.concat([train_count, test_count], axis=1, sort=True).T
count.loc["Total", :] = count.sum(axis=0) # add row
count.loc[:, "Total"] = count.sum(axis=1) # add col
count = count.astype("int")
count
# ## 5. Define dataset
# Our PyTorch RNN based model will take input of `Size([mini_batch_size, timesteps, input_size])` and produce output of `Size([mini_batch_size, output_size])`. In order to make uniform timesteps across different words, let us pad with space.
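As a tiny illustration of the pad-then-one-hot encoding just described, here is a sketch on a toy alphabet (the alphabet and padding length are assumptions for illustration; the real notebook uses `characters_all` and `max_timesteps`):

```python
# Pad a word with spaces to a fixed length, then one-hot encode each
# character into a [timesteps, input_size] matrix.
import numpy as np

chars = list("ab ")                          # toy alphabet: 'a', 'b', space
char_to_idx = {c: i for i, c in enumerate(chars)}
max_t = 5                                    # toy fixed number of timesteps
word = "ab".ljust(max_t)                     # "ab   " - padded with spaces
x = np.zeros((max_t, len(chars)))
for i, ch in enumerate(word):
    x[i, char_to_idx[ch]] = 1.0
print(x.shape)  # (5, 3)
```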
class WordDataset(Dataset):
    def __init__(self, words_df):
        self.words_df = words_df
    def __len__(self):
        self.len = len(self.words_df)
        return self.len
    def __getitem__(self, idx):
        row = self.words_df.iloc[idx, :]
        word = row['word'].ljust(max_timesteps)
        x = torch.zeros((max_timesteps, num_chars))  # [timesteps, input_size]
        for i, char in enumerate(word):
            x[i, char_to_id[char]] = 1
        # y = np.zeros(num_langs)  # [output_size]
        # y[lang_to_id[row['language']]] = 1
        y = lang_to_id[row['language']]
        return x, y
train_set = WordDataset(words_df_train)
test_set = WordDataset(words_df_test)
# ## 6. Define dataloader
train_batch_size = 128
test_batch_size = 4
num_cpus = os.cpu_count()
print(num_cpus)
train_loader = DataLoader(train_set, batch_size=train_batch_size, shuffle=True, num_workers=num_cpus)
test_loader = DataLoader(test_set, batch_size=test_batch_size, shuffle=False, num_workers=num_cpus)
train_iter = iter(train_loader)
X, Y = next(train_iter)
print(X.size(), Y.size())
len_train_loader = len(train_loader)
print(len_train_loader)
# ## 7. Define model
hidden_size = 24
num_layers = 2
device = "cuda:0" if torch.cuda.is_available() else "cpu"
device = torch.device(device)
print(device)
class Model(nn.Module):
    def __init__(self, input_size, output_size, hidden_size, num_layers):
        super(Model, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm1 = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, batch_first=True)
        self.fc2 = nn.Linear(in_features=hidden_size, out_features=output_size)
    def forward(self, X):
        batch_size = X.size(0)
        h0 = torch.randn((self.num_layers, batch_size, self.hidden_size)).to(device)
        c0 = torch.randn((self.num_layers, batch_size, self.hidden_size)).to(device)
        out, ht = self.lstm1(X, (h0, c0))  # out -> [batch_size, time_steps, hidden_size]
        outn = out[:, -1, :]               # keep only the last timestep
        outn = outn.contiguous().view(batch_size, self.hidden_size)
        outn = self.fc2(outn)
        return outn
model = Model(input_size=num_chars, output_size=num_langs, hidden_size=hidden_size, num_layers=num_layers)
model = nn.DataParallel(model)
model = model.to(device)
print(model)
#list(model.parameters())
for p in model.parameters():
    print(p.dtype)
# +
#summary(model, input_size=(max_timesteps, num_chars))
# +
#writer.add_graph(model, X)
#writer.close()
# -
# ## 8. Set optimizer
lr = 0.01
step_size = len_train_loader * 4
gamma = 0.95
print(step_size)
alpha = 0.6
weights = len(words_df_train) / (words_df_train['language'].value_counts() ** alpha)
weights = weights / weights.sum()
weights = weights.sort_index()
print(weights)
weights = torch.Tensor(weights).to(device)
print(weights)
criterion = nn.CrossEntropyLoss(weight=weights, reduction="mean")
optimizer = optim.Adam(model.parameters(), lr=lr)
lr_scheduler = optim.lr_scheduler.StepLR(optimizer=optimizer, step_size=step_size, gamma=gamma)
# ## 9. Train model
epochs = 100
print_every_n_epochs = 1
# +
epoch_losses = []
epoch_lrs = []
iteration_losses = []
iteration_lrs = []
for epoch in tqdm(range(1, epochs+1)):
    epoch_loss = 0
    epoch_lr = 0
    for X, Y in tqdm(train_loader, desc="Epoch-{}".format(epoch)):
        X, Y = X.to(device), Y.to(device)
        optimizer.zero_grad()
        Y_pred_logits = model(X)
        loss = criterion(Y_pred_logits, Y)
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        iteration_losses.append(loss.item())
        iteration_lrs.append(lr_scheduler.get_last_lr()[0])
        epoch_loss += loss.item()
        epoch_lr += lr_scheduler.get_last_lr()[0]
    epoch_loss /= len(train_loader)
    epoch_lr /= len(train_loader)
    epoch_losses.append(epoch_loss)
    epoch_lrs.append(epoch_lr)
    if epoch % print_every_n_epochs == 0:
        message = "Epoch:{} Loss:{} LR:{}".format(epoch, epoch_loss, epoch_lr)
        print(message)
# -
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(15, 8))
ax1.plot(epoch_losses, marker="o", markersize=5)
ax1.set_title("Loss")
ax2.plot(epoch_lrs, marker="o", markersize=5)
ax2.set_title("LR")
plt.xlabel("Epochs")
plt.show()
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(15, 8))
ax1.plot(iteration_losses[::10])
ax1.set_title("Loss")
ax2.plot(iteration_lrs[::10])
ax2.set_title("LR")
plt.xlabel("Iterations")
plt.show()
window = 100
plt.figure(figsize=(15, 4))
pd.Series(iteration_losses).rolling(window=window).mean().iloc[window-1:].plot()
plt.show()
path = os.path.join("storage", "models", "language-words", "classifier.pth")
torch.save(model.state_dict(), path)
# ## 10. Test Model
path = os.path.join("storage", "models", "language-words", "classifier.pth")
model = Model(input_size=num_chars, output_size=num_langs, hidden_size=hidden_size, num_layers=num_layers)
model = nn.DataParallel(model)
model.load_state_dict(torch.load(path))
#model = model.to("cpu")
with torch.no_grad():
    Y_train, Y_pred_train = [], []
    for X_mb, Y_mb in tqdm(train_loader):
        out = model(X_mb)
        _, Y_pred_mb = torch.max(out, 1)
        Y_train.extend(Y_mb.numpy().tolist())
        Y_pred_train.extend(Y_pred_mb.cpu().numpy().tolist())
with torch.no_grad():
    Y_test, Y_pred_test = [], []
    for X_mb, Y_mb in tqdm(test_loader):
        out = model(X_mb)
        _, Y_pred_mb = torch.max(out, 1)
        Y_test.extend(Y_mb.numpy().tolist())
        Y_pred_test.extend(Y_pred_mb.cpu().numpy().tolist())
train_accuracy = accuracy_score(Y_train, Y_pred_train)
test_accuracy = accuracy_score(Y_test, Y_pred_test)
print("Train Accuracy: {}".format(train_accuracy))
print("Test Accuracy: {}".format(test_accuracy))
mat = np.array([1, 2, 3])
print(mat.shape)
print(mat[:, np.newaxis].shape)
labels = sorted(list(lang_to_id.keys()))
c_mat_train = confusion_matrix(Y_train, Y_pred_train)
c_mat_train = c_mat_train / c_mat_train.sum(axis=1)[:, np.newaxis]
plt.figure(figsize=(15,5))
sns.heatmap(c_mat_train, annot=True, fmt="0.2f", xticklabels=labels, yticklabels=labels)
plt.title('Train Data')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
c_mat_test = confusion_matrix(Y_test, Y_pred_test)
c_mat_test = c_mat_test / c_mat_test.sum(axis=1)[:, np.newaxis]
plt.figure(figsize=(15, 5))
sns.heatmap(c_mat_test, annot=True, fmt='0.2f', xticklabels=labels, yticklabels=labels)
plt.title('Test Data')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
def compute_accuracies(c_mat):
    # normalise each row by its sum (newaxis keeps the broadcast row-wise)
    accuracies = c_mat.astype('float') / c_mat.sum(axis=1)[:, np.newaxis]
    accuracies = accuracies.diagonal()
    accuracies = {k: v for k, v in zip(labels, accuracies)}
    return accuracies
compute_accuracies(c_mat_train)
compute_accuracies(c_mat_test)
| 2-Multi-Class-Word-Language-Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyzing Trajectory
# ### Running an MD simulation is meaningless without meaningful analyses. MANY packages automate common trajectory measurements, but you will often find that you need to write your own script tailored to exactly what you want to measure.
# ## Below is what I have personally found to work best for me in terms of visualizing my system:
# ### Step 1: Edit your trajectory files to prepare for visualization
# The ioutfm = 1 flag tells AMBER to write the trajectory in NetCDF format (binary, with the typical .nc extension), but some visualization software can't read this format (PyMol, for example; VMD may also fail if you have a newer MacBook Pro that requires the 64-bit VMD build - I think there may be a bug in its NetCDF plugin - though older versions will probably work).
#
# What I usually do is convert my trajectory into a .dcd filetype. It is still binary, but different format. AMBER doesn't actually know how to immediately write trajectory as a .dcd, but if it did I would probably have it do that immediately instead of writing it as a .nc in the first place. Oh well.
#
# To convert a trajectory file into a .dcd:
# + active=""
# cpptraj
#
# > parm $NAME.prmtop #load in parameter file
# > trajin $NAME.nc #load in the trajectory file you want to convert
# > strip :WAT, :Na+, :Cl- #You might want to do this if you have a large system; it will make the file size a lot more reasonable and easier to transfer, and unless you're measuring something about the waters/ions you don't need to keep their coordinates during analyses. In some cases you might keep waters within a certain distance of your protein atoms, or around a specific area. There are ways to specify that in CPPTRAJ.
# > autoimage anchor :1-454 #SUPER important! This will fix any strange visual effects arising from periodic boundary conditions, like bond wrapping. The anchor tells the command which components should be held fixed in position; mobile molecules (like waters and ions) are oriented around that position. I chose all the residues of my protein, to ensure the whole protein is centered. If you don't give an anchor I think the default is the first atom or residue in the file, and this can still lead to visualization artifacts if not all your residues are bonded together - for example if you have a dimer! You also need to be aware that it takes the position of your anchor in the first frame... so if your molecule is centered in that frame then the rest of the trajectory will not be centered in reference to the box. This works fine if you sequentially do this to all your trajectories, starting with traj1 which probably hasn't had the chance to drift much... but if you are just looking at one trajectory you may want to do an alignment (see below).
# > trajout $NAME_stripped.dcd #this tells CPPTRAJ you want your traj converted to a dcd, I always add 'stripped' if I've removed atoms
# > go
# # now cpptraj will begin the conversion, it may take a few seconds, it should show a progress bar
# > exit
#
# Now you should have a new file for your stripped .dcd file.
# -
# ##### If you need to align your structure throughout the trajectory you can also do that with cpptraj:
# + active=""
# cpptraj
#
# > parm $NAME.prmtop #load in parameter file
# > trajin $NAME.nc #load in the trajectory file you want to convert
# > strip :WAT, :Na+, :Cl- #You might want to do this if you have a large system; it will make the file size a lot more reasonable and easier to transfer, and unless you're measuring something about the waters/ions you don't need to keep their coordinates during analyses. In some cases you might keep waters within a certain distance of your protein atoms, or around a specific area. There are ways to specify that in CPPTRAJ.
# >
#
#
#
#
#
# > trajout $NAME_stripped.dcd #this tells CPPTRAJ you want your traj converted to a dcd, I always add 'stripped' if I've removed atoms
# > go
# # now cpptraj will begin the conversion, it may take a few seconds, it should show a progress bar
# > exit
# -
# ### Step 2: Visualize!
# #### For PyMol:
| Trajectory Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/institutohumai/cursos-python/blob/master/AnalisisDeDatos/4_Data_Wrangling_Avanzado/ejercicio/ejercicio.ipynb"> <img src='https://colab.research.google.com/assets/colab-badge.svg' /> </a>
# <div align="center"> Remember to open it in a new tab </div>
# # Exercise: A macroeconomic report on Argentina
#
# The consulting firm "<NAME>" wants to analyze the Argentine market to understand how it has evolved over the last few years. Two key macroeconomic indicators will be analyzed: the **IPC: Consumer Price Index** (to measure inflation) and the exchange rate (the **dollar quotation**).
#
# ## IPC: Consumer Price Index
#
# For more information on the IPC you can visit the following INDEC page: https://www.indec.gob.ar/indec/web/Nivel4-Tema-3-5-31
#
# The IPC series to analyze uses December 2016 as its base, which corresponds to index 100. Prices come at four levels of breakdown (apertura):
#
# * General: price index for the whole basket of goods and services considered in the analysis
#
# * Estacional: goods and services with seasonal behavior. For example: fruits and vegetables
#
# * Regulados: goods and services whose prices are regulated or carry a large tax component. For example: electricity
#
# * Núcleo: the remaining IPC groups
#
# Your boss wants to analyze the behavior of the four breakdown levels of the price index over the years covered by the dataset. She asks you to obtain the annual mean, median, and maximum index for each breakdown level. Then, if possible, plot the yearly evolution of the mean index at the General level.
#
# **Suggested steps:**
#
# 1) Read the IPC data.
#
# 2) Reshape the table so it satisfies the tidy-data definition: each variable must be a column (Apertura, Fecha, Indice).
#
# 3) Convert the date variable to datetime format and extract the year and month.
#
# *Hint*: you will need the format argument of pandas' to_datetime function. This page lists the format codes (directives) needed to parse the dates: https://docs.python.org/es/3/library/datetime.html#strftime-and-strptime-behavior
#
# 4) Compute the mean, median, and maximum index per year for each breakdown level.
#
# 5) Plot.
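As a hedged sketch of step 3, here is the to_datetime pattern on a toy frame; the column names and the format string are assumptions for illustration, not the real dataset's:

```python
# Parse a string date column with an explicit format directive,
# then derive year and month columns from the datetime accessor.
import pandas as pd

toy = pd.DataFrame({"Fecha": ["2016-12-01", "2017-01-01"]})
toy["Fecha"] = pd.to_datetime(toy["Fecha"], format="%Y-%m-%d")
toy["anio"] = toy["Fecha"].dt.year
toy["mes"] = toy["Fecha"].dt.month
print(toy[["anio", "mes"]].values.tolist())  # [[2016, 12], [2017, 1]]
```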
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
# -
# **1) Read the IPC data.**
ipc_df = pd.read_csv('https://datasets-humai.s3.amazonaws.com/datasets/ipc_indec.csv')
# **2) Reshape the table** so it satisfies the tidy-data definition: each variable must be a column (Apertura, Fecha, Indice).
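If the raw IPC table comes in wide form (one column per apertura level), pandas' melt produces the tidy (Fecha, Apertura, Indice) shape. The wide column names below are assumptions for illustration:

```python
# melt: turn one column per apertura level into (Fecha, Apertura, Indice) rows.
import pandas as pd

wide = pd.DataFrame({
    "Fecha": ["2016-12", "2017-01"],
    "General": [100.0, 101.6],
    "Nucleo": [100.0, 101.3],
})
tidy = wide.melt(id_vars="Fecha", var_name="Apertura", value_name="Indice")
print(tidy.shape)  # (4, 3)
```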
# **3)** Convert the **date variable** to datetime format and extract the year and month
#
# **4)** Compute the **mean, median, and maximum index** per year for each breakdown level.
# **5) Plot**
# # Dollar
# The dollar series contains the official buy and sell prices of the currency in Argentina from 2015-06-01 to 2020-08-03, according to the Ámbito Financiero portal.
#
# To continue the report, we want the mean daily quotation (the average of buy and sell) and the monthly median with its corresponding plot. Additionally, we want the top 5 days with the largest percentage increases in the exchange rate over the same window analyzed for the IPC (from 2016-12-01 to 2020-06-30).
#
# **Suggested steps:**
#
# 1) Read the dollar quotation data
#
# 2) Create a variable computing the average of buy and sell per day
#
# 3) Convert the date from a string to a datetime object (to_datetime). Build the year and month variables.
#
# 4) Compute the monthly average and plot (remember to sort the dates in ascending order)
#
# 5) Sort ascending by date, filter the indicated dates, and compute the daily percentage change in the quotation
#
# 6) Find the 5 days with the largest change in the quotation.
# **1)** Read the dollar quotation data
dolar_df = pd.read_csv('https://datasets-humai.s3.amazonaws.com/datasets/dolar_oficial_ambito.csv')
# **2)** Create a variable computing the **average** of buy and sell per day
# **3)** Convert the **date** from a string to a datetime object (to_datetime) and build the year and month variables
# **4)** Compute the monthly average and plot.
# **5)** Sort ascending by date, filter the indicated dates, and compute the daily percentage change in the quotation
#
# To compute the percentage change we use:
#
# $PercentChange = \frac{QuoteToday - QuoteYesterday}{QuoteYesterday} \times 100$
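That formula is exactly what pandas' pct_change computes (times 100); a hedged sketch on a toy series:

```python
# pct_change gives (today - yesterday) / yesterday; multiply by 100 for percent.
import pandas as pd

s = pd.Series([100.0, 110.0, 99.0])
var_pct = s.pct_change() * 100
print([round(v, 6) for v in var_pct.tolist()[1:]])  # [10.0, -10.0]
```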
# **6)** Find the 5 days with the largest change in the quotation.
| AnalisisDeDatos/4_Data_Wrangling_Avanzado/ejercicio/ejercicio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# numpy efficiently deals with numerical multi-dimensional arrays.
import numpy as np
# matplotlib is a plotting library, and pyplot is its easy-to-use module.
import matplotlib.pyplot as pl
# Import tensorflow and keras.
import tensorflow as tf
import keras as kr
import csv
# This just sets the default plot size to be bigger.
pl.rcParams['figure.figsize'] = (16.0, 8.0)
# ## Load the data from the CSV file using the csv module and show the details
# +
# Load the Iris dataset.
# Data from: https://github.com/mwaskom/seaborn-data/blob/master/iris.csv
#First way to load data from irisdata.csv
#iris_data = np.genfromtxt('irisdata.csv', delimiter=',', skip_header=1, usecols=(), dtype=None)
#data = iris_data.transpose()
#iris_data
#second way to load data from irisdata.csv
irisdata = list(csv.reader(open('irisdata.csv')))[1:]
# The inputs are four floats: sepal length, sepal width, petal length, petal width.
inputs = np.array(irisdata)[:,:4].astype(float)
#Set data
#IRIS_TRAINING = "irisdata.csv"
#IRIS_TEST = "irisdata.csv"
#Load datasets
#training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
# filename=IRIS_TRAINING,
#target_dtype=np.int,
# features_dtype = np.float32)
#test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
# filename=IRIS_TEST,
# target_dtype=np.int,
# features_dtype=np.float32)
inputs
# -
# ## Split the data into training and testing
# ### Split the data set into a training set and a testing set. You should investigate the best way to do this, and list any online references used in your notebook. If you wish to, you can write some code to randomly separate the data on the fly.
# +
outputs = np.array(irisdata)[:,4]
# Convert the output strings to ints.
outputs_vals, outputs_ints = np.unique(outputs, return_inverse=True)
# Encode the category integers as binary categorical variables.
outputs_cats = kr.utils.to_categorical(outputs_ints)
# Split the input and output data sets into training and test subsets.
#
inds = np.random.permutation(len(inputs))
train_inds, test_inds = np.array_split(inds, 2)
inputs_train, outputs_train = inputs[train_inds], outputs_cats[train_inds]
inputs_test, outputs_test = inputs[test_inds], outputs_cats[test_inds]
inputs_train
# -
# ## Use Tensorflow to create model
#
# ### Use Tensorflow to create a model to predict the species of Iris from a flower’s sepal width, sepal length, petal width, and petal length.
# +
# Create a neural network.
# this model is sequential
model = kr.models.Sequential()
# Add an initial layer with 4 input nodes, and a hidden layer with 16 nodes.
# more hidden nodes give the model more capacity, which can improve accuracy
model.add(kr.layers.Dense(16, input_shape=(4,)))
# Apply the sigmoid activation function to that layer.
model.add(kr.layers.Activation("sigmoid"))
# Add another layer, connected to the layer with 16 nodes, containing three output nodes.
model.add(kr.layers.Dense(3))
# Use the softmax activation function there.
model.add(kr.layers.Activation("softmax"))
# adding layers makes the network deeper
# the commented-out code below is the raw TensorFlow version of this model
#classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
# hidden_units = [10,20,10],
# n_classes=3,
# model_dir="/tmp/iris_model")
#
#classifier.fit(x=training_set.data,y=training_set.target,steps=2000)
# -
# ## Train the model
#
# ### Use the testing set to train your model.
# +
# Configure the model for training.
# Uses the adam optimizer and categorical cross entropy as the loss function.
# Add in some extra metrics - accuracy being the only one.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Fit the model using our training data.
model.fit(inputs_train, outputs_train, epochs=100, batch_size=1, verbose=1)
#
#accuracy_score = classifier.evaluate(x=test_set.data,y=test_set.target)["accuracy"]
#print('Áccuracy:{0:f}'.format(accuracy_score))
# -
# ## Test the model
#
# ### Use the testing set to test your model, clearly calculating and displaying the error rate.
# +
# Evaluate the model using the test data set.
loss, accuracy = model.evaluate(inputs_test, outputs_test, verbose=1)
# Output the accuracy of the model.
print("\n\nLoss: %6.4f\tAccuracy: %6.4f" % (loss, accuracy))
# Predict the class of a single flower.
prediction = np.around(model.predict(np.expand_dims(inputs_test[0], axis=0))).astype(int)[0]
print("Actual: %s\tEstimated: %s" % (outputs_test[0].astype(int), prediction))
print("That means it's a %s" % outputs_vals[prediction.astype(bool)][0])
# Save the model to a file for later use.
model.save("iris_nn.h5")
# Load the model again with: model = load_model("iris_nn.h5")
| tensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (pytorch)
# language: python
# name: pytorch
# ---
# +
# scratch version, fixed up for Python 3; the frequency list is the same
# one used in the cell below
p = [(str(i), freq) for i, freq in enumerate([301, 302, 303, 304, 303, 302, 304, 304])]
q = p
f = lambda x: x[1]
p = sorted(p, key=f)
code = {x[0]: '' for x in p}
for k in range(len(p) - 1):
    for x in p[0][0]:
        code[x] += '0'
    for x in p[1][0]:
        code[x] += '1'
    p = sorted(p[2:] + [(p[0][0] + p[1][0], p[0][1] + p[1][1])], key=f)
print(q)
print(code)
# +
freqs = [301, 302, 303, 304, 303, 302, 304, 304]
freqs = [(str(i), freq) for i, freq in enumerate(freqs)]
f = lambda x: x[1]
freqs = sorted(freqs, key=f)
freqs
code = { x[0]: '' for x in freqs }
for k in range(len(freqs)-1):
    for x in freqs[0][0]:
        code[x] += '0'
    for x in freqs[1][0]:
        code[x] += '1'
    freqs = sorted(freqs[2:] + [(freqs[0][0]+freqs[1][0], freqs[0][1]+freqs[1][1])], key=f)
print(code)
# -
301
302
303
304
303
302
304
304
class HuffmanNode:
    def __init__(self, char, freq):
        self.char = char
        self.freq = freq
        self.left = None
        self.right = None
    def __eq__(self, other):
        if other is None:
            return NotImplemented
        return self.freq == other.freq
    def __gt__(self, other):
        if other is None:
            return NotImplemented
        return self.freq > other.freq
    def __lt__(self, other):
        if other is None:
            return NotImplemented
        return self.freq < other.freq
# +
import heapq
class Huffman:
    def __init__(self):
        self.heap = []
        self.freqs = {}
        self.codes = {}
    def generate_freqs(self, freqs):
        self.freqs = {}
        for i, freq in enumerate(freqs):
            self.freqs[str(i)] = freq
    def generate_heap(self):
        for key in self.freqs:
            node = HuffmanNode(key, self.freqs[key])
            heapq.heappush(self.heap, node)
    def merge_nodes(self):
        while len(self.heap) > 1:
            node1 = heapq.heappop(self.heap)
            node2 = heapq.heappop(self.heap)
            merged_node = HuffmanNode(None, node1.freq + node2.freq)
            merged_node.left = node1
            merged_node.right = node2
            heapq.heappush(self.heap, merged_node)
    def generate_code_helper(self, root, code):
        if root is None:
            return
        if root.char is not None:
            self.codes[root.char] = code
            return
        self.generate_code_helper(root.left, code + '0')
        self.generate_code_helper(root.right, code + '1')
    def generate_code(self):
        root = heapq.heappop(self.heap)
        code = ''
        self.generate_code_helper(root, code)
# -
freqs = [301, 302, 303, 304, 303, 302, 304, 304]
#freqs = [1, 2, 4, 8, 16]
#freqs = [47, 21, 87, 20, 36, 78, 14]
huffman = Huffman()
huffman.generate_freqs(freqs)
huffman.generate_heap()
huffman.merge_nodes()
huffman.generate_code()
print(huffman.codes)
for key in huffman.codes:
    print(huffman.codes[key])
n1 = HuffmanNode('a', 10)
n2 = HuffmanNode('b', 20)
n1 < n2
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.Crippen import MolLogP
smile = "C(=O)Cl"
iMol = Chem.MolFromSmiles(smile)
for atom in iMol.GetAtoms():
    print(atom.GetSymbol())
    print(atom.GetDegree())
    print(atom.GetTotalNumHs())
    print(atom.GetImplicitValence())
    print(atom.GetIsAromatic())
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Science Project - Sales Forecasting
#
# - Our challenge is to predict the sales we will make in a given period based on the ad spend across the 3 big networks the Hashtag company invests in: TV, Newspaper (Jornal) and Radio
# ### Step by step of a Data Science project
#
# - Step 1: Understand the challenge
# - Step 2: Understand the business/company
# - Step 3: Extract/obtain the data
# - Step 4: Adjust the data (treatment/cleaning)
# - Step 5: Exploratory analysis
# - Step 6: Modeling + algorithms (this is where AI comes in, if needed)
# - Step 7: Interpret the results
# #### Import the dataset
# +
import pandas as pd
# read the csv file
df = pd.read_csv("advertising.csv")
# display the table
display(df)
# -
# #### Exploratory analysis
# - Let's visualize how the values of each variable are distributed
# - Let's look at the correlation between the variables
# +
# import the libraries
import seaborn as sns
import matplotlib.pyplot as plt
# pairplot is one of seaborn's plot types
sns.pairplot(df)
plt.show()
# heatmap of the table's correlation matrix; cmap sets the color scheme and 'annot=True' shows the numbers
sns.heatmap(df.corr(), cmap ='Wistia', annot =True)
# show the plot
plt.show()
# -
# #### With that, we can prepare the data to train the Machine Learning model
#
# - Splitting into training data and test data
# +
from sklearn.model_selection import train_test_split
# the whole table except the 'Vendas' (sales) column
x = df.drop('Vendas', axis=1)
# y is just the 'Vendas' column
y = df['Vendas']
# split the data with 'train_test_split'; pass the dataset, holding out 30% for testing
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state=1)
# -
# #### We have a regression problem - let's choose the models to use:
#
# - Linear Regression
# - RandomForest (decision trees)
# +
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn import metrics
import numpy as np
# linear regression
lin_reg = LinearRegression()
lin_reg.fit(x_train, y_train)
# random forest
rf_reg = RandomForestRegressor()
rf_reg.fit(x_train, y_train)
# -
# #### Testing the AI and evaluating the best model
#
# - We'll use R² -> it tells us the percentage of the behavior that our model can explain
# - We'll also look at the MSE (Mean Squared Error) -> it tells us how much our model "misses" when it tries to make a prediction
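As a quick sanity check of what those two metrics compute, here is a hedged numeric sketch (toy values, assumed for illustration) of MSE = mean((y - ŷ)²) and R² = 1 - SS_res/SS_tot, which is what `metrics.mean_squared_error` and `metrics.r2_score` return:

```python
# Compute MSE and R² by hand on toy true/predicted values.
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 7.5])
mse = np.mean((y_true - y_pred) ** 2)              # mean squared error
ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(mse, 4), round(r2, 4))  # 0.1667 0.9375
```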
# +
# fazendo previsão de teste com o 'predict'
test_pred_lin = lin_reg.predict(x_test)
test_pred_rf = rf_reg.predict(x_test)
# 'score' calcula a métrica quadrada
r2_lin = metrics.r2_score(y_test, test_pred_lin)
# calculando o erro quadrático médio
mse_lin = metrics.mean_squared_error(y_test, test_pred_lin)
print(f"R² da Regressão Linear: {r2_lin}")
print(f"MSE da Regressão Linear: {mse_lin}")
r2_rf= metrics.r2_score(y_test, test_pred_rf)
mse_rf = metrics.mean_squared_error(y_test, test_pred_rf)
print(f"R² do Random Forest: {r2_rf}")
print(f"MSE do Random Forest: {mse_rf}")
# -
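# As a sanity check, both metrics can be computed by hand from their definitions. The numbers below are illustrative toy values, not output from the advertising model:

```python
# Manual R² and MSE, matching sklearn's definitions.
# y_true / y_pred are made-up toy values for illustration.
y_true = [10.0, 12.0, 9.0, 15.0]
y_pred = [11.0, 11.5, 9.5, 14.0]

n = len(y_true)
# MSE: average of the squared prediction errors
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
# R²: 1 minus (residual sum of squares / total sum of squares)
mean = sum(y_true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
ss_tot = sum((t - mean) ** 2 for t in y_true)
r2 = 1 - ss_res / ss_tot

print(round(mse, 3))  # 0.625
print(round(r2, 3))   # 0.881
```

# An R² close to 1 means the model explains most of the variation; a large MSE relative to the scale of the target means the predictions miss by a lot.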
# #### Plotting the Predictions
# +
df_resultado = pd.DataFrame()
# df_resultado.index = x_test
df_resultado['y_teste'] = y_test
df_resultado['y_previsao_rf'] = test_pred_rf
df_resultado['y_previsao_lin'] = test_pred_lin
df_resultado = df_resultado.reset_index(drop=True)
plt.figure(figsize=(15, 5))
# line plot of the actual and predicted values
sns.lineplot(data=df_resultado)
# show the plot
plt.show()
# display the table
display(df_resultado)
# -
# #### How important is each variable to sales?
# +
# importancia_features = pd.DataFrame(rf_reg.feature_importances_, x_train.columns)
plt.figure(figsize=(15, 5))
# bar plot of the importance of TV, radio and newspaper
sns.barplot(x=x_train.columns, y=rf_reg.feature_importances_)
plt.show()
# -
# #### Are we investing in the right channels?
# sum the spending in each column
print(df[["Radio", "Jornal"]].sum())
| programas/ciencia-de-dados/ciencia-de-dados.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="YuSAE1IKO9OF" colab_type="code" colab={}
# !git clone https://github.com/tttslab/tut-asr-voicecommand.git
# %cd tut-asr-voicecommand
# !chmod 744 *.sh
# + id="PhiQJ-5avgeg" colab_type="code" colab={}
# %%shell
DATA_ROOT=data
MODEL_DIR=model
# echo 'start preparing'
date
./prep.sh $DATA_ROOT 2>&1 | tee prep.log
# echo 'start training'
date
start_time=`date +%s`
./train.sh $MODEL_DIR $DATA_ROOT train1p.txt 2>&1 | tee train.log
#./train.sh $MODEL_DIR $DATA_ROOT train20p.txt 2>&1 | tee train.log
#./train.sh $MODEL_DIR $DATA_ROOT train60p.txt 2>&1 | tee train.log
#./train.sh $MODEL_DIR $DATA_ROOT train100p.txt 2>&1 | tee train.log
end_time=`date +%s`
time=$((end_time - start_time))
# echo $time >& train.time.log
# echo 'start evaluating'
date
./eval.sh $MODEL_DIR $DATA_ROOT eval.txt 2>&1 | tee eval.log
date
| exp60p/colab.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ROOT C++
# language: c++
# name: root
# ---
# # Tracking
#
# ## Overview
#
# Class inheritance is used so that the auto-generated file never needs to be modified.
#
# ```shell
# root f8ppac001.root
#
# tree->MakeClass("trackingBase");
# ```
#
# The generated class is named `trackingBase`, i.e. the base class of `Tracking`; the `Tracking` class is then written on top of it.
#
# ## The Tracking class
#
# ### Tracking.h
# ```c++
# // Tracking.h
# #ifndef Tracking_h
# #define Tracking_h
#
# #include "trackingBase.h"
# #include <TROOT.h>
# #include <TChain.h>
# #include <TFile.h>
# #include <TGraph.h>
# #include <TF1.h>
# #include <TH2D.h>
# #include <TFitResult.h>
#
# class Tracking: public trackingBase {
# public :
# Tracking(TTree *tree = 0);
# virtual ~Tracking();
# virtual void Loop(TTree *tree);
# private:
# // anode
# Double_t anode[5];
# // variable for x tracking
# Double_t xx[5], xz[5];
# // variable for y tracking
# Double_t yy[5], yz[5];
# // residual
# Double_t dx[5], dy[5];
# // target position
# Double_t tx, ty, tz;
# // target position after projection
# Double_t ptx, pty;
# // chi2/ndf
# Double_t c2nx, c2ny;
# // number of PPACs used for the track; a non-positive value means the event does not meet the reconstruction criteria
# Int_t numX, numY;
# // combination of the valid PPACs used to reconstruct the track,
# // recorded as a bit mask (bit i set means PPAC i fired)
# Int_t trackX, trackY;
# // fitted track equation (slope k, intercept b)
# Double_t bx, kx;
# Double_t by, ky;
# // positions of the reconstructed track at each detector
# Double_t fxx[5], fyy[5];
#
# // init the variables, return: 0 -- can't track, 1 -- track x, 2 -- track y, 3 -- track x and y
# virtual Int_t trackInit();
# virtual void setBranch(TTree *tree);
# virtual void addTrace(TH2D *h, Double_t k, Double_t b, Int_t minBin, Int_t maxBin);
# virtual Double_t simpleFit(TGraph *g, Double_t &k, Double_t &b);
# };
#
# #endif
# ```
#
# ### Tracking.cpp
# ```c++
# // Tracking.cpp
# #include "Tracking.h"
#
# Tracking::Tracking(TTree *tree)
# : trackingBase(tree) {
# }
#
#
# Tracking::~Tracking() {
# }
#
#
#
# void Tracking::setBranch(TTree *tree) {
# tree->Branch("xx", &xx, "xx[5]/D");
# tree->Branch("xz", &xz, "xz[5]/D");
# tree->Branch("yy", &yy, "yy[5]/D");
# tree->Branch("yz", &yz, "yz[5]/D");
# tree->Branch("anode", &anode, "anode[5]/D");
#
# tree->Branch("dx", &dx, "dx[5]/D");
# tree->Branch("dy", &dy, "dy[5]/D");
#
# tree->Branch("tx", &tx, "tx/D");
# tree->Branch("ty", &ty, "ty/D");
# tree->Branch("tz", &tz, "tz/D");
#
# tree->Branch("ptx", &ptx, "ptx/D");
# tree->Branch("pty", &pty, "pty/D");
#
# tree->Branch("c2nx", &c2nx, "c2nx/D");
# tree->Branch("c2ny", &c2ny, "c2ny/D");
#
# tree->Branch("beamTrig", &beamTrig, "beamTrig/I");
# tree->Branch("must2Trig", &must2Trig, "must2Trig/I");
#
# tree->Branch("targetX", &targetX, "targetX");
# tree->Branch("targetY", &targetY, "targetY");
#
# tree->Branch("numX", &numX, "numX/I");
# tree->Branch("numY", &numY, "numY/I");
#
# tree->Branch("trackX", &trackX, "trackX/I");
# tree->Branch("trackY", &trackY, "trackY/I");
#
# tree->Branch("bx", &bx, "bx/D");
# tree->Branch("kx", &kx, "kx/D");
# tree->Branch("by", &by, "by/D");
# tree->Branch("ky", &ky, "ky/D");
#
# tree->Branch("fxx", &fxx, "fxx[5]/D");
# tree->Branch("fyy", &fyy, "fyy[5]/D");
# }
#
#
# Int_t Tracking::trackInit() {
# tx = -999;
# ty = -999;
#
# for (int i = 0; i != 5; ++i) {
# xx[i] = PPACF8[i][0];
# xz[i] = PPACF8[i][2];
# yy[i] = PPACF8[i][1];
# yz[i] = PPACF8[i][3];
# anode[i] = PPACF8[i][4];
# }
#
# // Decide whether the event qualifies for reconstruction:
# // 1. Loop over all detectors; record how many fired and which combination
# // 2. If only one fired, reject
# // 3. If two fired and both are in PPAC1 or both in PPAC2, reject
# // 4. Keep everything else
#
# // check x
# numX = 0;
# trackX = 0;
# for (int i = 0; i != 4; ++i) {
# if (abs(xx[i]) < 120) {
# numX += 1;
# trackX |= 1 << i;
# }
# }
# // PPAC3
# if (abs(xx[4]) < 50) {
# numX += 1;
# trackX |= 1 << 4;
# }
#
# if (numX == 1) numX = -1;
# if (numX == 2) {
# if ((trackX & 0x3) == 0x3) numX = -2; // both signals in PPAC1
# if ((trackX & 0xC) == 0xC) numX = -2; // both signals in PPAC2
# }
#
#
# // check y
# numY = 0;
# trackY = 0;
# for (int i = 0; i != 4; ++i) {
# if (abs(yy[i]) < 75) {
# numY += 1;
# trackY |= 1 << i;
# }
# }
# // PPAC3
# if (abs(yy[4]) < 50) {
# numY += 1;
# trackY |= 1 << 4;
# }
#
# if (numY == 1) numY = -1;
# if (numY == 2) {
# if ((trackY & 0x3) == 0x3) numY = -2; // both signals in PPAC1
# if ((trackY & 0xC) == 0xC) numY = -2; // both signals in PPAC2
# }
#
#
# Int_t flag = 0;
# if (numX > 0) flag |= 1;
# if (numY > 0) flag |= 2;
# return flag;
# }
#
#
# void Tracking::addTrace(TH2D *h, Double_t k, Double_t b, Int_t minBin, Int_t maxBin) {
# if (h == nullptr) return;
# if (minBin >= maxBin) return;
#
# for (int i = minBin; i < maxBin; i += 10) {
# h->Fill(i, i*k + b); // point on the fitted line x = k*z + b
# }
# return;
# }
#
#
# void Tracking::Loop(TTree *tree) {
# TH2D *htf8xz = new TH2D("htf8xz", "xz track by ppac", 220, -2000, 200, 300, -150, 150);
# TH2D *htf8yz = new TH2D("htf8yz", "yz track by ppac", 220, -2000, 200, 300, -150, 150);
#
# // TFile *opf = new TFile("../data/tracking.root", "recreate");
# // TTree *tree = new TTree("tree", "ppac tracking");
# setBranch(tree);
#
#
#
# if (fChain == 0) return;
#
# Long64_t nentries = fChain->GetEntriesFast();
# Long64_t nbytes = 0, nb = 0;
# for (Long64_t jentry = 0; jentry != nentries; jentry++) {
# // load data
# Long64_t ientry = LoadTree(jentry);
# if (ientry < 0) break;
# nb = fChain->GetEntry(jentry);
# nbytes += nb;
#
#
#
# // x and y tracked
# if (trackInit() != 3) continue;
#
#
#
# // fit part
# TGraph *grx = new TGraph;
# int points = 0;
# for (int i = 0; i != 5; ++i) {
# if (trackX & 1<<i) {
# grx->SetPoint(points, xz[i], xx[i]);
# points++;
# }
# }
# if (sFit) {
# c2nx = simpleFit(grx, kx, bx);
# c2nx = points == 2 ? 0.0 : c2nx / (points-2);
# for (int i = 0; i != 5; ++i) {
# fxx[i] = kx * xz[i] + bx;
# dx[i] = (trackX & 1<<i) ? xx[i] - fxx[i] : -999;
# }
# } else {
# TF1 *fx = new TF1("fx", "pol1", -2000, 100);
# TFitResultPtr r = grx->Fit(fx, "SQ");
# bx = fx->GetParameter(0);
# kx = fx->GetParameter(1);
# // residual and fitted position
# for (int i = 0; i != 5; ++i) {
# fxx[i] = fx->Eval(xz[i]);
# dx[i] = (trackX & 1<<i) ? xx[i] - fxx[i] : -999;
# }
# c2nx = r->Chi2() / r->Ndf();
# delete fx;
# }
# delete grx;
#
# TGraph* gry = new TGraph;
# points = 0;
# for (int i = 0; i != 5; ++i) {
# if (trackY & 1<<i) {
# gry->SetPoint(points, yz[i], yy[i]);
# points++;
# }
# }
# TF1 *fy = new TF1("fy", "pol1", -2000, 100);
# TFitResultPtr r = gry->Fit(fy, "SQ");
# by = fy->GetParameter(0);
# ky = fy->GetParameter(1);
# // residual
# for (int i = 0; i != 5; ++i) {
# fyy[i] = fy->Eval(yz[i]);
# dy[i] = (trackY & 1<<i) ? yy[i] - fyy[i] : -999;
# }
# c2ny = r->Chi2() / r->Ndf();
# delete fy;
# delete gry;
#
#
# // position part
# // target position
# tz = -bx / (kx + 1.0);
# tx = -tz;
# ty = ky * tz + by;
# // projection onto the tilted target plane
# ptx = tx * 1.4142; // tx * sqrt(2.0)
# pty = ty;
#
#
#
# // add trace part
# // set trace
# addTrace(htf8xz, kx, bx, -1800, 100);
# addTrace(htf8yz, ky, by, -1800, 100);
#
#
#
# // fill part
# tree->Fill();
#
#
#
# if (jentry % 100000 == 0) std::cout << jentry << "/" << nentries << std::endl;
# }
#
#
# htf8xz->Write();
# htf8yz->Write();
#
# }
#
# ```
# ### Logic
# #### Variables
# + Signals on each detector: `anode[5], xx[5], xz[5], yy[5], yz[5]`
# + Track equation parameters: `bx, kx, by, ky`
# + Track positions at each detector: `fxx[5], fyy[5]`
# + Residuals `dx[5], dy[5]`; chi-square per degree of freedom `c2nx, c2ny`
# + Coordinates on the target `tx, ty, tz`; projected coordinates `ptx, pty`
# + Number of detectors used to reconstruct the track: `numX, numY`
# + Detector combination, recorded as bit flags: `trackX, trackY`
#
# #### Methods
# + `setBranch` initializes the output tree
# + `trackInit` initializes the variables and decides whether the event qualifies for reconstruction:
# 1. Loop over all detectors; record how many fired and which combination
# 2. If only one fired, reject
# 3. If two fired and both are in PPAC1 or both in PPAC2, reject
# 4. Keep everything else
# + `addTrace` adds the fitted track to a histogram
# + `Loop` is the main tracking method:
# 1. load getEntry
# 2. track init
# 3. fit
# 4. calculate target position
# 5. add trace
# 6. fill
#
#
# ## main
#
# ```c++
# // main.cpp
# #include <cstring>
# #include <iostream>
# #include <TFile.h>
# #include <TString.h>
# #include <TTree.h>
# #include "Tracking.h"
#
# int main(int argc, char **argv) {
# if (argc != 1) {
# std::cout << "usage: ./Tracking" << std::endl;
# return 1;
# }
#
# TString inFile("../data/f8ppac001.root");
# TFile *ipf = new TFile(inFile.Data());
# if (!ipf->IsOpen()) {
# std::cout << "Error opening file " << inFile << std::endl;
# return 1;
# }
# TTree *ipt = (TTree*)ipf->Get("tree");
#
# TFile *opf;
# TTree *opt;
#
# opf = new TFile("../data/tracking.root", "recreate");
# opt = new TTree("tree", "ppac tracking");
# Tracking *tk = new Tracking(ipt);
# tk->Loop(opt);
# opt->Write();
# opf->Close();
#
#
#
# ipf->Close();
#
# return 0;
# }
#
# ```
#
#
# ## Makefile
#
# ```Makefile
# GXX = g++
# OBJS = main.o Tracking.o trackingBase.o
# SRC = main.cpp Tracking.cpp trackingBase.cpp
#
# ROOTCFLAGS = $(shell root-config --cflags)
# INCLUDE = -I ../include
# CFLAGS = -Wall -O3 $(ROOTCFLAGS) $(INCLUDE)
#
# ROOTLIBS = $(shell root-config --libs)
# LIBS = $(ROOTLIBS)
# LDFLAGS = $(LIBS)
#
#
# all:
# make Tracking
#
# Tracking: $(OBJS)
# $(GXX) -o $@ $^ $(LDFLAGS)
# $(OBJS):%.o:%.cpp
# $(GXX) $(CFLAGS) -c $<
#
#
# clean:
# rm *.o Tracking
#
#
# ```
| hw2_1/jupyter/Tracking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # INFO 3350/6350
#
# ## Lecture 05: Sentiment scoring
#
# ## To do
#
# * Readings
# * *HDA* ch. 4 for today
# * Two articles for Weds
# * **Response posts.** See Canvas for schedule and instructions.
# * **Optional** response for *this week*; due by Tuesday at 4pm.
# * Most people will not write a response this week
# * Use if you know you'll miss a later one, or are very interested in this week's topic
# * The readings themselves are *always* required. Read every week. Respond only as scheduled.
# * HW2 due Thursday night at 11:59pm
# * Remember that Ed is the place for questions about any aspect of the course.
# * Follow up in office hours as needed.
# * Classroom for lectures is likely to change. Will announce via Ed if/when it does.
#
# ## What are we doing?
#
# We want to determine the *sentiment* of a text and of the individual sentences from which the text is composed.
#
# "Sentiment" can mean a lot of things:
#
# * Positive and negative feelings
# * Emotional intensity (could be good or bad)
# * Amount or intensity of other emotions (joy, surprise, awe, fear, etc.)
# * Maybe even "sentimentalness" (roughly, "nostalgia")
#
# Today, we'll focus on sentiment as the expression of positive and negative feelings at the token level. This is a common case in many text analysis problems.
#
# ## Supervised and unsupervised
#
# There are two broad ways we could approach the task of sentiment analysis:
#
# * **Unsupervised** methods start with **known-informative features** and produce labels or scores from those features.
# * **Supervised** methods start with **labeled data** and try to learn the features that best predict the labels.
# * Advantages and disadvantages of each
# * In short: upfront labeling costs vs. later validation costs
#
# ## Warning
#
# Every semester, we see a surprising number of student projects that are built around unsupervised sentiment analysis. These projects tend to be weak, since they rely on a method covered in week 3 that has lots of known limitations and many superior alternatives. Do not fall into this trap.
#
# ## Our method
#
# We will work, for now, with **unsupervised** sentiment analysis. But there are lots of supervised approaches, too.
#
# Specifically, we're going to use so-called lexical or dictionary-based methods that assign one or more emotions to a subset of English words. We will assume that each of those words in a given text is an indication that the text contains the corresponding emotion. We can then sum up the emotions over all words in the text to get a measurement of that text's net emotional content.
#
# **Quick exercise:** Do you expect this method to work? What are some potential problems?
#
# ## Example case
#
# * From Jockers' [*Syuzhet* vignette](https://cran.r-project.org/web/packages/syuzhet/vignettes/syuzhet-vignette.html).
# * Note that this is an R package. We can't use it directly.
# * "Syuzhet" = "plot" or "subject" in Russian; it refers to narrative order, rather than to the "true" order of the narrative's underlying events (which is called the *fabula*).
# * To see the difference, think about a flashback that occurs near the end of a story.
#
# Consider the following story (or "story"):
#
# > I begin this story with a neutral statement.
# Basically this is a very silly test.
# You are testing the Syuzhet package using short, inane sentences.
# I am actually very happy today.
# I have finally finished writing this package.
# Tomorrow I will be very sad.
# I won't have anything left to do.
# I might get angry and decide to do something horrible.
# I might destroy the entire package and start from scratch.
# Then again, I might find it satisfying to have completed my first R package.
# Honestly this use of the Fourier transformation is really quite elegant.
# You might even say it's beautiful!
#
# ### Score some sentences ...
#
# By show of hands, how many think each sentence is:
# * Positive
# * Neutral
# * Negative
#
# The sentences:
# 1. **Basically this is a very silly test.**
# 1. **You are testing the Syuzhet package using short, inane sentences.**
# 1. **I have finally finished writing this package.**
# 1. **I won't have anything left to do.**
# 1. **Honestly this use of the Fourier transformation is really quite elegant.**
# +
# Enter eyeballed values from in-class survey ...
# Range [-2, 2] strong negative to strong positive
silly = -0.3
inane = -0.45
finished = 0.5
left = -0.75
fourier = 1.7
# scores for all sentences, including ones not recorded above
human_scores = [
0,
silly,
inane,
2,
finished,
-2,
left,
-2,
-2,
1,
fourier,
1.5
]
print("Human scores by sentence:", human_scores)
print("Overall human sentiment score:", round(sum(human_scores),3))
# -
# **Confidence check:** Does this strike us as a reasonable summary of the overall positive-negative affect of the sample story? If not, why not?
#
# ### Ingest and tokenize the example "story"
#
# Notice that our output data structure is a list of lists. The "outer" list contains sentences. The "inner" lists each contain the tokens in one sentence.
# +
from nltk import sent_tokenize, word_tokenize
# The story. Why triple quotes?
story = '''\
I begin this story with a neutral statement.
Basically this is a very silly test.
You are testing the Syuzhet package using short, inane sentences.
I am actually very happy today.
I have finally finished writing this package.
Tomorrow I will be very sad.
I won't have anything left to do.
I might get angry and decide to do something horrible.
I might destroy the entire package and start from scratch.
Then again, I might find it satisfying to have completed my first R package.
Honestly this use of the Fourier transformation is really quite elegant.
You might even say it's beautiful!'''
# Tokenize the story
tokens = [word_tokenize(sent.lower()) for sent in sent_tokenize(story)]
print("Sentences:", len(tokens))
print("Total tokens:", sum([len(sent) for sent in tokens]))
print("\nSample sentences:", tokens[0:2])
# -
# Quick aside: **list comprehension**:
#
# * It's a compact way to write a for loop and store the result in a list.
# * Good for quick stuff, but not flexible and not very legible.
#
# ```
# tokens = [word_tokenize(sent.lower()) for sent in sent_tokenize(story)]
# ```
#
# is the same as
#
# ```
# tokens = []
# for sent in sent_tokenize(story):
# tokens.append(word_tokenize(sent.lower()))
# ```
#
# ### Set up sentiment dictionaries
#
# We want to compare a couple of them. Specifically, we'll use:
#
# * NLTK's copy of Hu and Liu's lexicon ([paper](https://www.cs.uic.edu/~liub/publications/kdd04-revSummary.pdf) | [dataset](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html))
# * Mohammad's [EmoLex](http://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm) from the Canadian National Research Council (NRC).
# * Crowd-sourced word associations.
# * Some are ... questionable? See HW3.
#
# Are these good and suitable choices for our task? Let's (begin to) find out!
# +
from collections import defaultdict
import nltk
import os
import random
nltk.download('opinion_lexicon') # Need to download this the first time used
# NLTK simple lexicon (from Hu and Liu (2004))
nltk_lexicon = {
'positive' : set(nltk.corpus.opinion_lexicon.positive()), # Why cast to a set?
'negative' : set(nltk.corpus.opinion_lexicon.negative())
}
# Print a sample of the NLTK lexicon
print('NLTK lexicon sample')
for key in nltk_lexicon.keys():
print(f'{key}:', random.sample(tuple(nltk_lexicon[key]), 5))
# NRC EmoLex lexicon (from Mohammad, http://sentiment.nrc.ca/lexicons-for-research/)
# No package for this, just read the data from a local file
emolex_file = os.path.join('..', 'data', 'lexicons', 'emolex.txt')
nrc_lexicon = defaultdict(dict) # Like Counter(), defaultdict eases dictionary creation
with open(emolex_file, 'r') as f:
# emolex file format is: word emotion value
for line in f:
word, emotion, value = line.strip().split()
nrc_lexicon[word][emotion] = int(value)
# Print a sample of the NRC EmoLex lexicon
print('\nNRC lexicon sample')
for key in random.sample(tuple(nrc_lexicon.keys()), 2):
print(f'{key}:', nrc_lexicon[key])
# -
# ### Define scoring function
#
# Set up a function to score a word as positive or negative using either the NLTK or NRC dictionary.
def word_sentiment_score(word, method='nrc', lex=nrc_lexicon):
'''
Takes a word, optional method in ['nrc', 'nltk'], and optional lexicon dictionary.
Returns 1 (if positive), -1 (if negative), 0 (neutral), or None (not in lex).
'''
word = word.lower() # Handle non-case-folded inputs
if method.lower() == 'nrc':
if word in lex: # Only score words that are in the lexicon
pos = lex[word]['positive']
neg = lex[word]['negative']
if pos == neg: # Ties (mostly 0==0) return zero
return 0
elif pos > neg:
return 1
else:
return -1
elif method.lower() == 'nltk':
if word in lex['positive']:
return 1
elif word in lex['negative']:
return -1
else:
raise NameError("Method not in ['nrc', 'nltk']")
return None # If word not in lexicon, return None (not zero). Why do this?
print("NRC 'analyst':", word_sentiment_score('analyst'))
print("NLTK 'analyst':", word_sentiment_score('analyst', method='nltk', lex=nltk_lexicon))
# ### Score our example sentences
#
# For reference, here are our class-crowd-sourced scores:
# Human scores
print("Method: human")
print("Sentence scores:", human_scores)
print("Summary score:", sum(human_scores))
# +
# Calculate and compare dictionary scores
# Scoring methods for input
methods = {
'nltk':nltk_lexicon,
'nrc' :nrc_lexicon
}
# Calculate, record, and print scores
method_scores = {} # To store results
for method in methods:
sentence_scores = [] # Could rewrite next few lines as a list comprehension
for sent in tokens:
sentence_score = 0
for word in sent:
word_score = word_sentiment_score(word, method=method, lex=methods[method])
if word_score != None:
sentence_score += word_score
sentence_scores.append(sentence_score)
method_scores[method] = sentence_scores # Save sentence-level scores
# Print results
print("Method:", method)
print("Sentence scores:", sentence_scores)
print("Summary score:", sum(sentence_scores),'\n')
# +
# Visualize and print the story for discussion
import matplotlib.pyplot as plt
x = list(range(len(tokens)))
fig, ax = plt.subplots(figsize=(12,8))
plt.plot(x, human_scores, '-', c='black', label='human', alpha=0.7, linewidth=3)
plt.plot(x, method_scores['nltk'], '--', c='blue', label='nltk', alpha=0.7, linewidth=3)
plt.plot(x, method_scores['nrc'], '-.', c='red', label='nrc', alpha=0.7, linewidth=3)
plt.legend()
plt.title("Sentiment scores")
plt.show()
print(story)
# -
# ### Discuss
#
# Do these scores make sense?
#
# * We can look at specific instances, like NRC on the first sentence (next code block)
# * **Is this a happy story or not?**
# * Do the summary scores reflect our judgment about that?
# * If not, why not and how could we improve those scores?
# Token-level NRC scores for example sentence 1
for word in tokens[0]:
print(f'{word.ljust(11)}{word_sentiment_score(word)}')
# +
# Use Seaborn to plot data with lowess (local regression) fit
import seaborn as sns
sns.set_context('talk')
fig, ax = plt.subplots(figsize=(12,8))
sns.regplot(x=x, y=human_scores, lowess=True, color='k', label='human')
sns.regplot(x=x, y=method_scores['nrc'], lowess=True, color='r', label='nrc')
sns.regplot(x=x, y=method_scores['nltk'], lowess=True, color='b', label='nltk')
plt.legend()
plt.show()
# -
# ## *Madame Bovary* (Flaubert, 1856/1857)
#
# Expected arc: starts happy, ends sad.
#
# Note that we do not lowercase our text (why not?), nor do we remove stopwords and punctuation (again, why not?). We *could* do both of those things, but would want to be sure that there weren't any name collisions in our text (that is, names that have affective associations when used as common words). We might also want to preserve case and punctuation for other tasks in our processing pipeline, even if we don't need them for our sentiment score.
# Read and tokenize the novel
bovary_file = os.path.join('..', 'data', 'texts', 'F-Flaubert-Madame_Bovary-1857-M.txt')
with open(bovary_file, 'r') as f:
bovary_text = f.read()
bovary = [word_tokenize(sent) for sent in sent_tokenize(bovary_text)]
print("Sentences:", len(bovary))
print("Total tokens:", sum([len(sent) for sent in bovary]))
# Note one small change below: we divide sentence sentiment by sentence length, so that long sentences don't count more than short ones.
# Score it using NLTK and NRC methods
bovary_scores = {}
for method in methods:
sentence_scores = []
for sent in bovary:
sentence_score = 0
for word in sent:
word_score = word_sentiment_score(word, method=method, lex=methods[method])
if word_score != None:
sentence_score += word_score
sentence_scores.append(sentence_score/len(sent))
bovary_scores[method] = sentence_scores
print("Method:", method)
print("Summary score:", round(sum(bovary_scores[method]),2),'\n')
# Plot results with 4th-order polynomial fit (why?)
fig, ax = plt.subplots(figsize=(12,8))
x_bov = [i/len(bovary) for i in range(len(bovary))]
sns.regplot(x=x_bov, y=bovary_scores['nltk'], order=4, scatter=False, color='b', label=None)
sns.regplot(x=x_bov, y=bovary_scores['nltk'], x_bins=20, scatter=True, fit_reg=False, color='b', label='nltk')
sns.regplot(x=x_bov, y=bovary_scores['nrc'], order=4, scatter=False, color='r', label=None)
sns.regplot(x=x_bov, y=bovary_scores['nrc'], x_bins=20, scatter=True, fit_reg=False, color='r', label='nrc')
plt.title(r"Sentiment in $\it{Madame\ Bovary}$")
plt.xlabel("Narrative time")
plt.ylabel("Average binned sentiment")
plt.legend()
plt.show()
# **Discuss:** How would you compare and evaluate the results of these two methods on this text?
#
# ## Where to go from here
#
# * More validation!
# * Is your lexicon good? For this kind of text? Written at this time?
# * Other dictionaries
# * Develop your own?
# * See Jurafsky and Martin for clever and/or complex ideas
# * Example: tag adjectives. Those that appear on either side of the token `and` probably have the same sentiment polarity; those linked by `but` are likely opposites.
# * Lots of embedding-based approaches, too.
# * Other aspects of sentiment/emotion/affect
# * NRC 'anger', 'surprise', 'trust', etc.
# * Other texts and other *types* of text
# * **Combine sentiment with other kinds of text- and sentence-level scoring**
# * Gender, time period (linguistic drift, yikes!), translations of the same text, news coverage of candidates, ...
# * Problem set 3 will try this
# * More ideas?
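# The conjunction heuristic mentioned above can be sketched in a few lines. This is a toy illustration with an invented seed lexicon and invented sentences, not a full implementation:

```python
import re

# Toy version of the conjunction heuristic: adjectives joined by "and"
# tend to share polarity; adjectives joined by "but" tend to be opposites.
# The seed lexicon and example sentences are made up for illustration.
seed = {'elegant': 1, 'horrible': -1}
sentences = [
    "the design is elegant and beautiful",
    "the prose is horrible but honest",
]

inferred = {}
for sent in sentences:
    m = re.search(r'(\w+) (and|but) (\w+)', sent)
    if m is None:
        continue
    a, conj, b = m.groups()
    sign = 1 if conj == 'and' else -1  # "and" keeps polarity, "but" flips it
    if a in seed and b not in seed:
        inferred[b] = seed[a] * sign
    elif b in seed and a not in seed:
        inferred[a] = seed[b] * sign

print(inferred)  # {'beautiful': 1, 'honest': 1}
```

# A real implementation would restrict the pattern to adjectives with a POS tagger and aggregate evidence over a large corpus; see Jurafsky and Martin for details.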
| lectures/lec-05-sentiment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import numpy as np
import ipywidgets as widgets
from glmtools.io.glm import GLMDataset
from datetime import datetime, timedelta
# +
from glmtools.test.common import get_sample_data_path
sample_path = get_sample_data_path()
samples = [
"OR_GLM-L2-LCFA_G16_s20181830433000_e20181830433200_c20181830433231.nc",
"OR_GLM-L2-LCFA_G16_s20181830433200_e20181830433400_c20181830433424.nc",
"OR_GLM-L2-LCFA_G16_s20181830433400_e20181830434000_c20181830434029.nc",
]
samples = [os.path.join(sample_path, s) for s in samples]
# -
# ## Create and plot grids of the sample data included in glmtools
#
# The command below creates grids on the ABI fixed grid at 2 km. The `cmd` is an ordinary shell command - you can run it in a terminal or as part of a shell script.
#
# The domain is a GOES mesoscale sector centered on the provided lon/lat. It is not necessary to provide the center if using the conus or full disk domains.
#
# The start and end times will be inferred from the data files if not provided, and in this case would result in a grid with a single 1-min frame.
#
# Here, we create a 5 min grid. This grid will be empty except for the minute (0433 UTC) corresponding to the data.
# +
import subprocess
import os, glob
import tempfile
tmpdir = tempfile.TemporaryDirectory()
import glmtools
from glmtools.test.common import get_sample_data_path
# glmtools_path = os.path.abspath(glmtools.__path__[0])
glmtools_path = os.path.abspath('.')
# Set the start time and duration
startdate = datetime(2018, 7, 2, 4, 30)
duration = timedelta(0, 60*5)
enddate = startdate+duration
cmd = "python {0}/../examples/grid/make_GLM_grids.py -o {1}"
cmd += " --fixed_grid --split_events --goes_position=east --goes_sector=meso"
cmd += " --ctr_lat=33.5 --ctr_lon=-101.5 --dx=2.0 --dy=2.0"
cmd += " --start={3} --end={4} {2}"
cmd = cmd.format(glmtools_path, tmpdir.name, ' '.join(samples),
startdate.isoformat(), enddate.isoformat())
# print (cmd)
out_bytes = subprocess.check_output(cmd.split())
# print(out_bytes.decode('utf-8'))
grid_dir_base = tmpdir.name
nc_files = glob.glob(os.path.join(grid_dir_base, startdate.strftime('%Y/%b/%d'),'*.nc'))
# print(nc_files)
# +
# print(grid_dir_base)
# + language="bash"
# # ls /var/folders/sp/7k9p40wj1x9fdvwjbdrs4ffm0000gn/T/tmp7f3_aeir/2018/Jul/02
# # ncdump -h /var/folders/sp/7k9p40wj1x9fdvwjbdrs4ffm0000gn/T/tmp7f3_aeir/2018/Jul/02/GLM-00-00_20180702_043000_300_1src_056urad-dx_flash_area_min.nc
# -
# **Run the cell below to see the log file created by the gridding process.**
# + language="bash"
#
# # cat make_GLM_grid.log
# -
# ## Grab and load the files for each grid type
# +
from lmatools.grid.grid_collection import LMAgridFileCollection
# 056 is 2 km resolution
res = '056'
dtdx_base = '{0}_1src_{1}urad-dx'.format(int(duration.total_seconds()), res)
glm_init_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_flash_init.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_fed_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_flash_extent.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_ed_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_source.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_energy_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_total_energy.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_foot_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_footprint.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_grinit_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_group_init.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_ged_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_group_extent.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_grfoot_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_group_area.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
glm_minarea_files = glob.glob(os.path.join(grid_dir_base, '{0}/*_{1}_flash_area_min.nc'.format(startdate.strftime('%Y/%b/%d'), dtdx_base)))
display_params = {}
from matplotlib.cm import get_cmap
from matplotlib.colors import LogNorm, Normalize
glm_cmap = get_cmap('viridis')
glm_cmap._init()
alphas = np.linspace(1.0, 1.0, glm_cmap.N+3)
glm_cmap._lut[:,-1] = alphas
glm_cmap._lut[0,-1] = 0.0
glm_flctr_grids = LMAgridFileCollection(glm_init_files, 'flash_centroid_density',
x_name='x', y_name='y', t_name='time')
display_params[glm_flctr_grids] = {
'product_label':"GLM Flash Centroid Density (count)",
'glm_norm':Normalize(vmin=0, vmax=5),
'file_tag':'flash_init'
}
glm_fed_grids = LMAgridFileCollection(glm_fed_files, 'flash_extent_density',
x_name='x', y_name='y', t_name='time')
display_params[glm_fed_grids] = {
'product_label':"GLM Flash Extent Density (count)",
'glm_norm':LogNorm(vmin=0.9, vmax=48),
'file_tag':'flash_extent',
}
glm_ged_grids = LMAgridFileCollection(glm_ged_files, 'group_extent_density',
x_name='x', y_name='y', t_name='time')
display_params[glm_ged_grids] = {
'product_label':"GLM Group Extent Density (count)",
'glm_norm':Normalize(vmin=0, vmax=200),
'file_tag':'group_extent',
}
glm_ed_grids = LMAgridFileCollection(glm_ed_files, 'event_density',
x_name='x', y_name='y', t_name='time')
display_params[glm_ed_grids] = {
'product_label':"GLM Event Density (count)",
'glm_norm':Normalize(vmin=0, vmax=200),
'file_tag':'event_density',
}
glm_energy_grids = LMAgridFileCollection(glm_energy_files, 'total_energy',
x_name='x', y_name='y', t_name='time')
display_params[glm_energy_grids] = {
'product_label':"GLM Total Energy (J)",
'glm_norm':LogNorm(vmin=1e-18, vmax=1e-12),
'file_tag':'total_energy'
}
# display_params[glm_energy_grids]['glm_cmap'].set_bad('w',0)
glm_flarea_grids = LMAgridFileCollection(glm_foot_files, 'average_flash_area', x_name='x', y_name='y', t_name='time')
display_params[glm_flarea_grids] = {
'product_label':"GLM Average Flash Area (km^2)",
'glm_norm':LogNorm(vmin=32, vmax=.5e4),
'file_tag':'flash_area'
}
glm_grarea_grids = LMAgridFileCollection(glm_grfoot_files, 'average_group_area', x_name='x', y_name='y', t_name='time')
display_params[glm_grarea_grids] = {
'product_label':"GLM Average Group Area (km^2)",
'glm_norm':LogNorm(vmin=32, vmax=.5e4),
'file_tag':'group_area'
}
glm_minflarea_grids = LMAgridFileCollection(glm_minarea_files, 'minimum_flash_area', x_name='x', y_name='y', t_name='time')
display_params[glm_minflarea_grids] = {
'product_label':"GLM Minimum Flash Area (km^2)",
'glm_norm':LogNorm(vmin=32, vmax=.5e4),
'file_tag':'flash_area_min'
}
# -
# ## Make an interactive plot with a map background
# +
# Plot this grid
# glm_grids = glm_fed_grids
# glm_grids = glm_flarea_grids
glm_grids = glm_minflarea_grids
# Grab the needed parameters for plotting
glm_norm = display_params[glm_grids]['glm_norm']
product_label = display_params[glm_grids]['product_label']
file_tag = display_params[glm_grids]['file_tag']
# -
print(glm_grids._filenames)
# +
label_string = """
{1} (max {0:3.0f})"""
# %matplotlib notebook
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import numpy as np
glm_times = glm_grids.times
glm_times.sort()
geoproj = ccrs.Geodetic()
def plot(t, fig):
fig.clf()
t_glm = t['new']
glmx, glmy, glm, glm_nc= glm_grids.data_for_time(t_glm, return_nc=True)
proj_var = glm_nc.variables['goes_imager_projection']
x = glmx * proj_var.perspective_point_height
y = glmy * proj_var.perspective_point_height
glm_xlim = x.min(), x.max()
glm_ylim = y.min(), y.max()
# Use a masked array instead of messing with colormap to get transparency
# glm = np.ma.array(glm, mask=(glm == 0))
# glm_alpha = .5 + glm_norm(glm)*0.5
state_boundaries = cfeature.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lakes',
scale='50m', facecolor='none')
globe = ccrs.Globe(semimajor_axis=proj_var.semi_major_axis, semiminor_axis=proj_var.semi_minor_axis)
proj = ccrs.Geostationary(central_longitude=proj_var.longitude_of_projection_origin,
satellite_height=proj_var.perspective_point_height, globe=globe)
ax = fig.add_subplot(1, 1, 1, projection=proj)
glm_img = ax.imshow(glm, extent=(x.min(), x.max(),
y.min(), y.max()),
# transform = ccrs.PlateCarree(),
cmap=glm_cmap, interpolation='nearest',
norm=glm_norm)#, alpha=0.8)
ax.coastlines('10m', color='red')
ax.add_feature(state_boundaries, edgecolor='red')
# Match the GLM grid limits, in fixed grid space
ax.set_xlim(glm_xlim)
ax.set_ylim(glm_ylim)
# Set a lat/lon box directly
# ax.set_extent([-103, -99.5, 31.0, 34.0])
limits = ax.axis()
ax.text(limits[0]+.02*(limits[1]-limits[0]), limits[2]+.02*(limits[3]-limits[2]),
t_glm.isoformat().replace('T', ' ')+' UTC'+
label_string.format(glm.max(), product_label),
# transform = proj,
color=(0.3, 0.0, 0.0))
cbar = plt.colorbar(glm_img, orientation='vertical')
# Make the colorbar position match the height of the Cartopy axes
# Have to draw to force layout so that the ax position is correct
fig.canvas.draw()
posn = ax.get_position()
cbar.ax.set_position([posn.x0 + posn.width + 0.01, posn.y0,
0.04, posn.height])
fig = plt.figure(figsize=(10,10))
def resize_colorbar(event):
ax, cbar_ax = fig.axes[0], fig.axes[1]
posn = ax.get_position()
cbar_ax.set_position([posn.x0 + posn.width + 0.01, posn.y0,
0.04, posn.height])
fig.canvas.mpl_connect('resize_event', resize_colorbar)
from functools import partial
plot = partial(plot, fig=fig)
from ipywidgets import widgets
time_slider = widgets.SelectionSlider(options=glm_times)
time_slider.observe(plot, 'value')
display(time_slider)
time_slider.value = time_slider.options[3]
# -
# # Save an image of each frame
#
# The code below loops through the slider positions and saves a figure at each step.
# The cell below that displays one of those images.
# print(grid_dir_base)
# print(file_tag)
images_out = []
for option in time_slider.options[:]:
time_slider.value = option
resize_colorbar('foo')
outfile=grid_dir_base + 'GLM_{1}_{0}.png'.format(option, file_tag)
outfile = outfile.replace(' ', '_')
outfile = outfile.replace(':', '')
images_out.append(outfile)
fig.savefig(outfile)
# print(images_out)
from IPython.display import Image
Image(images_out[3])
# Source: examples/glm_test_data_new_grid_dev.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# language: ''
# name: pysparkkernel
# ---
# # Spark DataFrames
#
# Use Spakr DataFrames rather than RDDs whenever possible. In general, Spark DataFrames are more performant, and the performance is consistent across differnet languagge APIs. Unlike RDDs which are executed on the fly, Spakr DataFrames are compiled using the Catalyst optimiser and an optimal execution path executed by the engine. Since all langugaes compile to the same execution code, there is no difference across languages (unless you use user-defined funcitons UDF).
# Start spark
# + language="spark"
# -
spark.version
# Import native spark functions
import pyspark.sql.functions as F
# Import variables to specify schema
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
# ## RDDs and DataFrames
data = [('ann', 'spring', 'math', 98),
('ann', 'fall', 'bio', 50),
('bob', 'spring', 'stats', 100),
('bob', 'fall', 'stats', 92),
('bob', 'summer', 'stats', 100),
('charles', 'spring', 'stats', 88),
('charles', 'fall', 'bio', 100)
]
rdd = sc.parallelize(data)
rdd.take(3)
df = spark.createDataFrame(rdd, ['name', 'semester', 'subject', 'score'])
df.show()
df.show(3)
df.rdd.take(3)
df.describe()
# ## Conversion to and from pandas
#
# Make sure your data set can fit into memory before converting to `pandas`.
pdf = df.toPandas()
pdf
spark.createDataFrame(pdf).show()
# ## Schemas
df.printSchema()
# ## Data manipulation
# ### Selecting columns
df.select(['name', 'subject', 'score']).show()
# ### Filtering rows
df.filter(df['score'] > 90).show()
# ### Mutating values
# Using select
df.select(df['name'], df['semester'], df['subject'], df['score'],
(df['score'] - 10).alias('adj_score')).show()
# Using `withColumn`
df.withColumn('half_score', df['score']/2).show()
# ### Sorting
df.sort(df['score']).show()
df.sort(df['score'].desc()).show()
# Alternative syntax
df.sort(df.score.desc()).show()
# ### Summarizing
df.agg(
{'score': 'mean'}
).show()
df.agg(
F.mean(df.score).alias('avg'),
F.min(df.score).alias('min'),
F.max(df.score).alias('max')
).show()
# ### Split-Apply-Combine
df.groupby('name').agg({'score': 'mean', 'subject': 'count'}).show()
# ### Join
meta = [('ann', 'female', 23),
('bob', 'male', 19),
('charles', 'male', 22),
        ('david', 'male', 23)
]
schema = StructType([
StructField('name', StringType(), True),
StructField('sex', StringType(), True),
StructField('age', IntegerType(), True)
])
df_meta = spark.createDataFrame(meta, schema)
df_meta.printSchema()
df_meta.show()
df.join(df_meta, on='name', how='inner').show()
df_full = df.join(df_meta, on='name', how='rightouter')
df_full.drop()  # note: drop() with no column names is a no-op, and the returned DataFrame is discarded
df_full.groupby('sex').mean().show()
df_full.groupby('sex').pivot('subject').agg(F.mean('age')).show()
(
df_full.
dropna().
groupby('sex').
pivot('subject').
agg(F.mean('age')).
show()
)
# ## Using SQL
df_full.createOrReplaceTempView('table')
# ### Select columns
spark.sql('select name, age from table').show()
# ### Filter rows
spark.sql('select name, age from table where age > 20').show()
# ### Mutate
spark.sql('select name, age + 2 as adj_age from table').show()
# ### Sort
spark.sql('select name, age from table order by age desc').show()
# ### Summary
spark.sql('select mean(age) from table').show()
# ### Split-apply-combine
#
q = '''
select sex, mean(score), min(age)
from table
group by sex
'''
spark.sql(q).show()
# ## Using SQL magic
# + language="sql"
#
# select sex, mean(score), min(age)
# from table
# group by sex
# -
# ### Capture output locally (i.e., not sent to Livy and the cluster)
# + magic_args="-q -o df1" language="sql"
#
# select sex, score, age from table
# +
# %%local
# %matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(x='age', y='score', data=df1)
plt.show()
# -
# ## User Defined Functions (UDFs) versus `pyspark.sql.functions`
# **Version 1**: Using a User Defined Function (UDF)
#
# Note: Using a Python UDF is not efficient.
@F.udf
def score_to_grade(g):
if g > 90:
return 'A'
elif g > 80:
return 'B'
else:
return 'C'
df.withColumn('grade', score_to_grade(df['score'])).show()
# **Version 2**: Using built-in functions.
#
# See [list of functions](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#module-pyspark.sql.functions) available.
#
# More performant version
score_to_grade_fast = (
F.
when(F.col('score') > 90, 'A').
when(F.col('score') > 80, 'B').
otherwise('C')
)
df.withColumn('grade_fast', score_to_grade_fast).show()
# ### Vectorized UDFs
#
# `pyspark` 2.3 added support for vectorized UDFs, which can make Python functions that use `numpy` or `pandas` functionality much faster. Unfortunately, the Docker version of `pyspark` 2.2 does not support vectorized UDFs.
#
# If you have access to `pysark 2.3` see [Introducing Pandas UDF for PySpark: How to run your native Python code with PySpark, fast](https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html) and the linked [benchmarking notebook](https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/1281142885375883/2174302049319883/7729323681064935/latest.html).
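# The pandas-UDF speedup comes from operating on a whole pandas Series per
# batch instead of calling a Python function once per row. Since `pyspark`
# may not be importable in every environment, here is a sketch of the same
# idea in plain pandas/NumPy, reusing the grade thresholds from
# `score_to_grade` above:
# +
import numpy as np
import pandas as pd

scores = pd.Series([98, 50, 100, 92, 100, 88, 100])

# Row at a time: the interpreter calls the Python function once per element,
# which is what a plain (non-vectorized) UDF does on each executor.
def grade(g):
    if g > 90:
        return 'A'
    elif g > 80:
        return 'B'
    return 'C'

row_wise = scores.apply(grade)

# Vectorized: one call operates on the whole Series at once, the idea a
# pandas UDF exploits by receiving a pandas Series per batch.
vectorized = pd.Series(np.select([scores > 90, scores > 80], ['A', 'B'], default='C'))

assert (row_wise == vectorized).all()
# -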
# ## I/O options
# ### CSV
df_full.write.mode('overwrite').option('header', 'true').csv('foo.csv')
foo = spark.read.option('header', 'true').csv('foo.csv')
foo.show()
# ### JSON
df_full.write.mode('overwrite').json('foo.json')
foo = spark.read.json('foo.json')
foo.show()
# ### Parquet
#
# This is an efficient columnar store.
df_full.write.mode('overwrite').parquet('foo.parquet')
foo = spark.read.parquet('foo.parquet')
foo.show()
# ## Random numbers
foo.withColumn('uniform', F.rand(seed=123)).show()
foo.withColumn('normal', F.randn(seed=123)).show()
# ## Indexing with row numbers
#
# Note that `monotonically_increasing_id` works over partitions, so while numbers are guaranteed to be unique and increasing, they may not be consecutive.
foo.withColumn('index', F.monotonically_increasing_id()).show()
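# Per the Spark docs, `monotonically_increasing_id` packs the partition ID
# into the upper 31 bits and the within-partition record number into the
# lower 33 bits, which is why the IDs jump between partitions. A pure-Python
# sketch of that layout:
# +
def simulated_ids(rows_per_partition):
    """Reproduce the monotonically_increasing_id bit layout for a list of partition sizes."""
    ids = []
    for partition_id, n_rows in enumerate(rows_per_partition):
        for offset in range(n_rows):
            ids.append((partition_id << 33) | offset)
    return ids

ids = simulated_ids([3, 2])
assert ids == sorted(set(ids))              # unique and increasing...
assert ids == [0, 1, 2, 2**33, 2**33 + 1]   # ...but not consecutive across partitions
# -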
# ## Example: Word counting in a DataFrame
# There are 2 text files in the `/data/texts` directory. We will read both in at once.
# +
hadoop = sc._jvm.org.apache.hadoop
fs = hadoop.fs.FileSystem
conf = hadoop.conf.Configuration()
path = hadoop.fs.Path('/data/texts')
for f in fs.get(conf).listStatus(path):
    print(f.getPath())
# -
text = spark.read.text('/data/texts')
text.show(10)
# Remove blank lines
text = text.filter(text['value'] != '')
text.show(10)
# ### Using built-in functions to process a column of strings
#
# Note: This is more efficient than using a Python UDF.
# +
from string import punctuation
def process(col):
col = F.lower(col) # convert to lowercase
col = F.translate(col, punctuation, '') # remove punctuation
    col = F.trim(col) # remove leading and trailing blank space
    col = F.split(col, r'\s') # split on whitespace
    col = F.explode(col) # give each item in the split its own row
return col
# -
words = text.withColumn('value', process(text.value))
words.show(20)
counts = words.groupby('value').count()
counts.show(20)
counts.sort(counts['count'].desc()).show(20)
counts.sort(counts['count']).show(20)
# Source: notebooks/.ipynb_checkpoints/S15C_Spark_DataFrames-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="2EtmGlmK4pAa"
# # DCN on Criteo Ad Dataset in TF 2.x
# + id="0pxSU24FbSAy"
# !pip install tensorflow==2.5.0
# + colab={"base_uri": "https://localhost:8080/"} id="QnFg_wXiclo4" executionInfo={"status": "ok", "timestamp": 1637061269255, "user_tz": -330, "elapsed": 135691, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="6cc6fdd1-38da-459c-ea28-576e6c5492b1"
# !pip install -q -U kaggle
# !pip install --upgrade --force-reinstall --no-deps kaggle
# !mkdir ~/.kaggle
# !cp /content/drive/MyDrive/kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# !kaggle datasets download -d mrkmakr/criteo-dataset
# + colab={"base_uri": "https://localhost:8080/"} id="PZYBD38Ad5j2" executionInfo={"status": "ok", "timestamp": 1637061804413, "user_tz": -330, "elapsed": 325474, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="efff747a-58f1-49d5-bb09-51875616aaa0"
# !unzip criteo-dataset.zip
# + id="BZwknC3Gd8Qg"
import os
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder, KBinsDiscretizer
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Layer, Input, ReLU
from tensorflow.keras.layers import Dense, Embedding, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.losses import binary_crossentropy
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import AUC
# + id="GDRfYvu4e4mO"
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
file = 'dac/train.txt'
read_part = True
sample_num = 10000
test_size = 0.2
embed_dim = 8
dnn_dropout = 0.5
hidden_units = [256, 128, 64]
learning_rate = 0.001
batch_size = 4096
epochs = 10
# + id="9zzb1WXIet8A"
def sparseFeature(feat, feat_num, embed_dim=4):
"""
create dictionary for sparse feature
:param feat: feature name
:param feat_num: the total number of sparse features that do not repeat
:param embed_dim: embedding dimension
:return:
"""
return {'feat_name': feat, 'feat_num': feat_num, 'embed_dim': embed_dim}
def denseFeature(feat):
"""
create dictionary for dense feature
:param feat: dense feature name
:return:
"""
return {'feat_name': feat}
# + id="5NnOfRIQerQh"
def create_criteo_dataset(file, embed_dim=8, read_part=True, sample_num=100000, test_size=0.2):
"""
    An example of creating the Criteo dataset.
:param file: dataset's path
:param embed_dim: the embedding dimension of sparse features
:param read_part: whether to read part of it
:param sample_num: the number of instances if read_part is True
:param test_size: ratio of test dataset
:return: feature columns, train, test
"""
names = ['label', 'I1', 'I2', 'I3', 'I4', 'I5', 'I6', 'I7', 'I8', 'I9', 'I10', 'I11',
'I12', 'I13', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11',
'C12', 'C13', 'C14', 'C15', 'C16', 'C17', 'C18', 'C19', 'C20', 'C21', 'C22',
'C23', 'C24', 'C25', 'C26']
if read_part:
data_df = pd.read_csv(file, sep='\t', iterator=True, header=None,
names=names)
data_df = data_df.get_chunk(sample_num)
else:
data_df = pd.read_csv(file, sep='\t', header=None, names=names)
sparse_features = ['C' + str(i) for i in range(1, 27)]
dense_features = ['I' + str(i) for i in range(1, 14)]
features = sparse_features + dense_features
data_df[sparse_features] = data_df[sparse_features].fillna('-1')
data_df[dense_features] = data_df[dense_features].fillna(0)
# Bin continuous data into intervals.
est = KBinsDiscretizer(n_bins=100, encode='ordinal', strategy='uniform')
data_df[dense_features] = est.fit_transform(data_df[dense_features])
for feat in sparse_features:
le = LabelEncoder()
data_df[feat] = le.fit_transform(data_df[feat])
# ==============Feature Engineering===================
# ====================================================
feature_columns = [sparseFeature(feat, int(data_df[feat].max()) + 1, embed_dim=embed_dim)
for feat in features]
train, test = train_test_split(data_df, test_size=test_size)
train_X = train[features].values.astype('int32')
train_y = train['label'].values.astype('int32')
test_X = test[features].values.astype('int32')
test_y = test['label'].values.astype('int32')
return feature_columns, (train_X, train_y), (test_X, test_y)
# + id="m6L1wMGGeCGE"
class CrossNetwork(Layer):
def __init__(self, layer_num, reg_w=1e-6, reg_b=1e-6):
"""CrossNetwork
:param layer_num: A scalar. The depth of cross network
:param reg_w: A scalar. The regularizer of w
:param reg_b: A scalar. The regularizer of b
"""
super(CrossNetwork, self).__init__()
self.layer_num = layer_num
self.reg_w = reg_w
self.reg_b = reg_b
def build(self, input_shape):
dim = int(input_shape[-1])
self.cross_weights = [
self.add_weight(name='w_' + str(i),
shape=(dim, 1),
initializer='random_normal',
regularizer=l2(self.reg_w),
trainable=True
)
for i in range(self.layer_num)]
self.cross_bias = [
self.add_weight(name='b_' + str(i),
shape=(dim, 1),
initializer='random_normal',
regularizer=l2(self.reg_b),
trainable=True
)
for i in range(self.layer_num)]
def call(self, inputs, **kwargs):
x_0 = tf.expand_dims(inputs, axis=2) # (batch_size, dim, 1)
x_l = x_0 # (None, dim, 1)
for i in range(self.layer_num):
            x_l1 = tf.tensordot(x_l, self.cross_weights[i], axes=[1, 0])  # (batch_size, 1, 1)
x_l = tf.matmul(x_0, x_l1) + self.cross_bias[i] + x_l # (batch_size, dim, 1)
x_l = tf.squeeze(x_l, axis=2) # (batch_size, dim)
return x_l
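# As an equation, the loop in `CrossNetwork.call` computes the Deep & Cross
# Network cross layer (a sketch matching the shape annotations above):
#
#     $x_{l+1} = x_0 \, (x_l^\top w_l) + b_l + x_l$
#
# where $x_0$ is the (dim, 1) input column, $w_l$ and $b_l$ are the layer-$l$
# weight and bias vectors, and the trailing $+\,x_l$ is a residual connection
# that keeps gradients flowing through deep stacks.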
class DNN(Layer):
def __init__(self, hidden_units, activation='relu', dropout=0.):
"""Deep Neural Network
:param hidden_units: A list. Neural network hidden units.
:param activation: A string. Activation function of dnn.
:param dropout: A scalar. Dropout number.
"""
super(DNN, self).__init__()
self.dnn_network = [Dense(units=unit, activation=activation) for unit in hidden_units]
self.dropout = Dropout(dropout)
def call(self, inputs, **kwargs):
x = inputs
for dnn in self.dnn_network:
x = dnn(x)
x = self.dropout(x)
return x
# + id="K09g18Cmgxrf"
class DCN(Model):
def __init__(self, feature_columns, hidden_units, activation='relu',
dnn_dropout=0., embed_reg=1e-6, cross_w_reg=1e-6, cross_b_reg=1e-6):
"""
Deep&Cross Network
:param feature_columns: A list. sparse column feature information.
:param hidden_units: A list. Neural network hidden units.
:param activation: A string. Activation function of dnn.
:param dnn_dropout: A scalar. Dropout of dnn.
:param embed_reg: A scalar. The regularizer of embedding.
:param cross_w_reg: A scalar. The regularizer of cross network.
:param cross_b_reg: A scalar. The regularizer of cross network.
"""
super(DCN, self).__init__()
self.sparse_feature_columns = feature_columns
self.layer_num = len(hidden_units)
self.embed_layers = {
'embed_' + str(i): Embedding(input_dim=feat['feat_num'],
input_length=1,
output_dim=feat['embed_dim'],
embeddings_initializer='random_uniform',
embeddings_regularizer=l2(embed_reg))
for i, feat in enumerate(self.sparse_feature_columns)
}
self.cross_network = CrossNetwork(self.layer_num, cross_w_reg, cross_b_reg)
self.dnn_network = DNN(hidden_units, activation, dnn_dropout)
self.dense_final = Dense(1, activation=None)
def call(self, inputs, **kwargs):
sparse_inputs = inputs
sparse_embed = tf.concat([self.embed_layers['embed_{}'.format(i)](sparse_inputs[:, i])
for i in range(sparse_inputs.shape[1])], axis=-1)
x = sparse_embed
# Cross Network
cross_x = self.cross_network(x)
# DNN
dnn_x = self.dnn_network(x)
# Concatenate
total_x = tf.concat([cross_x, dnn_x], axis=-1)
outputs = tf.nn.sigmoid(self.dense_final(total_x))
return outputs
def summary(self):
sparse_inputs = Input(shape=(len(self.sparse_feature_columns),), dtype=tf.int32)
Model(inputs=sparse_inputs, outputs=self.call(sparse_inputs)).summary()
# + colab={"base_uri": "https://localhost:8080/"} id="dHIfDXsXePmP" executionInfo={"status": "ok", "timestamp": 1637064295163, "user_tz": -330, "elapsed": 13519, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="6e4d52f6-872a-4ccb-86ab-9b4a03bb75d0"
# ========================== Create dataset =======================
feature_columns, train, test = create_criteo_dataset(file=file,
embed_dim=embed_dim,
read_part=read_part,
sample_num=sample_num,
test_size=test_size)
train_X, train_y = train
test_X, test_y = test
# ============================Build Model==========================
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = DCN(feature_columns, hidden_units, dnn_dropout=dnn_dropout)
model.summary()
# =========================Compile============================
model.compile(loss=binary_crossentropy, optimizer=Adam(learning_rate=learning_rate),
metrics=[AUC()])
# ============================model checkpoint======================
# check_path = 'save/dcn_weights.epoch_{epoch:04d}.val_loss_{val_loss:.4f}.ckpt'
# checkpoint = tf.keras.callbacks.ModelCheckpoint(check_path, save_weights_only=True,
# verbose=1, period=5)
# ===========================Fit==============================
model.fit(
train_X,
train_y,
epochs=epochs,
callbacks=[EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)], # checkpoint
batch_size=batch_size,
validation_split=0.1
)
# ===========================Test==============================
print('test AUC: %f' % model.evaluate(test_X, test_y, batch_size=batch_size)[1])
# + id="UKnzXHU2k0Bt"
# Source: _notebooks/2022-01-19-dcn.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
data = pd.read_csv('clean-std-norm-nasa.csv')
data = data.drop(columns=['time.1'],axis=0)
data
data = data.drop(columns=['class'],axis=0)
data
Xtrain = data.iloc[:34370, :-1]
Xtrain
Ytrain = data.iloc[:34370,-1:]
Ytrain
Xtest , Ytest = data.iloc[34370:44189,:-1],data.iloc[34370:44189,-1:]
Xval,Yval = data.iloc[44189:,:-1],data.iloc[44189:,-1:]  # start where the test slice ends (iloc's stop is exclusive)
Xtrain =Xtrain.values.reshape(Xtrain.shape[0],Xtrain.shape[1],1)
Xval = Xval.values.reshape(Xval.shape[0],Xval.shape[1],1)
Xtest = Xtest.values.reshape(Xtest.shape[0],Xtest.shape[1],1)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from tensorflow.keras.optimizers import *
model = Sequential()
model.add(Bidirectional(LSTM(128, return_sequences=True), input_shape=([Xtrain.shape[1], Xtrain.shape[2]])))
model.add(TimeDistributed(Dense(128, activation='relu')))
model.add(Dropout(0.2))
model.add(Dense(units = 100))
model.add(Dense(units = 100))
model.add(Dense(units = 1))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.summary()
history = model.fit(Xtrain, Ytrain, epochs=20,
validation_data=(Xval, Yval))
model.evaluate(Xtest, Ytest)
# +
import matplotlib as mpl
import matplotlib.pyplot as plt
def plot_learning_curves(loss, val_loss):
plt.plot(np.arange(len(loss)) + 0.5, loss, "b.-", label="Training loss")
plt.plot(np.arange(len(val_loss)) + 1, val_loss, "r.-", label="Validation loss")
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.grid(True)
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()
# -
ypred_probs = model.predict(Xtest, verbose=0)
# predict crisp classes for test set
ypred_classes = model.predict_classes(Xtest, verbose=0)
# reduce to 1d array
ypred_probs = ypred_probs[:, 0]
ypred_classes = ypred_classes[:, 0]
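# `Sequential.predict_classes` was removed in newer Keras releases (TF >= 2.6);
# for a single-unit output layer the equivalent is thresholding the predicted
# scores at 0.5. A NumPy sketch (the scores below are made up):
# +
import numpy as np

ypred_probs_demo = np.array([[0.1], [0.7], [0.5], [0.93]])  # stand-in for model.predict(Xtest)
ypred_classes_demo = (ypred_probs_demo > 0.5).astype('int32')[:, 0]
assert ypred_classes_demo.tolist() == [0, 1, 0, 1]
# -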
# +
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
accuracy = accuracy_score(Ytest, ypred_classes)
print('Accuracy: %f' % accuracy)
# precision tp / (tp + fp)
precision = precision_score(Ytest, ypred_classes)
print('Precision: %f' % precision)
# recall: tp / (tp + fn)
recall = recall_score(Ytest, ypred_classes)
print('Recall: %f' % recall)
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(Ytest, ypred_classes)
print('F1 score: %f' % f1)
# -
model1 = Sequential()
model1.add(Bidirectional(LSTM(128, return_sequences=True), input_shape=([Xtrain.shape[1], Xtrain.shape[2]])))
model1.add(Dropout(0.2))
model1.add(Dense(units = 100))
model1.add(Dense(units = 100))
model1.add(Dense(units = 1))
model1.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model1.summary()
history = model1.fit(Xtrain, Ytrain, epochs=20,
validation_data=(Xval, Yval))
model1.evaluate(Xtest, Ytest)
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()
ypred_probs = model1.predict(Xtest, verbose=0)
# predict crisp classes for test set
ypred_classes = model1.predict_classes(Xtest, verbose=0)
# reduce to 1d array
ypred_probs = ypred_probs[:, 0]
ypred_classes = ypred_classes[:, 0]
# +
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
accuracy = accuracy_score(Ytest, ypred_classes)
print('Accuracy: %f' % accuracy)
# precision tp / (tp + fp)
precision = precision_score(Ytest, ypred_classes)
print('Precision: %f' % precision)
# recall: tp / (tp + fn)
recall = recall_score(Ytest, ypred_classes)
print('Recall: %f' % recall)
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(Ytest, ypred_classes)
print('F1 score: %f' % f1)
# -
history = model1.fit(Xtrain, Ytrain, epochs=50,
validation_data=(Xval, Yval))
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()
import tensorflow
tensorflow.keras.backend.clear_session()
# Source: Bi_LSTM multi.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Node
class Node():
def __init__(self, data):
self.data = data
self.next = None
self.back = None
def __repr__(self):
return self.data
# Linked List
class DoublyLinkedList():
def __init__(self, nodes=None):
self.head = None
self.tail = None
if nodes is not None:
node = Node(nodes.pop(0)) # Pop the first value (get + delete)
self.head = node
self.tail = self.head
for val in nodes:
node.next = Node(val)
node.next.back = node
node = node.next
self.tail = node
def __repr__(self):
node = self.head
if node is None: return "Empty"
nodes = []
while (node is not None):
nodes.append(node.data)
node = node.next
nodes.append("None")
return ' -> '.join(nodes)
# O(1)
def pop_back(self):
# If empty
if self.head is None: return "Empty"
# If we only have 1 element
elif self.head.next is None:
val = self.head.data
self.head = None
self.tail = None
return val
# Else
else:
val = self.tail.data
self.tail = self.tail.back
self.tail.next = None
return val
# O(1)
def pop_front(self):
# If empty
if self.head is None: return "Empty"
# If we only have head
if self.head.next is None:
val = self.head.data
self.head = None
self.tail = None
return val
# Else
val = self.head.data
self.head = self.head.next
return val
# O(1)
def insert_front(self, data):
# If empty
if self.head is None:
self.head = Node(data)
self.tail = self.head
# Else
else:
self.head.back = Node(data)
self.head.back.next = self.head
self.head = self.head.back
return self.__repr__()
# O(1)
def insert_back(self, data):
# If empty
if self.head is None:
self.head = Node(data)
self.tail = self.head
# Else
else:
self.tail.next = Node(data)
self.tail.next.back = self.tail
self.tail = self.tail.next
return self.__repr__()
# +
# first element (head)
a = Node('1')
# second element
b = Node('2')
a.next = b
b.back = a
# third element
c = Node('3')
b.next = c
c.back = b
# fourth element
d = Node('4')
c.next = d
d.back = c
# Make the linked list
ll = DoublyLinkedList()
ll.head = a
ll.tail = d
ll
# +
print(ll.pop_front())
print(ll)
print(ll.pop_back())
print(ll)
print(ll.pop_front())
print(ll)
print(ll.pop_back())
print(ll)
# +
ll = DoublyLinkedList(['1', '2', '3', '4'])
print("Linked List: ", ll)
print(ll.pop_front())
print(ll)
print(ll.pop_back())
print(ll)
print(ll.pop_front())
print(ll)
print(ll.pop_back())
print(ll)
# -
ll = DoublyLinkedList(['1', '2', '3', '4'])
print("Linked List: ", ll)
ll.insert_front('0')
ll.insert_back('2')
ll = DoublyLinkedList()
ll
ll.insert_back('2')
ll.pop_back()
ll.insert_front('0')
ll.pop_front()
ll.pop_back()
# Source: DataStructure/2_DoublyLinkedList.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# necessary imports
import numpy as np
import pandas as pd
import sklearn
import os
labels_file_name = './data/train_machine_labels.csv'
# a quick look at the contents of the file
labels_file = pd.read_csv(labels_file_name)
labels_file.head()
classes_file_name = './data/classes-trainable.csv'
classes_file = pd.read_csv(classes_file_name)
classes_file.head()
all_labels = classes_file['label_code']
print ('The number of unique labels is {}'.format(len(all_labels)))
# +
# set the number of labels which will be used as an output layer size for a model
num_labels = len(all_labels)
# build the index dictionary based on the labels collection
labels_index = {label:idx for idx, label in enumerate(all_labels)}
# +
# retrieve the list of labels assigned to all images
labels_set = set(all_labels)
images_with_labels = {}
# -
# set up the threshold for the confidence of the machine label
machine_label_threshold = .5
train_images_dir = './data/train/scaled/'
scaled_train_images = os.listdir(train_images_dir)
image_file_name = scaled_train_images[0]
print (image_file_name)
image_file_name_wo_ext = image_file_name[:-4]
print (image_file_name_wo_ext)
print (labels_file.loc[(labels_file['ImageID'] == image_file_name_wo_ext) & (labels_file['Confidence'] > machine_label_threshold), 'LabelName'].tolist())
# +
images_dict = {}
for index, image_file_name in enumerate(scaled_train_images):
#if index > 10:
# break
image_file_name_wo_ext = image_file_name[:-4]
#print (image_file_name_wo_ext)
    lst = labels_file.loc[(labels_file['ImageID'] == image_file_name_wo_ext) & (labels_file['Confidence'] > machine_label_threshold), 'LabelName'].tolist()
images_dict[image_file_name_wo_ext] = lst
# -
import pickle
# +
#f = open("./data/train.pkl","wb")
#pickle.dump(images_dict,f)
#f.close()
# -
with open('./data/train.pkl', 'rb') as handle:
data = pickle.load(handle)
print (data)
# do the multi-hot encoding
def multi_hot_encode(x, num_classes):
labels_encoded = np.zeros(num_classes)
labels_encoded[x] = 1
return labels_encoded
print (np.sum(multi_hot_encode(data['f001f505a3549449'], num_labels)))
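# `multi_hot_encode` indexes with integer positions, while the pickled `data`
# dictionary may hold label codes (strings); in that case the codes need to go
# through `labels_index` first. A self-contained sketch with made-up label
# codes standing in for classes-trainable.csv:
# +
import numpy as np

def _multi_hot(positions, num_classes):
    encoded = np.zeros(num_classes)
    encoded[positions] = 1
    return encoded

demo_labels = ['/m/01', '/m/02', '/m/03', '/m/04']  # hypothetical label codes
demo_index = {label: idx for idx, label in enumerate(demo_labels)}

image_labels = ['/m/02', '/m/04']                   # codes, not positions
positions = [demo_index[l] for l in image_labels]
assert _multi_hot(positions, len(demo_labels)).tolist() == [0.0, 1.0, 0.0, 1.0]
# -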
# split the samples to train and validation sets
from sklearn.model_selection import train_test_split
train_samples, validation_samples = train_test_split(scaled_train_images, test_size=0.1)
print (validation_samples)
import cv2
# define the generator method which loads images in a batches
def generator(samples, batch_size=32):
num_samples = len(samples)
while 1: # Loop forever so the generator never terminates
sklearn.utils.shuffle(samples)
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
images = []
labels = []
for batch_sample in batch_samples:
image_name = train_images_dir + batch_sample
image = cv2.imread(image_name)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (300,300))
key = batch_sample[:-4]
label = multi_hot_encode(data[key], num_labels)
images.append(image)
labels.append(label)
X_train = np.array(images)
y_train = np.array(labels)
yield sklearn.utils.shuffle(X_train, y_train)
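# The batching-and-shuffling pattern above can be sanity-checked without image files. A minimal sketch using small synthetic arrays in place of images (the shapes here are purely illustrative):

```python
import numpy as np
import sklearn.utils

def toy_generator(samples, labels, batch_size=4):
    num_samples = len(samples)
    while True:  # loop forever so the generator never terminates
        samples, labels = sklearn.utils.shuffle(samples, labels)
        for offset in range(0, num_samples, batch_size):
            X = np.array(samples[offset:offset + batch_size])
            y = np.array(labels[offset:offset + batch_size])
            yield X, y

samples = [np.zeros((8, 8, 3)) for _ in range(10)]
labels = [np.zeros(5) for _ in range(10)]
X_batch, y_batch = next(toy_generator(samples, labels))
```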
from models.darknet53 import darknet_classifier
input_shape = (300, 300, 3)
model = darknet_classifier(input_shape, num_labels)
model.compile(loss="binary_crossentropy", optimizer='adam', metrics=["accuracy"])
model.summary()
num_epochs = 2
batch_size = 8
from keras.callbacks import ModelCheckpoint, EarlyStopping
# trains the model
# defined 2 callbacks: early stopping and checkpoint to save the model if the validation loss has been improved
def train_model(model, train_generator, validation_generator, epochs=3):
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=1)
checkpoint_callback = ModelCheckpoint('model.h5', monitor='val_loss', verbose=1, save_best_only=True, mode='min')
model.fit_generator(train_generator, steps_per_epoch=len(train_samples)//batch_size, validation_data=validation_generator, validation_steps=len(validation_samples)//batch_size, epochs=epochs, callbacks=[early_stopping_callback, checkpoint_callback], )
# compile and train the model using the generator function
train_generator = generator(train_samples, batch_size=batch_size)
validation_generator = generator(validation_samples, batch_size=batch_size)
train_model(model, train_generator, validation_generator, num_epochs)
| machine_labels_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="http://www.exalumnos.usm.cl/wp-content/uploads/2015/06/ISOTIPO-Color.jpg" title="Title text" width="20%" />
#
# <hr style="height:2px;border:none"/>
# <H1 align='center'> Regression </H1>
#
# <H3> INF-396 Introduction to Data Science </H3>
# <H3> Author: <NAME></H3>
#
# Language: Python
#
# Topics:
#
# - Linear Regression
# - Polynomial Regression
# - Logistic Regression
# <hr style="height:2px;border:none"/>
# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
# +
# Functions to generate synthetic data for testing the regression
np.random.seed(0)
def true_f(x):
return 0.1 * (x-2) ** 3 + x ** 2 - 8.0 * x - 1.0
def generate(n_samples):
X = np.random.rand(n_samples,) * 20.0 - 10.0
y = true_f(X) + 5 * np.random.randn(n_samples)
return X.reshape(n_samples, 1), y
# +
X_train, y_train = generate(15)
xs = np.linspace(-10, 10, num=1000)
plt.plot(xs, true_f(xs), c="r", label="Underlying Distribution")
plt.scatter(X_train, y_train, label="Sampled Data")
plt.legend()
plt.show()
X_test, y_test = generate(50)
# +
# ---------------
# This section implements the functions needed
# for polynomial regression:
# features: takes an input X and builds a new representation X_features
#     with the features of a degree-d polynomial computed from X.
# learn: takes X and y and computes the weights W using the normal equations.
# model: takes an input X and weights W and computes the model's prediction.
# error: takes the prediction y_hat and y and computes the squared error.
# --------------
def features(X, d):
poly = PolynomialFeatures(d)
X_features = poly.fit_transform(X)
return X_features
def learn(X_features, y_train):
W = np.linalg.inv(X_features.T.dot(X_features)).dot(X_features.T).dot(y_train)
return W
def model(X, W, poly):
    X = poly.fit_transform(X)
    y = X.dot(W)
    return y
def error(y_hat, y):
    error = np.mean(np.power(y_hat - y, 2))
    return error
# -
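# A side note on `learn`: inverting $X^TX$ directly can be numerically unstable when the feature matrix is ill-conditioned, as happens with high polynomial degrees. `np.linalg.lstsq` solves the same least-squares problem more robustly; a small sketch comparing the two on well-conditioned random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X_features = rng.standard_normal((20, 4))
y = rng.standard_normal(20)

# Normal equations, as in learn() above.
W_normal = np.linalg.inv(X_features.T.dot(X_features)).dot(X_features.T).dot(y)
# Least-squares solver: same solution, better conditioning behaviour.
W_lstsq, *_ = np.linalg.lstsq(X_features, y, rcond=None)
```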
def train(X_train,y_train,d):
X_features = features(X_train, d)
W = learn(X_features, y_train)
return W
for d in [3, 5, 10, 15]:
    print("For d = {}:".format(d))
    poly = PolynomialFeatures(d)
    W = train(X_train, y_train, d)
    # evaluate the model on the training set
    y_hat = model(X_train, W, poly)
    error_training = error(y_hat, y_train)
    print("  Training error: {}".format(error_training))
    # evaluate the model on the test set
    y_hat_test = model(X_test, W, poly)
    error_test = error(y_hat_test, y_test)
    print("  Test error: {}".format(error_test))
# +
print("For d = 3")
poly = PolynomialFeatures(3)
W = train(X_train, y_train, 3)
y_hat = model(X_train, W, poly)
error_training = error(y_hat, y_train)
y_hat_test = model(X_test, W, poly)
error_test = error(y_hat_test, y_test)
plt.scatter(X_test, y_hat_test, c="r", label="Model Predictions")
plt.scatter(X_test, y_test, label="Test Data")
plt.legend()
plt.show()
# -
# ### Conclusions
# - The best result is obtained with d = 3, which strikes the right balance between generalization and model complexity. This was expected, since the underlying distribution was generated from a degree-3 polynomial.
# - For d = 10 and d = 15, the model is too complex and suffers from severe overfitting.
# # Assignment 3-2: Logistic Regression
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
# +
# ---------------
# This section implements the functions needed
# for logistic regression: cost: the cost function,
# gradient: the gradient of the cost with respect to the
# parameters, gradient_descent: gradient descent
# --------------
def sigmoid(z):
s = 1/(1+np.exp(-z))
return s
def cost(X, y, theta):
m = X.shape[0]
A = sigmoid(X.dot(theta.T))
cost = (-1/m)*np.sum(y*np.log(A)+(1-y)*np.log(1-A))
return cost
def gradient(X, y, theta):
m = X.shape[0]
A = sigmoid(X.dot(theta.T))
dtheta = (1/m)*np.dot(X.T,(A-y)).T
return dtheta
def gradient_descent_step(X, y, theta, alpha=0.01):
new_theta = theta - alpha*gradient(X, y, theta)
return new_theta
def gradient_descent(X, y, theta, iterations=100, alpha=0.01):
    # returns the history of costs and thetas at each iteration
    thetas, costs = [], []
    for i in range(iterations):
        # pass alpha through; hard-coding alpha=0.01 here would silently
        # ignore the caller's learning rate
        theta = gradient_descent_step(X, y, theta, alpha=alpha)
        current_cost = cost(X, y, theta)
        thetas.append(theta)
        costs.append(current_cost)
    return thetas, costs
# +
import pandas as pd
# read the data
df = pd.read_csv('tarea2_data.txt',header=None)
df.columns = ["atributo1", "atributo2", "target"]
df["b"] = 1
X_train, y_train = df[["b","atributo1","atributo2"]], df["target"].values.reshape(-1,1)
# train the model
w = np.zeros((1,X_train.shape[1]))
thetas, costs = gradient_descent(X_train, y_train, w)
# -
# #### The code fails because the data is not normalized. Normalizing the data is essential for logistic regression here: the sigmoid function receives very large inputs (due to the difference in scale between attributes) and maps them to exact $1$s and $0$s, after which the cost function tries to evaluate $\log(0)$.
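# The failure mode is easy to reproduce: for inputs of large magnitude, `np.exp` overflows and the sigmoid returns exactly 0 or 1 in floating point, so the cross-entropy evaluates $\log(0)$. A small sketch (the `eps` clipping guard at the end is one common workaround, shown only for illustration and not part of the assignment code):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

with np.errstate(over='ignore', divide='ignore'):
    a = sigmoid(np.array([-800.0, 800.0]))  # exp overflows -> a == [0., 1.]
    bad = np.log(a[0])                      # log(0) -> -inf, breaking the cost

eps = 1e-12  # illustrative guard value
a_safe = np.clip(a, eps, 1 - eps)
finite = bool(np.isfinite(np.log(a_safe)).all())
```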
# +
# MinMax scaler
def normalize(X):
new_X= (X-X.min())/(X.max()-X.min())
return new_X.fillna(1)
# +
X_train_norm = normalize(X_train)
# train the model
# initialize the parameters
w = np.zeros((X_train_norm.shape[1],1)).T
thetas, costs = gradient_descent(X_train_norm, y_train, w)
# +
plt.plot(range(1, 101), costs, c="r", label="Cost vs. iteration")
plt.legend()
plt.show()
# -
# #### The cost decreases consistently across iterations.
clf = LogisticRegression().fit(X_train_norm,y_train.squeeze())
costo = cost(X_train_norm,y_train,clf.coef_)
print(costo)
# #### The sklearn implementation reaches a better result. This may be because it uses an optimization method other than gradient descent, or because its parameters are initialized differently. Either way, the difference is not substantial.
| Aprendizaje Supervisado/Otros/Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py37]
# language: python
# name: conda-env-py37-py
# ---
# # Layered objects examples
# *Author: <NAME> (<EMAIL>)*
# **NOTICE:** This notebook assumes you are familiar with the basic concept of layered surface detection introduced in the *DetectLayers* notebook.
#
# Load essential modules for loading and showing data.
import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imread
# +
# Load data.
path = './data/nerves.png'
data = imread(path).astype(np.int32)
data_centers = imread(path.replace('nerves', 'nerves_centers'), as_gray=True)
# Get center positions.
centers = np.transpose(np.where(data_centers))
# Show image with centers.
plt.imshow(data, cmap='gray')
plt.scatter(centers[..., 1], centers[..., 0], color='red', s=6)
plt.show()
print('Number of objects:', len(centers))
# -
# ## Unfolding
# To detect the objects (nerves) using layered surface detection, we first need to unfold the nerves using a radial resampling.
# +
from scipy.ndimage.interpolation import map_coordinates
def unfold_image(img, center, max_dists=None, r_min=1, r_max=20, angles=30, steps=15):
# Sampling angles and radii.
angles = np.linspace(0, 2*np.pi, angles, endpoint=False)
distances = np.linspace(r_min, r_max, steps, endpoint=True)
if max_dists is not None:
max_dists.append(np.max(distances))
# Get angles.
angles_cos = np.cos(angles)
angles_sin = np.sin(angles)
# Calculate points positions.
x_pos = center[0] + np.outer(angles_cos, distances)
y_pos = center[1] + np.outer(angles_sin, distances)
# Create list of sampling points.
sampling_points = np.array([x_pos, y_pos]).transpose()
sampling_shape = sampling_points.shape
sampling_points_flat = sampling_points.reshape((-1, 2))
# Sample from image.
samples = map_coordinates(img, sampling_points_flat.transpose(), mode='nearest')
samples = samples.reshape(sampling_shape[:2])
return samples, sampling_points
# -
# Now that we have a function for unfolding image data, let's test it. The result should be an unfolded image, for which we can use layer detection.
# +
samples, sample_points = unfold_image(data, centers[3])
plt.figure(figsize=(15, 5))
ax = plt.subplot(1, 3, 1, title='Sample positions in data')
ax.imshow(data, cmap='gray')
ax.scatter(sample_points[..., 1], sample_points[..., 0], s=2, color='red')
ax = plt.subplot(1, 3, 2, title='Sample positions and intensities')
ax.scatter(sample_points[..., 1], sample_points[..., 0], c=samples, cmap='gray')
ax = plt.subplot(1, 3, 3, title='Unfolded image')
ax.imshow(samples, cmap='gray')
plt.show()
# -
# ## Detect layers in object
# Now that we can unfold the nerves, we can try to use graph cut based layer detection, as introduced in the previous notebook.
#
# Since we want to separate the inner and outer part of the nerve, we will detect two layers per nerve. We will use the gradient image for this.
from slgbuilder import GraphObject, MaxflowBuilder
# +
# Create gradient-based objects.
diff_samples = np.diff(samples, axis=0)
outer_nerve = GraphObject(255 - diff_samples)
inner_nerve = GraphObject(diff_samples)
# Show object data.
plt.figure(figsize=(10, 5))
ax = plt.subplot(1, 2, 1, title='Outer nerve data')
ax.imshow(outer_nerve.data, cmap='gray')
ax = plt.subplot(1, 2, 2, title='Inner nerve data')
ax.imshow(inner_nerve.data, cmap='gray')
plt.show()
# -
# The surface will be detected where the pixel intensities in the image are low. This corresponds well with the data for the outer and inner nerves shown above.
#
# Let's detect the layers. We apply boundary cost, smoothness and containment constraints. Here we set both ```min_margin``` and ```max_margin``` constraints for our containment. Then we use ```maxflow``` to find the optimal solution.
helper = MaxflowBuilder()
helper.add_objects([outer_nerve, inner_nerve])
helper.add_layered_boundary_cost()
helper.add_layered_smoothness(delta=2)
helper.add_layered_containment(outer_nerve, inner_nerve, min_margin=3, max_margin=6)
flow = helper.solve()
print('Maximum flow/minimum energy:', flow)
# +
segmentations = [helper.get_labels(o).astype(np.int32) for o in helper.objects]
segmentation_lines = [np.count_nonzero(s, axis=0) - 0.5 for s in segmentations]
# Draw results.
plt.figure(figsize=(10, 10))
ax = plt.subplot(1, 3, 1)
ax.imshow(samples, cmap='gray')
ax = plt.subplot(1, 3, 2)
ax.imshow(np.sum(segmentations, axis=0))
ax = plt.subplot(1, 3, 3)
ax.imshow(samples, cmap='gray')
for line in segmentation_lines:
ax.plot(line)
plt.show()
# -
# Since we have the original positions (in the original image) for each pixel in our unfolded image, we can easily map the segmentation back to our real data. We will do this later.
# ## Detecting multiple objects
# In the image data, we have marked 17 different nerves that we would like to segment. We could segment each of these individually, the same way we segmented the single nerve above. Alternatively, although it is not the most memory-efficient approach, we can add all the objects to the graph at once and get a segmentation for each object. This creates a graph with many "layers", each containing the nodes for one object. Because the nodes in each layer only represent a subset of the original image pixels, we call this a Sparse Layered Graph (SLG).
# +
# Lists for storing nerve objects.
nerve_samples = []
outer_nerves = []
inner_nerves = []
# For each center, create an inner and an outer nerve.
for center in centers:
# Unfold nerve.
samples, sample_points = unfold_image(data, center)
nerve_samples.append(samples)
# Create outer and inner nerve objects.
diff_samples = np.diff(samples, axis=0)
diff_sample_points = sample_points[:-1]
outer_nerves.append(GraphObject(255 - diff_samples, diff_sample_points))
inner_nerves.append(GraphObject(diff_samples, diff_sample_points))
# -
# Here we also add the sample positions to the ```GraphObject```s. We will need these later.
# +
helper = MaxflowBuilder()
helper.add_objects(outer_nerves + inner_nerves)
helper.add_layered_boundary_cost()
helper.add_layered_smoothness(delta=2)
for outer_nerve, inner_nerve in zip(outer_nerves, inner_nerves):
helper.add_layered_containment(outer_nerve, inner_nerve, min_margin=3, max_margin=6)
# -
flow = helper.solve()
print('Maximum flow/minimum energy:', flow)
# +
# Get segmentations.
segmentations = []
for outer_nerve, inner_nerve in zip(outer_nerves, inner_nerves):
segmentations.append(helper.get_labels(outer_nerve))
segmentations.append(helper.get_labels(inner_nerve))
segmentation_lines = [np.count_nonzero(s, axis=0) - 0.5 for s in segmentations]
# Draw segmentations.
plt.figure(figsize=(15, 5))
for i, samples in enumerate(nerve_samples):
ax = plt.subplot(3, len(nerve_samples) // 3 + 1, i + 1)
ax.imshow(samples, cmap='gray')
ax.plot(segmentation_lines[2*i])
ax.plot(segmentation_lines[2*i + 1])
plt.show()
# -
# While most of the segmentations went well, if we look closely we see that some don't look right. If we draw the lines on the original image, we see the problem.
# +
def draw_segmentations(data, helper):
"""Draw all segmentations for objects in the helper on top of the data."""
# Create figure.
plt.figure(figsize=(10, 10))
plt.imshow(data, cmap='gray')
plt.xlim([0, data.shape[1]-1])
plt.ylim([data.shape[0]-1, 0])
# Draw segmentation lines.
for i, obj in enumerate(helper.objects):
# Get segmentation.
segment = helper.get_labels(obj)
# Create line.
line = np.count_nonzero(segment, axis=0)
# Get actual points.
point_indices = tuple(np.asarray([line - 1, np.arange(len(line))]))
points = obj.sample_points[point_indices]
# Close line.
points = np.append(points, points[:1], axis=0)
# Plot points.
plt.plot(points[..., 1], points[..., 0])
plt.show()
draw_segmentations(data, helper)
# -
# One of the objects is segmented incorrectly, overlapping the neighbouring segmentations.
# ## Multi-object exclusion
# To overcome the issue of overlapping segments, we can add exclusion constraints between all outer nerves. However, exclusion is a so-called *nonsubmodular* energy term, which means it cannot be represented as a single edge in our graph. Luckily there's an algorithm called *QPBO* (Quadratic Pseudo-Boolean Optimization) that can help us.
#
# QPBO creates a complementary graph alongside the original graph. The complementary graph is inverted, meaning that it has the exact same edges as the original graph, except they are reversed. This means that the graph size is doubled, which makes computation slower and uses more memory. The benefit of QPBO is that we can now add nonsubmodular energies such as exclusion. When coupled with the sparse layered graph structure, we are able to segment many interacting objects using both containment and exclusion interactions.
#
# The ```slgbuilder``` module contains a ```QPBOBuilder``` class, which is very similar to the ```MaxflowBuilder``` we've been using so far. The main difference is that it has functions for adding exclusion. One of these is ```add_layered_exclusion``` which we will now use. We will be using the ```GraphObject```s created earlier.
from slgbuilder import QPBOBuilder
# +
helper = QPBOBuilder()
helper.add_objects(outer_nerves + inner_nerves)
helper.add_layered_boundary_cost()
helper.add_layered_smoothness(delta=2)
for outer_nerve, inner_nerve in zip(outer_nerves, inner_nerves):
helper.add_layered_containment(outer_nerve, inner_nerve, min_margin=3, max_margin=6)
# +
twice_flow = helper.solve()
print('Two times maximum flow/minimum energy:', twice_flow)
if 2*flow == twice_flow:
print('QPBO flow is exactly twice the Maxflow flow.')
else:
print('Something is wrong...')
# -
# We see that the ```QPBOBuilder``` energy/flow is exactly twice the flow computed by ```MaxflowBuilder``` for the same problem, which is what we expect, since we doubled the number of nodes and edges while adding exactly the same edges/energies as above. This of course also means that the segmentation is exactly the same, hence we haven't fixed the problem yet.
#
# To avoid the overlapping nerve segments, we add exclusion between all *outer* nerve objects using ```add_layered_exclusion``` and call ```solve``` again. Note that calculating the new maxflow/mincut only requires us to re-evaluate parts of the graph that were changed, potentially making the computation very fast.
# Add exclusion constraints between all pairs of outer nerves.
for i in range(len(outer_nerves)):
for j in range(i + 1, len(outer_nerves)):
helper.add_layered_exclusion(outer_nerves[i], outer_nerves[j], margin=3)
twice_flow = helper.solve()
print('Two times maximum flow/minimum energy:', twice_flow)
# We see that adding the new constraints has increased the energy. This makes sense, since our constraints force a solution that is less optimal from the perspective of the data alone. However, our prior knowledge tells us that nerves cannot overlap, so when the data suggests that they do, we know the data is inaccurate rather than the constraint wrong.
#
# Let's draw the segmentation results with exclusion interactions.
draw_segmentations(data, helper)
# ## Region cost
# So far we've only been using the gradient of the data in our model. However, the pixel intensity may also provide valuable information. In this segmentation problem we notice that each region (area between the layers/object boundaries) has a different mean intensity. Generally the nerves are bright inside, while the outer part is dark. The background is also bright.
#
# If the mean intensities of the different objects are relatively consistent, we can use the intensities in our model. We've been using the ordered multi-column graph structure by [Li et al](https://doi.org/10.1109/TPAMI.2006.19). To use the intensities, we use the region cost approach by [Haeker et al](https://doi.org/10.1007/978-3-540-73273-0_50).
#
# We will add the region cost to our existing model and see how it changes the segmentation. The ```beta``` value is used to scale the influence of the region information compared to the gradient information previously added through the ```add_layered_boundary_cost```. To add region cost for an object we use the ```add_layered_region_cost```.
# +
mu_inside = 90
mu_ring = 70
mu_outside = 90
beta = 0.1
for samples, outer_nerve, inner_nerve in zip(nerve_samples, outer_nerves, inner_nerves):
samples = samples[:-1]
inside_cost = np.abs(samples - mu_inside) * beta
ring_cost = np.abs(samples - mu_ring) * beta
outside_cost = np.abs(samples - mu_outside) * beta
helper.add_layered_region_cost(inner_nerve, ring_cost, inside_cost)
helper.add_layered_region_cost(outer_nerve, outside_cost, ring_cost)
# -
twice_flow = helper.solve()
print('Two times maximum flow/minimum energy:', twice_flow)
draw_segmentations(data, helper)
# In some areas the results improve slightly; however, where the intensities are far from the region's mean intensity, the region cost may lead to a less accurate segmentation.
| notebooks/DetectObjects.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Fire up graphlab create
import graphlab
# # Load some house sales data
sales = graphlab.SFrame('home_data.gl/')
sales
# # Exploring the data for housing sales
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
# # Create a simple regression model of sqft_living to price
seed = 0
train_data, test_data = sales.random_split(0.80, seed=seed)
# ### Build the regression model
sqft_model = graphlab.linear_regression.create(train_data, target="price", features=['sqft_living'])
# ### Evaluate the sqft_model
print(test_data['price'].mean())
print(sqft_model.evaluate(test_data))
# ### Let's show what our predictions look like
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(test_data['sqft_living'], test_data['price'], '.',
test_data['sqft_living'], sqft_model.predict(test_data), '-')
sqft_model.get('coefficients')
# # Exploring other features in the data
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x="zipcode", y="price")
# # Build a regression model with more features
my_features_model = graphlab.linear_regression.create(train_data, target='price', features=my_features)
print(my_features)
print(sqft_model.evaluate(test_data))
print(my_features_model.evaluate(test_data))
# # Apply learned models to predict prices of 3 houses
house1 = sales[sales['id']=='5309101200']
house1
# <img src="house-5309101200.jpg">
print(house1['price'])
print(sqft_model.predict(house1))
print(my_features_model.predict(house1))
# # Prediction for a second, fancier house
house2 = sales[sales['id'] == '1925069082']
house2
print(house2['price'])
print(sqft_model.predict(house2))
print(my_features_model.predict(house2))
# # An even fancier house!
bill_gates = {
'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]
}
print(sqft_model.predict(graphlab.SFrame(bill_gates)))
print(my_features_model.predict(graphlab.SFrame(bill_gates)))
# # Assignment
x = sales[sales['sqft_living'] >= 2000]
y = x[x['sqft_living'] <= 4000]
print(sales.num_rows())
print(y.num_rows())
print(float(y.num_rows()) / sales.num_rows())
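# The two-step filter above can also be written as a single combined boolean mask. graphlab's SFrame supports the same `&` idiom; the sketch below uses pandas with synthetic data purely for illustration (graphlab itself requires Python 2):

```python
import pandas as pd

# Synthetic stand-in for the sales data.
sales_df = pd.DataFrame({'sqft_living': [1500, 2000, 2500, 3000, 4000, 4500]})

mask = (sales_df['sqft_living'] >= 2000) & (sales_df['sqft_living'] <= 4000)
subset = sales_df[mask]
fraction = float(len(subset)) / len(sales_df)
```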
advanced_features = [
'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
advanced_features_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_features)
print(my_features_model.evaluate(test_data))
print(advanced_features_model.evaluate(test_data))
| week02/predicting-house-prices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <h1><b>Lab 3</b></h1>
# <h1>PHYS 580 - Computational Physics</h1>
# <h2>Professor Molnar</h2>
# </br>
# <h3><b><NAME></b></h3>
# <h4>https://www.github.com/ethank5149</h4>
# <h4><EMAIL></h4>
# </br>
# </br>
# <h3><b>September 17, 2020</b></h3>
# </center>
# ### Imports
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
import numpy as np
import sympy as sp
from scipy.special import ellipk
from scipy.signal import find_peaks
import matplotlib.pyplot as plt
from functools import partial
# -
# ### Support Functions
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
def euler_step(f, y, t, dt):
y = y + f(t, y) * dt
return y
def rk2_step(f, y, t, dt):
k1 = dt * f(t, y)
k2 = dt * f(t + dt, y + k1)
y = y + (k1 + k2) / 2.0
return y
def euler_cromer_step(f, y, dy, t, dt):
dy = dy + f(t, y, dy) * dt
y = y + dy * dt
return y, dy
def dsolve(f, t, y0, step = euler_step):
t = np.asarray(t) # Ensure t is a Numpy array
y0 = np.asarray(y0)
y = np.zeros((np.size(t), np.size(y0))) # Create our output data container
y[0] = y0 # Set initial condition
for i in range(np.size(t)-1):
y[i+1] = step(f, y[i], t[i], t[i+1] - t[i]) # Step forward
return t, np.hsplit(y, np.size(y0))
def dsolve_simplectic(f, t, y0, dy0, step = euler_cromer_step):
t = np.asarray(t) # Ensure t is a Numpy array
y0 = np.asarray(y0)
y = np.zeros((np.size(t), np.size(y0))) # Create our output data container
dy = np.zeros((np.size(t), np.size(dy0))) # Create our output data container
y[0] = y0 # Set initial condition
dy[0] = dy0 # Set initial condition
for i in range(np.size(t)-1):
y[i+1], dy[i+1] = step(f, y[i], dy[i], t[i], t[i+1] - t[i]) # Step forward
return t, y, dy
def get_kinetic_energy(I, omega):
return 0.5 * I * omega ** 2
def get_potential_energy(m, g, l, theta):
return m * g * l * (1.0 - np.cos(theta))
def get_total_energy(m, I, l, g, theta, omega):
return get_kinetic_energy(I, omega) + get_potential_energy(m, g, l, theta)
def global_error(exact, calculated):
error = np.zeros_like(exact)
for i in range(len(error)):
error[i] = calculated[i] - exact[i]
return error
def local_error(y_exact, y_approx, x):
error = np.zeros_like(x)
for i in np.arange(1, len(error)):
error[i-1] = y_exact[i] - y_exact[i-1] - (y_approx[i] - y_approx[i-1])
return error
# -
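# Before using the solvers, a quick convergence sanity check on a problem with a known solution: $\dot{y} = -y$, $y(0) = 1$ has the exact solution $e^{-t}$, and forward Euler with $dt = 10^{-3}$ should track it to roughly first-order accuracy. The solver is re-defined here in minimal form so the sketch is self-contained:

```python
import numpy as np

# Minimal copies of euler_step and dsolve from above.
def euler_step(f, y, t, dt):
    return y + f(t, y) * dt

def dsolve(f, t, y0, step=euler_step):
    t = np.asarray(t)
    y0 = np.atleast_1d(np.asarray(y0, dtype=float))
    y = np.zeros((t.size, y0.size))
    y[0] = y0
    for i in range(t.size - 1):
        y[i + 1] = step(f, y[i], t[i], t[i + 1] - t[i])
    return t, np.hsplit(y, y0.size)

t = np.linspace(0.0, 1.0, 1001)  # dt = 1e-3
_, (y,) = dsolve(lambda t, y: -y, t, 1.0)
max_err = float(np.max(np.abs(y.ravel() - np.exp(-t))))
```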
# ### Analytical Calculations
# $$I\ddot{\theta}+c\dot{\theta}+mgl\theta=F_0\cos(\omega_Dt)\rightarrow\ddot{\theta}+\frac{c}{I}\dot{\theta}+\frac{mgl}{I}\theta=\frac{F_0}{I}\cos(\omega_Dt)$$
# Using:
# $$A=\frac{F_0}{I},\quad\beta=\frac{c}{2\sqrt{mglI}},\quad\omega_0=\sqrt{\frac{mgl}{I}}$$
# Gives:
# $$\ddot{\theta}+2\beta\omega_0\dot{\theta}+\omega_0^2\theta=A\cos(\omega_Dt)$$
#
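# A quick numerical spot-check (with arbitrary parameter values) that the substitutions above reproduce the damping term, i.e. that $2\beta\omega_0 = c/I$:

```python
import numpy as np

m, g, l, I, c = 2.0, 9.81, 0.5, 1.3, 0.7  # arbitrary illustrative values
beta = c / (2.0 * np.sqrt(m * g * l * I))
omega_0 = np.sqrt(m * g * l / I)
check = 2.0 * beta * omega_0  # should equal c / I
```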
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
def df_linear_pendula(t, x, zeta, w0, A, wd):
return np.asarray([x[1], -2 * zeta * w0 * x[1] - w0 ** 2 * x[0] + A * np.cos(wd * t)])
def df_linear_pendula_simplectic(t, x, dx, zeta, w0, A, wd):
return -2 * zeta * w0 * dx - w0 ** 2 * x + A * np.cos(wd * t)
# -
# # Number 1
# ## Analytical Solution
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
omega_0, t, theta0, dtheta0 = sp.symbols(r'\omega_0 t \theta_0 \dot{\theta}_0')
theta = sp.Function(r'\theta')
ode = sp.Eq(sp.Derivative(theta(t), t, t) + omega_0**2*theta(t),0)
ics = {theta(0): theta0, theta(t).diff(t).subs(t, 0): dtheta0}
soln = sp.dsolve(ode, theta(t), ics=ics).rewrite(sp.cos).simplify()
theta_func = soln.rhs
omega_func = theta_func.diff(t)
m, g, l, I = sp.symbols(r'm g l I')
V = m * g * l * (1 - sp.cos(theta_func))
T = I * omega_func ** 2 / 2
H = V + T
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
theta_func
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
H
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
def theta_exact(t, theta0, dtheta0, w0):
t = np.asarray(t)
return dtheta0 * np.sin(w0 * t) / w0 + theta0 * np.cos(w0 * t)
def total_energy_exact(t, theta0, dtheta0, w0, m, g, l, I):
t = np.asarray(t)
return I * (dtheta0 * np.cos(w0 * t) - w0 * theta0 * np.sin(w0 * t))**2 / 2 + m*g*l*(1-np.cos(dtheta0 * np.sin(w0 * t) / w0 + theta0 * np.cos(w0 * t)))
# -
# ## Parameters
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
m = 1.0
g = 9.81
l = 1.0
I = m*l**2
c = 0.0
F0 = 0.0
A = F0/I
zeta = c/(2*np.sqrt(m*g*l*I)) # Damping ratio
w0 = np.sqrt(m*g*l/I)
wd = 1.0
theta0 = np.pi/2.0
dtheta0 = 0.0
ti = 0
tf = 10
dt = 0.001
t = np.arange(ti, tf, dt)
state0 = np.asarray([theta0, dtheta0])
# -
# ## Calculate Trajectories
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
## Curried differential equation
df = partial(df_linear_pendula, zeta=zeta, w0=w0, A=A, wd=wd)
df_simplectic = partial(df_linear_pendula_simplectic, zeta=zeta, w0=w0, A=A, wd=wd)
## Solutions
t, pendula_euler = dsolve(df, t, state0, step=euler_step)
t, pendula_rk2 = dsolve(df, t, state0, step=rk2_step)
t, *pendula_euler_cromer = dsolve_simplectic(df_simplectic, t, theta0, dtheta0)
## Energies
pendula_euler_energy = get_total_energy(m, I, l, g, *pendula_euler)
pendula_rk2_energy = get_total_energy(m, I, l, g, *pendula_rk2)
pendula_euler_cromer_energy = get_total_energy(m, I, l, g, *pendula_euler_cromer)
theta_analytic = theta_exact(t, theta0, dtheta0, w0)
total_energy_analytic = total_energy_exact(t, theta0, dtheta0, w0, m, g, l, I)
# -
# ## Plotting
# + jupyter={"outputs_hidden": true, "source_hidden": true} pycharm={"name": "#%%\n"}
fig, ax = plt.subplots(3, 2, figsize=(16, 9), constrained_layout=True)
ax[0,0].plot(t, pendula_euler[0], label='Euler Method')
ax[0,0].plot(t, pendula_rk2[0], label='RK2 Method')
ax[0,0].plot(t, pendula_euler_cromer[0], label='Euler-Cromer Method')
ax[0,0].set_xlabel(r't [s]')
ax[0,0].set_ylabel(r'$\theta$ [rad]')
ax[0,0].set_title(r'$\theta$ vs Time')
ax[0,0].grid()
ax[0,0].legend()
ax[0,1].plot(t, pendula_euler_energy, label='Euler Method')
ax[0,1].plot(t, pendula_rk2_energy,label='RK2 Method')
ax[0,1].plot(t, pendula_euler_cromer_energy, label='Euler-Cromer Method')
ax[0,1].set_xlabel(r't [s]')
ax[0,1].set_ylabel(r'$E$ [J]')
ax[0,1].set_title('Total Energy vs Time')
ax[0,1].grid()
ax[0,1].legend()
ax[1,0].plot(t, local_error(theta_analytic, pendula_euler[0], t), label='Euler Method')
ax[1,0].plot(t, local_error(theta_analytic, pendula_rk2[0], t), label='RK2 Method')
ax[1,0].plot(t, local_error(theta_analytic, pendula_euler_cromer[0], t), label='Euler-Cromer Method')
ax[1,0].set_xlabel(r't [s]')
ax[1,0].set_ylabel(r'$\theta$ [rad]')
ax[1,0].set_title(r'$\theta$ Local Error')
ax[1,0].grid()
ax[1,0].legend()
ax[1,1].plot(t, local_error(total_energy_analytic, pendula_euler_energy, t), label='Euler Method')
ax[1,1].plot(t, local_error(total_energy_analytic, pendula_rk2_energy, t),label='RK2 Method')
ax[1,1].plot(t, local_error(total_energy_analytic, pendula_euler_cromer_energy, t), label='Euler-Cromer Method')
ax[1,1].set_xlabel(r't [s]')
ax[1,1].set_ylabel(r'$E$ [J]')
ax[1,1].set_title('Total Energy Local Error')
ax[1,1].grid()
ax[1,1].legend()
ax[2,0].plot(t, global_error(theta_analytic, pendula_euler[0]), label='Euler Method')
ax[2,0].plot(t, global_error(theta_analytic, pendula_rk2[0]), label='RK2 Method')
ax[2,0].plot(t, global_error(theta_analytic, pendula_euler_cromer[0]), label='Euler-Cromer Method')
ax[2,0].set_xlabel(r't [s]')
ax[2,0].set_ylabel(r'$\theta$ [rad]')
ax[2,0].set_title(r'$\theta$ Global Error')
ax[2,0].grid()
ax[2,0].legend()
ax[2,1].plot(t, global_error(total_energy_analytic, pendula_euler_energy), label='Euler Method')
ax[2,1].plot(t, global_error(total_energy_analytic, pendula_rk2_energy),label='RK2 Method')
ax[2,1].plot(t, global_error(total_energy_analytic, pendula_euler_cromer_energy), label='Euler-Cromer Method')
ax[2,1].set_xlabel(r't [s]')
ax[2,1].set_ylabel(r'$E$ [J]')
ax[2,1].set_title('Total Energy Global Error')
ax[2,1].grid()
ax[2,1].legend()
plt.show()
# -
# ## Repeat With Different Initial Conditions
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
theta0 = 0.0
dtheta0 = np.pi/2.0
state0 = np.asarray([theta0, dtheta0])
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
## Curried differential equation
df = partial(df_linear_pendula, zeta=zeta, w0=w0, A=A, wd=wd)
df_simplectic = partial(df_linear_pendula_simplectic, zeta=zeta, w0=w0, A=A, wd=wd)
## Solutions
t, pendula_euler = dsolve(df, t, state0, step=euler_step)
t, pendula_rk2 = dsolve(df, t, state0, step=rk2_step)
t, *pendula_euler_cromer = dsolve_simplectic(df_simplectic, t, theta0, dtheta0)
## Energies
pendula_euler_energy = get_total_energy(m, I, l, g, *pendula_euler)
pendula_rk2_energy = get_total_energy(m, I, l, g, *pendula_rk2)
pendula_euler_cromer_energy = get_total_energy(m, I, l, g, *pendula_euler_cromer)
theta_analytic = theta_exact(t, theta0, dtheta0, w0)
total_energy_analytic = total_energy_exact(t, theta0, dtheta0, w0, m, g, l, I)
# + jupyter={"outputs_hidden": true, "source_hidden": true} pycharm={"name": "#%%\n"}
fig, ax = plt.subplots(3, 2, figsize=(16, 9), constrained_layout=True)
ax[0,0].plot(t, pendula_euler[0], label='Euler Method')
ax[0,0].plot(t, pendula_rk2[0], label='RK2 Method')
ax[0,0].plot(t, pendula_euler_cromer[0], label='Euler-Cromer Method')
ax[0,0].set_xlabel(r't [s]')
ax[0,0].set_ylabel(r'$\theta$ [rad]')
ax[0,0].set_title(r'$\theta$ vs Time')
ax[0,0].grid()
ax[0,0].legend()
ax[0,1].plot(t, pendula_euler_energy, label='Euler Method')
ax[0,1].plot(t, pendula_rk2_energy,label='RK2 Method')
ax[0,1].plot(t, pendula_euler_cromer_energy, label='Euler-Cromer Method')
ax[0,1].set_xlabel(r't [s]')
ax[0,1].set_ylabel(r'$E$ [J]')
ax[0,1].set_title('Total Energy vs Time')
ax[0,1].grid()
ax[0,1].legend()
ax[1,0].plot(t, local_error(theta_analytic, pendula_euler[0], t), label='Euler Method')
ax[1,0].plot(t, local_error(theta_analytic, pendula_rk2[0], t), label='RK2 Method')
ax[1,0].plot(t, local_error(theta_analytic, pendula_euler_cromer[0], t), label='Euler-Cromer Method')
ax[1,0].set_xlabel(r't [s]')
ax[1,0].set_ylabel(r'$\theta$ [rad]')
ax[1,0].set_title(r'$\theta$ Local Error')
ax[1,0].grid()
ax[1,0].legend()
ax[1,1].plot(t, local_error(total_energy_analytic, pendula_euler_energy, t), label='Euler Method')
ax[1,1].plot(t, local_error(total_energy_analytic, pendula_rk2_energy, t),label='RK2 Method')
ax[1,1].plot(t, local_error(total_energy_analytic, pendula_euler_cromer_energy, t), label='Euler-Cromer Method')
ax[1,1].set_xlabel(r't [s]')
ax[1,1].set_ylabel(r'$E$ [J]')
ax[1,1].set_title('Total Energy Local Error')
ax[1,1].grid()
ax[1,1].legend()
ax[2,0].plot(t, global_error(theta_analytic, pendula_euler[0]), label='Euler Method')
ax[2,0].plot(t, global_error(theta_analytic, pendula_rk2[0]), label='RK2 Method')
ax[2,0].plot(t, global_error(theta_analytic, pendula_euler_cromer[0]), label='Euler-Cromer Method')
ax[2,0].set_xlabel(r't [s]')
ax[2,0].set_ylabel(r'$\theta$ [rad]')
ax[2,0].set_title(r'$\theta$ Global Error')
ax[2,0].grid()
ax[2,0].legend()
ax[2,1].plot(t, global_error(total_energy_analytic, pendula_euler_energy), label='Euler Method')
ax[2,1].plot(t, global_error(total_energy_analytic, pendula_rk2_energy),label='RK2 Method')
ax[2,1].plot(t, global_error(total_energy_analytic, pendula_euler_cromer_energy), label='Euler-Cromer Method')
ax[2,1].set_xlabel(r't [s]')
ax[2,1].set_ylabel(r'$E$ [J]')
ax[2,1].set_title('Total Energy Global Error')
ax[2,1].grid()
ax[2,1].legend()
plt.show()
# -
# # Number 2
# ## Parameters
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
m = 1.0
g = 9.81
l = 1.0
I = m*l**2
c1 = 2*np.sqrt(m*g*l*I) / 10
c2 = 2*np.sqrt(m*g*l*I)
c3 = 2*np.sqrt(m*g*l*I) * 10
F0 = 1.0
A = F0/I
zeta1 = c1/(2*np.sqrt(m*g*l*I)) # Damping ratio
zeta2 = c2/(2*np.sqrt(m*g*l*I)) # Damping ratio
zeta3 = c3/(2*np.sqrt(m*g*l*I)) # Damping ratio
w0 = np.sqrt(m*g*l/I)
wd = 1.0
ti = 0
tf = 50
dt = 0.001
t = np.arange(ti, tf, dt)
state0 = np.asarray([-np.pi / 2.0, np.pi / 2.0])
# -
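# The three damping coefficients above are chosen as 0.1×, 1× and 10× the critical damping
# $c_{crit}=2\sqrt{mglI}$, giving underdamped, critically damped and overdamped motion.
# A quick standalone check (parameter values copied from the cell above):

```python
import numpy as np

# Parameters as in the cell above
m, g, l = 1.0, 9.81, 1.0
I = m * l**2
c_crit = 2 * np.sqrt(m * g * l * I)   # critical damping coefficient

# Damping ratios zeta = c / c_crit for the three cases
zetas = np.array([c_crit / 10, c_crit, c_crit * 10]) / c_crit
print(zetas)  # approximately [0.1, 1.0, 10.0] -> under-, critically, over-damped
```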
# ## Calculate Trajectories
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
## Curried differential equation
df1_simplectic = partial(df_linear_pendula_simplectic, zeta=zeta1, w0=w0, A=A, wd=wd)
df2_simplectic = partial(df_linear_pendula_simplectic, zeta=zeta2, w0=w0, A=A, wd=wd)
df3_simplectic = partial(df_linear_pendula_simplectic, zeta=zeta3, w0=w0, A=A, wd=wd)
## Solutions
t, *pendula_euler_cromer_1 = dsolve_simplectic(df1_simplectic, t, state0[0], state0[1])
t, *pendula_euler_cromer_2 = dsolve_simplectic(df2_simplectic, t, state0[0], state0[1])
t, *pendula_euler_cromer_3 = dsolve_simplectic(df3_simplectic, t, state0[0], state0[1])
# -
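# `dsolve_simplectic` and `df_linear_pendula_simplectic` are defined earlier in the notebook.
# As a self-contained reminder, here is a minimal sketch of the Euler-Cromer (semi-implicit
# Euler) update for a damped, driven linear pendulum. The sinusoidal drive term
# $A\sin(\omega_d t)$ is an assumption and may differ in phase from the notebook's definition;
# the key point is that the velocity is updated first and the *new* velocity advances the angle.

```python
import numpy as np

def euler_cromer(accel, t, theta0, dtheta0):
    """Semi-implicit Euler: update velocity first, then position with the new velocity."""
    theta = np.empty_like(t)
    dtheta = np.empty_like(t)
    theta[0], dtheta[0] = theta0, dtheta0
    dt = t[1] - t[0]
    for i in range(len(t) - 1):
        dtheta[i+1] = dtheta[i] + accel(t[i], theta[i], dtheta[i]) * dt
        theta[i+1] = theta[i] + dtheta[i+1] * dt   # uses the updated velocity
    return theta, dtheta

# Damped driven linear pendulum (assumed form):
#   theta'' = -2*zeta*w0*theta' - w0**2*theta + A*sin(wd*t)
def accel(t, th, dth, zeta=1.0, w0=np.sqrt(9.81), A=0.0, wd=1.0):
    return -2*zeta*w0*dth - w0**2*th + A*np.sin(wd*t)

t = np.arange(0, 10, 0.001)
theta, dtheta = euler_cromer(accel, t, 1.0, 0.0)
# Critically damped and undriven: the angle should decay essentially to zero
print(abs(theta[-1]))
```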
# ## Plotting
# + jupyter={"outputs_hidden": true, "source_hidden": true} pycharm={"name": "#%%\n"}
fig, ax = plt.subplots(2, 3, figsize=(16, 9), constrained_layout=True)
plt.suptitle(r'Euler-Cromer Method, Initial Conditions: $\psi_0=\left<-\frac{\pi}{2},\frac{\pi}{2}\right>$')
ax[0,0].plot(t, pendula_euler_cromer_1[0])
ax[0,0].set_xlabel(r't [s]')
ax[0,0].set_ylabel(r'$\theta$ [rad]')
ax[0,0].set_title(r'Underdamped')
ax[0,0].grid()
ax[0,1].plot(t, pendula_euler_cromer_2[0])
ax[0,1].set_xlabel(r't [s]')
ax[0,1].set_ylabel(r'$\theta$ [rad]')
ax[0,1].set_title(r'Critically Damped')
ax[0,1].grid()
ax[0,2].plot(t, pendula_euler_cromer_3[0])
ax[0,2].set_xlabel(r't [s]')
ax[0,2].set_ylabel(r'$\theta$ [rad]')
ax[0,2].set_title(r'Overdamped')
ax[0,2].grid()
ax[1,0].plot(*pendula_euler_cromer_1)
ax[1,0].set_xlabel(r'$\theta$ [rad]')
ax[1,0].set_ylabel(r'$\dot{\theta}$ [rad]/[s]')
ax[1,0].grid()
ax[1,1].plot(*pendula_euler_cromer_2)
ax[1,1].set_xlabel(r'$\theta$ [rad]')
ax[1,1].set_ylabel(r'$\dot{\theta}$ [rad]/[s]')
ax[1,1].grid()
ax[1,2].plot(*pendula_euler_cromer_3)
ax[1,2].set_xlabel(r'$\theta$ [rad]')
ax[1,2].set_ylabel(r'$\dot{\theta}$ [rad]/[s]')
ax[1,2].grid()
plt.show()
# -
# # Number 3
# $$I\ddot{\theta}=-mgl\sin\left(\theta\right)\rightarrow\ddot{\theta}=-\frac{g}{l}\sin\left(\theta\right)\rightarrow\ddot{\theta}=-\omega_0^2\sin\left(\theta\right)$$
#
# $$T=4\sqrt{\frac{l}{g}}K\left(\sin\left(\frac{\theta_m}{2}\right)\right)=\frac{4}{\omega_0}K\left(\sin\left(\frac{\theta_m}{2}\right)\right)$$
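# The exact period involves the complete elliptic integral of the first kind, $K$.
# One pitfall worth flagging: SciPy's `scipy.special.ellipk(m)` takes the *parameter*
# $m=k^2$, not the modulus $k=\sin(\theta_m/2)$, so the argument must be squared before
# the call. A quick sketch with $\omega_0=1$:

```python
import numpy as np
from scipy.special import ellipk

w0 = 1.0
theta_m = np.pi / 2                    # amplitude of oscillation
k = np.sin(theta_m / 2)                # elliptic modulus
T_exact = (4 / w0) * ellipk(k**2)      # SciPy's ellipk takes the parameter m = k**2
T_small = 2 * np.pi / w0               # small-angle period
print(T_exact, T_small)                # large swings have a longer period
```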
# ## Parameters
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
w0 = np.linspace(1e-3, 3*np.pi, 500)  # start just above zero: w0 = 0 gives no restoring force and 1/w0 diverges
ti = 0
tf = 50
dt = 0.001
t = np.arange(ti, tf, dt)
state0 = np.asarray([-np.pi / 2.0, np.pi / 2.0])
# -
# ## Functions
# + jupyter={"source_hidden": true} pycharm={"name": "#%%\n"}
def df(t, x, dx, w0):
return - w0 ** 2 * np.sin(x)
def get_period(t, x):
peak_indices = find_peaks(x.flatten())[0]
times = [t[i] for i in peak_indices]
diffs = np.ediff1d(times)
return np.mean(diffs)
def get_amplitude(x):
peak_indices = find_peaks(x.flatten())[0]
amps = [x[i] for i in peak_indices]
return np.mean(amps)
# -
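# A quick sanity check of the peak-based estimators above on a signal with known period
# and amplitude (the helpers are re-declared here so the sketch runs on its own):

```python
import numpy as np
from scipy.signal import find_peaks

def get_period(t, x):
    # Mean spacing between successive peaks
    peak_indices = find_peaks(x.flatten())[0]
    return np.mean(np.ediff1d([t[i] for i in peak_indices]))

def get_amplitude(x):
    # Mean height of the peaks
    peak_indices = find_peaks(x.flatten())[0]
    return np.mean([x[i] for i in peak_indices])

t = np.arange(0, 10, 0.001)
x = np.sin(2 * np.pi * t)          # period 1 s, amplitude 1
print(get_period(t, x), get_amplitude(x))
```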
# ## Part A: Amplitude vs. Period
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
from tqdm import tqdm
amps = []
for w in tqdm(w0):
df_1 = partial(df, w0=w)
t, *soln = dsolve_simplectic(df_1, t, state0[0], state0[1])
theta_m = get_amplitude(soln[0])
amps.append(theta_m)
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
fig = plt.figure(figsize=(16, 9))
ax = plt.axes()
ax.plot(2*np.pi/w0, amps)  # small-angle period T = 2*pi/w0
ax.set_xlabel('Period [s]')
ax.set_ylabel('Amplitude [rad]')
ax.set_title('Effect of Oscillation Period On Amplitude')
ax.grid()
plt.show()
# -
# ## Part B: Period Accuracy
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
ti = 0
tf = 150
dt = 0.001
t = np.arange(ti, tf, dt)
state0 = np.asarray([np.pi / 2.0, -np.pi/8])
w01 = 0.0885*np.pi
w02 = 0.09*np.pi
w03 = 0.2*np.pi
## Curried differential equation
df_1 = partial(df, w0=w01)
df_2 = partial(df, w0=w02)
df_3 = partial(df, w0=w03)
## Solutions
t, *soln1 = dsolve_simplectic(df_1, t, state0[0], state0[1])
t, *soln2 = dsolve_simplectic(df_2, t, state0[0], state0[1])
t, *soln3 = dsolve_simplectic(df_3, t, state0[0], state0[1])
theta_m1 = get_amplitude(soln1[0])
theta_m2 = get_amplitude(soln2[0])
theta_m3 = get_amplitude(soln3[0])
T_exact1 = (4/w01)*ellipk(np.sin(theta_m1/2)**2)  # scipy's ellipk takes the parameter m = k**2
T_exact2 = (4/w02)*ellipk(np.sin(theta_m2/2)**2)
T_exact3 = (4/w03)*ellipk(np.sin(theta_m3/2)**2)
T_approx1 = get_period(t, soln1[0])
T_approx2 = get_period(t, soln2[0])
T_approx3 = get_period(t, soln3[0])
print(f'Exact Period | Approx. Period | % Error ')
print(f' {T_exact1:0.4f} s | {T_approx1:0.4f} s | {100*(T_approx1-T_exact1)/T_exact1:0.4f}%')
print(f' {T_exact2:0.4f} s | {T_approx2:0.4f} s | {100*(T_approx2-T_exact2)/T_exact2:0.4f}%')
print(f' {T_exact3:0.4f} s | {T_approx3:0.4f} s | {100*(T_approx3-T_exact3)/T_exact3:0.4f}%')
# -
# ## Plotting
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
fig, ax = plt.subplots(2, 3, figsize=(16, 9), constrained_layout=True)
plt.suptitle(r'Nonlinear Pendulum, Euler-Cromer Method, Initial Conditions: $\psi_0=\left<\frac{\pi}{2},-\frac{\pi}{8}\right>$')
ax[0,0].plot(t, soln1[0])
ax[0,0].set_xlabel(r't [s]')
ax[0,0].set_ylabel(r'$\theta$ [rad]')
ax[0,0].set_title(rf'$\omega_0={w01:0.4f}$')
ax[0,0].grid()
ax[0,1].plot(t, soln2[0])
ax[0,1].set_xlabel(r't [s]')
ax[0,1].set_ylabel(r'$\theta$ [rad]')
ax[0,1].set_title(rf'$\omega_0={w02:0.4f}$')
ax[0,1].grid()
ax[0,2].plot(t, soln3[0])
ax[0,2].set_xlabel(r't [s]')
ax[0,2].set_ylabel(r'$\theta$ [rad]')
ax[0,2].set_title(rf'$\omega_0={w03:0.4f}$')
ax[0,2].grid()
ax[1,0].plot(*soln1)
ax[1,0].set_xlabel(r'$\theta$ [rad]')
ax[1,0].set_ylabel(r'$\dot{\theta}$ [rad]/[s]')
ax[1,0].grid()
ax[1,1].plot(*soln2)
ax[1,1].set_xlabel(r'$\theta$ [rad]')
ax[1,1].set_ylabel(r'$\dot{\theta}$ [rad]/[s]')
ax[1,1].grid()
ax[1,2].plot(*soln3)
ax[1,2].set_xlabel(r'$\theta$ [rad]')
ax[1,2].set_ylabel(r'$\dot{\theta}$ [rad]/[s]')
ax[1,2].grid()
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# Really simple PD control for the "reacher" system, which is just a fully actuated 2DOF arm
# done as a sanity check for the way I was doing PD control on the walker
# +
import mujoco_py as mj
import numpy as np
from math import pi
import copy
from matplotlib.pyplot import plot, legend,figure, title
# %matplotlib inline
model_xml = open('reacher.xml').read()
model = mj.load_model_from_xml(model_xml)
sim = mj.MjSim(model)
viewer = mj.MjViewer(sim)
default_state = copy.deepcopy(sim.get_state())
# +
sim.reset()
sim.set_state(default_state)
num_steps = 400
set_points = np.array([-pi/2,-pi/2])
p_gains = 1*np.ones((1,2))
d_gains = .1*np.ones((1,2))
num_pos = default_state[1].size
num_vel = default_state[2].size
num_u = sim.data.ctrl.size
q_pos_hist = np.zeros((num_steps, num_pos))
q_vel_hist = np.zeros((num_steps, num_vel))
u_vals_hist = np.zeros((num_steps, num_u))
while True:
for i in range(num_steps):
q_pos = sim.get_state()[1]
q_vel = sim.get_state()[2]
sim.data.ctrl[:] = (set_points - q_pos[0:2])*p_gains + (np.zeros((1,2)) - q_vel[0:2])*d_gains
q_pos_hist[i,:] = q_pos
q_vel_hist[i,:] = q_vel
u_vals_hist[i,:] = sim.data.ctrl[:]
sim.step()
viewer.render()
sim.set_state(default_state) #reset everything at the end of the loop
# +
#some plotting
plot(u_vals_hist)
title('u_vals')
figure()
plot(q_pos_hist)
title('positions')
figure()
plot(q_vel_hist)
title('velocities')
figure()
plot(set_points - q_pos_hist[:,0:2])
# -
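# The control law in the loop above is plain PD: $u = K_p(q^\ast - q) + K_d(0 - \dot{q})$.
# A minimal standalone sketch of the same idea on a 1-DOF double integrator (unit point
# mass, explicit integration), independent of MuJoCo:

```python
import numpy as np

def pd_control(q, qdot, setpoint, kp, kd):
    # Proportional term drives toward the setpoint; derivative term damps the motion
    return kp * (setpoint - q) + kd * (0.0 - qdot)

dt, kp, kd = 0.01, 4.0, 4.0           # kd = 2*sqrt(kp): critically damped closed loop
q, qdot, setpoint = 0.0, 0.0, 1.0
for _ in range(1000):                  # simulate 10 s of a unit point mass
    u = pd_control(q, qdot, setpoint, kp, kd)
    qdot += u * dt                     # a = u / m with m = 1
    q += qdot * dt
print(q)                               # should have settled near the setpoint
```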
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab assignment: unemployment rates
# In this assignment we will estimate unemployment rates in Spain, using *pandas* for reading information, *scikit-learn* for training estimators, and *geopandas* and *folium* for visualizing results.
# ## 0. Guidelines
# Throughout this notebook you will find empty cells that you will need to fill with your own code. Follow the instructions in the notebook and pay special attention to the following symbols.
#
# <table align="center">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">You will need to solve a question by writing your own code or answer in the cell immediately below or in a different file, as instructed.</td></tr>
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">This is a hint or useful observation that can help you solve a question. You should pay attention to these hints to better understand questions.</td></tr>
# <tr><td width="80"><img src="img/pro.png" style="width:auto;height:auto"></td><td style="text-align:left">This is an advanced and voluntary exercise that can help you gain a deeper knowledge into the topic. Good luck!</td></tr>
# </table>
#
# During the assignment you will make use of several Python packages that might not be installed in your machine. It is strongly recommended that you use the environment file *environment.yml* and follow the instructions of the following <a href="https://github.com/jorloplaz/teaching_material/tree/master/SVM">link</a>.
#
# If you need any help on the usage of a Python function you can place the cursor over its name and press Shift+Tab to produce a pop-up with related documentation. This will only work inside code cells.
#
# Make sure the following cell executes correctly, as it imports all you will need:
import pandas as pd
import numpy as np
import branca.colormap as cm
import utils
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR, SVR
from sklearn.metrics import r2_score
from math import ceil
# %matplotlib inline
# ## 1. Load and understand data
# To begin with, let us read some **numerical characteristics of Spanish municipalities**:
muns_population = utils.read_population('./data/municipios.csv')
print(len(muns_population))
muns_population.head()
# As you can see, **for each municipality we have its position (latitude and longitude in degrees), its altitude (in meters) and its population (number of inhabitants)**. The resulting dataframe is **indexed in a hierarchical way, first by autonomous community, then by province, and finally by municipality name, for a total of 8112 municipalities**.
# Our goal is to **predict the yearly unemployment rate for each municipality in 2017**. Let us read the unemployment data belonging to that year:
unempl_2017 = utils.read_unemployment('./data/Paro_por_municipios_2017.csv') # use specific load function from utils module
print(len(unempl_2017), len(unempl_2017.index.unique()))
unempl_2017.head()
# These data are indexed in the same way as *muns_population* above. However, we are interested in yearly unemployment, but **we have monthly information** (actually, 97512 observations from 8126 municipalities). For example, taking the municipality of Abla:
unempl_2017.xs(key='Abla', level=utils.MUN_FIELD, drop_level=False) # all records whose mun is Abla, keeping index intact
# Using grouping, it is fairly easy to obtain the **yearly means** for each municipality:
unempl_2017_means = unempl_2017.groupby(unempl_2017.index.names)[[utils.UNEMPL_FIELD]].mean() # group by all index levels, then take the mean
assert(len(unempl_2017_means) == len(unempl_2017) // 12) # there should be just 1 row for each municipality
unempl_2017_means.head()
# Now, in order to obtain our target values, we just need to transform these raw figures to unemployment rates, that is, we have to **divide unemployment by each municipality's population**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Create a <i>Y</i> matrix with the mean unemployment rates.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# You should join the frames <i>unempl_2017_means</i> and <i>muns_population</i>. Then divide the mean unemployed people by the inhabitants.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Be careful while joining. There are some municipalities present in <i>unempl_2017_means</i> but missing in <i>muns_population</i> (8126 versus 8112), which can lead to <i>nan</i> values when dividing.
# </td></tr>
# </table>
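# The mismatch warned about above can be reproduced on toy frames (hypothetical names, not
# the assignment data): a default left join keeps rows missing on the right side, so dividing
# afterwards produces <i>nan</i> rates, while an inner join drops them up front.

```python
import pandas as pd

unempl = pd.DataFrame({'unemployed': [50.0, 20.0, 10.0]},
                      index=pd.Index(['A', 'B', 'C'], name='mun'))
popul = pd.DataFrame({'population': [1000, 400]},
                     index=pd.Index(['A', 'B'], name='mun'))   # 'C' has no known population

left = unempl.join(popul)                  # default how='left': keeps 'C' with NaN population
inner = unempl.join(popul, how='inner')    # keeps only municipalities present in both frames
rate = inner['unemployed'] / inner['population']
print(left['population'].isna().sum(), len(rate))
```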
# +
####### INSERT YOUR CODE HERE
# -
# Make sure you did things correctly:
# + run_control={"marked": true}
assert(len(Y) <= min(len(unempl_2017_means), len(muns_population))) # join cannot be larger than each of the joined frames
assert(isinstance(Y, pd.Series) or (isinstance(Y, pd.DataFrame) and len(Y.columns) == 1)) # a series or a frame with just 1 column
# -
# <table align="left">
# <tr><td width="80"><img src="img/pro.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Can you list the municipalities with known unemployment but unknown population? Conversely, are there any municipalities with known population but unknown unemployment?
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# Great, so we already have Y, but we need patterns X to train a model. Which features should we use? Well, to begin with **we can use all features from muns\_population, since they are numerical**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Create an <i>X</i> matrix with the numerical features for each municipality. Considering that in <i>Y</i> we have unemployment rates (unemployed people divided by inhabitants), is it cheating if we use the number of inhabitants as a feature? Why/why not?
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# You should have obtained X and Y in the same order:
assert(isinstance(X, pd.DataFrame))
assert(all(X.columns == muns_population.columns))
assert(all(X.index == Y.index))
# ## 2. Baseline models
# Cool, we are ready to exploit Machine Learning. We just need to **split the data into training and test sets**. Let us fix the random seed and the ratio of data for training and test:
RANDOM_STATE = 42 # fix seed
TEST_PERC = 0.3 # 30% for test, 70% for training
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Split <i>X</i> and <i>Y</i> into training and test sets, so that you obtain variables <i>X_train</i>, <i>Y_train</i>, <i>X_test</i> and <i>Y_test</i>. Use both constants <i>RANDOM_STATE</i> and <i>TEST_PERC</i>.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# Let us verify that data have been split correctly:
assert(len(X_test) == ceil(TEST_PERC*len(X)) and len(Y_test) == len(X_test)) # test set size is 30% of total size
assert(len(X_train) == len(X)-len(X_test) and len(Y_train) == len(X_train)) # every pattern not in test should be in train
# And create **colors to plot in a map the unemployment rate of each municipality**:
# + code_folding=[]
cmap = cm.LinearColormap(colors=['green', 'yellow', 'orange', 'red', 'black'],
vmin=round(Y.min(), 2), vmax=round(Y.max(), 2),
caption='Unemployment rate')
cmap
# -
# The following code generates the map corresponding to the true values of patterns in the test set, saving this map as an HTML document:
# + run_control={"marked": true}
coms = utils.read_communities('./data/ComunidadesAutonomas_ETRS89_30N/') # read communities
provs = utils.read_provinces('./data/Provincias_ETRS89_30N/') # read provinces
_ = utils.generate_map(pd.concat([X_test, Y_test], axis=1), # generate map, which needs X and Y together...
utils.LAT_FIELD, utils.LON_FIELD, Y_test.name if Y_test.name is not None else 0, # columns with latitude, longitude and unemployment...
cmap, coms=coms, provs=provs, filename='./test_true.html') # colors, communities, provinces and where to store the map
# -
# If you open the HTML with a browser, you will see the test municipalities plotted in the map. Ideally, **that is what we should predict**. Will an SVM be able to reproduce it?
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Train a <b>linear SVR</b> with the dataset created above, using cross-validation to find the most suitable values for its hyperparameters.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Make sure you normalize inputs before feeding them to the SVM. Use a <i>Pipeline</i> with a <i>StandardScaler</i>.
# </td></tr>
# </table>
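# The scaling-plus-SVR pattern the hint describes can be sketched on synthetic data
# (toy data and a toy grid, not the assignment solution): a <i>Pipeline</i> guarantees the
# <i>StandardScaler</i> is fit only on each cross-validation training fold.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X_toy = rng.uniform(-1, 1, size=(200, 3))
y_toy = 2 * X_toy[:, 0] - X_toy[:, 1] + 0.05 * rng.randn(200)   # linear target + noise

pipe = Pipeline([('scaler', StandardScaler()),                   # normalise inputs first
                 ('svr', LinearSVR(random_state=0, max_iter=10000))])
grid = GridSearchCV(pipe, {'svr__C': [0.1, 1.0, 10.0]}, cv=3)    # tune C by cross-validation
grid.fit(X_toy, y_toy)
print(grid.best_params_, round(grid.best_score_, 3))             # best C and its mean CV R^2
```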
# +
####### INSERT YOUR CODE HERE
# -
# Let us try to **interpret what the resulting SVM is doing**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Generate predictions for the test set with the best linear SVR you obtained. Do you observe overfitting? Plot these predictions in a map as we did above. How good/bad is it performing, visually speaking? Do you see any strange behaviour?
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# You just need to call <i>generate_map</i>. Do not read communities nor provinces again, since they have already been loaded in memory.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Explain what the model obtained is doing. What is the intercept and why is it that? What is the importance of each feature? Which features have a positive (i.e., making unemployment bigger) or a negative (making it smaller) effect on unemployment? Does this agree with your intuition about the problem?
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# The intercept (<i>b</i> in our formulas) is stored in the <i>intercept_</i> attribute of the model, whereas the weight vector (<i>w</i> in our formulas) is kept in the <i>coef_</i> attribute.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# Let us **check if we can do better with a non-linear SVM**.
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Train a <b>non-linear SVR with the RBF kernel</b>, tuning its hyperparameters with cross-validation.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Create the map with the non-linear SVR predictions. Did things improve?
# </td></tr>
# </table>
# + run_control={"marked": true}
####### INSERT YOUR CODE HERE
# -
# <table align="left">
# <tr><td width="80"><img src="img/pro.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Try to explain the behaviour of the non-linear model obtained.
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# It is much more difficult to interpret a non-linear SVM than a linear one. You can examine the support vectors, available in the attributes <i>support_</i>, <i>support_vectors_</i> and <i>dual_coef_</i>, to see which municipalities are considered as the most and least representative of each class. Another idea is to check and detect trends in municipalities unemployment being underestimated, being overestimated or being accurately predicted.
# </td></tr>
# </table>
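# A toy 1-D regression (hypothetical values) showing how to access the support-vector
# attributes named in the hint on a fitted RBF SVR:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X_toy = np.sort(rng.uniform(0, 5, size=(40, 1)), axis=0)
y_toy = np.sin(X_toy).ravel()

svr = SVR(kernel='rbf', C=10.0, epsilon=0.1).fit(X_toy, y_toy)
# Indices of the support vectors, the vectors themselves, and their dual coefficients
print(svr.support_.shape, svr.support_vectors_.shape, svr.dual_coef_.shape)
```

Patterns with the largest |dual coefficient| are the ones the model leans on most.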
# +
####### INSERT YOUR CODE HERE
# -
# ## 3. Refined models
# ### 3.1. Previous years (2006-2015)
# So far we have predicted unemployment rates for 2017 based solely on municipality fixed characteristics (namely coordinates, altitude and population). Fortunately, we also have the unemployment figures for years 2006-2016, so that **we can also use unemployment in previous years to try to predict what will happen next**, in a time series fashion. Omitting for the moment 2016 (we will use it later):
prev_years = range(2006, 2016) # does not include 2016
prev_years
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Read the unemployment data from the <i>./data</i> folder for years 2006-2015 (both 2006 and 2015 included). You should concatenate the data for all years in a single dataframe called <i>unempl_2006_2015</i>.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# The function <i>read_unemployment</i> from the <i>utils</i> module will help you to load the unemployment figures for a single year, as was done above for year 2017. Each year has a separate CSV file, so you should call <i>read_unemployment</i> once for each file. Then use pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html">concat</a> method to store everything in a single dataframe.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# **For each municipality, we expect to have 120 observations** (10 years, 12 months each). For example, again taking Abla in Almería the last 20 are:
unempl_2006_2015.xs(key='Abla', level=utils.MUN_FIELD, drop_level=False).tail(20) # all records whose mun is Abla, keeping index intact
# **Again we can take yearly means** for each year:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Calculate unemployment yearly means for each year 2006-2015 and each municipality, in a similar way that was done previously for year 2017.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Since there are several years now, you should also group by year besides grouping by community, province and municipality.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# And **transform these raw means to rates**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Create an <i>X2</i> matrix with the mean unemployment rates for the different years. This matrix should have one row for each municipality and one column for each year.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# You have to do two things:
# <ol>
# <li> Join with populations and divide.
# <li> Transpose from column to row format.
# </ol>
# 1) is basically done as for 2017 (joining on just the first 3 levels, omitting the year), while 2) is easily tackled with the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html">pivot</a> method.
# </td></tr>
# </table>
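# The long-to-wide reshape the hint refers to can be sketched on a toy frame
# (hypothetical column names):

```python
import pandas as pd

long = pd.DataFrame({'mun':  ['A', 'A', 'B', 'B'],
                     'year': [2006, 2007, 2006, 2007],
                     'rate': [0.05, 0.06, 0.10, 0.09]})

# One row per municipality, one column per year
wide = long.pivot(index='mun', columns='year', values='rate')
print(wide.shape)   # (2, 2)
```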
# +
####### INSERT YOUR CODE HERE
# -
# Verify that your matrix is correct:
assert(all(X2.index == X.index) and (len(X2.columns) == len(prev_years))) # same exact municipalities than before, one column for each year
# **Split these new patterns**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Split <i>X2</i> into training and test sets, so that you obtain variables <i>X2_train</i> and <i>X2_test</i>.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Do not call <i>train_test_split</i> again, because <i>Y</i> is already split. Just index <i>X2</i> with the indices in <i>X_train</i> and <i>X_test</i>, because municipalities are the same.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# Training and test municipalities should be the same as before:
assert(all(X2_train.index == X_train.index) and all(X2_train.columns == X2.columns) and # same index, same columns for training
all(X2_test.index == X_test.index) and all(X2_test.columns == X2_train.columns)) # and for testing
# **Train and predict with a linear SVR with the new dataset**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Tune a <b>linear SVR</b>, predict with it, plot its predictions and compare with the ones you got for the baseline linear SVR. Are these predictions better? By how much?
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Explain what this new linear model is doing. Which do you think are the most relevant features now? Verify whether your intuition is correct or not.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# **See if a non-linear SVR can improve further**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Tune a <b>non-linear SVR</b>, predict with it, plot its predictions and compare with the ones you got for the baseline non-linear SVR, and also with the ones you just obtained for the linear SVR. Is it worth now going from linear to non-linear?
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# <table align="left">
# <tr><td width="80"><img src="img/pro.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Try to explain the behaviour of the new non-linear model obtained.
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# ### 3.2 Previous years 2006-2015 + 2016
# Time to **check whether introducing year 2016 helps or not**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Create an <i>X3</i> matrix with the same information as <i>X2</i> plus the unemployment mean rates for 2016 (i.e., an additional column). Split it into <i>X3_train</i> and <i>X3_test</i>.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/exclamation.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Do not process again years 2006-2015. You just have to read the information for 2016, process it analogously to what we did above and concatenate the resulting column with the ones of <i>X2</i>.
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# Your matrices should have incorporated the additional column for 2016, and kept the rest with no changes:
assert(all(X3_train.index == X2_train.index) and all((c in X3_train.columns) for c in X2_train.columns)
and len(X3_train.columns) == len(X2_train.columns)+1) # same index, same columns plus the extra one
assert(all(X3_test.index == X2_test.index) and all(X3_test.columns == X3_train.columns)) # same index, same columns as for train
# **Train the new linear model**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Tune a <b>linear SVR</b> and compare results with the ones you obtained for 2006-2015. Is 2016 helping? What happens now with feature relevance compared to the previous one?
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
# And **finally a non-linear model**:
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Tune a <b>non-linear SVR</b> and compare results with the ones you obtained for 2006-2015. Any better?
# </td></tr>
# </table>
# +
####### INSERT YOUR CODE HERE
# -
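# As a reference (not the official solution, and using synthetic stand-in data rather than the exercise's <i>X3_train</i>/<i>y3_train</i>), SVR hyper-parameters are typically tuned with a cross-validated grid search along these lines:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Synthetic stand-in data; in the exercise you would pass X3_train / its target
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(80, 5))
y_demo = 2.0 * X_demo[:, 0] + rng.normal(scale=0.1, size=80)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=3,
                      scoring="neg_mean_absolute_error")
search.fit(X_demo, y_demo)
best_C = search.best_params_["C"]
```

# The same pattern works for the linear SVR by setting kernel="linear" and dropping gamma from the grid.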
# ## 4. Summary
# <table align="left">
# <tr><td width="80"><img src="img/question.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Summarise the results obtained. There should be 6 models to compare:
# <ol>
# <li> Baseline with municipality characteristics (linear and non-linear).
# <li> Unemployment history 2006-2015 (linear and non-linear).
# <li> Unemployment history 2006-2016 (linear and non-linear).
# </ol>
# Which model would you choose if you had to go into production? Why?
# </td></tr>
# </table>
# ## 5. Bonus round
# <table align="left">
# <tr><td width="80"><img src="img/pro.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Explore further how we can generate even better predictions. The following are possible lines:
# <ul>
# <li>Combining variables from municipalities and past unemployment rates.</li>
# <li>Using the original monthly data instead of yearly means.</li>
# <li>Downloading and using additional data (economic, meteorological...) from other sources.</li>
# </ul>
# Feel free to edit this notebook from now on explaining your approach.
# </td></tr>
# </table>
# <table align="left">
# <tr><td width="80"><img src="img/pro.png" style="width:auto;height:auto"></td><td style="text-align:left">
# Imagine that instead of predicting the mean unemployment rate for 2017 you were asked to predict monthly unemployment rates for 2017. In other words, you would like to predict for January 2017, then for February 2017, and so on till December 2017.
#
# Explain how you would tackle this problem with the data available. Extra points if you implement your proposal and show how it works in practice. Feel free to edit this notebook from now on explaining your approach.
# </td></tr>
# </table>
| SVM/svmUnemployment/svmUnemployment_student.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Long short-term memory (LSTM)
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.layers import Dense, LSTM, Embedding
from tensorflow.keras.optimizers import Adam
n_train = 8000
n_test = 2000
x_train = np.random.randint(10, size=(n_train,30))
x_test = np.random.randint(10, size=(n_test,30))
def label(x):
y = np.sum(x,axis=1)
label = 1*(y>=100)
return label
y_train = label(x_train)
y_test = label(x_test)
y_test
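# Note that with this threshold the classes are quite imbalanced: the sum of 30 uniform digits has mean 135 and standard deviation of roughly 15.7, so sums below 100 are rare. A quick self-contained check (mirroring the generation scheme above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(10, size=(10000, 30))
frac_positive = np.mean(x.sum(axis=1) >= 100)  # fraction of labels equal to 1
```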
x_train = x_train.reshape((8000,30,1))
x_test = x_test.reshape((2000,30,1))
inputs = keras.Input(shape=(30,1))
lstm = LSTM(200, return_sequences=False)(inputs)  # keep only the final hidden state so the output matches the per-sequence labels
output = Dense(1, activation = "sigmoid" )(lstm)
model = keras.Model(inputs = inputs, outputs = output, name = "LSTM")
model.summary()
model.compile(
loss=keras.losses.BinaryCrossentropy(),
optimizer = Adam(learning_rate=0.001,beta_1=0.9,
beta_2=0.999, epsilon=1e-8),
metrics=["binary_accuracy"])
history = model.fit(x_train, y_train, batch_size=50, epochs=60,
shuffle=True, validation_data=(x_test, y_test))
# +
plt.figure(figsize=(10, 10))
plt.subplot(211)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'y', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
#plt.title(title_loss)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(212)
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
plt.plot(epochs, acc, 'y', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
#plt.title(title_acc)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.subplots_adjust(hspace=0.5)  # the subplots are stacked vertically
plt.show()
# -
score = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
# +
print("Number of examples in the test set with classification equal to 1: ",
len(y_test[y_test==1]),
"\nNumber of examples in the test set with classification equal to 0:",
len(y_test[y_test==0]))
print("\nNumber of examples in the train set with classification equal to 1: ",
len(y_train[y_train==1]),
"\nNumber of examples in the train set with classification equal to 0:",
len(y_train[y_train==0]))
# -
| Sheet_07_Laura_v1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.028759, "end_time": "2021-08-16T21:43:44.071108", "exception": false, "start_time": "2021-08-16T21:43:44.042349", "status": "completed"} tags=[]
# # Kaggle’s 30 days of machine learning:
# ## Scikit-optimize for LightGBM (regression tutorial)
#
# 
# + [markdown] papermill={"duration": 0.028256, "end_time": "2021-08-16T21:43:44.128102", "exception": false, "start_time": "2021-08-16T21:43:44.099846", "status": "completed"} tags=[]
# **Machine learning algorithms have lots of knobs, and success often comes from twiddling them a lot.**
#
# *(DOMINGOS, Pedro. A few useful things to know about machine learning. Communications of the ACM, 2012, 55.10: 78-87.)*
# + [markdown] papermill={"duration": 0.026344, "end_time": "2021-08-16T21:43:44.180442", "exception": false, "start_time": "2021-08-16T21:43:44.154098", "status": "completed"} tags=[]
# In optimizing your model for this competition, you may wonder if there is a way to:
#
# * Leverage the information that you get as you explore the hyper-parameter space
#
# * Not necessarily have to become an expert in a specific ML algorithm
#
# * Quickly find a good set of hyper-parameters
# + [markdown] papermill={"duration": 0.025573, "end_time": "2021-08-16T21:43:44.231708", "exception": false, "start_time": "2021-08-16T21:43:44.206135", "status": "completed"} tags=[]
# The answer is:
#
# **Bayesian Optimization** (*<NAME>; <NAME>; <NAME>. Practical bayesian optimization of machine learning algorithms. In: Advances in neural information processing systems. 2012. p. 2951-2959*)
# + [markdown] papermill={"duration": 0.029446, "end_time": "2021-08-16T21:43:44.292517", "exception": false, "start_time": "2021-08-16T21:43:44.263071", "status": "completed"} tags=[]
# The key idea behind Bayesian optimization is that we optimize a proxy function (the surrogate function) instead of the true objective function (which is what grid search and random search actually do). This holds if evaluating the true objective function is costly (if it is not, we simply go for random search).
#
# Bayesian search balances exploration against exploitation. At the start it explores randomly, building up a surrogate function of the objective as it goes. Based on that surrogate function, it then exploits its approximate knowledge of how the predictor behaves in order to sample more promising points and minimize the cost function at a global level, not a local one.
#
# Bayesian Optimization uses an acquisition function to tell us how promising an observation will be. To govern the tradeoff between exploration and exploitation, the algorithm defines an acquisition function that provides a single measure of how useful it would be to try any given point.
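# To make the acquisition-function idea concrete, here is a minimal, self-contained sketch of one common choice, expected improvement under a Gaussian surrogate posterior (purely illustrative; skopt handles this internally):

```python
import math

def expected_improvement(mu, sigma, best_so_far):
    # Closed-form expected improvement for a minimisation problem,
    # given the surrogate's posterior mean `mu` and std `sigma`
    sigma = max(sigma, 1e-12)  # guard against zero predicted variance
    z = (best_so_far - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best_so_far - mu) * cdf + sigma * pdf

# Exploitation: a clearly better predicted mean scores well...
ei_exploit = expected_improvement(mu=0.5, sigma=0.1, best_so_far=1.0)
# ...but exploration does too: same mean as the incumbent, high uncertainty
ei_explore = expected_improvement(mu=1.0, sigma=0.8, best_so_far=1.0)
```

# The candidate maximising this score is the one the optimizer evaluates next.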
# + [markdown] papermill={"duration": 0.027913, "end_time": "2021-08-16T21:43:44.347864", "exception": false, "start_time": "2021-08-16T21:43:44.319951", "status": "completed"} tags=[]
# 
# + [markdown] papermill={"duration": 0.025691, "end_time": "2021-08-16T21:43:44.400428", "exception": false, "start_time": "2021-08-16T21:43:44.374737", "status": "completed"} tags=[]
# In this step-by-step tutorial, you will apply Bayesian optimization to LightGBM in a few clear steps:
#
# 1. Prepare your data, especially your categorical variables
# 2. Define your cross-validation strategy
# 3. Define your evaluation metric
# 4. Define your base model
# 5. Define your hyper-parameter search space
# 6. Run optimization for a while
# + [markdown] papermill={"duration": 0.025407, "end_time": "2021-08-16T21:43:44.451635", "exception": false, "start_time": "2021-08-16T21:43:44.426228", "status": "completed"} tags=[]
# # 1. Data preparation
# + [markdown] papermill={"duration": 0.025665, "end_time": "2021-08-16T21:43:44.503280", "exception": false, "start_time": "2021-08-16T21:43:44.477615", "status": "completed"} tags=[]
# As first steps:
#
# * we load the train and test data
# * we separate the target from the training data
# * we separate the ids from the data
# * we convert integer variables to categories (so our machine learning algorithm can treat them as categorical variables rather than as standard numeric ones)
#
# You can add further processing, for instance feature engineering, in order to succeed in this competition.
# + papermill={"duration": 2.288112, "end_time": "2021-08-16T21:43:46.817200", "exception": false, "start_time": "2021-08-16T21:43:44.529088", "status": "completed"} tags=[]
# Importing core libraries
import numpy as np
import pandas as pd
from time import time
import pprint
import joblib
from functools import partial
# Suppressing warnings because of skopt verbosity
import warnings
warnings.filterwarnings("ignore")
# Classifiers
import lightgbm as lgb
# Model selection
from sklearn.model_selection import KFold
# Metrics
from sklearn.metrics import mean_squared_error
from sklearn.metrics import make_scorer
# Skopt functions
from skopt import BayesSearchCV
from skopt.callbacks import DeadlineStopper, DeltaYStopper
from skopt.space import Real, Categorical, Integer
# + papermill={"duration": 3.286514, "end_time": "2021-08-16T21:43:50.130930", "exception": false, "start_time": "2021-08-16T21:43:46.844416", "status": "completed"} tags=[]
# Loading data
X = pd.read_csv("../input/30-days-of-ml/train.csv")
X_test = pd.read_csv("../input/30-days-of-ml/test.csv")
# + papermill={"duration": 0.175201, "end_time": "2021-08-16T21:43:50.331599", "exception": false, "start_time": "2021-08-16T21:43:50.156398", "status": "completed"} tags=[]
# Preparing data as a tabular matrix
y = X.target
X = X.set_index('id').drop('target', axis='columns')
X_test = X_test.set_index('id')
# + papermill={"duration": 14.506362, "end_time": "2021-08-16T21:44:04.864920", "exception": false, "start_time": "2021-08-16T21:43:50.358558", "status": "completed"} tags=[]
# Dealing with categorical data
categoricals = [item for item in X.columns if 'cat' in item]
cat_values = np.unique(X[categoricals].values)
cat_dict = dict(zip(cat_values, range(len(cat_values))))
X[categoricals] = X[categoricals].replace(cat_dict).astype('category')
X_test[categoricals] = X_test[categoricals].replace(cat_dict).astype('category')
# + [markdown] papermill={"duration": 0.027293, "end_time": "2021-08-16T21:44:04.918826", "exception": false, "start_time": "2021-08-16T21:44:04.891533", "status": "completed"} tags=[]
# # Setting up optimization
# + [markdown] papermill={"duration": 0.026543, "end_time": "2021-08-16T21:44:04.972274", "exception": false, "start_time": "2021-08-16T21:44:04.945731", "status": "completed"} tags=[]
# First, we create a wrapper function to deal with running the optimizer and reporting back its best results.
# + papermill={"duration": 0.03799, "end_time": "2021-08-16T21:44:05.037245", "exception": false, "start_time": "2021-08-16T21:44:04.999255", "status": "completed"} tags=[]
# Reporting util for different optimizers
def report_perf(optimizer, X, y, title="model", callbacks=None):
"""
A wrapper for measuring time and performance of optmizers
optimizer = a sklearn or a skopt optimizer
X = the training set
y = our target
title = a string label for the experiment
"""
start = time()
if callbacks is not None:
optimizer.fit(X, y, callback=callbacks)
else:
optimizer.fit(X, y)
d=pd.DataFrame(optimizer.cv_results_)
best_score = optimizer.best_score_
best_score_std = d.iloc[optimizer.best_index_].std_test_score
best_params = optimizer.best_params_
print((title + " took %.2f seconds, candidates checked: %d, best CV score: %.3f "
+ u"\u00B1"+" %.3f") % (time() - start,
len(optimizer.cv_results_['params']),
best_score,
best_score_std))
print('Best parameters:')
pprint.pprint(best_params)
print()
return best_params
# + [markdown] papermill={"duration": 0.026374, "end_time": "2021-08-16T21:44:05.090383", "exception": false, "start_time": "2021-08-16T21:44:05.064009", "status": "completed"} tags=[]
# We then define the evaluation metric. Using the Scikit-learn function make_scorer allows us to convert the optimization into a minimization problem, as required by Scikit-optimize. We set squared=False by means of a partial function to obtain the root mean squared error (RMSE) as the evaluation metric.
# + papermill={"duration": 0.033729, "end_time": "2021-08-16T21:44:05.151672", "exception": false, "start_time": "2021-08-16T21:44:05.117943", "status": "completed"} tags=[]
# Setting the scoring function
scoring = make_scorer(partial(mean_squared_error, squared=False),
greater_is_better=False)
# + [markdown] papermill={"duration": 0.026116, "end_time": "2021-08-16T21:44:05.204708", "exception": false, "start_time": "2021-08-16T21:44:05.178592", "status": "completed"} tags=[]
# We set up a 5-fold cross-validation.
# + papermill={"duration": 0.032949, "end_time": "2021-08-16T21:44:05.264314", "exception": false, "start_time": "2021-08-16T21:44:05.231365", "status": "completed"} tags=[]
# Setting the validation strategy
kf = KFold(n_splits=5, shuffle=True, random_state=0)
# + [markdown] papermill={"duration": 0.026099, "end_time": "2021-08-16T21:44:05.316927", "exception": false, "start_time": "2021-08-16T21:44:05.290828", "status": "completed"} tags=[]
# We set up a generic LightGBM regressor.
# + papermill={"duration": 0.033683, "end_time": "2021-08-16T21:44:05.377607", "exception": false, "start_time": "2021-08-16T21:44:05.343924", "status": "completed"} tags=[]
# Setting the basic regressor
reg = lgb.LGBMRegressor(boosting_type='gbdt',
metric='rmse',
objective='regression',
n_jobs=1,
verbose=-1,
random_state=0)
# + [markdown] papermill={"duration": 0.026215, "end_time": "2021-08-16T21:44:05.430171", "exception": false, "start_time": "2021-08-16T21:44:05.403956", "status": "completed"} tags=[]
# We define a search space, making explicit the key hyper-parameters to optimize and the ranges in which to look for the best values.
#
# + papermill={"duration": 0.046542, "end_time": "2021-08-16T21:44:05.503539", "exception": false, "start_time": "2021-08-16T21:44:05.456997", "status": "completed"} tags=[]
# Setting the search space
search_spaces = {
'learning_rate': Real(0.01, 1.0, 'log-uniform'), # Boosting learning rate
'n_estimators': Integer(30, 5000), # Number of boosted trees to fit
'num_leaves': Integer(2, 512), # Maximum tree leaves for base learners
'max_depth': Integer(-1, 256), # Maximum tree depth for base learners, <=0 means no limit
'min_child_samples': Integer(1, 256), # Minimal number of data in one leaf
'max_bin': Integer(100, 1000), # Max number of bins that feature values will be bucketed
'subsample': Real(0.01, 1.0, 'uniform'), # Subsample ratio of the training instance
'subsample_freq': Integer(0, 10), # Frequency of subsample, <=0 means no enable
'colsample_bytree': Real(0.01, 1.0, 'uniform'), # Subsample ratio of columns when constructing each tree
'min_child_weight': Real(0.01, 10.0, 'uniform'), # Minimum sum of instance weight (hessian) needed in a child (leaf)
'reg_lambda': Real(1e-9, 100.0, 'log-uniform'), # L2 regularization
'reg_alpha': Real(1e-9, 100.0, 'log-uniform'), # L1 regularization
}
# + [markdown] papermill={"duration": 0.026225, "end_time": "2021-08-16T21:44:05.556628", "exception": false, "start_time": "2021-08-16T21:44:05.530403", "status": "completed"} tags=[]
# We then define the Bayesian optimization engine, providing it with our LightGBM regressor, the search spaces, the evaluation metric and the cross-validation strategy. We set a large number of possible experiments and some parallelism in the search operations.
# + papermill={"duration": 0.034387, "end_time": "2021-08-16T21:44:05.617770", "exception": false, "start_time": "2021-08-16T21:44:05.583383", "status": "completed"} tags=[]
# Wrapping everything up into the Bayesian optimizer
opt = BayesSearchCV(estimator=reg,
search_spaces=search_spaces,
scoring=scoring,
cv=kf,
n_iter=60, # max number of trials
n_points=3, # number of hyperparameter sets evaluated at the same time
n_jobs=-1, # number of jobs
iid=False, # if not iid it optimizes on the cv score
return_train_score=False,
refit=False,
optimizer_kwargs={'base_estimator': 'GP'}, # optmizer parameters: we use Gaussian Process (GP)
random_state=0) # random state for replicability
# + [markdown] papermill={"duration": 0.026284, "end_time": "2021-08-16T21:44:05.670703", "exception": false, "start_time": "2021-08-16T21:44:05.644419", "status": "completed"} tags=[]
# Finally we run the optimizer and wait for the results. We have set some limits on its operation: it stops if it cannot get consistent improvements from the search (DeltaYStopper) or when it exceeds a time deadline set in seconds (DeadlineStopper; we chose 6 hours).
# + papermill={"duration": 20787.283763, "end_time": "2021-08-17T03:30:32.980917", "exception": false, "start_time": "2021-08-16T21:44:05.697154", "status": "completed"} tags=[]
# Running the optimizer
overdone_control = DeltaYStopper(delta=0.0001) # We stop if the gain of the optimization becomes too small
time_limit_control = DeadlineStopper(total_time=60 * 60 * 6) # We impose a time limit (6 hours)
best_params = report_perf(opt, X, y,'LightGBM_regression',
callbacks=[overdone_control, time_limit_control])
# + [markdown] papermill={"duration": 0.028022, "end_time": "2021-08-17T03:30:33.037907", "exception": false, "start_time": "2021-08-17T03:30:33.009885", "status": "completed"} tags=[]
# # Prediction on test data
# + [markdown] papermill={"duration": 0.026938, "end_time": "2021-08-17T03:30:33.093399", "exception": false, "start_time": "2021-08-17T03:30:33.066461", "status": "completed"} tags=[]
# Having obtained the best hyper-parameters for the data at hand, we instantiate a LightGBM model with those values and train it on all the available examples.
#
# After training the model, we predict on the test set and save the results to a CSV file.
# + papermill={"duration": 0.036532, "end_time": "2021-08-17T03:30:33.157267", "exception": false, "start_time": "2021-08-17T03:30:33.120735", "status": "completed"} tags=[]
# Transferring the best parameters to our basic regressor
reg = lgb.LGBMRegressor(boosting_type='gbdt',
metric='rmse',
objective='regression',
n_jobs=1,
verbose=-1,
random_state=0,
**best_params)
# + papermill={"duration": 105.093262, "end_time": "2021-08-17T03:32:18.277746", "exception": false, "start_time": "2021-08-17T03:30:33.184484", "status": "completed"} tags=[]
# Fitting the regressor on all the data
reg.fit(X, y)
# + papermill={"duration": 290.837305, "end_time": "2021-08-17T03:37:09.143692", "exception": false, "start_time": "2021-08-17T03:32:18.306387", "status": "completed"} tags=[]
# Preparing the submission
submission = pd.DataFrame({'id':X_test.index,
'target': reg.predict(X_test).ravel()})
submission.to_csv("submission.csv", index = False)
# + papermill={"duration": 0.057576, "end_time": "2021-08-17T03:37:09.230592", "exception": false, "start_time": "2021-08-17T03:37:09.173016", "status": "completed"} tags=[]
submission
| chapter_08/tutorial-bayesian-optimization-with-lightgbm.ipynb |
# ---
# title: "Create Baseline Regression Model"
# author: "<NAME>"
# date: 2017-12-20T11:53:49-07:00
# description: "How to create a baseline regression model in scikit-learn for machine learning in Python."
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ## Preliminaries
# Load libraries
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn 1.2; this requires an older version
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# ## Load Boston Housing Dataset
# +
# Load data
boston = load_boston()
# Create features
X, y = boston.data, boston.target
# -
# ## Split Data Into Training And Test Set
# Make test and training split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# ## Create Dummy Regressor That Always Predicts The Mean Value Of The Target
# +
# Create a dummy regressor
dummy_mean = DummyRegressor(strategy='mean')
# "Train" dummy regressor
dummy_mean.fit(X_train, y_train)
# -
# ## Create Dummy Regressor That Always Predicts A Constant Value
# +
# Create a dummy regressor
dummy_constant = DummyRegressor(strategy='constant', constant=20)
# "Train" dummy regressor
dummy_constant.fit(X_train, y_train)
# -
# ## Evaluate Performance Metric
# Get R-squared score
dummy_constant.score(X_test, y_test)
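# As a quick numerical illustration (a self-contained sketch on synthetic data, not the Boston set): when train and test come from the same distribution, always predicting the training mean yields an R-squared close to zero, which is exactly why it makes a useful baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
y_train = rng.normal(loc=20.0, scale=5.0, size=150)
y_test = rng.normal(loc=20.0, scale=5.0, size=50)

pred = np.full_like(y_test, y_train.mean())  # the "mean" dummy strategy
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot  # same definition DummyRegressor.score uses
```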
| docs/machine_learning/model_evaluation/create_baseline_regression_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Popups
#
# It is possible to display the information of a feature showing **popups** when interacting with each feature. The events that allow us to interact with the feature are `hover` and `click`. A feature can listen to both events and the popup can display different information for each event.
# +
from cartoframes.auth import set_default_credentials
from cartoframes.viz import Map, Layer
set_default_credentials('cartovl')
# -
Map(
Layer(
'sf_neighborhoods',
'color: ramp(globalQuantiles($cartodb_id, 5), purpor)',
{
'hover': '$name',
'click': ['$name', '$created_at']
}
)
)
Map(
Layer(
'sf_neighborhoods',
popup={
'hover': '$name',
'click': ['$name', '$created_at']
}
)
)
# ### Using expressions to display information
Map(
Layer(
'populated_places',
'width: 15',
{
'hover': ['sqrt($pop_max)', '$pop_min % 100']
}
),
viewport={'zoom': 3.89, 'lat': 39.90, 'lng': 5.52}
)
# ### Using title and value
Map(
Layer(
'sf_neighborhoods',
popup={
'hover': {
'title': 'Name',
'value': '$name'
},
'click': [{
'title': 'Name',
'value': '$name'
},{
'title': 'Created at',
'value': '$created_at'
}]
}
)
)
| examples/03_visualizations/02_popups.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Let us create some RDDs
valueRDDA = sc.parallelize(["k", "f", "x", "w", "y", "f", "y", "k", "f"])
rddB = sc.parallelize([1, 2, 1, 2, 1, 2, 1, 1, 1])
# +
# Now, we can zip these RDDs to create new RDDs with different keys
rdd1 = rddB.zip(valueRDDA)
print("RDD 1 : ", rdd1.collect())
# -
# # reduceByKey()
# +
# Writing a transformation that finds all the unique strings corresponding to each key.
# rdd.map(kv => (kv._1, new Set[String]() + kv._2)).reduceByKey(_ ++ _)
rddResult=rdd1.map(lambda x: (x[0], set(x[1]) ) ).reduceByKey(lambda x, y: x.union(y))
rddResult.collect()
# -
# # aggregateByKey()
#
# +
# Better would be to use aggregateByKey()
def my_add(x, y):
x.add(y)
return x
rddResult2 = rdd1.aggregateByKey(set() , my_add , lambda x, y: x.union( y))
rddResult2.collect()
# -
# # combineByKey()
# +
# Using combineByKey()
# We need to provide 3 functions.
# createCombiner, which turns a V into a C (e.g., creates a one-element list)
# mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
# mergeCombiners, to combine two C’s into a single one (e.g., merges the lists)
def to_set(a):
return set(a)
def my_add(x, y):
x.add(y)
return x
rddResult2 = rdd1.combineByKey(to_set, my_add, lambda x, y: x.union(y))
rddResult2.collect()
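# The three-function contract can be mimicked in plain Python (a sketch of what Spark does, with two hand-made "partitions"; this is not Spark code):

```python
def combine_by_key(pairs, create_combiner, merge_value, merge_combiners):
    # Each partition builds its own combiners with create_combiner/merge_value,
    # then the per-partition results are merged with merge_combiners
    mid = len(pairs) // 2
    per_partition = []
    for part in (pairs[:mid], pairs[mid:]):
        acc = {}
        for k, v in part:
            acc[k] = merge_value(acc[k], v) if k in acc else create_combiner(v)
        per_partition.append(acc)
    merged = {}
    for acc in per_partition:
        for k, c in acc.items():
            merged[k] = merge_combiners(merged[k], c) if k in merged else c
    return merged

pairs = [(1, "k"), (2, "f"), (1, "x"), (2, "f"), (1, "y")]
unique_per_key = combine_by_key(pairs,
                                lambda v: {v},         # createCombiner
                                lambda c, v: c | {v},  # mergeValue
                                lambda a, b: a | b)    # mergeCombiners
```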
# +
# We can also aggregate to a list.
def to_list(a):
return [a]
def addToList(x, y):
x.append(y)
return x
def extend(x,y):
x.extend(y)
return x
rddResult2 = rdd1.combineByKey(to_list, addToList, extend)
rddResult2.collect()
| Notebooks/Spark-Example-14-combinedByKey.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# USAGE
# python train.py --dataset dataset
# import the necessary packages
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import os
from glob import glob
from tqdm import tqdm
from time import gmtime, strftime
# +
# # construct the argument parser and parse the arguments
# ap = argparse.ArgumentParser()
# ap.add_argument("-d", "--dataset", required=True,
# help="path to input dataset")
# ap.add_argument("-p", "--plot", type=str, default="plot.png",
# help="path to output loss/accuracy plot")
# ap.add_argument("-m", "--model", type=str, default="covid19.model",
# help="path to output loss/accuracy plot")
# args = vars(ap.parse_args())
# +
# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class images
print("[INFO] loading images...")
imagePaths = glob("dataset/normal/*")
data = []
labels = []
# loop over the image paths
for imagePath in tqdm(imagePaths):
# extract the class label from the filename
label = imagePath.split(os.path.sep)[-2]
# load the image, swap color channels, and resize it to be a fixed
# 224x224 pixels while ignoring aspect ratio
image = cv2.imread(imagePath)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224))
# update the data and labels lists, respectively
data.append(image)
labels.append("normal")
imagePaths = glob("dataset/covid/*")
# loop over the image paths
for imagePath in tqdm(imagePaths):
# extract the class label from the filename
label = imagePath.split(os.path.sep)[-2]
# load the image, swap color channels, and resize it to be a fixed
# 224x224 pixels while ignoring aspect ratio
image = cv2.imread(imagePath)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224))
# update the data and labels lists, respectively
data.append(image)
labels.append("covid")
# -
plt.imshow(data[0])
# +
# convert the data and labels to NumPy arrays while scaling the pixel
# intensities to the range [0, 255]
data = np.array(data) / 255.0
labels = np.array(labels)
# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,test_size=0.20, stratify=labels, random_state=42)
# -
# initialize the training data augmentation object
trainAug = ImageDataGenerator(
rotation_range=15,
fill_mode="nearest")
# +
# load the VGG16 network, ensuring the head FC layer sets are left
# off
baseModel = VGG16(weights="imagenet", include_top=False,input_tensor=Input(shape=(224, 224, 3)))
# construct the head of the model that will be placed on top of the
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)
# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
layer.trainable = False
# initialize the initial learning rate, number of epochs to train for,
# and batch size (these must be defined before compiling the model)
INIT_LR = 1e-3
EPOCHS = 1
BS = 8
# compile our model
print("[INFO] compiling model...")
opt = Adam(learning_rate=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])
# -
# +
# train the head of the network
print("[INFO] training head...")
H = model.fit(  # fit_generator is deprecated; fit accepts generators directly
trainAug.flow(trainX, trainY, batch_size=BS),
steps_per_epoch=len(trainX) // BS,
validation_data=(testX, testY),
validation_steps=len(testX) // BS,
epochs=EPOCHS)
# make predictions on the testing set
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)
# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)
# show a nicely formatted classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
target_names=lb.classes_))
# compute the confusion matrix and use it to derive the raw
# accuracy, sensitivity, and specificity
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
acc = (cm[0, 0] + cm[1, 1]) / total
sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
print("sensitivity: {:.4f}".format(sensitivity))
print("specificity: {:.4f}".format(specificity))
# plot the training loss and accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy on COVID-19 Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig("./plots/"+strftime("%Y.%m.%d_%H:%M:%S",gmtime()))
# -
# serialize the model to disk
print("[INFO] saving COVID-19 detector model...")
model.save("./models/"+strftime("%Y.%m.%d_%H:%M:%S",gmtime()), save_format="h5")
| ml/MASTER.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# name: ir
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/Marcioromarco/Scripits/blob/master/Teste.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="27yM9cEgJTsH" outputId="64dd470d-ada2-4998-9f85-1085acf2eb3c"
# Checking the R version
version
# + colab={"base_uri": "https://localhost:8080/"} id="Q7tCzvQgJv-P" outputId="bcfe6b44-4375-4cdd-9a15-4b9ff75b0905"
install.packages("dplyr")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="K74bULllJrH6" outputId="ee8144e4-1fa3-47de-f75c-3048943eeb52"
# Checking the installed packages
str(allPackage <- installed.packages())
allPackage[, c(1, 3:5)]  # to display it in table format
# + colab={"base_uri": "https://localhost:8080/", "height": 725} id="EYSndsH8CpoA" outputId="3e7ee49f-0bfc-46c0-f580-bc05d8be4c53"
dados <- read.csv2("est.csv", sep=";", dec=",",header = TRUE)
head(dados)
names(dados)
str(dados)
#install.packages("ExpDes.pt")
require(ExpDes.pt)
# + colab={"base_uri": "https://localhost:8080/"} id="doiZqOj7ENt5" outputId="96f02df4-ed9c-49e5-fd54-1d30770d9e2f"
#------ Analysis --------
dbc(dados$trat, dados$bloco, dados$ph, quali = TRUE, mcomp = "tukey", sigT = 0.05, sigF = 0.05)
fat2.dbc(dados$p, dados$t, dados$bloco, dados$ph, quali = c(TRUE, TRUE),
         mcomp = "tukey", fac.names = c("Solo", "Topsoil"), sigT = 0.05, sigF = 0.05)
| Teste.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.0
# language: julia
# name: julia-1.4
# ---
# # Discrete event simulation using `SimJulia`
# <NAME> (@sdwfrost), 2020-04-27
#
# ## Libraries
using ResumableFunctions
using SimJulia
using Distributions
using DataFrames
using Random
using StatsPlots
using BenchmarkTools
# ## Utility functions
# +
function increment!(a::Array{Int64})
push!(a,a[length(a)]+1)
end
function decrement!(a::Array{Int64})
push!(a,a[length(a)]-1)
end
function carryover!(a::Array{Int64})
push!(a,a[length(a)])
end;
# -
# ## Transitions
mutable struct SIRPerson
id::Int64 # numeric ID
  status::Symbol # one of :S, :I, or :R
end;
mutable struct SIRModel
sim::Simulation
β::Float64
c::Float64
γ::Float64
ta::Array{Float64}
Sa::Array{Int64}
Ia::Array{Int64}
Ra::Array{Int64}
allIndividuals::Array{SIRPerson}
end
# These functions update the state of the 'world' when either an infection or recovery occurs.
function infection_update!(sim::Simulation,m::SIRModel)
push!(m.ta,now(sim))
decrement!(m.Sa)
increment!(m.Ia)
carryover!(m.Ra)
end;
function recovery_update!(sim::Simulation,m::SIRModel)
push!(m.ta,now(sim))
carryover!(m.Sa)
decrement!(m.Ia)
increment!(m.Ra)
end;
# The following is the main simulation function. It is not efficient, as it activates a process for every susceptible; a more efficient algorithm would consider only infected individuals and activate each susceptible when an infection occurs. That, however, requires more bookkeeping and makes it harder to compare implementations directly.
@resumable function live(sim::Simulation, individual::SIRPerson, m::SIRModel)
while individual.status==:S
# Wait until next contact
@yield timeout(sim,rand(Distributions.Exponential(1/m.c)))
# Choose random alter
alter=individual
while alter==individual
N=length(m.allIndividuals)
index=rand(Distributions.DiscreteUniform(1,N))
alter=m.allIndividuals[index]
end
# If alter is infected
if alter.status==:I
infect = rand(Distributions.Uniform(0,1))
if infect < m.β
individual.status=:I
infection_update!(sim,m)
end
end
end
if individual.status==:I
# Wait until recovery
@yield timeout(sim,rand(Distributions.Exponential(1/m.γ)))
individual.status=:R
recovery_update!(sim,m)
end
end;
function MakeSIRModel(u0,p)
(S,I,R) = u0
N = S+I+R
(β,c,γ) = p
sim = Simulation()
allIndividuals=Array{SIRPerson,1}(undef,N)
for i in 1:S
p=SIRPerson(i,:S)
allIndividuals[i]=p
end
for i in (S+1):(S+I)
p=SIRPerson(i,:I)
allIndividuals[i]=p
end
for i in (S+I+1):N
p=SIRPerson(i,:R)
allIndividuals[i]=p
end
ta=Array{Float64,1}(undef,0)
push!(ta,0.0)
Sa=Array{Int64,1}(undef,0)
push!(Sa,S)
Ia=Array{Int64,1}(undef,0)
push!(Ia,I)
Ra=Array{Int64,1}(undef,0)
push!(Ra,R)
SIRModel(sim,β,c,γ,ta,Sa,Ia,Ra,allIndividuals)
end;
function activate(m::SIRModel)
[@process live(m.sim,individual,m) for individual in m.allIndividuals]
end;
function sir_run(m::SIRModel,tf::Float64)
SimJulia.run(m.sim,tf)
end;
function out(m::SIRModel)
result = DataFrame()
result[!,:t] = m.ta
result[!,:S] = m.Sa
result[!,:I] = m.Ia
result[!,:R] = m.Ra
result
end;
# ## Time domain
tmax = 40.0;
# ## Initial conditions
u0 = [990,10,0];
# ## Parameter values
p = [0.05,10.0,0.25];
# ## Random number seed
Random.seed!(1234);
# ## Running the model
des_model = MakeSIRModel(u0,p)
activate(des_model)
sir_run(des_model,tmax)
# ## Postprocessing
data_des=out(des_model);
# ## Plotting
@df data_des plot(:t, [:S :I :R], labels = ["S" "I" "R"], xlab="Time", ylab="Number")
# ## Benchmarking
@benchmark begin
des_model = MakeSIRModel(u0,p)
activate(des_model)
sir_run(des_model,tmax)
end
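# For comparison, the same event-driven contact/recovery logic can be sketched in
# Python with a plain priority queue of events instead of coroutines. This is a
# minimal sketch under the same parameters, not the SimJulia implementation;
# `sir_des` and its structure are illustrative:

```python
import heapq
import random

def sir_des(S, I, R, beta, c, gamma, tmax, seed=1234):
    """Event-driven SIR: each susceptible contacts a random other individual
    at rate c and becomes infected with probability beta if that individual
    is infected; infected individuals recover at rate gamma."""
    rng = random.Random(seed)
    N = S + I + R
    status = ['S'] * S + ['I'] * I + ['R'] * R
    events = []  # min-heap of (time, individual, kind)
    for i in range(N):
        if status[i] == 'S':
            heapq.heappush(events, (rng.expovariate(c), i, 'contact'))
        elif status[i] == 'I':
            heapq.heappush(events, (rng.expovariate(gamma), i, 'recover'))
    counts = {'S': S, 'I': I, 'R': R}
    while events:
        t, i, kind = heapq.heappop(events)
        if t > tmax:
            break
        if kind == 'recover':
            status[i] = 'R'
            counts['I'] -= 1
            counts['R'] += 1
        elif status[i] == 'S':
            # choose a random alter distinct from i
            alter = i
            while alter == i:
                alter = rng.randrange(N)
            if status[alter] == 'I' and rng.random() < beta:
                status[i] = 'I'
                counts['S'] -= 1
                counts['I'] += 1
                heapq.heappush(events, (t + rng.expovariate(gamma), i, 'recover'))
            else:
                heapq.heappush(events, (t + rng.expovariate(c), i, 'contact'))
    return counts

# Same u0 = [990, 10, 0] and p = [0.05, 10.0, 0.25] as in the Julia run above.
final = sir_des(990, 10, 0, beta=0.05, c=10.0, gamma=0.25, tmax=40.0)
print(final)
```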
| notebook/des/des.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plot
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler
dataset = pd.read_csv('datasets/ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=1/4, random_state=0
)
scaler_x = StandardScaler()
X_train = scaler_x.fit_transform(X_train)
X_test = scaler_x.transform(X_test)
classifier = GaussianNB()
classifier = classifier.fit(X_train, y_train)
# to evaluate the model, compute the confusion matrix on the test set
matrix = confusion_matrix(y_test, classifier.predict(X_test))
matrix
# +
# now we create a plotting function and plot our train and test sets
def plot_classifier(X_set, y_set, set_description='Training'):
"""
We visualise the decision boundary. First create a new meshgrid from
our test set and fill it with datapoints for every value of 0.01
in between our min and max of the first and second column.
Subtracting and adding 1 to each, so our datapoints don't
get squashed up to the sides of the graph.
"""
X1, X2 = np.meshgrid(
np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01)
)
# we then go over each data point in our new mesh and predict if the value is 0 or 1 and apply
# a color to it.
plot.contourf(
X1, X2,
classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha=0.75,
cmap=ListedColormap(('red', 'green'))
)
# we set the limits of the graph to the limits of our mesh grid.
plot.xlim(X1.min(), X1.max())
plot.ylim(X2.min(), X2.max())
# and add our training set data points.
for i, j in enumerate(np.unique(y_set)):
plot.scatter(
X_set[y_set == j, 0],
X_set[y_set == j, 1],
c=ListedColormap(('red', 'green'))(i),
label=j
)
plot.title(f'Naïve Bayes classification ({set_description})')
plot.xlabel('Age')
plot.ylabel('Estimated salary')
plot.legend()
plot.show()
plot_classifier(X_train, y_train)
plot_classifier(X_test, y_test, 'Testing')
# -
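# The meshgrid → ravel → predict → reshape pattern used inside `plot_classifier`
# can be illustrated in isolation with a toy threshold "classifier" (hypothetical,
# standing in for the trained `GaussianNB` above):

```python
import numpy as np

# Build a small grid over a 2-D feature space.
X1, X2 = np.meshgrid(np.arange(0, 3), np.arange(0, 2))  # both shape (2, 3)

# Flatten the grid into an (n_points, 2) array of coordinates,
# exactly as done before calling classifier.predict above.
points = np.array([X1.ravel(), X2.ravel()]).T           # shape (6, 2)

# A toy stand-in for classifier.predict: class 1 when x + y >= 2.
preds = (points.sum(axis=1) >= 2).astype(int)

# Reshape predictions back onto the grid for contourf-style plotting.
grid_preds = preds.reshape(X1.shape)
print(grid_preds)
```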
| Part 3 - Classification/naive-bayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''DS'': conda)'
# metadata:
# interpreter:
# hash: f36447fc4b66244121124ec6fa98109a03b0559db4ad1e526e9df3ed1c079e00
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
file = pd.read_excel(r'D:\Earth.Org\Pollution\Pollution Index.xlsx')  # raw string so backslashes are not treated as escapes
file = file.set_index('Rank')
file
file[:10]
file[file['Place'] == 'Xuzhou, China']
file[file['Place'] == 'Delhi, India']
p_list = ['Sydney, Australia', 'Tokyo, Japan', 'Berlin, Germany', 'Delhi, India', 'New York, NY, United States', 'Prague, Czech Republic', 'London, United Kingdom', 'Beijing, China', 'Shanghai, China', 'Guangzhou, China', 'Suzhou, China', 'Chengdu, China', 'Rio de Janeiro, Brazil']
file[file['Place'].isin(p_list)]
p_list = pd.concat([file[:10],file[file['Place'].isin(p_list)]])
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
new_rows = pd.DataFrame([
    {'Place': 'Xinxiang, China', 'Pollution Index': 115.52, 'Exp Pollution Index': 213.83},
    {'Place': 'Xuzhou, China', 'Pollution Index': 81.55, 'Exp Pollution Index': 156.38},
])
p_list = pd.concat([p_list, new_rows], ignore_index=True)
p_list
one_cigarette_per_day = 22  # ~22 µg/m³ of PM2.5 exposure ≈ smoking one cigarette per day (Berkeley Earth rule of thumb)
p_list['PM 2.5 (micro g/m3)'] = [48, 39.2, 85, 58.8, 30.3, 62, 30.3, 110.2, 83.3, 97.7, 98.6, 42.1, 35.4, 28.9, 37.7, 42.4, 28.2, 11.4, 7, 11.7, 9.7, 11.5, 10.1, 84, 60]
p_list['Equivalent Cigarettes/day'] = p_list['PM 2.5 (micro g/m3)'] / one_cigarette_per_day
p_list
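# The PM2.5-to-cigarettes conversion is a single division; a quick standalone
# check of the arithmetic (using the same ~22 µg/m³ ≈ one cigarette/day rule of
# thumb as the code above; the helper function below is illustrative):

```python
ONE_CIGARETTE_UG_M3 = 22  # rule-of-thumb PM2.5 equivalent of one cigarette/day

def cigarettes_per_day(pm25_ug_m3):
    """Convert a PM2.5 concentration (µg/m³) to equivalent cigarettes/day."""
    return pm25_ug_m3 / ONE_CIGARETTE_UG_M3

# e.g. an entry of 110.2 µg/m³ in the table corresponds to ~5 cigarettes a day
print(round(cigarettes_per_day(110.2), 2))
```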
p_list = p_list.sort_values(by=['Equivalent Cigarettes/day', 'Pollution Index'], ascending=False)
p_list
p_list.to_csv(r'D:\Earth.Org\Pollution\Pollution.csv')  # raw string so backslashes are not treated as escapes
#plt.plot(p_list['Place'], p_list['Pollution Index'])
plt.scatter(p_list['Place'], p_list['Equivalent Cigarettes/day'])
plt.show()
| calcp.ipynb |