markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
RM average number of rooms per dwelling | boston_data['RM'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
AGE proportion of owner-occupied units built prior to 1940 | boston_data['AGE'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
DIS weighted distances to five Boston employment centres | boston_data['DIS'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
TAX full-value property-tax rate per $10,000 | boston_data['TAX'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
PTRATIO pupil-teacher ratio by town | boston_data['PTRATIO'].unique() | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
B where Bk is the proportion of blacks by town | boston_data['B'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
LSTAT percentage lower status of the population | boston_data['LSTAT'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
MEDV Median value of owner-occupied homes in $1000's | boston_data['MEDV'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
After checking the data type of each variable, we have 2 discrete numerical variables and 10 floating-point (continuous) variables. To understand whether a variable is continuous or discrete, we can also make a histogram for each:* CRIM per capita crime rate by town* ZN proportion of residential land zoned for lots over 25,000 sq.ft.* INDUS proportion of non-retail business acres per town* CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)* NOX nitric oxides concentration (parts per 10 million)* RM average number of rooms per dwelling* AGE proportion of owner-occupied units built prior to 1940* DIS weighted distances to five Boston employment centres* RAD index of accessibility to radial highways* TAX full-value property-tax rate per $10,000* PTRATIO pupil-teacher ratio by town* B where Bk is the proportion of blacks by town* LSTAT percentage lower status of the population* MEDV Median value of owner-occupied homes in $1000's Making a histogram for the per capita crime rate by town (`CRIM`) by dividing the variable range into intervals. | n_data = len(boston_data['CRIM'])
bins = int(np.sqrt(n_data))
boston_data['CRIM'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
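The `bins` computation repeated in the cells below follows the square-root choice (roughly √n bins for n observations). A minimal sketch of that rule as a standalone helper (the function name is our own):

```python
import math

def sqrt_bins(n_data: int) -> int:
    """Square-root choice: use roughly sqrt(n) bins for n data points."""
    return max(1, int(math.sqrt(n_data)))

# The Boston housing data has 506 rows, so each histogram gets 22 bins.
print(sqrt_bins(506))  # → 22
```

Rules of thumb like this trade detail for smoothness; for heavily skewed variables a larger bin count may reveal more structure.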
Making a histogram for the proportion of residential land zoned for lots over 25,000 sq.ft. (`ZN`) by dividing the variable range into intervals. | n_data = len(boston_data['ZN'])
bins = int(np.sqrt(n_data))
boston_data['ZN'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the proportion of non-retail business acres per town (`INDUS`) by dividing the variable range into intervals. | n_data = len(boston_data['INDUS'])
bins = int(np.sqrt(n_data))
boston_data['INDUS'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the nitric oxides concentration (parts per 10 million) (`NOX`) by dividing the variable range into intervals. | n_data = len(boston_data['NOX'])
bins = int(np.sqrt(n_data))
boston_data['NOX'].hist(bins=bins) | 22
| Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the average number of rooms per dwelling (`RM`) by dividing the variable range into intervals. | n_data = len(boston_data['RM'])
bins = int(np.sqrt(n_data))
boston_data['RM'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the proportion of owner-occupied units built prior to 1940 (`AGE`) by dividing the variable range into intervals. | n_data = len(boston_data['AGE'])
bins = int(np.sqrt(n_data))
boston_data['AGE'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the weighted distances to five Boston employment centres (`DIS`) by dividing the variable range into intervals. | n_data = len(boston_data['DIS'])
bins = int(np.sqrt(n_data))
boston_data['DIS'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the full-value property-tax rate per $10,000 (`TAX`) by dividing the variable range into intervals. | n_data = len(boston_data['TAX'])
bins = int(np.sqrt(n_data))
boston_data['TAX'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the pupil-teacher ratio by town (`PTRATIO`) by dividing the variable range into intervals. | n_data = len(boston_data['PTRATIO'])
bins = int(np.sqrt(n_data))
boston_data['PTRATIO'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for `B` (where Bk is the proportion of blacks by town) by dividing the variable range into intervals. | n_data = len(boston_data['B'])
bins = int(np.sqrt(n_data))
boston_data['B'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the percentage of lower-status population (`LSTAT`) by dividing the variable range into intervals. | n_data = len(boston_data['LSTAT'])
bins = int(np.sqrt(n_data))
boston_data['LSTAT'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the median value of owner-occupied homes in $1000's (`MEDV`) by dividing the variable range into intervals. | n_data = len(boston_data['MEDV'])
bins = int(np.sqrt(n_data))
boston_data['MEDV'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Making a histogram for the index of accessibility to radial highways (`RAD`) by dividing the variable range into intervals. | n_data = len(boston_data['RAD'])
bins = int(np.sqrt(n_data))
boston_data['RAD'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Taking a look at the histograms of the features, we notice that the continuous variables take values over a continuous range rather than a discrete set. Making a histogram for the Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) (`CHAS`) by dividing the variable range into intervals. | n_data = len(boston_data['CHAS'])
bins = int(np.sqrt(n_data))
boston_data['CHAS'].hist(bins=bins) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
We notice here that the values of this variable are discrete. Quantifying Missing Data: calculating the missing values in the dataset. | boston_data.isnull().sum() | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
There are no missing values. Determining the cardinality of categorical variables: find the unique values in each categorical variable. | boston_data.nunique() | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
The nunique() method ignores missing values by default. If we want to consider missing values as an additional category, we should set the dropna argument to False: data.nunique(dropna=False). Let's print out the unique categories in the Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) `CHAS` | boston_data['CHAS'].unique() | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
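The dropna behaviour described above can be checked on a tiny throwaway Series (the example values here are our own):

```python
import pandas as pd

# A toy Series with one missing value
s = pd.Series([0, 1, 1, None])
print(s.nunique())              # NaN ignored by default → 2
print(s.nunique(dropna=False))  # NaN counted as its own category → 3
```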
pandas nunique() can be used on the entire dataframe. pandas unique(), on the other hand, works only on a pandas Series. Thus, we need to specify the column name whose unique values we want to return. | boston_data[['CHAS','RAD']].nunique().plot.bar(figsize=(12,6))
plt.xlabel("Variables")
plt.ylabel("Number Of Unique Values")
plt.title("Cardinality")
plt.show() | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Chapter 5: Building a Calculator 5.1.2 Stack Machine | def calc(expression: str):
# split the expression on whitespace into tokens
tokens = expression.split()
stack = []
for token in tokens:
if token.isdigit():
# numbers are pushed onto the stack
stack.append(int(token))
continue
# if the token is not a number, treat it as an operator
x = stack.pop()
y = stack.pop()
if token == '+':
stack.append(x+y)
elif token == '*':
stack.append(x*y)
return stack.pop()
calc('1 2 + 2 3 + *')
# !pip install pegtree
import pegtree as pg
from pegtree.colab import peg, pegtree, example | _____no_output_____ | MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
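The stack discipline in calc extends naturally to the remaining operators. A sketch (note that because x is popped first, it holds the operand pushed last, so non-commutative operators must compute y op x; the use of integer division for '/' is our own assumption):

```python
def calc2(expression: str):
    """RPN calculator extending calc above with '-' and '/'."""
    stack = []
    for token in expression.split():
        if token.isdigit():
            stack.append(int(token))
            continue
        x = stack.pop()  # second operand (pushed last)
        y = stack.pop()  # first operand
        if token == '+':
            stack.append(y + x)
        elif token == '-':
            stack.append(y - x)
        elif token == '*':
            stack.append(y * x)
        elif token == '/':
            stack.append(y // x)  # integer division (an assumption)
    return stack.pop()

print(calc2('5 1 -'))      # → 4
print(calc2('8 2 / 3 +'))  # → 7
```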
To display the syntax tree, graphviz must already be installed. | %%peg
Expr = Prod ("+" Prod)*
Prod = Value ("*" Value)*
Value = { [0-9]+ #Int } _
example Expr 1+2+3
%%peg
Expr = { Prod ("+" Prod)* #Add }
Prod = { Value ("*" Value)* #Mul }
Value = { [0-9]+ #Int } _
example Expr 1+2+3
%%peg
Expr = Prod {^ "+" Prod #Add }*
Prod = Value {^ "*" Value #Mul }*
Value = { [0-9]+ #Int } _
example Expr 1+2+3
%%peg
Expr = Prod {^ "+" Prod #Add }*
Prod = Value {^ "*" Value #Mul }*
Value = "(" Expr ")" / Int
Int = { [0-9]+ #Int} _
example Expr 1+(2+3) | _____no_output_____ | MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
Parser generation with PegTree | %%peg calc.pegtree
Start = Expr EOF // treat unconsumed characters as a syntax error
Expr = Prod ({^ "+" Prod #Add } / {^ "-" Prod #Sub } )*
Prod = Value ({^ "*" Value #Mul } / {^ "/" Value #Div } )*
Value = { [0-9]+ #Int} _ / "(" Expr ")"
example Expr 1+2*3
example Expr (1+2)*3
example Expr 1*2+3 | _____no_output_____ | MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
Loading a PegTree grammar | peg = pg.grammar('calc.pegtree')
GRAMMAR = '''
Start = Expr EOF
Expr = Prod ({^ "+" Prod #Add } / {^ "-" Prod #Sub } )*
Prod = Value ({^ "*" Value #Mul } / {^ "/" Value #Div } )*
Value = { [0-9]+ #Int} _ / "(" Expr ")"
'''
peg = pg.grammar(GRAMMAR)
peg['Expr'] | _____no_output_____ | MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
5.3.2 Generating the parser | parser = pg.generate(peg)
tree = parser('1+2')
print(repr(tree))
tree = parser('3@14')
print(repr(tree)) | Syntax Error ((unknown source):1:1+1)
3@14
^
| MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
Syntax trees and the Visitor pattern | peg = pg.grammar('calc.pegtree')
parser = pg.generate(peg)
tree = parser('1+2*3')
tree.getTag()
len(tree)
left = tree[0]
left.getTag()
left = tree[0]
str(left)
def calc(tree):
tag = tree.getTag()
if tag == 'Add':
t0 = tree[0]
t1 = tree[1]
return calc(t0) + calc(t1)
if tag == 'Mul':
t0 = tree[0]
t1 = tree[1]
return calc(t0) * calc(t1)
if tag == 'Int':
token = tree.getToken()
return int(token)
print(f'TODO: {tag}') # report an unimplemented tag
return 0
tree = parser('1+2*3')
print(calc(tree)) | 7
| MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
The Visitor pattern | class Visitor(object):
def visit(self, tree):
tag = tree.getTag()
name = f'accept{tag}'
if hasattr(self, name): # check whether an accept method exists
# look up the method by its name
acceptMethod = getattr(self, name)
return acceptMethod(tree)
print(f'TODO: accept{tag} method')
return None
class Calc(Visitor): # inherits from Visitor
def __init__(self, parser):
self.parser = parser
def eval(self, source):
tree = self.parser(source)
return self.visit(tree)
def acceptInt(self, tree):
token = tree.getToken()
return int(token)
def acceptAdd(self, tree):
t0 = tree.get(0)
t1 = tree.get(1)
v0 = self.visit(t0)
v1 = self.visit(t1)
return v0 + v1
def acceptMul(self, tree):
t0 = tree.get(0)
t1 = tree.get(1)
v0 = self.visit(t0)
v1 = self.visit(t1)
return v0 * v1
def accepterr(self, tree):
print(repr(tree))
raise SyntaxError()
calc = Calc(parser)
print(calc.eval("1+2*3"))
print(calc.eval("(1+2)*3"))
print(calc.eval("1*2+3"))
calc.eval('1@2') | Syntax Error ((unknown source):1:1+1)
1@2
^
| MIT | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 |
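The grammar defines #Sub and #Div tags, but the Calc visitor above only implements Add, Mul, and Int. A self-contained sketch of the missing methods, using a minimal stand-in node class (MiniNode, our own) instead of pegtree's real tree objects:

```python
class MiniNode:
    """Minimal stand-in for a pegtree parse-tree node (illustration only)."""
    def __init__(self, tag, children=(), token=''):
        self.tag, self.children, self.token = tag, list(children), token
    def getTag(self): return self.tag
    def get(self, i): return self.children[i]
    def getToken(self): return self.token

class MiniCalc:
    """Visitor-style evaluator covering the #Sub and #Div tags as well."""
    def visit(self, t):
        return getattr(self, f'accept{t.getTag()}')(t)
    def acceptInt(self, t): return int(t.getToken())
    def acceptAdd(self, t): return self.visit(t.get(0)) + self.visit(t.get(1))
    def acceptSub(self, t): return self.visit(t.get(0)) - self.visit(t.get(1))
    def acceptMul(self, t): return self.visit(t.get(0)) * self.visit(t.get(1))
    def acceptDiv(self, t): return self.visit(t.get(0)) / self.visit(t.get(1))

# (10 - 4) / 2
tree = MiniNode('Div', [MiniNode('Sub', [MiniNode('Int', token='10'),
                                         MiniNode('Int', token='4')]),
                        MiniNode('Int', token='2')])
print(MiniCalc().visit(tree))  # → 3.0
```

The same two accept methods could be added to Calc itself, since pegtree's nodes expose the same getTag/get/getToken interface used here.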
This notebook will walk through the steps of using the Colorado Information Marketplace (https://data.colorado.gov/) API with Python. The first step will be to acquire the API endpoint and user tokens. For this tutorial I will be using the Aquaculture Permittees in Colorado dataset (https://data.colorado.gov/Agriculture/Aquaculture-Permittees-in-Colorado/e6e8-qmi7). Once you have navigated to your database of interest, click on the API button. Click on the blue copy button to copy the entire API Endpoint URL; you are going to need that later. To be granted access tokens, you need to click on the API docs button and scroll down until you reach the section about tokens. Click on the "Sign up for an app token!" button, which will take you to a sign-in page. If you don't have an account, create one now; otherwise just sign in. Once you sign in, you will be asked to create an application. Fill out the form, then click create. You should then have both an App Token and a Secret Token. The second step is to retrieve some data using the API and put it into a pandas dataframe | # import pandas
import pandas as pd
# create these variables using the information found in step one
api_url = 'https://data.colorado.gov/xxxxx/xxxxxx.json'
App_Token = 'xxxxxxxxxxxxxxxx' | _____no_output_____ | MIT | tutorial-notebook.ipynb | mtchem/python_for_colorado-API |
Create urls to request data. The maximum number of returns for each request is 50,000 records per page. The *limit* parameter chooses how many records to return per page, and the *offset* parameter determines which record the request will start with. | limit = 100 # limits the number of records to 100
offset = 20 # starts collecting records at record 20
# url_1 creates a url that will query the API and return all the fields (columns) of records 20-120
url_1 = api_url + '?' + '$$app_token=' + App_Token + '&$limit=' + str(limit) + '&$offset=' + str(offset) | _____no_output_____ | MIT | tutorial-notebook.ipynb | mtchem/python_for_colorado-API |
Pandas has a method that retrieves api data and creates a dataframe | raw_data = pd.read_json(url_1)
| _____no_output_____ | MIT | tutorial-notebook.ipynb | mtchem/python_for_colorado-API |
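Building those request URLs by hand gets repetitive when paging through a large dataset. A small helper that assembles the query string (the endpoint, token, and helper name here are placeholders; only the standard Socrata $$app_token/$limit/$offset parameters are assumed):

```python
def build_socrata_url(endpoint, app_token, limit=1000, offset=0):
    """Assemble a Socrata API request URL with paging parameters."""
    return (f"{endpoint}?$$app_token={app_token}"
            f"&$limit={limit}&$offset={offset}")

# Three consecutive pages of 100 records each
urls = [build_socrata_url('https://data.colorado.gov/resource/example.json',
                          'MY_TOKEN', limit=100, offset=page * 100)
        for page in range(3)]
for u in urls:
    print(u)
```

Each URL in the list could then be fed to pd.read_json in a loop and the resulting frames concatenated.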
This is for conventional and irregular FORC diagrams. Two examples of irforc and conventional FORC data are in PmagPy/data_files/forc_diagram/. The input data format can vary considerably across different softwares and instruments; nevertheless, as long as the line [' Field Moment '] is included before the measured data, the software will work. On the command line, python3 forcdiagram /data_files/irforc_example.irforc 3 will plot the FORC diagram with SF=3 | from forc_diagram import *
from matplotlib import pyplot as plt
#forc = Forc(fileAdres='/data_files/irforc_example.irforc',SF=3)
forc = Forc(fileAdres='../example/MSM33-60-1-d416_2.irforc',SF=3)
fig = plt.figure(figsize=(6,5), facecolor='white')
fig.subplots_adjust(left=0.18, right=0.97,
bottom=0.18, top=0.9, wspace=0.5, hspace=0.5)
plt.contour(forc.xi*1000,
forc.yi*1000,
forc.zi,9,
colors='k',linewidths=0.5)#mt to T
plt.pcolormesh(forc.xi*1000,
forc.yi*1000,
forc.zi,
cmap=plt.get_cmap('rainbow'))#vmin=np.min(rho)-0.2)
plt.colorbar()
plt.xlabel('B$_{c}$ (mT)',fontsize=12)
plt.ylabel('B$_{i}$ (mT)',fontsize=12)
plt.show() | _____no_output_____ | BSD-3-Clause | data_files/forc_diagram/.ipynb_checkpoints/forc_diagram-checkpoint.ipynb | apivarunas/PmagPy |
Chapter 1: The Way of the Program The First Program | print('Hello, World!') | Hello, World!
| MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
Arithmetic Operators ( +, -, *, /, **, ^) (addition, subtraction, multiplication, division, exponentiation, XOR) | 40 + 2 # add
43 - 1 # Sub
6 * 7 # multiply
84/ 2 # Division
6**2 + 6 # Exponent
6 ^ 2 # XOR(Bitwise operator) | _____no_output_____ | MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
Values and Types: A value is one of the basic things a program works with, like a letter or a number. | type(2)
type(42.0)
type('Hello, World!')
1,000,000 | _____no_output_____ | MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
Formal and Natural Languages: 1). Programming languages are formal languages that have been designed to express computations. 2). Syntax rules come in two flavors, pertaining to tokens (such as words, numbers, and chemical elements) and token combination (order, well-structured). 3). Parsing: figuring out the structure of a formal-language sentence or statement. 4). Programs: The meaning of a computer program is unambiguous and literal, and can be understood entirely by analysis of the tokens and structure. 5). Programming errors are called bugs and the process of tracking them down is called debugging. Glossary: problem solving, high-level language, low-level language, portability, interpreter, prompt, program, print statement, operator, value, type, integer, floating-point, string, natural language, formal language, token, syntax, parse, bug, debugging - refer book EXERCISES: | print('helloworld)
print 'helloworld'
2++2
02+02
2 2 | _____no_output_____ | MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
1. How many seconds are there in 42 minutes 42 seconds? | 42 * 60 + 42 | _____no_output_____ | MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
2. How many miles are there in 10 kilometers? hint: there are 1.61 kilometers in a mile | 10/1.61 | _____no_output_____ | MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
3. If you run a 10 kilometer race in 42 minutes 42 seconds, what is your average pace (time per mile in minutes and seconds)? What is your average speed in miles per hour? | 10/1.61
42*60+42
6.214/2562
42+42/60
6.214/42.7 | _____no_output_____ | MIT | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python |
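The same arithmetic collected in one place, using the 1.61 km-per-mile hint from exercise 2:

```python
km = 10
minutes = 42 + 42 / 60        # total time: 42.7 minutes
miles = km / 1.61             # ≈ 6.21 miles
pace = minutes / miles        # minutes per mile
mph = miles / (minutes / 60)  # miles per hour
print(round(pace, 2), round(mph, 2))  # → 6.87 8.73
```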
Final Exam ***Problem Statement 1***: Create a Python program that will output the sum of 10 numbers less than 5 using a FOR LOOP statement. | total = 0  # 'total' avoids shadowing the built-in sum()
num = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
for x in num:
total = total + x
print ("The sum of 10 numbers less than five is",total) | The sum of 10 numbers less than five is -5
| Apache-2.0 | Final_Exam.ipynb | JuliusCaezarEugenio/CPEN21-A-CPE-1-1 |
***Problem Statement 2***: Create a Python program that will accept five numbers and determine the sum of the first and last numbers among the five numbers entered, using a WHILE LOOP. | num = int(input("1st number: "))
while (num !=0):
l = int(input("2nd number: "))
o = int(input("3rd number: "))
v = int(input("4th number: "))
e = int(input("5th number: "))
break
x = e
while (x!=0):
x = num + e
print ("The sum of first and last number is", x)
num -=1
break | 1st number: 1
2nd number: 2
3rd number: 3
4th number: 4
5th number: 5
The sum of first and last number is 6
| Apache-2.0 | Final_Exam.ipynb | JuliusCaezarEugenio/CPEN21-A-CPE-1-1 |
***Problem Statement 3***: Create a Python program to calculate student grades. It accepts a numerical grade as input and it will display the character grade as output based on the given scale: (Use a Nested IF-ELSE statement) | grade = float(input("Enter numerical grade: "))
if grade >=90:
print ("Character Grade: A")
elif grade >= 80:
print ("Character Grade: B")
elif grade >= 70:
print ("Character Grade: C")
elif grade >= 60:
print ("Character Grade: D")
else:
print ("Character Grade: F")
| Enter numerical grade: 98.5
Character Grade: A
| Apache-2.0 | Final_Exam.ipynb | JuliusCaezarEugenio/CPEN21-A-CPE-1-1 |
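The same scale as a reusable function (assuming the usual cutoffs A ≥ 90, B ≥ 80, C ≥ 70, D ≥ 60, F otherwise; the function name is our own):

```python
def letter_grade(score: float) -> str:
    """Map a numerical grade to a character grade."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

print(letter_grade(98.5))  # → A
print(letter_grade(80))    # → B
```

Checking descending cutoffs in order avoids the compound `<= and >` conditions, which are easy to get wrong at the boundaries.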
Week 3 - Ungraded Lab: Data Labeling Welcome to the ungraded lab for week 3 of Machine Learning Engineering for Production. In this lab, you will see how the data labeling process affects the performance of a classification model. Labeling data is usually a very labor-intensive and costly task, but it is of great importance. As you saw in the lectures there are many ways to label data; this is dependent on the strategy used. Recall the example with the iguanas: all of the following are valid labeling alternatives, but they clearly follow different criteria. **You can think of every labeling strategy as a result of different labelers following different labeling rules**. If your data is labeled by people using different criteria, this will have a negative impact on your learning algorithm. It is desirable to have consistent labeling across your dataset. This lab will touch on the effect of labeling strategies from a slightly different angle. You will explore how different strategies affect the performance of a machine learning model by simulating the process of having different labelers label the data. This is done by defining a set of rules and performing automatic labeling based on those rules. **The main objective of this ungraded lab is to compare performance across labeling options to understand the role that good labeling plays in the performance of Machine Learning models**; these options are: 1. Randomly generated labels (performance lower bound) 2. Automatically generated labels based on three different label strategies 3. True labels (performance upper bound) Although the example with the iguanas is a computer vision task, the same concepts regarding labeling can be applied to other types of data. In this lab you will be working with text data; concretely, you will be using a dataset containing comments from the 2015 top 5 most popular Youtube videos. Each comment has been labeled as `spam` or `not_spam` depending on its contents. | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Loading the dataset The dataset consists of 5 CSV files, one for each video. Pandas `DataFrame`s are very powerful for handling data in CSV format. The following helper function will load the data using pandas: | def load_labeled_spam_dataset():
"""Load labeled spam dataset."""
# Path where csv files are located
base_path = "./data/"
# List of csv files with full path
csv_files = [os.path.join(base_path, csv) for csv in os.listdir(base_path)]
# List of dataframes for each file
dfs = [pd.read_csv(filename) for filename in csv_files]
# Concatenate dataframes into a single one
df = pd.concat(dfs)
# Rename columns
df = df.rename(columns={"CONTENT": "text", "CLASS": "label"})
# Set a seed for the order of rows
df = df.sample(frac=1, random_state=824)
return df.reset_index()
# Save the dataframe into the df_labeled variable
df_labeled = load_labeled_spam_dataset() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
To get a feeling for how the data is organized, let's inspect the top 5 rows of the data: | # Take a look at the first 5 rows
df_labeled.head() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Further inspection and preprocessing Checking for data imbalance It is fairly common to assume that the data you are working on is balanced. This means that the dataset contains a similar proportion of examples for all classes. Before moving forward, let's actually test this assumption: | # Print actual value count
print(f"Value counts for each class:\n\n{df_labeled.label.value_counts()}\n")
# Display pie chart to visually check the proportion
df_labeled.label.value_counts().plot.pie(y='label', title='Proportion of each class')
plt.show() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
There is roughly the same number of data points for each class, so class imbalance is not an issue for this particular dataset. Cleaning the dataset If you scroll back to the cell where you inspected the data, you will realize that the dataframe includes information that is not relevant for the task at hand. At the moment, you are only interested in the comments and the corresponding labels (the video that each comment belongs to will be used later). Let's drop the remaining columns. | # Drop unused columns
df_labeled = df_labeled.drop(['index', 'COMMENT_ID', 'AUTHOR', 'DATE'], axis=1)
# Look at the cleaned dataset
df_labeled.head() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Now the dataset only includes the information you are going to use moving forward. Splitting the dataset Before jumping to the data labeling section, let's split the data into training and test sets so you can use the latter to measure the performance of models trained using data labeled through different methods. As a safety measure when doing this split, remember to use stratification so the proportion of classes is maintained within each split. | from sklearn.model_selection import train_test_split
# Save the text into the X variable
X = df_labeled.drop("label", axis=1)
# Save the true labels into the y variable
y = df_labeled["label"]
# Use 1/5 of the data for testing later
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
# Print number of comments for each set
print(f"There are {X_train.shape[0]} comments for training.")
print(f"There are {X_test.shape[0]} comments for testing") | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Let's do a visual check that the stratification actually worked: | plt.subplot(1, 3, 1)
y_train.value_counts().plot.pie(y='label', title='Proportion of each class for train set', figsize=(10, 6))
plt.subplot(1, 3, 3)
y_test.value_counts().plot.pie(y='label', title='Proportion of each class for test set', figsize=(10, 6))
plt.tight_layout()
plt.show() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Both the training and test sets have a balanced proportion of examples per class, so the code successfully implemented stratification. Let's get going! Data Labeling Establishing performance lower and upper bounds for reference To properly compare different labeling strategies you need to establish a baseline for model accuracy; in this case you will establish both a lower and an upper bound to compare against. Calculate accuracy of a labeling strategy [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer) is a handy tool included in the sklearn ecosystem to encode text-based data. For more information on how to work with text data using sklearn check out this [resource](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html). | from sklearn.feature_extraction.text import CountVectorizer
# Allow n-grams of length 1 up to 5
vectorizer = CountVectorizer(ngram_range=(1, 5)) | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
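Under the hood, CountVectorizer simply counts n-gram occurrences per document. A toy unigram-only sketch in plain Python (function name and example texts are our own):

```python
from collections import Counter

def count_unigrams(docs):
    """Toy version of CountVectorizer restricted to unigrams."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    rows = []
    for d in docs:
        counts = Counter(d.lower().split())
        rows.append([counts.get(w, 0) for w in vocab])
    return vocab, rows

vocab, rows = count_unigrams(["free subs now", "check my channel"])
print(vocab)  # → ['channel', 'check', 'free', 'my', 'now', 'subs']
print(rows)   # → [[0, 0, 1, 0, 1, 1], [1, 1, 0, 1, 0, 0]]
```

The real class additionally handles tokenization rules, n-gram ranges, and sparse storage, but the count matrix it produces has the same document-by-term shape.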
Now that the text encoding is defined, you need to select a model to make predictions. For simplicity you will use a [Multinomial Naive Bayes](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html) classifier. This model is well suited for text classification and is fairly quick to train. Let's define a function that will handle the model fitting and return the accuracy on the test data: | from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import MultinomialNB
def calculate_accuracy(X_tr, y_tr, X_te=X_test, y_te=y_test,
clf=MultinomialNB(), vectorizer=vectorizer):
# Encode train text
X_train_vect = vectorizer.fit_transform(X_tr.text.tolist())
# Fit model
clf.fit(X=X_train_vect, y=y_tr)
# Vectorize test text
X_test_vect = vectorizer.transform(X_te.text.tolist())
# Make predictions for the test set
preds = clf.predict(X_test_vect)
# Return accuracy score
return accuracy_score(preds, y_te)
| _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Now let's create a dictionary to store the accuracy of each labeling method: | # Empty dictionary
accs = dict() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Random LabelingGenerating random labels is a natural way to establish a lower bound. You will expect that any successful alternative labeling model to outperform randomly generated labels. Now let's calculate the accuracy for the random labeling method | # Calculate random labels
rnd_labels = np.random.randint(0, 2, X_train.shape[0])
# Feed them alongside X_train to calculate_accuracy function
rnd_acc = calculate_accuracy(X_train, rnd_labels)
rnd_acc | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
You will see a different accuracy every time you run the previous cell. This is due to the fact that the labeling is done randomly. Remember, this is a binary classification problem and both classes are balanced, so you can expect to see accuracies that revolve around 50%. To further gain intuition, let's look at the average accuracy over 10 runs: | # Empty list to save accuracies
rnd_accs = []
for _ in range(10):
# Add every accuracy to the list
rnd_accs.append(calculate_accuracy(X_train, np.random.randint(0, 2, X_train.shape[0])))
# Save result in accs dictionary
accs['random-labels'] = sum(rnd_accs)/len(rnd_accs)
# Print result
print(f"The random labeling method achieved an accuracy of {accs['random-labels']*100:.2f}%") | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
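That ~50% figure can also be sanity-checked without sklearn at all: guessing uniformly at random against balanced binary labels lands near 0.5. A seeded sketch (so the run is reproducible):

```python
import random

random.seed(0)
n = 10_000
truth   = [random.randint(0, 1) for _ in range(n)]
guesses = [random.randint(0, 1) for _ in range(n)]
accuracy = sum(t == g for t, g in zip(truth, guesses)) / n
print(accuracy)  # lands close to 0.5
```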
Random labeling completely disregards the information from the solution space you are working on, and just guesses the correct label. You probably can't do worse than this (or maybe you can). For this reason, this method serves as a reference for comparing other labeling methods. Labeling with true values Now let's look at the other end of the spectrum: using the correct labels for your data points. Let's retrain the Multinomial Naive Bayes classifier with the actual labels. | # Calculate accuracy when using the true labels
true_acc = calculate_accuracy(X_train, y_train)
# Save the result
accs['true-labels'] = true_acc
print(f"The true labeling method achieved an accuracy of {accs['true-labels']*100:.2f}%") | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Training with the true labels produced a noticeable boost in accuracy. This is expected, as the classifier is now able to properly identify patterns in the training data, which it could not do with randomly generated labels. Achieving higher accuracy is possible by either fine-tuning the model or even selecting a different one. For the time being you will keep the model as it is and use this accuracy as what we should strive for with the automatic labeling algorithms you will see next. Automatic labeling - Trying out different labeling strategies Let's suppose that for some reason you don't have access to the true labels associated with each data point in this dataset. It is a natural idea to think that there are patterns in the data that will provide clues about the correct labels. This is of course very dependent on the kind of data you are working with, and even hypothesizing which patterns exist requires great domain knowledge. The dataset used in this lab was chosen for this reason. It is reasonable for many people to come up with rules that might help identify a spam comment from a non-spam one for a Youtube video. In the following section you will be performing automatic labeling using such rules. **You can think of each iteration of this process as a labeler with different criteria for labeling**, and your job is to hire the most promising one. Notice the word **rules**. In order to perform automatic labeling you will define some rules such as "if the comment contains the word 'free', classify it as spam". First things first. Let's define how we are going to encode the labeling: - `SPAM` is represented by 1 - `NOT_SPAM` by 0 - `NO_LABEL` as -1 You might be wondering about the `NO_LABEL` keyword. Depending on the rules you come up with, these might not be applicable to some data points. For such cases it is better to refrain from giving a label rather than guessing, which you already saw yields poor results.
First iteration - Define some rules. For this first iteration you will create three rules based on the intuition of common patterns that appear in spam comments. The rules are simple: classify as SPAM if any of the following patterns is present within the comment, or NO_LABEL otherwise:- `free` - spam comments usually lure users by promoting free stuff- `subs` - spam comments tend to ask users to subscribe to some website or channel- `http` - spam comments include links very frequently | def labeling_rules_1(x):
# Convert text to lowercase
x = x.lower()
# Define list of rules
rules = [
"free" in x,
"subs" in x,
"http" in x
]
# If the comment falls under any of the rules classify as SPAM
if any(rules):
return 1
# Otherwise, NO_LABEL
return -1
# Apply the rules the comments in the train set
labels = [labeling_rules_1(label) for label in X_train.text]
# Convert to a numpy array
labels = np.asarray(labels)
# Take a look at the automatic labels
labels | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
For many points the automatic labeling algorithm decided not to settle on a label; this is expected given the nature of the rules that were defined. These points should be deleted, since they don't provide information about the classification process and tend to hurt performance. | # Create the automatic labeled version of X_train by removing points with NO_LABEL label
X_train_al = X_train[labels != -1]
# Remove predictions with NO_LABEL label
labels_al = labels[labels != -1]
print(f"Predictions with concrete label have shape: {labels_al.shape}")
print(f"Proportion of data points kept: {labels_al.shape[0]/labels.shape[0]*100:.2f}%") | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Notice that only 379 data points remained out of the original 1564. The rules defined didn't provide enough context for the labeling algorithm to settle on a label, so around 75% of the data has been trimmed.Let's test the accuracy of the model when using these automatic generated labels: | # Compute accuracy when using these labels
iter_1_acc = calculate_accuracy(X_train_al, labels_al)
# Display accuracy
print(f"First iteration of automatic labeling has an accuracy of {iter_1_acc*100:.2f}%")
# Save the result
accs['first-iteration'] = iter_1_acc | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Let's compare this accuracy to the baselines by plotting: | def plot_accuracies(accs=accs):
colors = list("rgbcmy")
items_num = len(accs)
cont = 1
for x, y in accs.items():
if x in ['true-labels', 'random-labels', 'true-labels-best-clf']:
plt.hlines(y, 0, (items_num-2)*2, colors=colors.pop())
else:
plt.scatter(cont, y, s=100)
cont+=2
plt.legend(accs.keys(), loc="center left",bbox_to_anchor=(1, 0.5))
plt.show()
plot_accuracies() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
This first iteration had an accuracy very close to the random labeling; we should strive to do better than this. Before moving forward, let's define the `label_given_rules` function that performs all of the steps you just saw: - Apply the rules to a dataframe of comments- Cast the resulting labels to a numpy array- Delete all data points with NO_LABEL as label- Calculate the accuracy of the model using the automatic labels- Save the accuracy for plotting- Print some useful metrics of the process | def label_given_rules(df, rules_function, name,
accs_dict=accs, verbose=True):
# Apply labeling rules to the comments
labels = [rules_function(label) for label in df.text]
# Convert to a numpy array
labels = np.asarray(labels)
# Save initial number of data points
initial_size = labels.shape[0]
# Trim points with NO_LABEL label
X_train_al = df[labels != -1]
labels = labels[labels != -1]
# Save number of data points after trimming
final_size = labels.shape[0]
# Compute accuracy
acc = calculate_accuracy(X_train_al, labels)
# Print useful information
if verbose:
print(f"Proportion of data points kept: {final_size/initial_size*100:.2f}%\n")
print(f"{name} labeling has an accuracy of {acc*100:.2f}%\n")
# Save accuracy to accuracies dictionary
accs_dict[name] = acc
return X_train_al, labels, acc | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Going forward we should come up with rules that cover more of the training data, making pattern discovery an easier task. Also notice that the rules were only able to label a comment as either SPAM or NO_LABEL; we should also create some rules that help identify NOT_SPAM comments. Second iteration - Coming up with better rulesIf you inspect the comments in the dataset you might be able to distinguish certain patterns at a glimpse. For example, non-spam comments often reference either the number of views (these were the most-watched videos of 2015) or the song in the video and its contents. Common patterns in spam comments, on the other hand, are promoting gifts or asking users to follow some channel or website.Let's create some new rules that include these patterns: | def labeling_rules_2(x):
# Convert text to lowercase
x = x.lower()
# Define list of rules to classify as NOT_SPAM
not_spam_rules = [
"view" in x,
"song" in x
]
# Define list of rules to classify as SPAM
spam_rules = [
"free" in x,
"subs" in x,
"gift" in x,
"follow" in x,
"http" in x
]
# Classify depending on the rules
if any(not_spam_rules):
return 0
if any(spam_rules):
return 1
return -1 | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
This new set of rules looks more promising as it includes more patterns to classify as SPAM as well as some patterns to classify as NOT_SPAM. This should result in more data points with a label different to NO_LABEL.Let's check if this is the case. | label_given_rules(X_train, labeling_rules_2, "second-iteration")
plot_accuracies() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
This time 44% of the original dataset was given a decisive label, and there were data points for both labels; this helped the model reach a higher accuracy compared to the first iteration. Now the accuracy is considerably higher than the random labeling, but it is still very far from the upper bound.Let's see if we can make it even better! Third Iteration - Even more rulesThe rules we have defined so far are doing a fair job. Let's add two additional rules, one for classifying SPAM comments and the other for the opposite task.At a glimpse it looks like NOT_SPAM comments are usually shorter. This may be due to them not including hyperlinks, but also because they tend to be more concrete, such as "I love this song!".Let's take a look at the average number of characters for SPAM comments vs NOT_SPAM ones: | from statistics import mean
print(f"NOT_SPAM comments have an average of {mean([len(t) for t in df_labeled[df_labeled.label==0].text]):.2f} characters.")
print(f"SPAM comments have an average of {mean([len(t) for t in df_labeled[df_labeled.label==1].text]):.2f} characters.") | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
It sure looks like there is a big difference in the number of characters for both types of comments.To decide on a threshold to classify as NOT_SPAM let's plot a histogram of the number of characters for NOT_SPAM comments: | plt.hist([len(t) for t in df_labeled[df_labeled.label==0].text], range=(0,100))
plt.show() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
The majority of NOT_SPAM comments have 30 or fewer characters, so we'll use that as a threshold.Another prevalent pattern in spam comments is to ask users to "check out" a channel, website or link.Let's add these two new rules: | def labeling_rules_3(x):
# Convert text to lowercase
x = x.lower()
# Define list of rules to classify as NOT_SPAM
not_spam_rules = [
"view" in x,
"song" in x,
len(x) < 30
]
# Define list of rules to classify as SPAM
spam_rules = [
"free" in x,
"subs" in x,
"gift" in x,
"follow" in x,
"http" in x,
"check out" in x
]
# Classify depending on the rules
if any(not_spam_rules):
return 0
if any(spam_rules):
return 1
return -1
label_given_rules(X_train, labeling_rules_3, "third-iteration")
plot_accuracies() | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
These new rules do a pretty good job at both covering the dataset and achieving good model accuracy. To be more concrete, this labeling strategy reached an accuracy of ~86%! We are getting closer and closer to the upper bound defined by using the true labels.We could keep adding rules to improve accuracy, and we encourage you to try it out yourself! Come up with your own rulesThe following cells contain some code to help you inspect the dataset for patterns and to test out these patterns. The ones used before are commented out in case you want to start from scratch or re-use them. | # Configure pandas to print out all rows to check the complete dataset
pd.set_option('display.max_rows', None)
# Check NOT_SPAM comments
df_labeled[df_labeled.label==0]
# Check SPAM comments
df_labeled[df_labeled.label==1]
def your_labeling_rules(x):
# Convert text to lowercase
x = x.lower()
# Define your rules for classifying as NOT_SPAM
not_spam_rules = [
# "view" in x,
# "song" in x,
# len(x) < 30
]
# Define your rules for classifying as SPAM
spam_rules = [
# "free" in x,
# "subs" in x,
# "gift" in x,
# "follow" in x,
# "http" in x,
# "check out" in x
]
# Classify depending on your rules
if any(not_spam_rules):
return 0
if any(spam_rules):
return 1
return -1
try:
label_given_rules(X_train, your_labeling_rules, "your-iteration")
plot_accuracies()
except ValueError:
print("You have not defined any rules.") | _____no_output_____ | Apache-2.0 | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public |
Example - Simple Vertically Partitioned Split Neural Network- Alice - Has model segment 1 - Has the handwritten images- Bob - Has model segment 2 - Has the image labels Based on [SplitNN - Tutorial 3](https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/advanced/split_neural_network/Tutorial%203%20-%20Folded%20Split%20Neural%20Network.ipynb) from Adam J Hall - Twitter: [@AJH4LL](https://twitter.com/AJH4LL) · GitHub: [@H4LL](https://github.com/H4LL)Authors:- Pavlos Papadopoulos · GitHub: [@pavlos-p](https://github.com/pavlos-p)- Tom Titcombe · GitHub: [@TTitcombe](https://github.com/TTitcombe)- Robert Sandmann · GitHub: [@rsandmann](https://github.com/rsandmann) | class SplitNN:
def __init__(self, models, optimizers):
self.models = models
self.optimizers = optimizers
self.data = []
self.remote_tensors = []
def forward(self, x):
data = []
remote_tensors = []
data.append(self.models[0](x))
if data[-1].location == self.models[1].location:
remote_tensors.append(data[-1].detach().requires_grad_())
else:
remote_tensors.append(
data[-1].detach().move(self.models[1].location).requires_grad_()
)
i = 1
while i < (len(self.models) - 1):
data.append(self.models[i](remote_tensors[-1]))
if data[-1].location == self.models[i + 1].location:
remote_tensors.append(data[-1].detach().requires_grad_())
else:
remote_tensors.append(
data[-1].detach().move(self.models[i + 1].location).requires_grad_()
)
i += 1
data.append(self.models[i](remote_tensors[-1]))
self.data = data
self.remote_tensors = remote_tensors
return data[-1]
def backward(self):
for i in range(len(self.models) - 2, -1, -1):
if self.remote_tensors[i].location == self.data[i].location:
grads = self.remote_tensors[i].grad.copy()
else:
grads = self.remote_tensors[i].grad.copy().move(self.data[i].location)
self.data[i].backward(grads)
def zero_grads(self):
for opt in self.optimizers:
opt.zero_grad()
def step(self):
for opt in self.optimizers:
opt.step()
import sys
sys.path.append('../')
import torch
from torchvision import datasets, transforms
from torch import nn, optim
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
import syft as sy
from src.dataloader import VerticalDataLoader
from src.psi.util import Client, Server
from src.utils import add_ids
hook = sy.TorchHook(torch)
# Create dataset
data = add_ids(MNIST)(".", download=True, transform=ToTensor()) # add_ids adds unique IDs to data points
# Batch data
dataloader = VerticalDataLoader(data, batch_size=128) # partition_dataset uses by default "remove_data=True, keep_order=False" | _____no_output_____ | Apache-2.0 | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical |
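Before running PSI it helps to see, in miniature, what vertical partitioning does. The NumPy sketch below is illustrative only (the array names and sizes are invented, and a real vertically-partitioned setup never exposes both partitions to one party): each party holds a different column slice of the same records, independently shuffled, and joint training requires re-aligning the rows by their shared IDs, which is exactly the alignment that the PSI step below computes privately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 6 samples with 4 features and a binary label each,
# plus a unique ID per sample (names and sizes are illustrative only)
ids = np.arange(6)
features = rng.normal(size=(6, 4))
labels = rng.integers(0, 2, size=6)

# Vertical partitioning: one party keeps the features, the other the
# labels. Each partition is shuffled independently (as the dataloader
# above does with keep_order=False), so the rows no longer line up.
perm_a = rng.permutation(6)
perm_b = rng.permutation(6)
party_a = {"ids": ids[perm_a], "data": features[perm_a]}
party_b = {"ids": ids[perm_b], "labels": labels[perm_b]}

# Re-align the rows by the shared IDs (here we just sort the raw IDs;
# PyVertical computes this alignment privately with PSI instead)
aligned_data = party_a["data"][np.argsort(party_a["ids"])]
aligned_labels = party_b["labels"][np.argsort(party_b["ids"])]

print(np.array_equal(aligned_data, features),
      np.array_equal(aligned_labels, labels))  # True True
```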
Check that the datasets are unordered In MNIST we have 2 datasets (the images and the labels). Because each partition was shuffled independently, the labels printed below should not match the plotted images. | # We need matplotlib library to plot the dataset
import matplotlib.pyplot as plt
# Plot the first 10 entries of the labels and the dataset
figure = plt.figure()
num_of_entries = 10
for index in range(1, num_of_entries + 1):
plt.subplot(6, 10, index)
plt.axis('off')
plt.imshow(dataloader.dataloader1.dataset.data[index].numpy().squeeze(), cmap='gray_r')
print(dataloader.dataloader2.dataset[index][0], end=" ") | 1 4 0 7 0 1 5 1 5 1 | Apache-2.0 | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical |
Implement PSI and order the datasets accordingly | # Compute private set intersection
client_items = dataloader.dataloader1.dataset.get_ids()
server_items = dataloader.dataloader2.dataset.get_ids()
client = Client(client_items)
server = Server(server_items)
setup, response = server.process_request(client.request, len(client_items))
intersection = client.compute_intersection(setup, response)
# Order data
dataloader.drop_non_intersecting(intersection)
dataloader.sort_by_ids() | _____no_output_____ | Apache-2.0 | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical |
Check again if the datasets are ordered | # We need matplotlib library to plot the dataset
import matplotlib.pyplot as plt
# Plot the first 10 entries of the labels and the dataset
figure = plt.figure()
num_of_entries = 10
for index in range(1, num_of_entries + 1):
plt.subplot(6, 10, index)
plt.axis('off')
plt.imshow(dataloader.dataloader1.dataset.data[index].numpy().squeeze(), cmap='gray_r')
print(dataloader.dataloader2.dataset[index][0], end=" ")
torch.manual_seed(0)
# Define our model segments
input_size = 784
hidden_sizes = [128, 640]
output_size = 10
models = [
nn.Sequential(
nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
),
nn.Sequential(nn.Linear(hidden_sizes[1], output_size), nn.LogSoftmax(dim=1)),
]
# Create optimisers for each segment and link to them
optimizers = [
optim.SGD(model.parameters(), lr=0.03,)
for model in models
]
# create some workers
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")
# Send Model Segments to model locations
model_locations = [alice, bob]
for model, location in zip(models, model_locations):
model.send(location)
# Instantiate a SplitNN class with our distributed segments and their respective optimizers
splitNN = SplitNN(models, optimizers)
def train(x, target, splitNN):
#1) Zero our grads
splitNN.zero_grads()
#2) Make a prediction
pred = splitNN.forward(x)
#3) Figure out how much we missed by
criterion = nn.NLLLoss()
loss = criterion(pred, target)
#4) Backprop the loss on the end layer
loss.backward()
#5) Feed gradients backward through the network
splitNN.backward()
#6) Change the weights
splitNN.step()
return loss, pred
epochs = 5  # number of training epochs (not defined earlier in this excerpt)
for i in range(epochs):
running_loss = 0
correct_preds = 0
total_preds = 0
for (data, ids1), (labels, ids2) in dataloader:
# Train a model
data = data.send(models[0].location)
data = data.view(data.shape[0], -1)
labels = labels.send(models[-1].location)
# Call model
loss, preds = train(data, labels, splitNN)
# Collect statistics
running_loss += loss.get()
correct_preds += preds.max(1)[1].eq(labels).sum().get().item()
total_preds += preds.get().size(0)
print(f"Epoch {i} - Training loss: {running_loss/len(dataloader):.3f} - Accuracy: {100*correct_preds/total_preds:.3f}")
print("Labels pointing to: ", labels)
print("Images pointing to: ", data) | Labels pointing to: (Wrapper)>[PointerTensor | me:88412365445 -> bob:61930132897]
Images pointing to: (Wrapper)>[PointerTensor | me:17470208323 -> alice:25706803556]
| Apache-2.0 | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical |
1-3. Describing multiple qubits So far we have learned how to describe the state of a single qubit and the operations (gates) acting on it. To close out this chapter, let us learn how to describe the state of a system of $n$ qubits. The many tensor products that appear can be confusing, so experiment with the code as you go to make the ideas stick.

The state of $n$ **classical** bits is expressed by $n$ digits of $0,1$, and there are $2^n$ possible bit patterns in total. In quantum mechanics, superpositions of all of these patterns are allowed, so the state $|\psi \rangle$ of $n$ **qubits** is described by $2^n$ complex probability amplitudes specifying which bit strings appear in the superposition and with what weights:$$\begin{eqnarray}|\psi \rangle &= & c_{00...0} |00...0\rangle +c_{00...1} |00...1\rangle + \cdots +c_{11...1} |11...1\rangle =\left(\begin{array}{c}c_{00...0}\\c_{00...1}\\\vdots\\c_{11...1}\end{array}\right).\end{eqnarray}$$Here the complex amplitudes are normalized as $\sum _{i_1,..., i_n} |c_{i_1...i_n}|^2=1$. When this $n$-qubit state is measured, the bit string $i_1 ... i_n$ is obtained randomly with probability$$\begin{eqnarray}p_{i_1 ... i_n} &=&|c_{i_1 ... i_n}|^2\label{eq02}\end{eqnarray}$$and the post-measurement state is $|i_1 \dotsc i_n\rangle$. **In this way, the state of $n$ qubits must be described by a complex vector of dimension $2^n$, exponentially large in $n$, and this is where the difference between classical bits and qubits shows up most strikingly.** Operations on an $n$-qubit system are represented by $2^n \times 2^n$ unitary matrices. Put simply, a quantum computer is a computer that applies unitary transformations, in accordance with the laws of physics, to complex vectors whose size is exponential in the number of qubits.

Note on the ordering of multiple qubits and notation: when writing a state as a ket, we list the 0s and 1s of the "first" qubit, the "second" qubit, and so on from left to right. For example, $|011\rangle$ denotes the state in which the first qubit is 0, the second qubit is 1, and the third qubit is 1. On the other hand, if 011 is read as a binary number, the most significant bit is on the left and the least significant on the right: the leftmost 0 is the most significant bit, corresponding to the $2^2$ place; the middle 1 corresponds to the $2^1$ place; and the rightmost 1 is the least significant bit, corresponding to the $2^0=1$ place. In other words, the "$i$-th" qubit corresponds to digit $n-i+1$ of the $n$-digit binary representation. This requires care when handling multiple qubits with packages such as SymPy (see "Tensor products of operators with SymPy" below). (For details, see Nielsen-Chuang, `1.2.1 Multiple qubits`.)

Example: the two-qubit case Two qubits can be in a superposition of the four states 00, 01, 10, 11, so a general state is written as$$c_{00} |00\rangle + c_{01} |01\rangle + c_{10}|10\rangle + c_{11} |11\rangle = \left( \begin{array}{c}c_{00}\\c_{01}\\c_{10}\\c_{11}\end{array}\right)$$Operations on two qubits are written as $4 \times 4$ matrices, whose rows and columns correspond to $\langle00|,\langle01|,\langle10|, \langle11|$ and $|00\rangle,|01\rangle,|10\rangle, |11\rangle$, respectively. The most important operation acting on two qubits is the **controlled-NOT (CNOT) gate**, whose matrix representation is$$\begin{eqnarray}\Lambda(X) =\left(\begin{array}{cccc}1 & 0 & 0& 0\\0 & 1 & 0& 0\\0 & 0 & 0 & 1\\0 & 0 & 1& 0\end{array}\right)\end{eqnarray}$$Let us see how CNOT acts on two qubits. First, if the first qubit is $|0\rangle$, then $c_{10} = c_{11} = 0$, so$$\Lambda(X)\left(\begin{array}{c}c_{00}\\c_{01}\\0\\0\end{array}\right) =\left(\begin{array}{c}c_{00}\\c_{01}\\0\\0\end{array}\right)$$and the state does not change. If instead the first qubit is $|1\rangle$, then $c_{00} = c_{01} = 0$, so$$\Lambda(X)\left(\begin{array}{c}0\\0\\c_{10}\\c_{11}\end{array}\right) =\left(\begin{array}{c}0\\0\\c_{11}\\c_{10}\end{array}\right)$$and the amplitudes of $|10\rangle$ and $|11\rangle$ are exchanged; that is, the second qubit is flipped. In other words, CNOT leaves the first qubit unchanged and- if the first qubit is $|0\rangle$, does nothing to the second qubit either (the identity $I$ acts);- if the first qubit is $|1\rangle$, flips the second qubit ($X$ acts).The first qubit is therefore called the **control qubit** and the second the **target qubit**. Writing $\oplus$ for addition mod 2, i.e. the classical exclusive OR (XOR), the action of CNOT can also be written as$$\begin{eqnarray}\Lambda(X) |ij \rangle = |i \;\; (i\oplus j)\rangle \:\:\: (i,j=0,1)\end{eqnarray}$$Hence CNOT can be regarded as a reversible version of the classical XOR (recall that a unitary matrix is invertible by definition, $U^\dagger U = U U^\dagger = I$). For example, put the first qubit in a superposition of $|0\rangle$ and $|1\rangle$ and the second qubit in $|0\rangle$:$$\begin{eqnarray}\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle )\otimes |0\rangle =\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\0\\1\\0\end{array}\right)\end{eqnarray}$$Applying CNOT yields$$\begin{eqnarray}\frac{1}{\sqrt{2}}( |00\rangle + |11\rangle ) =\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\0\\0\\1\end{array}\right)\end{eqnarray}$$a superposition of the state $|00\rangle$, in which the second qubit is unchanged, and the state $|11\rangle$, in which it is flipped. (The symbol $\otimes$ is explained in the next section.)

Furthermore, by combining CNOT gates we can build the important two-qubit **SWAP gate**. Writing $$\Lambda(X)_{i,j}$$ for the CNOT gate with qubit $i$ as control and qubit $j$ as target,$$\begin{align}\mathrm{SWAP} &= \Lambda(X)_{1,2} \Lambda(X)_{2,1} \Lambda(X)_{1,2}\\&=\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 0 & 1 \\0 & 0 & 1 & 0\end{array}\right)\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\0 & 0 & 0 & 1 \\0 & 0 & 1 & 0 \\0 & 1 & 0 & 0\end{array}\right)\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 0 & 1 \\0 & 0 & 1 & 0\end{array}\right)\\&=\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\0 & 0 & 1 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 0 & 1\end{array}\right)\end{align}$$This is the gate that exchanges the first and second qubits, which can easily be checked using the mod-2 addition $\oplus$ introduced above. Applying the three CNOT gates $\Lambda(X)_{1,2} \Lambda(X)_{2,1} \Lambda(X)_{1,2}$ to $|ij\rangle$ one step at a time, and using $i \oplus (i \oplus j) = (i \oplus i) \oplus j = 0 \oplus j = j$, we get$$\begin{align}|ij\rangle &\longrightarrow|i \;\; (i\oplus j)\rangle\\&\longrightarrow|(i\oplus (i\oplus j)) \;\; (i\oplus j)\rangle =|j \;\; (i\oplus j)\rangle\\&\longrightarrow|j \;\; (j\oplus (i\oplus j))\rangle =|ji\rangle\end{align}$$so the two qubits are indeed exchanged. (For details, see Nielsen-Chuang, `1.3.2 Multiple qubit gates`.)

Computing tensor products For hand and analytic calculations, the **tensor product** ($\otimes$) is the key tool. It gives the computational rule for combining multiple qubits into the single large vector we saw above. In quantum mechanics, when two quantum systems are in states $|\psi \rangle$ and $|\phi \rangle$, the joint state is written with the tensor product $\otimes$ as$$|\psi \rangle \otimes |\phi\rangle$$A system made up of several quantum systems in this way is called a **composite system**; a two-qubit system, for example, is a composite system. Tensor products can essentially be computed **with the same rules as polynomial multiplication**. For example,$$ (\alpha |0\rangle + \beta |1\rangle )\otimes (\gamma |0\rangle + \delta |1\rangle )= \alpha \gamma |0\rangle |0\rangle + \alpha \delta |0\rangle |1\rangle + \beta \gamma |1 \rangle | 0\rangle + \beta \delta |1\rangle |1\rangle $$In column-vector form, this is the computation that yields the 4-dimensional vector whose entries correspond to $|00\rangle$, $|01\rangle$, $|10\rangle$, $|11\rangle$:$$\left(\begin{array}{c}\alpha\\\beta\end{array}\right)\otimes \left(\begin{array}{c}\gamma\\\delta\end{array}\right) =\left(\begin{array}{c}\alpha \gamma\\\alpha \delta\\\beta \gamma\\\beta \delta\end{array}\right)$$ Computing tensor products with SymPy | from IPython.display import Image, display_png
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import Qubit,QubitBra
from sympy.physics.quantum.gate import X,Y,Z,H,S,T,CNOT,SWAP, CPHASE
init_printing() # to display vectors and matrices nicely
# Run this cell only on Google Colaboratory
from IPython.display import HTML
def setup_mathjax():
display(HTML('''
<script>
if (!window.MathJax && window.google && window.google.colab) {
window.MathJax = {
'tex2jax': {
'inlineMath': [['$', '$'], ['\\(', '\\)']],
'displayMath': [['$$', '$$'], ['\\[', '\\]']],
'processEscapes': true,
'processEnvironments': true,
'skipTags': ['script', 'noscript', 'style', 'textarea', 'code'],
'displayAlign': 'center',
},
'HTML-CSS': {
'styles': {'.MathJax_Display': {'margin': 0}},
'linebreaks': {'automatic': true},
// Disable to prevent OTF font loading, which aren't part of our
// distribution.
'imageFont': null,
},
'messageStyle': 'none'
};
var script = document.createElement("script");
script.src = "https://colab.research.google.com/static/mathjax/MathJax.js?config=TeX-AMS_HTML-full,Safe";
document.head.appendChild(script);
}
</script>
'''))
get_ipython().events.register('pre_run_cell', setup_mathjax)
a,b,c,d = symbols('alpha,beta,gamma,delta')
psi = a*Qubit('0')+b*Qubit('1')
phi = c*Qubit('0')+d*Qubit('1')
TensorProduct(psi, phi) # tensor product
represent(TensorProduct(psi, phi)) | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
Taking a further tensor product with $|\psi\rangle$ gives an 8-dimensional vector: | represent(TensorProduct(psi,TensorProduct(psi, phi))) | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
Tensor products of operators Operators, too, can be expressed with tensor products to specify which qubit they act on. For example, when an operator $A$ acts on the first qubit and an operator $B$ on the second, the combined operator is given by the tensor product$$ A \otimes B$$If $A$ and $B$ are each $2\times 2$ matrices, $A\otimes B$ is computed as the $4\times 4$ matrix$$\left(\begin{array}{cc}a_{11} & a_{12}\\a_{21} & a_{22}\end{array}\right)\otimes \left(\begin{array}{cc}b_{11} & b_{12}\\b_{21} & b_{22}\end{array}\right) =\left(\begin{array}{cccc}a_{11} b_{11} & a_{11} b_{12} & a_{12} b_{11} & a_{12} b_{12}\\a_{11} b_{21} & a_{11} b_{22} & a_{12} b_{21} & a_{12} b_{22}\\a_{21} b_{11} & a_{21} b_{12} & a_{22} b_{11} & a_{22} b_{12}\\a_{21} b_{21} & a_{21} b_{22} & a_{22} b_{21} & a_{22} b_{22}\end{array}\right)$$Its action on a tensor product state $$|\psi \rangle \otimes | \phi \rangle $$ is$$ (A|\psi \rangle ) \otimes (B |\phi \rangle )$$that is, $A$ and $B$ act on the respective subsystems $|\psi \rangle$ and $|\phi\rangle$. For sums of operators, simply expand as with polynomials and apply each term:$$(A+C)\otimes (B+D) |\psi \rangle \otimes | \phi \rangle =(A \otimes B +A \otimes D + C \otimes B + C \otimes D) |\psi \rangle \otimes | \phi \rangle\\ =(A|\psi \rangle) \otimes (B| \phi \rangle)+(A|\psi \rangle) \otimes (D| \phi \rangle)+(C|\psi \rangle) \otimes (B| \phi \rangle)+(C|\psi \rangle) \otimes (D| \phi \rangle)$$We have been writing tensor products and tensor product operators side by side horizontally, but it may be easier to see how they act if they are stacked vertically:$$\left(\begin{array}{c}A\\\otimes \\B\end{array}\right)\begin{array}{c}|\psi \rangle \\\otimes \\|\phi\rangle\end{array}$$For example, the entangled state created with the CNOT operation is$$\left(\begin{array}{c}|0\rangle \langle 0|\\\otimes \\I\end{array}+\begin{array}{c}|1\rangle \langle 1|\\\otimes \\X\end{array}\right)\left(\begin{array}{c}\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\\\otimes \\|0\rangle\end{array}\right) =\frac{1}{\sqrt{2}}\left(\begin{array}{c}|0 \rangle \\\otimes \\|0\rangle\end{array}+\begin{array}{c}|1 \rangle \\\otimes \\|1\rangle\end{array}\right)$$ Tensor products of operators with SymPy When using operators in SymPy, you always specify which binary *digit* the operator acts on. Note that this is the "$k$-th **digit**" of the binary representation, not the "$i$-th **qubit**". To address the $i$-th qubit from the left out of $n$ qubits, you specify `n-i` in SymPy code (a zero-based index). For example, `H(0)` is represented in the single-qubit space as | represent(H(0),nqubits=1) | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
In the two-qubit space, `H(1)` corresponds to $H \otimes I$, and its representation is | represent(H(1),nqubits=2) | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
The CNOT operation is | represent(CNOT(1,0),nqubits=2) | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
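The SWAP identity from earlier, $\mathrm{SWAP} = \Lambda(X)_{1,2}\Lambda(X)_{2,1}\Lambda(X)_{1,2}$, can also be checked numerically. Below is a minimal sketch in plain NumPy (independent of SymPy); building CNOT from the projectors $|0\rangle\langle0|$ and $|1\rangle\langle1|$ is a standard construction:

```python
import numpy as np

I2 = np.eye(2, dtype=int)
X  = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])  # |0><0|
P1 = np.array([[0, 0], [0, 1]])  # |1><1|

# CNOT with qubit 1 as control / qubit 2 as target, and the reverse
cnot_12 = np.kron(P0, I2) + np.kron(P1, X)
cnot_21 = np.kron(I2, P0) + np.kron(X, P1)

# Matrices act right-to-left, matching SWAP = CNOT_12 CNOT_21 CNOT_12
swap = cnot_12 @ cnot_21 @ cnot_12

expected = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]])
print(np.array_equal(swap, expected))  # True
```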
The tensor product of Pauli operators, $X\otimes Y \otimes Z$, works the same way: | represent(X(2)*Y(1)*Z(0),nqubits=3) | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
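The same $8\times 8$ matrix can be reproduced with NumPy's `np.kron`, which also makes the ordering convention explicit: `X(2)` acts on the leftmost qubit, so $X$ is the first factor of the Kronecker product. A small sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Same ordering as represent(X(2)*Y(1)*Z(0), nqubits=3)
XYZ = np.kron(X, np.kron(Y, Z))
print(XYZ.shape)  # (8, 8)

# Acting on |000>: X|0> = |1>, Y|0> = i|1>, Z|0> = |0>, i.e. i|110>
print(XYZ[:, 0].nonzero()[0])  # [6]  (index 6 = binary 110)
```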
In this way, the tensor product rules above can be verified directly. Measuring only part of a multi-qubit system We have already explained the probabilities of the outcomes when all of the qubits are measured. It is also possible to measure only some of the qubits. In that case, the probability of an outcome is the squared norm of the vector obtained by projecting onto the (subsystem) basis state corresponding to that outcome, and the post-measurement state is the projected vector, renormalized. Let us work this out concretely. Consider the following $n$-qubit state:\begin{align}|\psi\rangle &=c_{00...0} |00...0\rangle +c_{00...1} |00...1\rangle + \cdots +c_{11...1} |11...1\rangle\\&= \sum_{i_1 \dotsc i_n} c_{i_1 \dotsc i_n} |i_1 \dotsc i_n\rangle =\sum_{i_1 \dotsc i_n} c_{i_1 \dotsc i_n} |i_1\rangle \otimes \cdots \otimes |i_n\rangle\end{align}Suppose we measure the first qubit. The projection operators onto the orthonormal basis $|0\rangle$, $|1\rangle$ of the first qubit's state space are written $|0\rangle\langle0|$ and $|1\rangle\langle1|$, respectively. Using the operator that projects the first qubit onto $|0\rangle$ and does nothing to the other qubits,$$|0\rangle\langle0| \otimes I \otimes \cdots \otimes I$$the probability of obtaining the outcome 0 is$$\bigl\Vert \bigl(|0\rangle\langle0| \otimes I \otimes \cdots \otimes I\bigr) |\psi\rangle \bigr\Vert^2 =\langle \psi | \bigl(|0\rangle\langle0| \otimes I \otimes \cdots \otimes I\bigr) | \psi \rangle$$Since$$\bigl(|0\rangle\langle0| \otimes I \otimes \cdots \otimes I\bigr) | \psi \rangle =\sum_{i_2 \dotsc i_n} c_{0 i_2 \dotsc i_n} |0\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_n\rangle$$the probability we seek is$$p_0 = \sum_{i_2 \dotsc i_n} |c_{0 i_2 \dotsc i_n}|^2$$and the post-measurement state is$$\frac{1}{\sqrt{p_0}}\sum_{i_2 \dotsc i_n} c_{0 i_2 \dotsc i_n} |0\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_n\rangle$$Exchanging 0 and 1 gives the probability of obtaining outcome 1 and the corresponding post-measurement state. Note that the expressions for $p_0$, $p_1$ obtained here coincide with the marginal distribution of $i_1$ computed from the joint distribution $p_{i_1, \dotsc, i_n}$ of the outcomes $i_1, \dotsc, i_n$. Indeed,$$\sum_{i_2, \dotsc, i_n} p_{i_1, \dotsc, i_n} = \sum_{i_2, \dotsc, i_n} |c_{i_1, \dotsc, i_n}|^2 = p_{i_1}$$The same computation works when more qubits are measured, say the first $k$ of them. The probability of obtaining the outcomes $i_1, \dotsc, i_k$ is$$p_{i_1, \dotsc, i_k} = \sum_{i_{k+1}, \dotsc, i_n} |c_{i_1, \dotsc, i_n}|^2$$and the post-measurement state is$$\frac{1}{\sqrt{p_{i_1, \dotsc, i_k}}}\sum_{i_{k+1} \dotsc i_n} c_{i_1 \dotsc i_n} |i_1 \rangle \otimes \cdots \otimes |i_n\rangle$$(note that the sum runs only over $i_{k+1},\cdots,i_n$). 
Let us look at a more concrete example using SymPy. Consider the following state, built by combining the H and CNOT operations:$$|\psi\rangle = \Lambda(X) (H \otimes H) |0\rangle \otimes |0\rangle = \frac{|00\rangle + |10\rangle + |01\rangle + |11\rangle}{2}$$ | psi = qapply(CNOT(1, 0)*H(1)*H(0)*Qubit('00'))
psi | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
The probability of measuring the first qubit of this state and obtaining 0 is$$p_0 = \langle \psi | \bigl( |0\rangle\langle0| \otimes I \bigr) | \psi \rangle =\left(\frac{\langle 00 | + \langle 10 | + \langle 01 | + \langle 11 |}{2}\right)\left(\frac{| 00 \rangle + | 01 \rangle}{2}\right) =\frac{1}{2}$$and the post-measurement state is$$\frac{1}{\sqrt{p_0}} \bigl( |0\rangle\langle0| \otimes I \bigr) | \psi \rangle =\frac{| 00 \rangle + | 01 \rangle}{\sqrt{2}}$$ Let us compute this result with SymPy as well. SymPy provides several measurement functions; to compute the probability and the post-measurement state when measuring only some of the qubits, use `measure_partial`. Passing the state to be measured and the indices of the qubits to measure returns a list of pairs of post-measurement states and measurement probabilities. To get the state and probability for the case where the first qubit is 0, look at element `[0]`. | from sympy.physics.quantum.qubit import measure_all, measure_partial
measured_state_and_probability = measure_partial(psi, (1,))
measured_state_and_probability[0] | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
We can see that this agrees with the hand calculation above. The case where the measurement result is 1 can be computed in the same way. | measured_state_and_probability[1] | _____no_output_____ | BSD-3-Clause | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo |
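The partial-measurement rule can also be verified with a few lines of NumPy. Below is a minimal sketch for the state above, with amplitudes ordered $|00\rangle, |01\rangle, |10\rangle, |11\rangle$:

```python
import numpy as np

# Amplitudes of (|00> + |01> + |10> + |11>)/2
amplitudes = np.full(4, 0.5)
probs = np.abs(amplitudes) ** 2        # joint distribution p_{i1 i2}

# Marginal distribution of the first qubit: sum over the second index
p0, p1 = probs.reshape(2, 2).sum(axis=1)
print(p0, p1)  # 0.5 0.5

# Post-measurement state after observing 0 on the first qubit:
# project onto the |0x> components, then renormalise
post = amplitudes.copy()
post[2:] = 0
post = post / np.linalg.norm(post)
print(post)  # [0.70710678 0.70710678 0.         0.        ]
```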
Evaluation of a Pipeline and its Components[](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial5_Evaluation.ipynb)To be able to make a statement about the quality of results a question-answering pipeline or any other pipeline in haystack produces, it is important to evaluate it. Furthermore, evaluation allows determining which components of the pipeline can be improved.The results of the evaluation can be saved as CSV files, which contain all the information to calculate additional metrics later on or inspect individual predictions. Prepare environment Colab: Enable the GPU runtimeMake sure you enable the GPU runtime to experience decent speed in this tutorial.**Runtime -> Change Runtime type -> Hardware accelerator -> GPU** | # Make sure you have a GPU running
!nvidia-smi
# Install the latest release of Haystack in your own environment
#! pip install farm-haystack
# Install the latest master of Haystack
!pip install --upgrade pip
!pip install git+https://github.com/deepset-ai/haystack.git#egg=farm-haystack[colab] | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Start an Elasticsearch server You can start Elasticsearch on your local machine using Docker. If Docker is not readily available in your environment (e.g., in Colab notebooks), you can manually download and run Elasticsearch from source. | # If Docker is available: Start Elasticsearch as docker container
# from haystack.utils import launch_es
# launch_es()
# Alternative in Colab / No Docker environments: Start Elasticsearch from source
! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz -q
! tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz
! chown -R daemon:daemon elasticsearch-7.9.2
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(
["elasticsearch-7.9.2/bin/elasticsearch"], stdout=PIPE, stderr=STDOUT, preexec_fn=lambda: os.setuid(1) # as daemon
)
# wait until ES has started
! sleep 30 | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
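Instead of a fixed `sleep 30`, you can poll Elasticsearch's HTTP root endpoint until it answers. A stdlib-only sketch; the URL and timeout are assumptions matching the default setup above:

```python
import json
import time
from urllib.request import urlopen
from urllib.error import URLError

def wait_for_elasticsearch(url="http://localhost:9200", timeout=60):
    """Poll the ES root endpoint until it responds, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urlopen(url) as resp:
                return json.load(resp)  # cluster info, e.g. version number
        except (URLError, ConnectionError):
            time.sleep(2)
    raise RuntimeError(f"Elasticsearch not reachable within {timeout}s")
```

Calling `wait_for_elasticsearch()` in place of the `! sleep 30` cell returns as soon as the server is up rather than waiting a fixed interval.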
Fetch, Store And Preprocess the Evaluation Dataset | from haystack.utils import fetch_archive_from_http
# Download evaluation data, which is a subset of Natural Questions development set containing 50 documents with one question per document and multiple annotated answers
doc_dir = "data/tutorial5"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset_v2.json.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
# make sure these indices do not collide with existing ones, the indices will be wiped clean before data is inserted
doc_index = "tutorial5_docs"
label_index = "tutorial5_labels"
# Connect to Elasticsearch
from haystack.document_stores import ElasticsearchDocumentStore
# Connect to Elasticsearch
document_store = ElasticsearchDocumentStore(
host="localhost",
username="",
password="",
index=doc_index,
label_index=label_index,
embedding_field="emb",
embedding_dim=768,
excluded_meta_data=["emb"],
)
from haystack.nodes import PreProcessor
# Add evaluation data to Elasticsearch Document Store
# We first delete the custom tutorial indices to not have duplicate elements
# and also split our documents into shorter passages using the PreProcessor
preprocessor = PreProcessor(
split_length=200,
split_overlap=0,
split_respect_sentence_boundary=False,
clean_empty_lines=False,
clean_whitespace=False,
)
document_store.delete_documents(index=doc_index)
document_store.delete_documents(index=label_index)
# The add_eval_data() method converts the given dataset in json format into Haystack document and label objects. Those objects are then indexed in their respective document and label index in the document store. The method can be used with any dataset in SQuAD format.
document_store.add_eval_data(
filename="data/tutorial5/nq_dev_subset_v2.json",
doc_index=doc_index,
label_index=label_index,
preprocessor=preprocessor,
) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
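`add_eval_data()` reads the standard SQuAD-style JSON layout. Here is a minimal hand-built example of that structure with made-up title, context, and offsets (only the nesting and field names reflect the format; the content is illustrative):

```python
squad_like = {
    "data": [
        {
            "title": "Book of Life",
            "paragraphs": [
                {
                    "context": "The Book of Life records every person destined for Heaven.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "who is written in the book of life",
                            "answers": [
                                {"text": "every person destined for Heaven", "answer_start": 25}
                            ],
                            "is_impossible": False,
                        }
                    ],
                }
            ],
        }
    ]
}

# Each answer_start must index the answer's first character in the context.
for article in squad_like["data"]:
    for para in article["paragraphs"]:
        for qa in para["qas"]:
            for ans in qa["answers"]:
                start = ans["answer_start"]
                assert para["context"][start : start + len(ans["text"])] == ans["text"]
```

The offset check at the end mirrors what dataset converters typically validate: a mismatched `answer_start` silently corrupts span-level evaluation.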
Initialize the Two Components of an ExtractiveQAPipeline: Retriever and Reader | # Initialize Retriever
from haystack.nodes import ElasticsearchRetriever
retriever = ElasticsearchRetriever(document_store=document_store)
# Alternative: Evaluate dense retrievers (EmbeddingRetriever or DensePassageRetriever)
# The EmbeddingRetriever uses a single transformer based encoder model for query and document.
# In contrast, DensePassageRetriever uses two separate encoders for both.
# Please make sure the "embedding_dim" parameter in the DocumentStore above matches the output dimension of your models!
# Please also take care that the PreProcessor splits your files into chunks that fit within
# the max_seq_len limit of the Transformer models
# The SentenceTransformer model "sentence-transformers/multi-qa-mpnet-base-dot-v1" generally works well with the EmbeddingRetriever on any kind of English text.
# For more information and suggestions on different models check out the documentation at: https://www.sbert.net/docs/pretrained_models.html
# from haystack.retriever import EmbeddingRetriever, DensePassageRetriever
# retriever = EmbeddingRetriever(document_store=document_store, model_format="sentence_transformers",
# embedding_model="sentence-transformers/multi-qa-mpnet-base-dot-v1")
# retriever = DensePassageRetriever(document_store=document_store,
# query_embedding_model="facebook/dpr-question_encoder-single-nq-base",
# passage_embedding_model="facebook/dpr-ctx_encoder-single-nq-base",
# use_gpu=True,
# max_seq_len_passage=256,
# embed_title=True)
# document_store.update_embeddings(retriever, index=doc_index)
# Initialize Reader
from haystack.nodes import FARMReader
reader = FARMReader("deepset/roberta-base-squad2", top_k=4, return_no_answer=True)
# Define a pipeline consisting of the initialized retriever and reader
from haystack.pipelines import ExtractiveQAPipeline
pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
# The evaluation also works with any other pipeline.
# For example you could use a DocumentSearchPipeline as an alternative:
# from haystack.pipelines import DocumentSearchPipeline
# pipeline = DocumentSearchPipeline(retriever=retriever) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Evaluation of an ExtractiveQAPipelineHere we evaluate retriever and reader in open-domain fashion on the full corpus of documents, i.e. a document is considered correctly retrieved if it contains the gold answer string within it. The reader is evaluated based purely on the predicted answer string, regardless of which document this came from and the position of the extracted span. The generation of predictions is separated from the calculation of metrics. This allows you to run the computation-heavy model predictions only once and then iterate flexibly on the metrics or reports you want to generate. | from haystack.schema import EvaluationResult, MultiLabel
# We can load evaluation labels from the document store
# We are also opting to filter out no_answer samples
eval_labels = document_store.get_all_labels_aggregated(drop_negative_labels=True, drop_no_answers=False)
eval_labels = [label for label in eval_labels if not label.no_answer] # filter out no_answer cases
## Alternative: Define queries and labels directly
# eval_labels = [
# MultiLabel(
# labels=[
# Label(
# query="who is written in the book of life",
# answer=Answer(
# answer="every person who is destined for Heaven or the World to Come",
# offsets_in_context=[Span(374, 434)]
# ),
# document=Document(
# id='1b090aec7dbd1af6739c4c80f8995877-0',
# content_type="text",
# content='Book of Life - wikipedia Book of Life Jump to: navigation, search This article is
# about the book mentioned in Christian and Jewish religious teachings...'
# ),
# is_correct_answer=True,
# is_correct_document=True,
# origin="gold-label"
# )
# ]
# )
# ]
# Similar to pipeline.run() we can execute pipeline.eval()
eval_result = pipeline.eval(labels=eval_labels, params={"Retriever": {"top_k": 5}})
# The EvaluationResult contains a pandas dataframe for each pipeline node.
# That's why there are two dataframes in the EvaluationResult of an ExtractiveQAPipeline.
retriever_result = eval_result["Retriever"]
retriever_result.head()
reader_result = eval_result["Reader"]
reader_result.head()
# We can filter for all documents retrieved for a given query
query = "who is written in the book of life"
retriever_book_of_life = retriever_result[retriever_result["query"] == query]
# We can also filter for all answers predicted for a given query
reader_book_of_life = reader_result[reader_result["query"] == query]
# Save the evaluation result so that we can reload it later and calculate evaluation metrics without running the pipeline again.
eval_result.save("../") | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Calculating Evaluation MetricsLoad an EvaluationResult to quickly calculate standard evaluation metrics for all predictions,such as F1-score of each individual prediction of the Reader node or recall of the retriever.To learn more about the metrics, see [Evaluation Metrics](https://haystack.deepset.ai/guides/evaluationmetrics-retrieval) | saved_eval_result = EvaluationResult.load("../")
metrics = saved_eval_result.calculate_metrics()
print(f'Retriever - Recall (single relevant document): {metrics["Retriever"]["recall_single_hit"]}')
print(f'Retriever - Recall (multiple relevant documents): {metrics["Retriever"]["recall_multi_hit"]}')
print(f'Retriever - Mean Reciprocal Rank: {metrics["Retriever"]["mrr"]}')
print(f'Retriever - Precision: {metrics["Retriever"]["precision"]}')
print(f'Retriever - Mean Average Precision: {metrics["Retriever"]["map"]}')
print(f'Reader - F1-Score: {metrics["Reader"]["f1"]}')
print(f'Reader - Exact Match: {metrics["Reader"]["exact_match"]}') | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
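To make the Reader metrics concrete, here is a stripped-down token-overlap F1 and exact-match computation. This is a simplified sketch, not Haystack's implementation: the real SQuAD-style scorer also normalizes articles and punctuation before comparing.

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> bool:
    """Case-insensitive exact string match (no article/punctuation stripping)."""
    return prediction.strip().lower() == gold.strip().lower()

def f1_score(prediction: str, gold: str) -> float:
    """Harmonic mean of token precision and recall between the two answers."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(f1_score("the book of life", "book of life"))  # 6/7 ≈ 0.857
```

Note how a near-miss like the extra "the" still scores 0.857 under F1 while failing exact match — which is why both metrics are reported.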
Generating an Evaluation ReportA summary of the evaluation results can be printed to get a quick overview. It includes some aggregated metrics and also shows a few wrongly predicted examples. | pipeline.print_eval_report(saved_eval_result) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Advanced Evaluation MetricsAs an advanced evaluation metric, semantic answer similarity (SAS) can be calculated. This metric takes into account whether the meaning of a predicted answer is similar to the annotated gold answer rather than just doing string comparison.To this end SAS relies on pre-trained models. For English, we recommend "cross-encoder/stsb-roberta-large", whereas for German we recommend "deepset/gbert-large-sts". A good multilingual model is "sentence-transformers/paraphrase-multilingual-mpnet-base-v2".More info on this metric can be found in our [paper](https://arxiv.org/abs/2108.06130) or in our [blog post](https://www.deepset.ai/blog/semantic-answer-similarity-to-evaluate-qa). | advanced_eval_result = pipeline.eval(
labels=eval_labels, params={"Retriever": {"top_k": 1}}, sas_model_name_or_path="cross-encoder/stsb-roberta-large"
)
metrics = advanced_eval_result.calculate_metrics()
print(metrics["Reader"]["sas"]) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Isolated Evaluation ModeThe isolated node evaluation uses labels as input to the Reader node instead of the output of the preceding Retriever node. Thereby, we can additionally calculate the upper bounds of the evaluation metrics of the Reader. Note that even with isolated evaluation enabled, integrated evaluation will still be running. | eval_result_with_upper_bounds = pipeline.eval(
labels=eval_labels, params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 5}}, add_isolated_node_eval=True
)
pipeline.print_eval_report(eval_result_with_upper_bounds) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Evaluation of Individual Components: RetrieverSometimes you might want to evaluate individual components, for example, if you don't have a pipeline but only a retriever or a reader with a model that you trained yourself.Here we evaluate only the retriever, based on whether the gold_label document is retrieved. | ## Evaluate Retriever on its own
# Note that no_answer samples are omitted when evaluation is performed with this method
retriever_eval_results = retriever.eval(top_k=5, label_index=label_index, doc_index=doc_index)
# Retriever Recall is the proportion of questions for which the correct document containing the answer is
# among the retrieved documents
print("Retriever Recall:", retriever_eval_results["recall"])
# Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank
print("Retriever Mean Avg Precision:", retriever_eval_results["map"]) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Just as a sanity check, we can compare the recall from `retriever.eval()` with the multi hit recall from `pipeline.eval(add_isolated_node_eval=True)`.These two recall metrics are only comparable since we chose to filter out no_answer samples when generating eval_labels. | metrics = eval_result_with_upper_bounds.calculate_metrics()
print(metrics["Retriever"]["recall_multi_hit"]) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Evaluation of Individual Components: ReaderHere we evaluate only the reader in a closed-domain fashion, i.e. the reader is given one query and its corresponding relevant document, and metrics are calculated on whether the right position in this text is selected by the model as the answer span (i.e. SQuAD style) | # Evaluate Reader on its own
reader_eval_results = reader.eval(document_store=document_store, label_index=label_index, doc_index=doc_index)
# Evaluation of Reader can also be done directly on a SQuAD-formatted file without passing the data to Elasticsearch
# reader_eval_results = reader.eval_on_file("../data/nq", "nq_dev_subset_v2.json", device=device)
# Reader Top-N-Accuracy is the proportion of predicted answers that match with their corresponding correct answer
print("Reader Top-N-Accuracy:", reader_eval_results["top_n_accuracy"])
# Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer
print("Reader Exact Match:", reader_eval_results["EM"])
# Reader F1-Score is the average overlap between the predicted answers and the correct answers
print("Reader F1-Score:", reader_eval_results["f1"]) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack |
Calculate Rider-Bicycle Weight DistributionCompare weight distribution on front wheel (WDF) between force plate measurements and the CoM model prediction. Calculate WDF from CoM modelUse CoM model to predict WDF | # Set variables for gravity, mass, and frame geometry.
g = -9.81 #gravity (m/s^2)
Mb = 4.94 #mass of bike (kg)
Mr = 87.2562 #mass of rider (kg)
Mt = Mb + Mr #mass of rider + bike
Wb = Mb * g #weight of bike
Wr = Mr * g #weight of rider
Wt = Wb + Wr #weight of bike + rider
Lfc = 0.579 #length bottom bracket to center of front wheel (e.g. Tarmac=0.579, Shiv=0.596, Epic=719)
Lrc = 0.410 #length bottom bracket to center of rear wheel (e.g. Tarmac=0.410, Shiv=0.415, Epic=433)
Lt = Lfc + Lrc #length wheelbase
Ffwb = 3.45 * g #use if measured for trainer with rear wheel off bike
#Ffwb = Wb - ((Lfc / Lt) * Wb) #reaction force on front wheel - bike
Frwb = Wb - ((Lrc / Lt) * Wb) #reaction force on rear wheel - bike | _____no_output_____ | MIT | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com |
Mount your Google Drive folder | from google.colab import drive
drive.mount('/content/drive') | Mounted at /content/drive
| MIT | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com |
*NOTE* You need to create a copy of the "com2bb.json" file in your main Google Drive folder for this to work.Load CoM data from JSON file stored in Google Drive | import json
with open("/content/drive/My Drive/com2bb.json", "r") as read_file:
data = json.load(read_file)
data[:10] | _____no_output_____ | MIT | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com |
Work on data | import statistics as st
import numpy as np
Lcmbb = np.array(data) / 1000 #convert mm to m
Lcmrc = Lrc - Lcmbb #length rider CoM to rear center
Ffwt = (Wr * Lcmrc) / Lt + Ffwb #reaction force on front wheel - total (N)
Wdft = Ffwt / Wt * 100 #distribution of weight on front wheel - total (%)
print(st.mean(Lcmbb))
print(st.mean(Wdft)) | -0.04212060080728705
47.00747507077737
| MIT | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com |