Subcategories
Pages are only members of the direct category they are in. If a page is in a category, and that category is a member of another category, then it will not be shown through the members() function. The basic rule is that if you're on a category's Wikipedia page (like http://enwp.org/Category:Universities_and_colleges_in_California), the members are only the items that are blue links on that page. So you have to iterate through the category to recursively access subcategory members. This exercise is left to the readers. :)
Note: Many Wikipedia categories aren't necessarily restricted to the kind of entity named in the category title. For example, "Category:Universities and colleges in California" contains a subcategory "Category:People by university or college in California" that lists people associated with each university. So you have to be careful when recursively going through subcategories, or else you might end up with different kinds of entities.
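As a sketch of that recursive traversal (this helper is not part of pywikibot; it assumes only the members() and isCategory() methods used above, with a max_depth guard against wandering into unrelated subcategories):

```python
def iter_members_recursive(category, max_depth=1):
    """Yield non-category member pages, descending into subcategories up to max_depth."""
    for member in category.members():
        if member.isCategory():
            if max_depth > 0:
                yield from iter_members_recursive(member, max_depth - 1)
        else:
            yield member
```

With max_depth=0 this behaves like members() filtered to non-category pages; raising max_depth walks deeper, so choose it with the mixed-entity caveat above in mind.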
|
bay_cat = pywikibot.Category(site, 'Category:Universities and colleges in California')
bay_gen = bay_cat.members()
for page in bay_gen:
    print(page.title(), page.isCategory(), page.coordinates())
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
Other interesting information from pages
Backlinks are all the pages that link to a given page. Note: this list can get very, very long for even moderately popular articles.
|
telegraph_page = pywikibot.Page(site, u"Telegraph Avenue")
telegraph_backlinks = telegraph_page.backlinks
for bl_page in telegraph_backlinks():
    if bl_page.namespace() == 1:
        print(bl_page.title())
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
Who has contributed to a page, and how many times have they edited?
|
telegraph_page.contributors()
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
Templates
Templates are the extensions to wikimarkup that give you things like citations, tables, and infoboxes. You can iterate over all the templates in a page.
Wikipedia articles are filled with templates, which are small scripts written in wikimarkup. Everything you see in a Wikipedia article that isn't a markdown-like feature (bolding, links, lists, images) is presented through a template. One of the most important kinds of template is the infobox, which appears on the right-hand side of articles.
But templates are complicated and very difficult to parse -- which is why Wikidata is such a big deal! However, it is possible to parse templates with pywikibot's textlib parser. There are different kinds of infoboxes depending on what the article's topic is an instance of. Cities, towns, and similar articles use "Infobox settlement" -- which you can see by looking at the first part of the article's wikitext.
|
bky_page = pywikibot.Page(site, "Berkeley, California")
bky_page.text
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
If you look at the raw wikitext on Wikipedia (by clicking the edit button), you can see that it is a little more ordered:
<img src="berkeley-wikitext.png">
We use the textlib module from pywikibot, which has a function that parses an article's wikitext into a list of templates. Each item in the list is a pair: the template's name and an OrderedDict mapping parameters to values.
|
from pywikibot import textlib
import pandas as pd
bky_templates = textlib.extract_templates_and_params_regex(bky_page.text)
bky_templates[:5]
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
We iterate through all the templates on the page until we find the "Infobox settlement" template.
|
for template in bky_templates:
    if template[0] == "Infobox settlement":
        infobox = template[1]
infobox.keys()
print(infobox['elevation_ft'])
print(infobox['area_total_sq_mi'])
print(infobox['utc_offset_DST'])
print(infobox['population_total'])
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
However, sometimes parameters contain templates, such as citations or references.
|
print(infobox['government_type'])
print(infobox['website'])
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
Putting it all together
This script gets data about all the cities in the Bay Area. We only traverse this one category, because all of its pages are direct members, with no subcategories.
|
bay_cat = pywikibot.Category(site, 'Category:Cities_in_the_San_Francisco_Bay_Area')
bay_gen = bay_cat.members()
for page in bay_gen:
    # If the page is not a category
    if not page.isCategory():
        print(page.title())
        page_templates = textlib.extract_templates_and_params_regex(page.text)
        for template in page_templates:
            if template[0] == "Infobox settlement":
                infobox = template[1]
                if 'elevation_ft' in infobox:
                    print(" Elevation (ft): ", infobox['elevation_ft'])
                if 'population_total' in infobox:
                    print(" Population: ", infobox['population_total'])
                if 'area_total_sq_mi' in infobox:
                    print(" Area (sq mi): ", infobox['area_total_sq_mi'])
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
This is a script for Katy that gets data about U.S. nuclear power plants. Wikipedia's articles on nuclear power plants are organized into many subcategories:
https://en.wikipedia.org/wiki/Category:Nuclear_power_stations_by_country
https://en.wikipedia.org/wiki/Category:Nuclear_power_stations_in_the_United_States
https://en.wikipedia.org/wiki/Category:Nuclear_power_stations_in_the_United_States_by_state
https://en.wikipedia.org/wiki/Category:Nuclear_power_plants_in_California
https://en.wikipedia.org/wiki/Diablo_Canyon_Power_Plant
https://en.wikipedia.org/wiki/Rancho_Seco_Nuclear_Generating_Station
etc...
https://en.wikipedia.org/wiki/Category:Nuclear_power_plants_in_New_York
etc...
etc...
etc...
So we are going to begin with the Category:Nuclear power stations in the United States by state and just go one subcategory down. There is probably a more elegant way of doing this with recursion and functions....
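One way to factor out the repeated infobox lookup in the script below is a small helper function (a sketch; infobox_fields is a hypothetical helper operating on the (name, params) pairs returned by textlib.extract_templates_and_params_regex):

```python
def infobox_fields(templates, template_name, fields):
    """Return the requested fields from the first matching template, or {} if absent."""
    for name, params in templates:
        if name == template_name:
            return {f: params[f] for f in fields if f in params}
    return {}
```

The main loop could then call infobox_fields(page_templates, 'Infobox power station', ['ps_units_operational', 'owner']) for pages and subpages alike, instead of repeating the field checks twice.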
|
power_cat = pywikibot.Category(site, 'Category:Nuclear power stations in the United States by state')
power_gen = power_cat.members()
for page in power_gen:
    print(page.title())
    # If the page is not a category
    if not page.isCategory():
        print("\n", page.title(), "\n")
        page_templates = textlib.extract_templates_and_params_regex(page.text)
        for template in page_templates:
            if template[0] == "Infobox power station":
                infobox = template[1]
                if 'ps_units_operational' in infobox:
                    print(" Units operational:", infobox['ps_units_operational'])
                if 'owner' in infobox:
                    print(" Owner:", infobox['owner'])
    else:
        for subpage in pywikibot.Category(site, page.title()).members():
            print("\n", subpage.title())
            subpage_templates = textlib.extract_templates_and_params_regex(subpage.text)
            for template in subpage_templates:
                if template[0] == "Infobox power station":
                    infobox = template[1]
                    if 'ps_units_operational' in infobox:
                        print(" Units operational:", infobox['ps_units_operational'])
                    if 'owner' in infobox:
                        print(" Owner:", infobox['owner'])
|
scraping_wikipedia/wikipedia-api-query.ipynb
|
katyhuff/berkeley
|
bsd-3-clause
|
Replication notebook
Using this notebook, we replicate the model of Chiarella and Iori (2002).
Based on code from the agentFin website by Blake LeBaron.
|
# set default parameters
number_of_agents = 1000
init_time = 100
max_time = 500
av_return_interval_min = 5
av_return_interval_max = 50 # previously Lmax
fundamental_value = 1000. # previously pf
allowed_price_steps = 0.1 # deltaP
variance_noise_forecast = 0.01 # prev sigmae
order_noise_max = 0.5 # prev kMax
order_expiration_time = 50 # prev tau
runType = 0
if runType == 0:
    fundamental_weight = 0. # prev sigmaF
    momentum_weight = 0. # prev sigmaM
    noise_weight = 1. # prev sigmaN
if runType == 1:
    fundamental_weight = 1.
    momentum_weight = 0.
    noise_weight = 1.
if runType == 2:
    fundamental_weight = 1.
    momentum_weight = 10.
    noise_weight = 1.
ticks_per_day = 100
|
comparablemodels/chiarellaIori/chialori2002.ipynb
|
LCfP/abm
|
mit
|
Simulation:
|
day_price, day_volume, day_return, price, returns, total_volume = ciarellilori2002(
    seed, max_time, init_time, number_of_agents, av_return_interval_min,
    av_return_interval_max, fundamental_value, allowed_price_steps, variance_noise_forecast,
    order_noise_max, order_expiration_time, fundamental_weight, momentum_weight, noise_weight,
    ticks_per_day)
|
comparablemodels/chiarellaIori/chialori2002.ipynb
|
LCfP/abm
|
mit
|
Some utility functions
|
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

# Reformat the dataset for the convolutional networks
def reformat(dataset):
    dataset = dataset.reshape((-1, image_size, image_size, num_channels)).astype(np.float32)
    return dataset
|
.ipynb_checkpoints/Diploma_munka_notebook_0-checkpoint.ipynb
|
Kkari/bsc_thesis
|
apache-2.0
|
This notebook demonstrates the generation of a propensity audience for a remarketing use case. It relies on the sample size calculations from the 6.media_experiment_design.ipynb notebook to create the Test and Control audiences, which are written to a new BigQuery table. This data can then be uploaded via the measurement protocol to GA and used for activation with the Google Ads products, as demonstrated in the 9.audience_upload.ipynb notebook.
Requirements:
An already scored dataset from the 7.batch_scoring.ipynb notebook: this is the model prediction dataset containing ML prediction for each user_id and snapshot_ts, from which we create the remarketing audience.
Statistical sample size calculations from the 6.media_experiment_design.ipynb notebook for each propensity audience group.
Install and import required modules
|
# Uncomment to install required python modules
# !sh ../utils/setup.sh
# Add custom utils module to Python environment
import os
import sys
sys.path.append(os.path.abspath(os.pardir))
import numpy as np
import pandas as pd
import random
from gps_building_blocks.cloud.utils import bigquery as bigquery_utils
from utils import helpers
|
packages/propensity/08.audience_generation.ipynb
|
google/compass
|
apache-2.0
|
Set parameters
|
configs = helpers.get_configs('config.yaml')
dest_configs, run_id_configs = configs.destination, configs.run_id
# GCP project ID
PROJECT_ID = dest_configs.project_id
# Name of BigQuery dataset,
# destination for created tables for modelling and activation.
DATASET_NAME = dest_configs.dataset_name
# To distinguish the separate runs of the ML Windowing Pipeline
RUN_ID = run_id_configs.score
# BigQuery table name containing the predictions (e.g. generated by
# 7.batch_scoring.ipynb notebook)
PREDICTIONS_TABLE = f'scored_{RUN_ID}'
# Snapshot date to select the ML instances to create the marketing audience in
# YYYY-MM-DD format
SELECTED_SNAPSHOT_DATE = '2017-06-15'
# Name of the column in the predictions table with the predicted label values
PREDICTED_LABEL_NAME = 'predicted_label_probs'
# Label value for the positive class
POSITIVE_CLASS_LABEL = True
# Number of propensity audience groups to divide the scored users into
# (e.g. 3 bins for High, Medium and Low propensity audience groups)
AUDIENCE_GROUPS = 3
# Minimum sample sizes to select as the Test and Control groups for each of the
# propensity audience groups based on the output of the
# 6.media_experiment_design.ipynb notebook (following are some example numbers).
MIN_SAMPLE_SIZES = [1000, 2000, 3000]
# Name of the BigQuery table with exported audience
AUDIENCE_EXPORT_TABLE = f'audience_export_{RUN_ID}'
bq_utils = bigquery_utils.BigQueryUtils(project_id=PROJECT_ID)
|
packages/propensity/08.audience_generation.ipynb
|
google/compass
|
apache-2.0
|
Read the prediction dataset
In this step, we assume the prediction dataset is available as a BigQuery table.
|
# SQL for extracting prediction dataset when using BQML.
sql = f"""
SELECT
user_id,
snapshot_ts,
days_since_latest_activity,
days_since_first_activity,
probs.label AS predicted_score_label,
probs.prob AS score
FROM
`{PROJECT_ID}.{DATASET_NAME}.{PREDICTIONS_TABLE}` AS predictions,
UNNEST({PREDICTED_LABEL_NAME}) AS probs
WHERE
probs.label={POSITIVE_CLASS_LABEL}
AND snapshot_ts='{SELECTED_SNAPSHOT_DATE}';
"""
print(sql)
df_prediction = bq_utils.run_query(sql).to_dataframe()
df_prediction.head()
|
packages/propensity/08.audience_generation.ipynb
|
google/compass
|
apache-2.0
|
If required, the users can be filtered using the days_since_latest_activity (recency) and days_since_first_activity (tenure) columns before creating the audience groups.
Create audience groups
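A sketch of such a filter (the helper name and thresholds here are made-up examples, not values from the notebook):

```python
import pandas as pd

def filter_audience(df, max_recency_days=30, min_tenure_days=7):
    """Keep users active within the last max_recency_days with at least min_tenure_days of history."""
    mask = ((df['days_since_latest_activity'] <= max_recency_days) &
            (df['days_since_first_activity'] >= min_tenure_days))
    return df[mask].reset_index(drop=True)
```

This would be applied to df_prediction before the qcut binning below, so the audience groups only contain users who meet the recency and tenure criteria.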
Create audience groups
|
# Separate the users into <AUDIENCE_GROUPS> number of audience groups
df_prediction = df_prediction.sort_values(by='score',
                                          ascending=False).reset_index()
# To avoid duplicate edges of bins we use the index
# as rank in the qcut function below
df_prediction['audience_group'] = pd.qcut(df_prediction.index,
                                          q=AUDIENCE_GROUPS, labels=False)
# When AUDIENCE_GROUPS=3, audience_group column contains '0', '1' and '2' values
# representing 'High', 'Medium' and 'Low' propensity groups respectively

# Separate each audience group into Test and Control
df_prediction['test_control'] = 'NA'
for i in range(len(MIN_SAMPLE_SIZES)):
    group = df_prediction[df_prediction['audience_group'] == i]
    # Select Control set size based on the minimum sample size
    control_user_ids = random.sample(list(group['user_id']), MIN_SAMPLE_SIZES[i])
    remaining_user_ids = list(set(group.user_id) - set(control_user_ids))
    # Select Test set based on the minimum sample size
    test_user_ids = random.sample(remaining_user_ids, MIN_SAMPLE_SIZES[i])
    # Alternatively, select Test set to include all the remaining users as below
    # or a subset of users greater than MIN_SAMPLE_SIZES[i] depending on the
    # available campaign budget
    # test_user_ids = remaining_user_ids
    df_prediction.loc[df_prediction['user_id'].isin(test_user_ids),
                      'test_control'] = 'Test'
    df_prediction.loc[df_prediction['user_id'].isin(control_user_ids),
                      'test_control'] = 'Control'

# Explore the created audience sizes and statistics of the predicted
# probabilities. We should expect to see similar statistics of probabilities
# between each Test and Control pair
df_prediction.groupby(['audience_group', 'test_control']).agg(
    {'score': ['count', 'min', 'mean', 'median', 'max']})
# TODO(): Add box plots to visualize the values of Test and
# Control groups
|
packages/propensity/08.audience_generation.ipynb
|
google/compass
|
apache-2.0
|
Write the audience data to a BigQuery table
|
# Inspect table before uploading to BigQuery
cols_to_load = ['user_id',
'snapshot_ts',
'days_since_latest_activity',
'days_since_first_activity',
'predicted_score_label',
'score',
'audience_group',
'test_control']
df_audience = df_prediction[cols_to_load]
df_audience
destination_table = f'{DATASET_NAME}.{AUDIENCE_EXPORT_TABLE}'
df_audience.to_gbq(
destination_table=destination_table,
project_id=PROJECT_ID,
if_exists='replace')
|
packages/propensity/08.audience_generation.ipynb
|
google/compass
|
apache-2.0
|
Check final audience uploaded to a BigQuery table
|
sql = f"""
SELECT
snapshot_ts,
audience_group,
test_control,
COUNT(*) as count
FROM
`{PROJECT_ID}.{destination_table}`
GROUP BY
1,2,3
ORDER BY
1,2,3;
"""
print(sql)
df_check = bq_utils.run_query(sql).to_dataframe()
df_check
|
packages/propensity/08.audience_generation.ipynb
|
google/compass
|
apache-2.0
|
Import relevant stingray libraries.
|
from stingray.simulator.transfer import TransferFunction
from stingray.simulator.transfer import simple_ir, relativistic_ir
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Creating TransferFunction
A transfer function can be initialized by passing a 2-d array containing time across the first dimension and energy across the second. For example, if the 2-d array is defined by arr, then arr[1][5] defines a time of 5 units and energy of 1 unit.
For the purpose of this tutorial, we have stored a 2-d array in a text file named intensity.txt. The script to generate this file is explained in Data Preparation notebook.
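The indexing convention can be illustrated with a tiny synthetic array (purely illustrative; with the default spacing of 1, the indices are the time and energy values themselves):

```python
import numpy as np

# A small synthetic response array of the kind TransferFunction expects
arr = np.zeros((3, 8))
arr[1][5] = 0.7  # per the convention above: a time of 5 units and an energy of 1 unit
```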
|
response = np.loadtxt('intensity.txt')
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Initialize transfer function by passing the array defined above.
|
transfer = TransferFunction(response)
transfer.data.shape
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
By default, time and energy spacing across both axes are set to 1. However, they can be changed by supplying additional parameters dt and de.
Obtaining Time-Resolved Response
The 2-d transfer function can be converted into a time-resolved/energy-averaged response.
|
transfer.time_response()
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
This sets the time parameter, which can be accessed as transfer.time.
|
transfer.time[1:10]
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Additionally, the energy interval over which to average can be specified with the e0 and e1 parameters.
Obtaining Energy-Resolved Response
An energy-resolved/time-averaged response can also be formed from the 2-d transfer function.
|
transfer.energy_response()
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
This sets the energy parameter, which can be accessed as transfer.energy.
|
transfer.energy[1:10]
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Plotting Responses
TransferFunction() can plot the time-resolved, energy-resolved, and 2-d responses.
|
transfer.plot(response='2d')
transfer.plot(response='time')
transfer.plot(response='energy')
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
By passing save=True, the plots can also be saved.
IO
A TransferFunction can be saved in pickle format and retrieved later.
|
transfer.write('transfer.pickle')
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Saved files can be read using the static read() method.
|
transfer_new = TransferFunction.read('transfer.pickle')
transfer_new.time[1:10]
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Artificial Responses
For quick testing, two helper impulse response models are provided.
1- Simple IR
simple_ir() defines an impulse response of constant height. It takes the time resolution, start time, width, and intensity as arguments.
|
import matplotlib.pyplot as plt

s_ir = simple_ir(dt=0.125, start=10, width=5, intensity=0.1)
plt.plot(s_ir)
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
2- Relativistic IR
A more realistic impulse response mimicking black hole dynamics can be created with relativistic_ir(). Its arguments are: time resolution, primary peak time, secondary peak time, end time, primary peak value, secondary peak value, rise slope, and decay slope. These parameters are set to appropriate values by default.
|
r_ir = relativistic_ir(dt=0.125)
plt.plot(r_ir)
|
Transfer Functions/TransferFunction Tutorial.ipynb
|
StingraySoftware/notebooks
|
mit
|
Next get the login credentials
All requests to the SQE API require valid credentials. You can get them like this:
|
r = requests.post(api, json={"transaction": "validateSession", "PASSWORD":"asdf", "USER_NAME":"test"})
session = r.json()['SESSION_ID']
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
Making requests
All calls to the SQE API will use a transaction in the post request payload data. This should be accompanied by the necessary data to perform that transaction.
Finding all available scrolls
Try, for instance, downloading a list of scrolls with the getCombs transaction. You can use the small Python function scrollIdByName below to find a scroll_version_id in the API response by its scroll name.
|
r = requests.post(api, json={"transaction": "getCombs", "SESSION_ID": session})
scrolls = r.json()['results']

def scrollIdByName(name):
    sid = None
    for scroll in scrolls:
        if name == scroll['name']:
            sid = scroll['scroll_version_id']
            break
    return sid

selectedScroll = scrollIdByName('4Q51')
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
Finding available cols/frags
The API transaction getColOfComb will send you all columns and fragments of a scroll in their canonical order—you must supply the desired scroll_version_id.
|
r = requests.post(api, json={"transaction": "getColOfComb", "scroll_version_id": selectedScroll, "SESSION_ID":session})
cols = r.json()['results']
print(json.dumps(cols, indent=2, sort_keys=True))
col2 = cols[1]
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
transcriptions
There are several ways to work with transcribed text. After downloading it with the getSignStreamOfFrag transaction, you will want to serialize it into something more human-friendly. The transcriptions in the database form a DAG, but these initial API calls serialize it into an ordered array for you (we do have functionality to download the graph, but I will add broader support for that later).
The schema of this output looks as follows:
|
r = requests.post(api, json={"transaction": "getSignStreamOfFrag", "scroll_version_id": selectedScroll, "col_id": col2['col_id'], "SESSION_ID":session})
text = r.json()['text']
builder = SchemaBuilder()
builder.add_object(text)
print(json.dumps(builder.to_schema(), indent=2, sort_keys=False))
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
The actual data looks like this:
|
print(json.dumps(r.json(), indent=2, sort_keys=False))
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
Since the data already comes in order, you could simply iterate over the lists to quickly see the text (note the helper functions at the beginning of the cell):
|
# The following helpers serialize each element to a list, since it could be either a scalar or a list
def serializeChars(sign):
    if isinstance(sign['chars'], list):
        return sign['chars']
    else:
        return [sign['chars']]

def serializeCharLetters(char):
    if isinstance(char['sign_char'], list):
        return char['sign_char']
    else:
        return [char['sign_char']]

def serializeCharAttributes(char):
    try:
        if isinstance(char['attributes'], list):
            return char['attributes']
        else:
            return [char['attributes']]
    except KeyError:
        return []

def serializeAttrValues(attr):
    if isinstance(attr['values'], list):
        # These are ordered so we can easily open and close HTML tags
        sortorder = {
            "SCROLL_START": 0,
            "COLUMN_START": 1,
            "LINE_START": 2,
            "LINE_END": 3,
            "COLUMN_END": 4,
            "SCROLL_END": 5
        }
        return sorted(attr['values'], key=lambda k: sortorder[k['attribute_value']])
    else:
        return [attr['values']]

# This function formats the output
def outputAllText():
    # Begin printing the output
    print(r.json()['text'][0]['scroll_name'])
    # Cycle through the cols/fragments
    for fragment in r.json()['text'][0]['fragments']:
        print(fragment['fragment_name'], end='')
        # Cycle through the lines
        for line in fragment['lines']:
            print('\n', line['line_name'], '\t', end='')
            # Cycle through the signs
            for sign in line['signs']:
                # Where more than one sign is possible, print the first
                char = serializeChars(sign)[0]
                letter = serializeCharLetters(char)[0]
                print(letter, end='')
                # Check the attributes (if there are any) to see if we have a space
                attrs = serializeCharAttributes(char)
                if len(attrs) > 0:
                    for attr in attrs:
                        values = serializeAttrValues(attr)
                        for value in values:
                            if value['attribute_value'] == 'SPACE':
                                print(' ', end='')

outputAllText()
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
The previous method does not do any advanced checking to see if signs are damaged or reconstructed. It just prints the entirety of the transcribed text.
We could do a minimal output that prints only those transcribed characters which are fully visible (this information is transmitted in the attribute_id and attribute_value fields).
|
def outputMinimalText():
    # Begin printing the output
    print(r.json()['text'][0]['scroll_name'])
    # Cycle through the cols/fragments
    for fragment in r.json()['text'][0]['fragments']:
        print(fragment['fragment_name'], end='')
        # Cycle through the lines
        for line in fragment['lines']:
            print('\n', line['line_name'], '\t', end='')
            # Cycle through the signs
            for sign in line['signs']:
                # Where more than one sign is possible, print the first
                char = serializeChars(sign)[0]
                letter = serializeCharLetters(char)[0]
                # Check the attributes for damage and to see if we have a space
                attrs = serializeCharAttributes(char)
                damaged = False
                space = False
                if len(attrs) > 0:
                    for attr in attrs:
                        values = serializeAttrValues(attr)
                        for value in values:
                            if value['attribute_value'] == 'SPACE':
                                space = True
                            if (value['attribute_value'] == 'INCOMPLETE_BUT_CLEAR'
                                    or value['attribute_value'] == 'INCOMPLETE_AND_NOT_CLEAR') or (
                                    attr['attribute_id'] == 6 and value['attribute_value'] == 'TRUE'):
                                damaged = True
                if not damaged:
                    print(letter, end='')
                if space:
                    print(' ', end='')

outputMinimalText()
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
You could also serialize this to HTML by reading all of the attribute tags more closely and adding some nice CSS.
|
def outputHTMLText():
    print('<!DOCTYPE html>')
    print('<html>')
    print('<head>')
    print('\t<meta charset="UTF-8">')
    print('\t<title>SQE Transcription Output</title>')
    print("""
    <style>
        span.non-rcnst + span.reconstructed:before {
            content: '[';
        }
        span.reconstructed + span.non-rcnst:before {
            content: ']';
        }
        span.reconstructed:first-child:before {
            content: '[';
        }
        span.reconstructed:last-child:after {
            content: ']';
        }
    </style>
    """)
    print('</head>')
    print('\n<body>')
    # Begin printing the output
    print('\t<h1>', r.json()['text'][0]['scroll_name'], '</h1>')
    # Cycle through the cols/fragments
    for fragment in r.json()['text'][0]['fragments']:
        # Cycle through the lines
        for line in fragment['lines']:
            # Cycle through the signs
            for sign in line['signs']:
                # Where more than one sign is possible, print the first
                char = serializeChars(sign)[0]
                letter = serializeCharLetters(char)[0]
                # Check the attributes for damage and to see if we have a space
                attrs = serializeCharAttributes(char)
                damaged = False
                space = False
                if len(attrs) > 0:
                    for attr in attrs:
                        values = serializeAttrValues(attr)
                        for value in values:
                            if value['attribute_value'] == 'COLUMN_START':
                                print('\t<div dir="rtl">')
                                print('\t\t<h2>', fragment['fragment_name'], '</h2>')
                                print('\t\t<p>')
                            if value['attribute_value'] == 'COLUMN_END':
                                print('\t\t</p>')
                                print('\t</div>')
                            if value['attribute_value'] == 'LINE_START':
                                print('\t\t\t<div>')
                                print('\t\t\t\t<span class="line-name non-rcnst">', line['line_name'], '</span>')
                                print('\t\t\t\t<span>', end='')
                            if value['attribute_value'] == 'LINE_END':
                                print('</span>')
                                print('\t\t\t</div>')
                            if (value['attribute_value'] == 'INCOMPLETE_BUT_CLEAR'
                                    or value['attribute_value'] == 'INCOMPLETE_AND_NOT_CLEAR') or (
                                    attr['attribute_id'] == 6 and value['attribute_value'] == 'TRUE'):
                                damaged = True
                            if value['attribute_value'] == 'SPACE':
                                print(' ', end='')
                            else:
                                if value['attribute_value'] == 'INCOMPLETE_BUT_CLEAR':
                                    print(f'<span class="incomplete-but-clear non-rcnst">{letter}ׄ</span>', end='')
                                elif value['attribute_value'] == 'INCOMPLETE_AND_NOT_CLEAR':
                                    print(f'<span class="incomplete-and-not-clear non-rcnst">{letter}֯</span>', end='')
                                elif attr['attribute_id'] == 6 and value['attribute_value'] == 'TRUE':
                                    print(f'<span class="reconstructed">{letter}</span>', end='')
                                elif value['attribute_value'] == 'ABOVE_LINE':
                                    print(f'<span class="non-rcnst"><sup>{letter}</sup></span>', end='')
                                elif value['attribute_value'] == 'BELOW_LINE':
                                    print(f'<span class="non-rcnst"><sub>{letter}</sub></span>', end='')
                                else:
                                    print(f'<span class="non-rcnst">{letter}</span>', end='')
    print('</body>')
    print('</html>')

outputHTMLText()
|
Text_Extraction/SQE-API-Demo.ipynb
|
Scripta-Qumranica-Electronica/Data-Processing
|
mit
|
TESS End-to-End 6 Simulated Light Curve Time Series
Source: https://archive.stsci.edu/tess/ete-6.html
|
from skdaccess.astro.tess.simulated.cache import DataFetcher as TESS_DF
from skdaccess.framework.param_class import *
import numpy as np
tess_fetcher = TESS_DF([AutoList([376664523])])
tess_dw = tess_fetcher.output()
label, data = next(tess_dw.getIterator())
|
skdaccess/examples/Demo_TESS_Simulated_Data.ipynb
|
skdaccess/skdaccess
|
mit
|
Plot Relative PDCSAP Flux vs time
|
# valid_index is assumed to be defined earlier in the notebook (rows with valid flux values)
plt.gcf().set_size_inches(6, 2);
plt.scatter(data.loc[valid_index, 'TIME'], data.loc[valid_index, 'RELATIVE_PDCSAP_FLUX'], s=2, edgecolor='none');
plt.xlabel('Time');
plt.ylabel('Relative PDCSAP Flux');
plt.title('Simulated Data TID: ' + str(int(label)));
|
skdaccess/examples/Demo_TESS_Simulated_Data.ipynb
|
skdaccess/skdaccess
|
mit
|
Vertex client library: Hyperparameter tuning image classification model
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to do hyperparameter tuning for a custom image classification model.
Dataset
The dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this notebook, you learn how to create a hyperparameter tuning job for a custom image classification model from a Python script in a docker container using the Vertex client library. You can alternatively hyperparameter tune models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex hyperparameter tuning job for training a custom model.
Tune the custom model.
Evaluate the study results.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of the Vertex client library.
|
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Tutorial
Now you are ready to start creating your own hyperparameter tuning job and training a custom image classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Job Service for hyperparameter tuning.
|
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
for client in clients.items():
print(client)
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Define the worker pool specification
Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec: (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the python package specification:
-executor_image_spec: This is the docker image which is configured for your custom hyperparameter tuning job.
-package_uris: This is a list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.
-python_module: The Python module (script) to invoke for running the custom hyperparameter tuning job. In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the hyperparameter tuning script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY: The distribution strategy to use for single or distributed hyperparameter tuning.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
|
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Examine the hyperparameter tuning package
Package layout
Before you start the hyperparameter tuning, you will look at how a Python package is assembled for a custom hyperparameter tuning job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom hyperparameter tuning job. Note that when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
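The path-to-module conversion described above can be sketched in a few lines (path_to_module is a hypothetical helper for illustration, not part of the Vertex SDK):

```python
def path_to_module(path):
    """Convert a package-relative script path such as 'trainer/task.py'
    into the dotted module name used as python_module ('trainer.task')."""
    assert path.endswith(".py")
    return path[:-3].replace("/", ".")

print(path_to_module("trainer/task.py"))  # trainer.task
```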
Package Assembly
In the following cells, you will assemble the training package.
|
# Make folder for Python hyperparameter tuning script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration hyperparameter tuning script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
|
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric you specified as the criterion for evaluating the trial.
For this example, you will specify in the study specification that the objective metric will be reported back as val_accuracy.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To set up this reporting in your Python package, you will add code for the following three steps:
Import the HyperTune class: from hypertune import HyperTune.
Instantiate the reporting object: hpt = HyperTune().
At the end of every epoch, write the current value of the objective metric to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are:
hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification.
metric_value: The value of the objective metric to report back to the hyperparameter service.
global_step: The epoch iteration, starting at 0.
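As a minimal self-contained sketch of this reporting flow, the snippet below uses a stand-in class so it runs without the hypertune package installed; in a real trial you would instantiate hypertune.HyperTune instead:

```python
class StubHyperTune:
    """Stand-in for hypertune.HyperTune: records each reported metric."""
    def __init__(self):
        self.reported = []

    def report_hyperparameter_tuning_metric(self, hyperparameter_metric_tag,
                                            metric_value, global_step):
        self.reported.append((hyperparameter_metric_tag, metric_value, global_step))

hpt = StubHyperTune()  # in production: hpt = HyperTune()

# Report the objective metric at the end of each epoch.
for epoch, val_acc in enumerate([0.42, 0.55, 0.61]):
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag="val_accuracy",
        metric_value=val_acc,
        global_step=epoch)

print(hpt.reported[-1])  # ('val_accuracy', 0.61, 2)
```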
Hyperparameter Tune the model
Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter:
-hpt_job: The specification for the hyperparameter tuning job.
The helper function calls job client service's create_hyperparameter_tuning_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-hyperparameter_tuning_job: The specification for the hyperparameter tuning job.
You will display a handful of the fields returned in the response object; the two of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for use in subsequent steps.
response.state: The current state of the custom hyperparameter tuning job.
|
def create_hyperparameter_tuning_job(hpt_job):
response = clients["job"].create_hyperparameter_tuning_job(
parent=PARENT, hyperparameter_tuning_job=hpt_job
)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_hyperparameter_tuning_job(hpt_job)
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Tuning a model - CIFAR10
Now that you have seen the overall steps for hyperparameter tuning a custom training job using a Python package that mimics training a model, you will do a new hyperparameter tuning job for a custom training job for a CIFAR10 model.
For this example, you will change two parts:
Specify the CIFAR10 custom hyperparameter tuning Python package.
Specify a study specification specific to the hyperparameters used in the CIFAR10 custom hyperparameter tuning Python package.
Create a study specification
In this study, you will tune for two hyperparameters using the random search algorithm:
learning rate: The search space is a set of discrete values.
learning rate decay: The search space is a continuous range between 1e-6 and 1e-2.
The objective (goal) is to maximize the validation accuracy.
You will run a maximum of six trials.
|
study_spec = {
"metrics": [
{
"metric_id": "val_accuracy",
"goal": aip.StudySpec.MetricSpec.GoalType.MAXIMIZE,
}
],
"parameters": [
{
"parameter_id": "lr",
"discrete_value_spec": {"values": [0.001, 0.01, 0.1]},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
{
"parameter_id": "decay",
"double_value_spec": {"min_value": 1e-6, "max_value": 1e-2},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
],
"algorithm": aip.StudySpec.Algorithm.RANDOM_SEARCH,
}
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail; it's just there for you to browse. In summary:
Parse the command line arguments for the hyperparameter settings for the current trial.
Get the directory in which to save the model artifacts from the command line (--model-dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Download and preprocess the CIFAR10 dataset.
Build a CNN model.
The learning rate and decay hyperparameter values are used during the compile of the model.
A definition of a callback HPTCallback which obtains the validation accuracy at the end of each epoch (on_epoch_end()) and reports it to the hyperparameter tuning service using hpt.report_hyperparameter_tuning_metric().
Train the model with the fit() method and specify a callback which will report the validation accuracy back to the hyperparameter tuning service.
|
%%writefile custom/trainer/task.py
# Custom Job for CIFAR10
import tensorflow_datasets as tfds
import tensorflow as tf
from hypertune import HyperTune
import argparse
import os
import sys
# Command Line arguments
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--decay', dest='decay',
default=0.98, type=float,
help='Decay rate')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
# Scaling CIFAR-10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
# Download the dataset
datasets = tfds.load(name='cifar10', as_supervised=True)
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = datasets['train'].map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = datasets['test'].map(scale).batch(BATCH_SIZE)
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr, decay=args.decay),
metrics=['accuracy'])
return model
model = build_and_compile_cnn_model()
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_accuracy',
metric_value=logs['val_accuracy'],
global_step=epoch)
# Train the model
model.fit(train_dataset, epochs=args.epochs, steps_per_epoch=args.steps,
          validation_data=test_dataset.take(8), callbacks=[HPTCallback()])
model.save(args.model_dir)
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_image_classification.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
The first thing to understand when working with pandas dataframes in HoloViews is how data is indexed. Pandas dataframes are structured as tables with any number of columns and indexes. HoloViews, on the other hand, deals with Dimensions. HoloViews container objects such as HoloMap, NdLayout, GridSpace and NdOverlay have kdims, which provide metadata about the data along that dimension and how they can be sliced. Element objects, on the other hand, have both key dimensions (kdims) and value dimensions (vdims). The kdims of a HoloViews datastructure represent the position, bin or category along a particular dimension, while the value dimensions usually represent some continuous variable.
Let's start by constructing a Pandas dataframe of a few columns and display it as its HTML format (throughout this notebook we will visualize the dataframes using the IPython HTML display function, to allow this notebook to be tested automatically, but in ordinary work you can visualize dataframes directly without this mechanism).
|
df = pd.DataFrame({'a':[1,2,3,4], 'b':[4,5,6,7], 'c':[8, 9, 10, 11]})
HTML(df.to_html())
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
Now that we have a basic dataframe, we can wrap it in the HoloViews Table Element:
|
example = hv.Table(df)
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
The data on the Table Element is accessible via the .data attribute like on all other Elements.
|
list(example.data.columns)
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
As you can see, we now have a Table, which has a and b as its kdims and c as its value dimension. Because it is not needed by HoloViews, the index of the original dataframe was dropped, but if the indexes are meaningful you can make them available as a column using the .reset_index method on the pandas dataframe:
|
HTML(df.reset_index().to_html())
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
Now we can employ the HoloViews slicing semantics to select the desired subset of the data and use the usual compositing + operator to lay the data out side by side:
|
example[:, 4:8:2] + example[2:5:2, :]
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
Dropping and reducing columns
The above was the simple case: we converted all the dataframe columns to a Table object. Where pandas excels, however, is making a large set of data available in a form that makes selection easy. This time, let's only select a subset of the Dimensions.
|
example.to.scatter('a', 'b', [])
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
As you can see, HoloViews simply ignored the remaining Dimension. By default, the conversion functions ignore any numeric unselected Dimensions. All non-numeric Dimensions are converted to Dimensions on the returned HoloMap, however. Both of these behaviors can be overridden by supplying explicit map dimensions and/or a reduce_fn.
You can perform this conversion with any type and lay your results out side by side, making it easy to look at the same dataset in any number of ways.
|
%%opts Curve [xticks=3 yticks=3]
example.to.curve('a', 'b', []) + example
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
Finally, we can convert all homogeneous HoloViews types (i.e. anything except Layout and Overlay) back to a pandas dataframe using the dframe method.
|
HTML(example.dframe().to_html())
|
doc/Tutorials/Pandas_Conversion.ipynb
|
vascotenner/holoviews
|
bsd-3-clause
|
As the name suggests, boundary_conditions.py contains the information about the boundary conditions for the setup considered. While the current problem in consideration is for periodic boundary conditions, $\texttt{Bolt}$ also supports Dirichlet, mirror, and shearing box boundary conditions. The setups for these boundary conditions can be found in other example problems.
|
!cat domain.py
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
domain.py contains data about the phase space domain and resolution that has been considered. Note that we've taken the number of grid points along q2 as 3 although it's a 1D problem. It must be ensured that the number of grid zones in q1 and q2 is greater than or equal to the number of ghost zones taken in q space. This is due to an internal restriction placed on us by one of the libraries we use for parallelization. Additionally, we've taken the domain zones and sizes for p2 and p3 such that dp2 and dp3 come out to be one. This way the integral measure dp1 dp2 dp3 reduces to dp1, which is then used for moment computations.
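The arithmetic behind this choice of measure can be sketched with illustrative numbers (these are not the actual domain.py settings):

```python
def grid_spacing(p_min, p_max, n_zones):
    # uniform grid spacing along one momentum dimension
    return (p_max - p_min) / n_zones

# Choose the p2/p3 extents and zone counts so that dp2 = dp3 = 1:
dp1 = grid_spacing(-10.0, 10.0, 32)
dp2 = grid_spacing(-0.5, 0.5, 1)
dp3 = grid_spacing(-0.5, 0.5, 1)

# The integral measure dp1 * dp2 * dp3 then reduces to dp1:
print(dp1 * dp2 * dp3 == dp1)  # True
```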
|
!cat params.py
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
Let's go over each of the attributes mentioned above to understand their usage. While some of these attributes are flags and options that need to be mentioned for every system being solved, there are a few attributes which are specific to the non-relativistic Boltzmann system being solved. The params module can be used to add attributes and functions which you intend to declare somewhere in src/:
Attributes Native To the Solver:
fields_type is used to declare what sort of fields are being solved in the problem of consideration. It can be set to electrostatic, where magnetic fields stay at zero; electrodynamic, where magnetic fields are also evolved; or user-defined, where the evolution of the fields in time is declared by the user (this is primarily used for debugging). The setup we've considered is an electrostatic one.
fields_initialize is used to declare the method used to initialize the values of the electromagnetic fields. Two methods are available for initializing electrostatic fields from the density: snes and fft. The fft method of initialization can only be used for serial runs with periodic boundary conditions. snes is a more versatile method capable of being run in parallel with other boundary conditions as well. It makes use of the SNES routines in PETSc, which use Krylov subspace methods to solve for the fields. Additionally, this can also be set to user-defined, where the initial conditions for the electric and magnetic fields are defined in terms of q1 and q2 under initialize.
fields_solver is used to declare the method used to evolve the fields in time. The same two methods, snes and fft, are available for computing electrostatic fields. The fdtd method is to be used when solving for electrodynamic systems.
solver_method_in_q and solver_method_in_p are used to set the specific solver method used in q-space and p-space which can be set to FVM or ASL for the finite volume method and the advective semi-lagrangian method.
reconstruction_method_in_q and reconstruction_method_in_p are used to set the specific reconstruction scheme used for the FVM method in q-space and p-space. This can be set to piecewise-constant, minmod, ppm and weno5.
riemann_solver_in_q and riemann_solver_in_p are used to set the specific Riemann solver which is to be used for the FVM method in q-space and p-space. This can be set to upwind-flux for the first order upwind flux method, and lax-friedrichs for the local Lax-Friedrichs method.
num_devices is used in parallel runs when run on nodes which contain more than a single accelerator. For instance when running on nodes which contain 4 GPUs each, this attribute is set to 4.
EM_fields_enabled is a solver flag which is used to indicate whether the case considered is one where we solve in p-space as well. Similarly, source_enabled is used to switch on and off the source term. For now, we have set both to False.
charge as the name suggests is used to assign charge to the species considered in the simulation. This is used internally in the fields solver. For this tutorial we've taken the charge of the particle to be -10 units. However this won't matter if EM_fields_enabled is set to False.
instantaneous_collisions is a flag which is turned on when we want to update the distribution function f to a distribution function array as returned by the source term. For instance in the case of the BGK operator we want to solve $\frac{d f}{d t} = -\frac{f - f_0}{\tau}$. But as $\tau \to 0$, $f = f_0$. For solving systems in the $\tau = 0$ regime this flag is turned to True. How this is carried out is explained further in the section that explains how the equation to be modelled is input.
Attributes Native To the System Solved:
p_dim is used to set the dimensionality considered in p-space. This becomes important especially for the definition of various physical quantities which vary as a function of the dimensionality and the moments. This would be used in the collision operator as we'll discuss further in this tutorial.
mass and boltzmann_constant are pretty explanatory from their name, and are used for initialization and defining the system solved.
tau is the collision timescale in the BGK operator and used in solving for the source part of the equation. This parameter would only make a difference in the simulation results if the switch source_enabled is set to True.
The remaining parameters as they are mentioned are used in the initialize module
|
!cat initialize.py
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
As you can see the initialize module contains the function initialize_f which initializes the distribution function using the parameters that were declared.
Now that we've setup the parameters for the specific test problem that we want to solve, we'll proceed to describe how we input the desired equation of our model into $\texttt{Bolt}$.
How the equation to be modelled is introduced into Bolt:
As one navigates from the root folder of this repository into the main folder for the package bolt, there are two separate subfolders, lib and src. While all the files in lib contain the solver algorithms and the structure for the solvers, src is where we introduce the models that we intend to solve. For instance, the files that we'll be using for this test problem can be found under bolt/src/nonrelativistic_boltzmann. First let's import all necessary modules.
|
import bolt.src.nonrelativistic_boltzmann.advection_terms as advection_terms
import bolt.src.nonrelativistic_boltzmann.collision_operator as collision_operator
import bolt.src.nonrelativistic_boltzmann.moments as moments
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
Let's start off by seeing how we've introduced the advection terms specific to the non-relativistic Boltzmann equation into the framework of $\texttt{Bolt}$. Advection terms are introduced into $\texttt{Bolt}$ through the advection_terms module, which has the functions A_q, C_q, A_p and C_p.
It is expected that A_q and C_q take the arguments (f, t, q1, q2, v1, v2, v3, params), where f is the distribution function, t is the time elapsed, and (q1, q2, v1, v2, v3) are phase space grid data for the position space and velocity space respectively. Additionally, each also accepts a module params which can contain user-defined attributes and functions that can be injected into the function.
While A_p and C_p take all the arguments that are taken by A_q and C_q, they also take the additional argument of a fields_solver object. The get_fields() method of this object returns the electromagnetic fields at the current instance. The fields_solver objects are internal to the solvers, and can be chosen as electrostatic or electrodynamic as we've seen in the parameters above.
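A plausible sketch of such a signature is below. For the nonrelativistic equation the position-space advection velocity is simply the particle velocity; the actual file may differ in detail:

```python
def A_q(f, t, q1, q2, v1, v2, v3, params):
    """Position-space advection terms (dq1/dt, dq2/dt).

    For the nonrelativistic Boltzmann equation these are just the
    particle velocities along q1 and q2.
    """
    return (v1, v2)

print(A_q(None, 0.0, 0.0, 0.0, 1.5, -2.5, 0.0, None))  # (1.5, -2.5)
```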
|
!cat $BOLT_HOME/bolt/src/nonrelativistic_boltzmann/advection_terms.py
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
We hope the model described is quite clear from the docstrings. Note that we describe the model in terms of the velocity variables v and not the canonical variables p, to avoid confusion with momentum.
Next, we proceed to see how we define the moments for our system of interest.
|
!cat $BOLT_HOME/bolt/src/nonrelativistic_boltzmann/moments.py
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
As you can see, all the moment quantities take (f, v1, v2, v3, integral_measure) as arguments, in terms of which we define the moments for the system. By default, integral_measure is taken to be dv1 dv2 dv3. These definitions are referred to by the solver routine compute_moments, which is passed the name of the appropriate moment routine as a string. For instance, if we want to compute density at the current state, calling compute_moments('density') gets the job done.
It's to be noted that when fields are enabled in the problem of consideration, density, mom_v1_bulk, mom_v2_bulk and mom_v3_bulk must be defined, since these are used internally when solving for electromagnetic fields.
NOTE: Density is number density here
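A minimal sketch of a moment defined with this signature is the zeroth moment below; the real module operates on ArrayFire arrays, so this pure-Python form is only illustrative:

```python
def density(f, v1, v2, v3, integral_measure):
    # zeroth moment: sum f over the velocity grid, weighted by the measure
    return sum(fi * integral_measure for fi in f)

f = [0.25] * 8  # uniform distribution over 8 velocity zones
print(density(f, None, None, None, integral_measure=0.5))  # 1.0
```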
Now we proceed to the final information regarding our equation which is the source term which in our case is the BGK collision operator.
|
!cat $BOLT_HOME/bolt/src/nonrelativistic_boltzmann/collision_operator.py
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
Here the BGK function is our source term, and takes the arguments (f, t, q1, q2, v1, v2, v3, moments, params, flag). Note that moments is the solver routine compute_moments, which is used to compute the moments at the current instance of evaluating the collision operator.
In parameters, we had defined an attribute instantaneous_collisions which, when set to True, returns the value that is returned by the source function when flag is set to True. Above we had mentioned how this may be necessary in our model when solving for purely collisional cases (the hydrodynamic regime).
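This limiting behaviour can be sketched with a scalar stand-in for the BGK relaxation (the real operator acts on ArrayFire arrays):

```python
def bgk_step(f, f0, tau, dt):
    """Explicit update for df/dt = -(f - f0)/tau.

    In the instantaneous (tau -> 0) limit, the distribution function
    is simply replaced by the local equilibrium f0.
    """
    if tau == 0:
        return f0
    return f - dt * (f - f0) / tau

print(bgk_step(2.0, 1.0, tau=0.5, dt=0.1))  # 1.8
print(bgk_step(2.0, 1.0, tau=0.0, dt=0.1))  # 1.0
```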
We'll start by importing dependencies for the solver.
ArrayFire is a general-purpose library that simplifies the process of developing software that targets parallel and massively-parallel architectures including CPUs, GPUs, and other hardware acceleration devices. $\texttt{Bolt}$ uses its Python API for the creation and manipulation of arrays, which allows us to run our code on a range of devices at optimal speed.
We use NumPy for declaring the time data and storing data which we intend to plot, and pylab (matplotlib) for post-processing.
|
# Importing dependencies:
import arrayfire as af
import numpy as np
import pylab as pl
%matplotlib inline
# Importing the classes which are used to declare the physical_system and solver objects
from bolt.lib.physical_system import physical_system
from bolt.lib.nonlinear.nonlinear_solver import nonlinear_solver
from bolt.lib.linear.linear_solver import linear_solver
# Optimized plot parameters to make beautiful plots:
pl.rcParams['figure.figsize'] = 12, 7.5
pl.rcParams['figure.dpi'] = 300
pl.rcParams['image.cmap'] = 'jet'
pl.rcParams['lines.linewidth'] = 1.5
pl.rcParams['font.family'] = 'serif'
pl.rcParams['font.weight'] = 'bold'
pl.rcParams['font.size'] = 20
pl.rcParams['font.sans-serif'] = 'serif'
pl.rcParams['text.usetex'] = True
pl.rcParams['axes.linewidth'] = 1.5
pl.rcParams['axes.titlesize'] = 'medium'
pl.rcParams['axes.labelsize'] = 'medium'
pl.rcParams['xtick.major.size'] = 8
pl.rcParams['xtick.minor.size'] = 4
pl.rcParams['xtick.major.pad'] = 8
pl.rcParams['xtick.minor.pad'] = 8
pl.rcParams['xtick.color'] = 'k'
pl.rcParams['xtick.labelsize'] = 'medium'
pl.rcParams['xtick.direction'] = 'in'
pl.rcParams['ytick.major.size'] = 8
pl.rcParams['ytick.minor.size'] = 4
pl.rcParams['ytick.major.pad'] = 8
pl.rcParams['ytick.minor.pad'] = 8
pl.rcParams['ytick.color'] = 'k'
pl.rcParams['ytick.labelsize'] = 'medium'
pl.rcParams['ytick.direction'] = 'in'
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
We define the system we want to solve through the physical_system class. This class takes its arguments as (domain, boundary_conditions, params, initialize, advection_terms, source_function, moments), which we had explored above. The declared object is then passed to the linear and nonlinear solver objects to provide information about the system solved.
|
# Defining the physical system to be solved:
system = physical_system(domain,
boundary_conditions,
params,
initialize,
advection_terms,
collision_operator.BGK,
moments
)
N_g_q = system.N_ghost_q
# Declaring a linear system object which will evolve the defined physical system:
nls = nonlinear_solver(system)
ls = linear_solver(system)
# Time parameters:
dt = 0.001
t_final = 0.5
time_array = np.arange(0, t_final + dt, dt)
rho_data_nls = np.zeros(time_array.size)
rho_data_ls = np.zeros(time_array.size)
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
The default data format in $\texttt{Bolt}$ is (Np, Ns, N_q1, N_q2), where Np is the number of zones in p-space, Ns is the number of species and N_q1 and N_q2 are the number of grid zones along q1 and q2 respectively.
Below, since we want to obtain the amplitude for the density in the physical domain non-inclusive of the ghost zones, we use max(density[:, :, N_g:-N_g, N_g:-N_g]).
|
# Storing data at time t = 0:
n_nls = nls.compute_moments('density')
# Check for the data non inclusive of the ghost zones:
rho_data_nls[0] = af.max(n_nls[:, :, N_g_q:-N_g_q, N_g_q:-N_g_q])
n_ls = ls.compute_moments('density')
rho_data_ls[0] = af.max(n_ls)
for time_index, t0 in enumerate(time_array[1:]):
nls.strang_timestep(dt)
ls.RK4_timestep(dt)
n_nls = nls.compute_moments('density')
rho_data_nls[time_index + 1] = af.max(n_nls[:, :, N_g_q:-N_g_q, N_g_q:-N_g_q])
n_ls = ls.compute_moments('density')
rho_data_ls[time_index + 1] = af.max(n_ls)
pl.plot(time_array, rho_data_nls, label='Nonlinear Solver')
pl.plot(time_array, rho_data_ls, '--', color = 'black', label = 'Linear Solver')
pl.ylabel(r'MAX($\rho$)')
pl.xlabel('Time')
pl.legend()
|
example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb
|
ShyamSS-95/Bolt
|
gpl-3.0
|
We have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique.
The EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature.
Linearizing the Kalman Filter
The Kalman filter uses linear equations, so it does not work with nonlinear problems. Problems can be nonlinear in two ways. First, the process model might be nonlinear. An object falling through the atmosphere encounters drag which reduces its acceleration. The drag coefficient varies based on the velocity of the object. The resulting behavior is nonlinear - it cannot be modeled with linear equations. Second, the measurements could be nonlinear. For example, a radar gives a range and bearing to a target. We use trigonometry, which is nonlinear, to compute the position of the target.
For the linear filter we have these equations for the process and measurement models:
$$\begin{aligned}\dot{\mathbf x} &= \mathbf{Ax} + w_x\\
\mathbf z &= \mathbf{Hx} + w_z
\end{aligned}$$
Where $\mathbf A$ is the system dynamics matrix. Using the state space methods covered in the Kalman Filter Math chapter these equations can be transformed into
$$\begin{aligned}\bar{\mathbf x} &= \mathbf{Fx} \\
\mathbf z &= \mathbf{Hx}
\end{aligned}$$
where $\mathbf F$ is the fundamental matrix. The noise terms $w_x$ and $w_z$ are incorporated into the matrices $\mathbf R$ and $\mathbf Q$. This form of the equations allows us to compute the state at step $k$ given a measurement at step $k$ and the state estimate at step $k-1$. In earlier chapters I built your intuition and minimized the math by using problems describable with Newton's equations. We know how to design $\mathbf F$ based on high school physics.
For the nonlinear model the linear expression $\mathbf{Fx} + \mathbf{Bu}$ is replaced by a nonlinear function $f(\mathbf x, \mathbf u)$, and the linear expression $\mathbf{Hx}$ is replaced by a nonlinear function $h(\mathbf x)$:
$$\begin{aligned}\dot{\mathbf x} &= f(\mathbf x, \mathbf u) + w_x\\
\mathbf z &= h(\mathbf x) + w_z
\end{aligned}$$
You might imagine that we could proceed by finding a new set of Kalman filter equations that optimally solve these equations. But if you remember the charts in the Nonlinear Filtering chapter you'll recall that passing a Gaussian through a nonlinear function results in a probability distribution that is no longer Gaussian. So this will not work.
The EKF does not alter the Kalman filter's linear equations. Instead, it linearizes the nonlinear equations at the point of the current estimate, and uses this linearization in the linear Kalman filter.
Linearize means what it sounds like. We find a line that most closely matches the curve at a defined point. The graph below linearizes the parabola $f(x)=x^2−2x$ at $x=1.5$.
|
import kf_book.ekf_internal as ekf_internal
ekf_internal.show_linearization()
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
If the curve above is the process model, then the dotted lines shows the linearization of that curve for the estimate $x=1.5$.
We linearize systems by taking the derivative, which finds the slope of a curve:
$$\begin{aligned}
f(x) &= x^2 -2x \\
\frac{df}{dx} &= 2x - 2
\end{aligned}$$
and then evaluating it at $x$:
$$\begin{aligned}m &= f'(x=1.5) \\ &= 2(1.5) - 2 \\ &= 1\end{aligned}$$
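The calculation above is easy to check numerically. This small sketch (not from the book's support library) builds the tangent line at $x=1.5$ and confirms it touches the curve at the point of linearization:

```python
def f(x):
    return x**2 - 2*x

# analytic derivative f'(x) = 2x - 2, evaluated at the linearization point
x0 = 1.5
m = 2*x0 - 2            # slope of the tangent: 1.0

def linearized(x):
    """tangent-line approximation of f at x0"""
    return f(x0) + m*(x - x0)

print(m)                # 1.0
print(linearized(x0))   # equals f(1.5) = -0.75
```

Near $x_0$ the tangent line tracks the curve closely; farther away the approximation degrades, which is exactly the weakness of the EKF.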
Linearizing systems of differential equations is similar. We linearize $f(\mathbf x, \mathbf u)$ and $h(\mathbf x)$ by taking the partial derivatives of each to evaluate $\mathbf F$ and $\mathbf H$ at the point $\mathbf x_t$ and $\mathbf u_t$. We call the partial derivative of a matrix the Jacobian. This gives us the discrete state transition matrix and measurement model matrix:
$$
\begin{aligned}
\mathbf F
&= {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}} \\
\mathbf H &= \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
\end{aligned}
$$
This leads to the following equations for the EKF. I put boxes around the differences from the linear filter:
$$\begin{array}{l|l}
\text{linear Kalman filter} & \text{EKF} \\
\hline
& \boxed{\mathbf F = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}}} \\
\mathbf{\bar x} = \mathbf{Fx} + \mathbf{Bu} & \boxed{\mathbf{\bar x} = f(\mathbf x, \mathbf u)} \\
\mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q & \mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q \\
\hline
& \boxed{\mathbf H = \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}} \\
\mathbf y = \mathbf z - \mathbf{H \bar{x}} & \mathbf y = \mathbf z - \boxed{h(\bar{\mathbf x})}\\
\mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} & \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\
\mathbf x=\mathbf{\bar{x}} +\mathbf{Ky} & \mathbf x=\mathbf{\bar{x}} +\mathbf{Ky} \\
\mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} & \mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}}
\end{array}$$
We don't normally use $\mathbf{Fx}$ to propagate the state for the EKF as the linearization causes inaccuracies. It is typical to compute $\bar{\mathbf x}$ using a suitable numerical integration technique such as Euler or Runge Kutta. Thus I wrote $\mathbf{\bar x} = f(\mathbf x, \mathbf u)$. For the same reasons we don't use $\mathbf{H\bar{x}}$ in the computation for the residual, opting for the more accurate $h(\bar{\mathbf x})$.
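To illustrate propagating a state through a nonlinear model with numerical integration rather than a linearized $\mathbf{Fx}$, here is a minimal Euler-integration sketch. The dynamics are hypothetical (a fall with velocity-squared drag, not the radar problem of the next section):

```python
import numpy as np

def f(x):
    """hypothetical nonlinear dynamics: falling with velocity-squared drag"""
    pos, vel = x
    drag = 0.01 * vel**2 * np.sign(vel)
    return np.array([vel, -9.8 - drag])   # d/dt of [pos, vel]

def euler_step(x, dt):
    # one Euler step: x(t + dt) ~= x(t) + f(x) * dt
    return x + f(x) * dt

x = np.array([1000., 0.])    # 1000 m up, at rest
dt = 0.01
for _ in range(100):         # integrate one second forward
    x = euler_step(x, dt)

# after ~1 s the velocity is slightly less than the drag-free -9.8 m/s
print(x)
```

Runge-Kutta would follow the same pattern but evaluate $f$ at several intermediate points per step, giving better accuracy for larger $\Delta t$.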
I think the easiest way to understand the EKF is to start off with an example. Later you may want to come back and reread this section.
Example: Tracking an Airplane
This example tracks an airplane using ground based radar. We implemented a UKF for this problem in the last chapter. Now we will implement an EKF for the same problem so we can compare both the filter performance and the level of effort required to implement the filter.
Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path reflects some of the signal back to the radar. By timing how long it takes for the reflected signal to return, the system can compute the slant distance - the straight-line distance from the radar installation to the object.
The relationship between the radar's slant range distance $r$ and elevation angle $\epsilon$ with the horizontal position $x$ and altitude $y$ of the aircraft is illustrated in the figure below:
|
ekf_internal.show_radar_chart()
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
This gives us the equalities:
$$\begin{aligned}
\epsilon &= \tan^{-1} \frac y x\\
r^2 &= x^2 + y^2
\end{aligned}$$
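As a quick numeric check of these relations, with made-up values for the slant range and elevation angle:

```python
import math

r = 1000.0               # slant range in meters (hypothetical)
eps = math.radians(30)   # elevation angle (hypothetical)

# position implied by the radar reading
x = r * math.cos(eps)    # horizontal distance
y = r * math.sin(eps)    # altitude

# recover the radar quantities from the position
print(math.atan2(y, x))  # equals eps
print(x**2 + y**2)       # equals r**2
```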
Design the State Variables
We want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, horizontal velocity, and altitude:
$$\mathbf x = \begin{bmatrix}\mathtt{distance} \\ \mathtt{velocity} \\ \mathtt{altitude}\end{bmatrix}= \begin{bmatrix}x \\ \dot x \\ y\end{bmatrix}$$
Design the Process Model
We assume a Newtonian, kinematic system for the aircraft. We've used this model in previous chapters, so by inspection you may recognize that we want
$$\mathbf F = \left[\begin{array}{cc|c} 1 & \Delta t & 0\\
0 & 1 & 0 \\ \hline
0 & 0 & 1\end{array}\right]$$
I've partitioned the matrix into blocks to show that the upper left block is a constant velocity model for $x$, and the lower right block is a constant position model for $y$.
However, let's practice finding these matrices. We model systems with a set of differential equations. We need an equation in the form
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{w}$$
where $\mathbf{w}$ is the system noise.
The variables $x$ and $y$ are independent so we can compute them separately. The differential equations for motion in one dimension are:
$$\begin{aligned}v &= \dot x \\
a &= \ddot{x} = 0\end{aligned}$$
Now we put the differential equations into state-space form. If this was a second or greater order differential system we would have to first reduce them to an equivalent set of first degree equations. The equations are first order, so we put them in state space matrix form as
$$\begin{aligned}\begin{bmatrix}\dot x \\ \ddot{x}\end{bmatrix} &= \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\
\dot x\end{bmatrix} \\ \dot{\mathbf x} &= \mathbf{Ax}\end{aligned}$$
where $\mathbf A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$.
Recall that $\mathbf A$ is the system dynamics matrix. It describes a set of linear differential equations. From it we must compute the state transition matrix $\mathbf F$. $\mathbf F$ describes a discrete set of linear equations which compute $\mathbf x$ for a discrete time step $\Delta t$.
A common way to compute $\mathbf F$ is to use the power series expansion of the matrix exponential:
$$\mathbf F(\Delta t) = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A \Delta t)^3}{3!} + ... $$
$\mathbf A^2 = \begin{bmatrix}0&0\\0&0\end{bmatrix}$, so all higher powers of $\mathbf A$ are also $\mathbf{0}$. Thus the power series expansion is:
$$
\begin{aligned}
\mathbf F &=\mathbf{I} + \mathbf A\Delta t + \mathbf{0} \\
&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\
\mathbf F &= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}
\end{aligned}$$
This is the same result used by the kinematic equations! This exercise was unnecessary other than to illustrate finding the state transition matrix from linear differential equations. We will conclude the chapter with an example that will require the use of this technique.
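We can confirm this numerically. The sketch below (not from the book) checks that the truncated power series and the closed form agree for an example $\Delta t$:

```python
import numpy as np

dt = 0.1
A = np.array([[0., 1.],
              [0., 0.]])

# A @ A is the zero matrix, so every term past the linear one vanishes
F_closed = np.eye(2) + A*dt

# truncated power series, keeping the (zero) quadratic term explicitly
F_series = np.eye(2) + A*dt + (A @ A) * dt**2 / 2

print(F_closed)   # [[1, 0.1], [0, 1]]
```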
Design the Measurement Model
The measurement function takes the state estimate of the prior $\bar{\mathbf x}$ and turns it into a measurement of the slant range distance. We use the Pythagorean theorem to derive:
$$h(\bar{\mathbf x}) = \sqrt{x^2 + y^2}$$
The relationship between the slant distance and the position on the ground is nonlinear due to the square root. We linearize it by evaluating its partial derivative at $\mathbf x_t$:
$$
\mathbf H = \frac{\partial{h(\bar{\mathbf x})}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
$$
The partial derivative of a matrix is called a Jacobian, and takes the form
$$\frac{\partial \mathbf H}{\partial \bar{\mathbf x}} =
\begin{bmatrix}
\frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} &\dots \\
\frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} &\dots \\
\vdots & \vdots
\end{bmatrix}
$$
In other words, each element in the matrix is the partial derivative of the function $h$ with respect to the $x$ variables. For our problem we have
$$\mathbf H = \begin{bmatrix}{\partial h}/{\partial x} & {\partial h}/{\partial \dot{x}} & {\partial h}/{\partial y}\end{bmatrix}$$
Solving each in turn:
$$\begin{aligned}
\frac{\partial h}{\partial x} &= \frac{\partial}{\partial x} \sqrt{x^2 + y^2} \\
&= \frac{x}{\sqrt{x^2 + y^2}}
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial \dot{x}} &=
\frac{\partial}{\partial \dot{x}} \sqrt{x^2 + y^2} \\
&= 0
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial y} &= \frac{\partial}{\partial y} \sqrt{x^2 + y^2} \\
&= \frac{y}{\sqrt{x^2 + y^2}}
\end{aligned}$$
giving us
$$\mathbf H =
\begin{bmatrix}
\frac{x}{\sqrt{x^2 + y^2}} &
0 &
\frac{y}{\sqrt{x^2 + y^2}}
\end{bmatrix}$$
This may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\mathbf H$. As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\mathbf x$ so we need to take the derivative of the slant range with respect to $\mathbf x$. For the linear Kalman filter $\mathbf H$ was a constant that we computed prior to running the filter. For the EKF $\mathbf H$ is updated at each step as the evaluation point $\bar{\mathbf x}$ changes at each epoch.
To make this more concrete, let's now write a Python function that computes the Jacobian of $h$ for this problem.
|
from math import sqrt
from numpy import array

def HJacobian_at(x):
    """ compute Jacobian of H matrix at x """
    horiz_dist = x[0]
    altitude = x[2]
    denom = sqrt(horiz_dist**2 + altitude**2)
    return array([[horiz_dist/denom, 0., altitude/denom]])
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Finally, let's provide the code for $h(\bar{\mathbf x})$:
|
def hx(x):
    """ compute measurement for slant range that
    would correspond to state x.
    """
    return (x[0]**2 + x[2]**2) ** 0.5
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
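A useful habit is to check an analytic Jacobian against a finite-difference approximation. Restating both functions so the sketch is self-contained:

```python
import numpy as np

def hx(x):
    """slant range for state [position, velocity, altitude]"""
    return (x[0]**2 + x[2]**2) ** 0.5

def HJacobian_at(x):
    """analytic Jacobian of the slant-range measurement"""
    horiz_dist, altitude = x[0], x[2]
    denom = np.sqrt(horiz_dist**2 + altitude**2)
    return np.array([[horiz_dist/denom, 0., altitude/denom]])

x = np.array([1000., 100., 1500.])
H = HJacobian_at(x)

# central finite differences approximate each partial derivative
eps = 1e-5
H_num = np.zeros((1, 3))
for i in range(3):
    dx = np.zeros(3)
    dx[i] = eps
    H_num[0, i] = (hx(x + dx) - hx(x - dx)) / (2*eps)

print(H)       # analytic and numeric Jacobians should agree closely
print(H_num)
```

If the two disagree, the derivation (or the code) has a bug; this catches sign errors long before the filter mysteriously diverges.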
Now let's write a simulation for our radar.
|
from numpy.random import randn
import math

class RadarSim(object):
    """ Simulates the radar signal returns from an object
    flying at a constant altitude and velocity in 1D.
    """
    def __init__(self, dt, pos, vel, alt):
        self.pos = pos
        self.vel = vel
        self.alt = alt
        self.dt = dt

    def get_range(self):
        """ Returns slant range to the object. Call once
        for each new measurement at dt time from last call.
        """
        # add some process noise to the system
        self.vel = self.vel + .1*randn()
        self.alt = self.alt + .1*randn()
        self.pos = self.pos + self.vel*self.dt

        # add measurement noise
        err = self.pos * 0.05*randn()
        slant_dist = math.sqrt(self.pos**2 + self.alt**2)
        return slant_dist + err
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Design Process and Measurement Noise
The radar measures the range to a target. We will use $\sigma_{range}= 5$ meters for the noise. This gives us
$$\mathbf R = \begin{bmatrix}\sigma_{range}^2\end{bmatrix} = \begin{bmatrix}25\end{bmatrix}$$
The design of $\mathbf Q$ requires some discussion. The state is $\mathbf x= \begin{bmatrix}x & \dot x & y\end{bmatrix}^\mathsf{T}$. The first two elements are position (down range distance) and velocity, so we can use Q_discrete_white_noise to compute the values for the upper left block of $\mathbf Q$. The third element of $\mathbf x$ is altitude, which we are assuming is independent of the down range distance. That leads us to a block design of $\mathbf Q$ of:
$$\mathbf Q = \begin{bmatrix}\mathbf Q_\mathtt{x} & 0 \\ 0 & \mathbf Q_\mathtt{y}\end{bmatrix}$$
Implementation
FilterPy provides the class ExtendedKalmanFilter. It works similarly to the KalmanFilter class we have been using, except that it allows you to provide a function that computes the Jacobian of $\mathbf H$ and the function $h(\mathbf x)$.
We start by importing the filter and creating it. The dimension of x is 3 and z has dimension 1.
```python
from filterpy.kalman import ExtendedKalmanFilter

rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
```
We create the radar simulator:
```python
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
```
We will initialize the filter near the airplane's actual position:
```python
rk.x = array([radar.pos, radar.vel-10, radar.alt+100])
```
We assign the system matrix using the first term of the Taylor series expansion we computed above:
```python
dt = 0.05
rk.F = eye(3) + array([[0, 1, 0],
                       [0, 0, 0],
                       [0, 0, 0]])*dt
```
After assigning reasonable values to $\mathbf R$, $\mathbf Q$, and $\mathbf P$ we can run the filter with a simple loop. We pass the functions for computing the Jacobian of $\mathbf H$ and $h(x)$ into the update method.
```python
for i in range(int(20/dt)):
    z = radar.get_range()
    rk.update(array([z]), HJacobian_at, hx)
    rk.predict()
```
Adding some boilerplate code to save and plot the results we get:
|
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import ExtendedKalmanFilter
from numpy import eye, array, asarray
import numpy as np

dt = 0.05
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)

# make an imperfect starting guess
rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])

rk.F = eye(3) + array([[0, 1, 0],
                       [0, 0, 0],
                       [0, 0, 0]]) * dt

range_std = 5.  # meters
rk.R = np.diag([range_std**2])
rk.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1)
rk.Q[2, 2] = 0.1
rk.P *= 50

xs, track = [], []
for i in range(int(20/dt)):
    z = radar.get_range()
    track.append((radar.pos, radar.vel, radar.alt))

    rk.update(array([z]), HJacobian_at, hx)
    xs.append(rk.x)
    rk.predict()

xs = asarray(xs)
track = asarray(track)
time = np.arange(0, len(xs)*dt, dt)
ekf_internal.plot_radar(xs, track, time)
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Using SymPy to compute Jacobians
Depending on your experience with derivatives you may have found the computation of the Jacobian difficult. Even if you found it easy, a slightly more difficult problem easily leads to very difficult computations.
As explained in Appendix A, we can use the SymPy package to compute the Jacobian for us.
|
import sympy
sympy.init_printing(use_latex=True)
x, x_vel, y = sympy.symbols('x, x_vel, y')
H = sympy.Matrix([sympy.sqrt(x**2 + y**2)])
state = sympy.Matrix([x, x_vel, y])
H.jacobian(state)
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
This result is the same as the result we computed above, and with much less effort on our part!
Robot Localization
It's time to try a real problem. I warn you that this section is difficult. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to solve a real world problem.
We will consider the problem of robot localization. We already implemented this in the Unscented Kalman Filter chapter, and I recommend you read it now if you haven't already. In this scenario we have a robot that is moving through a landscape using a sensor to detect landmarks. This could be a self driving car using computer vision to identify trees, buildings, and other landmarks. It might be one of those small robots that vacuum your house, or a robot in a warehouse.
The robot has 4 wheels in the same configuration used by automobiles. It maneuvers by pivoting the front wheels. This causes the robot to pivot around the rear axle while moving forward. This is nonlinear behavior which we will have to model.
The robot has a sensor that measures the range and bearing to known targets in the landscape. This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry.
Both the process model and measurement models are nonlinear. The EKF accommodates both, so we provisionally conclude that the EKF is a viable choice for this problem.
Robot Motion Model
At a first approximation an automobile steers by pivoting the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modeling steering requires a complicated set of differential equations.
For lower speed robotic applications a simpler bicycle model has been found to perform well. This is a depiction of the model:
|
ekf_internal.plot_bicycle()
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
In the Unscented Kalman Filter chapter we derived these equations:
$$\begin{aligned}
\beta &= \frac d w \tan(\alpha) \\
x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\
y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\
\theta &= \theta + \beta
\end{aligned}
$$
where $\theta$ is the robot's heading.
You do not need to understand this model in detail if you are not interested in steering models. The important thing to recognize is that our motion model is nonlinear, and we will need to deal with that with our Kalman filter.
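To get a feel for the equations, here is one step evaluated numerically with made-up parameters (the values are illustrative, not from the book). For a short step the displacement is nearly the straight-line distance $d$:

```python
import numpy as np

# one step of the bicycle model with hypothetical parameters:
# wheelbase w, speed v, steering angle alpha, heading theta, time step dt
w, v, alpha, theta, dt = 0.5, 1.0, np.radians(10), 0.3, 0.1

d = v * dt                       # distance travelled
beta = (d / w) * np.tan(alpha)   # change in heading
R = w / np.tan(alpha)            # turning radius

dx = -R*np.sin(theta) + R*np.sin(theta + beta)
dy = R*np.cos(theta) - R*np.cos(theta + beta)

print(beta)                # small positive heading change
print(np.hypot(dx, dy))    # chord length, close to d = 0.1
```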
Design the State Variables
For our filter we will maintain the position $x,y$ and orientation $\theta$ of the robot:
$$\mathbf x = \begin{bmatrix}x \\ y \\ \theta\end{bmatrix}$$
Our control input $\mathbf u$ is the velocity $v$ and steering angle $\alpha$:
$$\mathbf u = \begin{bmatrix}v \\ \alpha\end{bmatrix}$$
Design the System Model
We model our system as a nonlinear motion model plus noise.
$$\bar x = f(x, u) + \mathcal{N}(0, Q)$$
Using the motion model for a robot that we created above, we can expand this to
$$\bar{\begin{bmatrix}x\\y\\\theta\end{bmatrix}} = \begin{bmatrix}x\\y\\\theta\end{bmatrix} +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}$$
We find $\mathbf F$ by taking the Jacobian of $f(x,u)$.
$$\mathbf F = \frac{\partial f(x, u)}{\partial x} =\begin{bmatrix}
\frac{\partial f_1}{\partial x} &
\frac{\partial f_1}{\partial y} &
\frac{\partial f_1}{\partial \theta}\\
\frac{\partial f_2}{\partial x} &
\frac{\partial f_2}{\partial y} &
\frac{\partial f_2}{\partial \theta} \\
\frac{\partial f_3}{\partial x} &
\frac{\partial f_3}{\partial y} &
\frac{\partial f_3}{\partial \theta}
\end{bmatrix}
$$
When we calculate these we get
$$\mathbf F = \begin{bmatrix}
1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\
0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\
0 & 0 & 1
\end{bmatrix}$$
We can double check our work with SymPy.
|
import sympy
from sympy.abc import alpha, x, y, v, w, R, theta
from sympy import symbols, Matrix
sympy.init_printing(use_latex="mathjax", fontsize='16pt')
time = symbols('t')
d = v*time
beta = (d/w)*sympy.tan(alpha)
r = w/sympy.tan(alpha)
fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)],
[theta+beta]])
F = fxu.jacobian(Matrix([x, y, theta]))
F
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
That looks a bit complicated. We can use SymPy to substitute terms:
|
# reduce common expressions
B, R = symbols('beta, R')
F = F.subs((d/w)*sympy.tan(alpha), B)
F.subs(w/sympy.tan(alpha), R)
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
This form verifies that the computation of the Jacobian is correct.
Now we can turn our attention to the noise. Here, the noise is in our control input, so it is in control space. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system.
$$\mathbf{M} = \begin{bmatrix}\sigma_{vel}^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}$$
If this was a linear problem we would convert from control space to state space using the by now familiar $\mathbf{FMF}^\mathsf T$ form. Since our motion model is nonlinear we do not try to find a closed form solution to this, but instead linearize it with a Jacobian which we will name $\mathbf{V}$.
$$\mathbf{V} = \frac{\partial f(x, u)}{\partial u} = \begin{bmatrix}
\frac{\partial f_1}{\partial v} & \frac{\partial f_1}{\partial \alpha} \\
\frac{\partial f_2}{\partial v} & \frac{\partial f_2}{\partial \alpha} \\
\frac{\partial f_3}{\partial v} & \frac{\partial f_3}{\partial \alpha}
\end{bmatrix}$$
These partial derivatives become very difficult to work with. Let's compute them with SymPy.
|
V = fxu.jacobian(Matrix([v, alpha]))
V = V.subs(sympy.tan(alpha)/w, 1/R)
V = V.subs(time*v/R, B)
V = V.subs(time*v, 'd')
V
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
This should give you an appreciation of how quickly the EKF becomes mathematically intractable.
This gives us the final form of our prediction equations:
$$\begin{aligned}
\mathbf{\bar x} &= \mathbf x +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}\\
\mathbf{\bar P} &=\mathbf{FPF}^{\mathsf T} + \mathbf{VMV}^{\mathsf T}
\end{aligned}$$
This form of linearization is not the only way to predict $\mathbf x$. For example, we could use a numerical integration technique such as Runge Kutta to compute the movement
of the robot. This will be required if the time step is relatively large. Things are not as cut and dried with the EKF as for the Kalman filter. For a real problem you have to carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns.
Design the Measurement Model
The robot's sensor provides a noisy bearing and range measurement to multiple known locations in the landscape. The measurement model must convert the state $\begin{bmatrix}x & y&\theta\end{bmatrix}^\mathsf T$ into a range and bearing to the landmark. If $\mathbf p$
is the position of a landmark, the range $r$ is
$$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$
The sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so:
$$\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$
Thus our measurement model $h$ is
$$\begin{aligned}
\mathbf z& = h(\bar{\mathbf x}, \mathbf p) &+ \mathcal{N}(0, R)\\
&= \begin{bmatrix}
\sqrt{(p_x - x)^2 + (p_y - y)^2} \\
\arctan(\frac{p_y - y}{p_x - x}) - \theta
\end{bmatrix} &+ \mathcal{N}(0, R)
\end{aligned}$$
This is clearly nonlinear, so we need to linearize $h$ at $\mathbf x$ by taking its Jacobian. We compute that with SymPy below.
|
px, py = symbols('p_x, p_y')
z = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)],
[sympy.atan2(py-y, px-x) - theta]])
z.jacobian(Matrix([x, y, theta]))
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Now we need to write that as a Python function. For example we might write:
|
from math import sqrt

def H_of(x, landmark_pos):
    """ compute Jacobian of H matrix where h(x) computes
    the range and bearing to a landmark for state x """

    px = landmark_pos[0]
    py = landmark_pos[1]
    hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2
    dist = sqrt(hyp)

    H = array(
        [[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0],
         [ (py - x[1, 0]) / hyp,  -(px - x[0, 0]) / hyp, -1]])
    return H
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
We also need to define a function that converts the system state into a measurement.
|
from math import atan2

def Hx(x, landmark_pos):
    """ takes a state variable and returns the measurement
    that would correspond to that state.
    """
    px = landmark_pos[0]
    py = landmark_pos[1]
    dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2)

    Hx = array([[dist],
                [atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]])
    return Hx
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Design Measurement Noise
It is reasonable to assume that the range and bearing measurement noises are independent, hence
$$\mathbf R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$
Implementation
We will use FilterPy's ExtendedKalmanFilter class to implement the filter. Its predict() method uses the standard linear equations for the process model. Ours is nonlinear, so we will have to override predict() with our own implementation. I'll want to also use this class to simulate the robot, so I'll add a method move() that computes the position of the robot which both predict() and my simulation can call.
The matrices for the prediction step are quite large. While writing this code I made several errors before I finally got it working. I only found my errors by using SymPy's evalf function. evalf evaluates a SymPy Matrix with specific values for the variables. I decided to demonstrate this technique to you, and used evalf in the Kalman filter code. You'll need to understand a couple of points.
First, evalf uses a dictionary to specify the values. For example, if your matrix contains an x and y, you can write
python
M.evalf(subs={x:3, y:17})
to evaluate the matrix for x=3 and y=17.
Second, evalf returns a sympy.Matrix object. Use numpy.array(M).astype(float) to convert it to a NumPy array. numpy.array(M) creates an array of type object, which is not what you want.
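A minimal illustration of both points, using a hypothetical two-element matrix rather than the filter's Jacobian:

```python
import sympy
import numpy as np

x, y = sympy.symbols('x y')
M = sympy.Matrix([[x + y, x*y]])

# evalf substitutes concrete values for the symbols...
M_val = M.evalf(subs={x: 3, y: 17})

# ...and astype(float) converts the sympy.Matrix to a numeric NumPy array
arr = np.array(M_val).astype(float)
print(arr)   # [[20. 51.]]
```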
Here is the code for the EKF:
|
from filterpy.kalman import ExtendedKalmanFilter as EKF
from numpy import dot, array, sqrt

class RobotEKF(EKF):
    def __init__(self, dt, wheelbase, std_vel, std_steer):
        EKF.__init__(self, 3, 2, 2)
        self.dt = dt
        self.wheelbase = wheelbase
        self.std_vel = std_vel
        self.std_steer = std_steer

        a, x, y, v, w, theta, time = symbols(
            'a, x, y, v, w, theta, t')
        d = v*time
        beta = (d/w)*sympy.tan(a)
        r = w/sympy.tan(a)

        self.fxu = Matrix(
            [[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)],
             [y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)],
             [theta+beta]])

        self.F_j = self.fxu.jacobian(Matrix([x, y, theta]))
        self.V_j = self.fxu.jacobian(Matrix([v, a]))

        # save dictionary and its variables for later use
        self.subs = {x: 0, y: 0, v: 0, a: 0,
                     time: dt, w: wheelbase, theta: 0}
        self.x_x, self.x_y = x, y
        self.v, self.a, self.theta = v, a, theta

    def predict(self, u=0):
        self.x = self.move(self.x, u, self.dt)

        self.subs[self.theta] = self.x[2, 0]
        self.subs[self.v] = u[0]
        self.subs[self.a] = u[1]

        F = array(self.F_j.evalf(subs=self.subs)).astype(float)
        V = array(self.V_j.evalf(subs=self.subs)).astype(float)

        # covariance of motion noise in control space
        M = array([[self.std_vel*u[0]**2, 0],
                   [0, self.std_steer**2]])

        self.P = dot(F, self.P).dot(F.T) + dot(V, M).dot(V.T)

    def move(self, x, u, dt):
        hdg = x[2, 0]
        vel = u[0]
        steering_angle = u[1]
        dist = vel * dt

        if abs(steering_angle) > 0.001:  # is robot turning?
            beta = (dist / self.wheelbase) * tan(steering_angle)
            r = self.wheelbase / tan(steering_angle)  # radius

            dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)],
                           [r*cos(hdg) - r*cos(hdg + beta)],
                           [beta]])
        else:  # moving in straight line
            dx = np.array([[dist*cos(hdg)],
                           [dist*sin(hdg)],
                           [0]])
        return x + dx
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Now we have another issue to handle. The residual is notionally computed as $y = z - h(x)$, but this will not work because our measurement contains an angle. Suppose z has a bearing of $1^\circ$ and $h(x)$ has a bearing of $359^\circ$. Naively subtracting them would yield an angular difference of $-358^\circ$, whereas the correct value is $2^\circ$. We have to write code to correctly compute the bearing residual.
|
def residual(a, b):
    """ compute residual (a-b) between measurements containing
    [range, bearing]. Bearing is normalized to [-pi, pi)"""
    y = a - b
    y[1] = y[1] % (2 * np.pi)    # force in range [0, 2 pi)
    if y[1] > np.pi:             # move to [-pi, pi)
        y[1] -= 2 * np.pi
    return y
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
The rest of the code runs the simulation and plots the results, and shouldn't need too much comment by now. I create a variable landmarks that contains the landmark coordinates. I update the simulated robot position 10 times a second, but run the EKF only once per second. This is for two reasons. First, we are not using Runge Kutta to integrate the differential equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed.
|
from filterpy.stats import plot_covariance_ellipse
from math import sqrt, tan, cos, sin, atan2
import matplotlib.pyplot as plt
dt = 1.0
def z_landmark(lmark, sim_pos, std_rng, std_brg):
x, y = sim_pos[0, 0], sim_pos[1, 0]
d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2)
a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0]
z = np.array([[d + randn()*std_rng],
[a + randn()*std_brg]])
return z
def ekf_update(ekf, z, landmark):
ekf.update(z, HJacobian=H_of, Hx=Hx,
residual=residual,
args=(landmark), hx_args=(landmark))
def run_localization(landmarks, std_vel, std_steer,
std_range, std_bearing,
step=10, ellipse_step=20, ylim=None):
ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel,
std_steer=std_steer)
ekf.x = array([[2, 6, .3]]).T # x, y, steer angle
ekf.P = np.diag([.1, .1, .1])
ekf.R = np.diag([std_range**2, std_bearing**2])
sim_pos = ekf.x.copy() # simulated position
# steering command (vel, steering angle radians)
u = array([1.1, .01])
plt.figure()
plt.scatter(landmarks[:, 0], landmarks[:, 1],
marker='s', s=60)
track = []
for i in range(200):
sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot
track.append(sim_pos)
if i % step == 0:
ekf.predict(u=u)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='k', alpha=0.3)
x, y = sim_pos[0, 0], sim_pos[1, 0]
for lmark in landmarks:
z = z_landmark(lmark, sim_pos,
std_range, std_bearing)
ekf_update(ekf, z, lmark)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='g', alpha=0.8)
track = np.array(track)
plt.plot(track[:, 0], track[:,1], color='k', lw=2)
plt.axis('equal')
plt.title("EKF Robot localization")
if ylim is not None: plt.ylim(*ylim)
plt.show()
return ekf
landmarks = array([[5, 10], [10, 5], [15, 15]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
print('Final P:', ekf.P.diagonal())
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
I have plotted the landmarks as solid squares. The path of the robot is drawn with a black line. The covariance ellipses for the predict step are light gray, and the covariances of the update are shown in green. To make them visible at this scale I have set the ellipse boundary at 6$\sigma$.
We can see that there is a lot of uncertainty added by our motion model, and that most of the error is in the direction of motion. We can determine that from the shape of the gray ellipses. After a few steps we can see that the filter incorporates the landmark measurements and the errors improve.
I used the same initial conditions and landmark locations in the UKF chapter. The UKF achieves much better accuracy in terms of the error ellipse. Both perform roughly equally well as far as their estimate for $\mathbf x$ is concerned.
Now let's add another landmark.
|
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
plt.show()
print('Final P:', ekf.P.diagonal())
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
The uncertainty in the estimates near the end of the track is smaller. We can see the effect that multiple landmarks have on our uncertainty by only using the first two landmarks.
|
ekf = run_localization(
landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
The estimate quickly diverges from the robot's path after passing the landmarks. The covariance also grows quickly. Let's see what happens with only one landmark:
|
ekf = run_localization(
landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
As you probably suspected, one landmark produces a very bad result. Conversely, a large number of landmarks allows us to make very accurate estimates.
|
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10],
[10,14], [23, 14], [25, 20], [10, 20]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1, ylim=(0, 21))
print('Final P:', ekf.P.diagonal())
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Discussion
I said that this was a real problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to simpler Jacobians. On the other hand, my model of the movement is also simplistic in several ways. First, it uses a bicycle model. A real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot responds instantaneously to the control input. Sebastian Thrun writes in Probabilistic Robotics that this simplified model is justified because the filters perform well when used to track real vehicles. The lesson here is that while you have to have a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the CPU time required to perform the linear algebra.
Another way in which this problem was simplistic is that we assumed that we knew the correspondence between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic.
UKF vs EKF
In the last chapter I used the UKF to solve this problem. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial despite a rudimentary motion model. A different problem could result in a Jacobian which is difficult or impossible to derive analytically. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model.
There are many cases where the Jacobian cannot be found analytically. The details are beyond the scope of this book, but you will have to use numerical methods to compute the Jacobian. That undertaking is not trivial, and you will spend a significant portion of a master's degree at a STEM school learning techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about the Navier-Stokes equations, but not much about modelling chemical reaction rates.
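As a small illustration (my own sketch, not part of the notebook), the simplest numerical approach is a central-difference approximation, evaluating the function at small perturbations of each state variable; the step size `eps` is a tuning choice:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m evaluated at x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        # perturb the i-th component in both directions
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

# sanity check: f(x) = [x0**2, x0*x1] has analytic Jacobian [[2*x0, 0], [x1, x0]]
f = lambda x: np.array([x[0]**2, x[0] * x[1]])
J = numerical_jacobian(f, [2.0, 3.0])  # analytic result: [[4, 0], [3, 2]]
print(J)
```

This is only a sketch; a production implementation would choose `eps` relative to the magnitude of each component and guard against noisy functions.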
So, UKFs are easy. Are they accurate? In practice they often perform better than the EKF. You can find plenty of research papers that prove that the UKF outperforms the EKF in various problem domains. It's not hard to understand why this would be true. The EKF works by linearizing the system model and measurement model at a single point, and the UKF uses $2n+1$ points.
Let's look at a specific example. Take $f(x) = x^3$ and pass a Gaussian distribution through it. I will compute an accurate answer using a Monte Carlo simulation. I generate 50,000 points randomly distributed according to the Gaussian, pass each through $f(x)$, then compute the mean and variance of the result.
The EKF linearizes the function by taking the derivative to find the slope at the evaluation point $x$. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.
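A quick numerical sketch of this comparison (my own illustration, not from the notebook): push Gaussian samples through $f(x) = x^3$ and compare the Monte Carlo mean and variance against the EKF-style linearized values. The linearization maps the mean through $f$ and scales the variance by the squared slope $f'(\mu) = 3\mu^2$.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 1.0, 0.3

# Monte Carlo "ground truth": pass 50,000 Gaussian samples through f
y = rng.normal(mu, sigma, 50_000) ** 3
mc_mean, mc_var = y.mean(), y.var()

# EKF-style linearization at x = mu
slope = 3 * mu**2
ekf_mean = mu**3               # the mean is simply mapped through f
ekf_var = slope**2 * sigma**2  # the variance scales by the squared slope

# analytically E[X^3] = mu^3 + 3*mu*sigma**2 = 1.27 here, so the
# linearized mean of 1.0 is visibly biased, and the variance is understated
print(mc_mean, ekf_mean)
print(mc_var, ekf_var)
```

The Monte Carlo estimate captures the skew that the cubic introduces; the single-point linearization cannot, which is exactly the inaccuracy the plot below shows.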
|
import kf_book.nonlinear_plots as nonlinear_plots
nonlinear_plots.plot_ekf_vs_mc()
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
The EKF computation is rather inaccurate. In contrast, here is the performance of the UKF:
|
nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/11-Extended-Kalman-Filters.ipynb
|
zaqwes8811/micro-apps
|
mit
|
Changing `value` changes the function's behavior as well
|
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
|
misc/function_map.ipynb
|
juditacs/snippets
|
lgpl-3.0
|
Solution 2 - function factory using closure
|
d = {'a': 'AB', 'b': 'C'}
def foo(value):
def bar(v):
return v in value
return bar
funcs = {}
for key, value in d.items():
funcs[key] = foo(value)
# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
|
misc/function_map.ipynb
|
juditacs/snippets
|
lgpl-3.0
|
Changing `value` doesn't affect the functions
|
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
|
misc/function_map.ipynb
|
juditacs/snippets
|
lgpl-3.0
|
Solution 3 - partial
|
from functools import partial
d = {'a': 'AB', 'b': 'C'}
funcs = {}
for key, value in d.items():
funcs[key] = partial(lambda v, value: v in value, value=value)
# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
|
misc/function_map.ipynb
|
juditacs/snippets
|
lgpl-3.0
|
Changing `value`
|
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
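For completeness, here is a fourth idiom (my addition, not in the original snippets): bind `value` through a default argument, which is evaluated once at definition time, so each function keeps its own copy.

```python
d = {'a': 'AB', 'b': 'C'}

funcs = {}
for key, value in d.items():
    # the default argument is evaluated when the lambda is defined,
    # so later rebinding of the loop variable has no effect
    funcs[key] = lambda v, value=value: v in value

print(funcs['a']('A'), funcs['a']('C'))  # True False
print(funcs['b']('C'))                   # True

value = "ABCD"           # rebinding the loop variable changes nothing
print(funcs['a']('C'))   # still False
```

This behaves like the closure and `partial` solutions, with the caveat that the bound value shows up as a callable keyword argument.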
|
misc/function_map.ipynb
|
juditacs/snippets
|
lgpl-3.0
|
When the document is ready (loaded and the DOM constructed), jQuery will register the click callback on the element with the `#flip` id.
|
%%writefile jquery_slide_toggle.html -a
<style>
#panel, #flip {
padding: 5px;
text-align: center;
background-color: #e5eecc;
border: solid 1px #c3c3c3;
}
#panel {
padding: 50px;
display: none;
}
</style>
%%writefile jquery_slide_toggle.html -a
<div id="flip">Click to slide the panel down or up</div>
<div id="panel">Hello world!</div>
HTML('./jquery_slide_toggle.html')
|
web/jquery_slide.ipynb
|
satishgoda/learning
|
mit
|
Note
I was first typing the entire HTML code in one cell and executing it.
But later I needed to make notes, so I came up with the %%writefile cell magic as a way to organize the HTML code.
|
%%html
<script>
$(document).ready(function(){
$("#flip0").click(function(){
$("#panel0").slideToggle("fast");
});
});
</script>
<style>
#panel0, #flip0 {
padding: 5px;
text-align: center;
background-color: #e5eecc;
border: solid 1px #c3c3c3;
}
#panel0 {
padding: 50px;
display: none;
}
</style>
<div id="flip0">Click to slide the panel down or up</div>
<div id="panel0">Hello world!</div>
|
web/jquery_slide.ipynb
|
satishgoda/learning
|
mit
|
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
|
tdr=train_dataset.reshape((-1, 28*28)).astype(np.float32)
print(train_dataset.shape)
print (tdr.shape)
print(train_labels)
print (type(train_labels))
print((train_labels[:,None]))
(np.arange(10) == train_labels[:,None]).astype(np.float32)
#np.arange(10)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
|
random/catfish/2_fullyconnected.ipynb
|
emsi/ml-toolbox
|
agpl-3.0
|
(Only if you are using a Colab notebook.) We need to authenticate to Google Cloud and create the service client. After running the cell below, a link will appear; click it and follow the instructions.
|
# ONLY RUN IF YOU ARE IN A COLAB NOTEBOOK.
from google.colab import auth
auth.authenticate_user()
|
examples/cloudml-bank-marketing/bank_marketing_classification_model.ipynb
|
GoogleCloudPlatform/professional-services
|
apache-2.0
|
Next, we need to set our project. Replace 'PROJECT_ID' with your GCP project ID.
|
%env GOOGLE_CLOUD_PROJECT=PROJECT_ID
!gcloud config set project $GOOGLE_CLOUD_PROJECT
|
examples/cloudml-bank-marketing/bank_marketing_classification_model.ipynb
|
GoogleCloudPlatform/professional-services
|
apache-2.0
|
Third, you need to activate some of the GCP services that we will be using. Run the following cell to enable the required APIs. This can also be done in the GUI via APIs & Services -> Enable APIs and Services.
|
!gcloud services enable ml.googleapis.com
!gcloud services enable bigquery-json.googleapis.com
|
examples/cloudml-bank-marketing/bank_marketing_classification_model.ipynb
|
GoogleCloudPlatform/professional-services
|
apache-2.0
|