Query the database system catalog to retrieve table metadata. You can verify that the table creation was successful by retrieving the list of all tables in your schema and checking whether the SCHOOLS table was created.
# type in your query to retrieve list of all tables in the database for your db2 schema (username)
coursera/databases_and_sql_for_data_science/DB0201EN-Week4-1-1-RealDataPractice-v3-py.ipynb
mohanprasath/Course-Work
gpl-3.0
Double-click here for a hint <!-- In Db2 the system catalog table called SYSCAT.TABLES contains the table metadata --> Double-click here for the solution. <!-- Solution: %sql select TABSCHEMA, TABNAME, CREATE_TIME from SYSCAT.TABLES where TABSCHEMA='YOUR-DB2-USERNAME' or, you can retrieve the list of all tables where the schema name is not one of the system-created ones: %sql select TABSCHEMA, TABNAME, CREATE_TIME from SYSCAT.TABLES \ where TABSCHEMA not in ('SYSIBM', 'SYSCAT', 'SYSSTAT', 'SYSIBMADM', 'SYSTOOLS', 'SYSPUBLIC') or, just query for a specific table that you want to verify exists in the database: %sql select * from SYSCAT.TABLES where TABNAME = 'SCHOOLS' --> Query the database system catalog to retrieve column metadata. The SCHOOLS table contains a large number of columns. How many columns does this table have?
# type in your query to retrieve the number of columns in the SCHOOLS table
Double-click here for a hint <!-- In Db2 the system catalog table called SYSCAT.COLUMNS contains the column metadata --> Double-click here for the solution. <!-- Solution: %sql select count(*) from SYSCAT.COLUMNS where TABNAME = 'SCHOOLS' --> Now retrieve the list of columns in the SCHOOLS table and their column type (datatype) and length.
# type in your query to retrieve all column names in the SCHOOLS table along with their datatypes and length
OSM file classification and extraction tool. by openthings@163.com, 2016-05-04. Splits an osm file by tag into separate files to simplify downstream processing. Each tag object is written as a single line (newlines removed) so that Spark can read it. Uses iterative parsing, which keeps memory usage low and allows large files to be processed. Future work: convert each tag object to a dict and save it as JSON on one line; convert each tag object to WKT format.
import os
import lxml
from lxml import etree
import xmltodict, sys, gc
from pymongo import MongoClient

gc.enable()  # enable garbage collection

client = MongoClient()
db = client.re
streetsDB = db.streets

hwTypes = ['motorway', 'trunk', 'primary', 'secondary', 'tertiary', 'pedestrian', 'unclassified', 'service']
geospatial/openstreetmap/osm-tag2json.ipynb
supergis/git_notebook
gpl-3.0
Read the osm XML structure iteratively. http://www.ibm.com/developerworks/xml/library/x-hiperfparse/
def process_element(elem):
    print("element:", str(elem.attrib))
    if elem.tag == "node":
        fnode.write(etree.tostring(elem).decode('utf-8') + "\r\n")
    elif elem.tag == "way":
        fway.write(etree.tostring(elem).decode('utf-8') + "\r\n")
    elif elem.tag == "relation":
        frelation.write(etree.tostring(elem).decode('utf-8') + "\r\n")
Fast iteration over parsed elements; func is the per-element processing function.
from pprint import *

def fast_iter(context, func, file, maxline):
    print('Process XML...')
    placement = 0
    try:
        for event, elem in context:
            placement += 1
            if maxline > 0 and placement >= maxline:
                break
            print(placement, "elem: ")
            data = etree.tostring(elem)
            print(data)
            global data2
            data2 = xmltodict.parse(data)
            pprint(data2)
            #func(elem)  # per-element callback, currently disabled
            elem.clear()
    except Exception as ex:
        print("Error:", ex)
    del context
Extract the objects with the specified tag and write them to a JSON file. osmfile: the input *.osm file. tagname: 'node', 'way', or 'relation'.
def process_tag(osmfile, tagname, maxline):
    filename_tag = osmfile + "_" + tagname + ".json"
    print("Filename output: ", filename_tag)
    ftag = open(filename_tag, "w+")
    context = etree.iterparse(osmfile, tag=tagname)
    fast_iter(context, process_element, ftag, maxline)
    ftag.close()

osmfile = '../data/muenchen.osm'
#process_tag(osmfile, 'node', 5)
process_tag(osmfile, 'way', 2)
#process_tag(osmfile, 'relation', 0)

pprint(data2)
for i in data2["way"]["nd"]:
    print("nd=", i["@ref"])
for i in data2["way"]["tag"]:
    print(i["@k"], "=", i["@v"])

import json
jsonStr = json.dumps(data2)
pprint(jsonStr)
jsonobj = json.loads(jsonStr)
pprint(jsonobj)
jsonobj["way"]["tag"]
For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI.
# You will need to download the example dataset above,
# extract the files therein,
# and update the filepaths below per your local machine
img = envi.open('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.hdr',
                '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat')
tutorials/Python/Hyperspectral/hyperspectral-classification/classification_kmeans_pca_py/classification_kmeans_pca_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
Note that the information is stored differently when read in with envi.open. We can find the wavelength information in img.bands.centers. Let's take a look at the first and last wavelength values:
print('First 3 Band Center Wavelengths:', img.bands.centers[:3])
print('Last 3 Band Center Wavelengths:', img.bands.centers[-3:])
We'll set the Water Vapor Band windows to NaN:
# Use assignment (=), not comparison (==), to overwrite these bands with NaN
img.bands.centers[191:211] = [np.nan] * (211 - 191)
img.bands.centers[281:314] = [np.nan] * (314 - 281)
img.bands.centers[-10:] = [np.nan] * 10
To get a quick look at the img data, use the params method:
img.params
Metadata information is stored in img.metadata, a dictionary. Let's look at the metadata contents:
md = img.metadata
print('Metadata Contents:')
for item in md:
    print('\t', item)
To access any of these metadata items, use the syntax md['description'] or md['map info']:
print('description:',md['description']) print('map info:',md['map info'])
You can also use type and len to look at the type and length (or number) of some of the metadata contents:
print(type(md['wavelength'])) print('Number of Bands:',len(md['wavelength']))
Let's look at the data using imshow, a wrapper around matplotlib's imshow for multi-band images:
view = imshow(img, bands=(58, 34, 19), stretch=0.05, title="RGB Image of 2017 SERC Tile")
print(view)
When dealing with NEON hyperspectral data, we first want to remove the water vapor & noisy bands, keeping only the valid bands. To speed up the classification algorithms for demonstration purposes, we'll look at a subset of the data using read_subimage, a built-in method to subset by area and bands. Type help(img.read_subimage) to see how it works.
# remove water vapor bands
valid_band_range = [i for j in (range(0, 191), range(212, 281), range(315, 415)) for i in j]
# subset image by area and bands
img_subset = img.read_subimage(range(400, 600), range(400, 600), bands=valid_band_range)
Plot the subsetted image for reference:
view = imshow(img_subset,bands=(58,34,19),stretch=0.01,title="RGB Image of 2017 SERC Tile Subset")
Now that we have the image subsetted, let's run the k-means algorithm. Type help(kmeans) to show how the function works. To run the k-means algorithm on the image and create 5 clusters, using a maximum of 50 iterations, use the following syntax:
(m,c) = kmeans(img_subset,5,50)
Note that the algorithm terminated after 14 iterations, when the pixels stopped being reassigned. Data Tip: You can interrupt the algorithm with a keyboard interrupt (CTRL-C) if you notice that the number of reassigned pixels drops off. kmeans catches the KeyboardInterrupt exception and returns the clusters generated at the end of the previous iteration. If you are running the algorithm interactively, this feature allows you to set the max number of iterations to an arbitrarily high number and then stop the algorithm when the clusters have converged to an acceptable level. If you happen to set the max number of iterations too small (many pixels are still migrating at the end of the final iteration), you can simply call kmeans again to resume processing by passing the cluster centers generated by the previous call as the optional start_clusters argument to the function. Let's take a look at the cluster centers c. In this case, these represent the spectra of the five reflectance clusters that the data were grouped into.
print(c.shape)
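The resume behavior described above (passing the previous call's centers back in via start_clusters) can be sketched in plain numpy. This is a toy illustration with made-up data, not the spectral library's implementation; kmeans_step is our own helper:

```python
import numpy as np

def kmeans_step(pixels, centers):
    """One k-means iteration: assign each pixel to its nearest center, then recompute centers."""
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centers = np.array([pixels[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
                            for k in range(len(centers))])
    return labels, new_centers

# Toy data: 100 "pixels" with 5 "bands"
rng = np.random.default_rng(0)
pixels = rng.random((100, 5))

# First run: a few iterations from some initial centers
centers = pixels[:3].copy()
for _ in range(3):
    labels, centers = kmeans_step(pixels, centers)

# "Resume": feed the centers from the previous run back in,
# mirroring what the start_clusters argument does
for _ in range(3):
    labels, centers = kmeans_step(pixels, centers)
```

Because the second loop starts from the centers the first loop produced, it simply continues the same clustering rather than starting over.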
c contains 5 groups of spectral curves with 360 bands (the # of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes:
%matplotlib inline
import pylab
pylab.figure()
for i in range(c.shape[0]):
    pylab.plot(c[i])
pylab.title('Spectral Classes from K-Means Clustering')
pylab.xlabel('Bands (with Water Vapor Windows Removed)')
pylab.ylabel('Reflectance')
pylab.show()  # call show() (the original was missing the parentheses)

#%matplotlib notebook
view = imshow(img_subset, bands=(58, 34, 19), stretch=0.01, classes=m)
view.set_display_mode('overlay')
view.class_alpha = 0.5  # set transparency
view.show_data
Challenges: K-Means What do you think the spectral classes in the figure you just created represent? Try using a different number of clusters in the kmeans algorithm (e.g., 3 or 10) to see what spectral classes and classifications result. Principal Component Analysis (PCA) Many of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).
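The variance-capture claim can be illustrated with a small numpy sketch (toy data of our own making, independent of the spectral library): a signal built from only 3 underlying sources but spread across 20 correlated bands is almost entirely captured by the first 3 eigenvectors of the covariance matrix.

```python
import numpy as np

# Toy "image": 1000 pixels x 20 bands, built from only 3 underlying sources,
# so the 20 bands are strongly correlated
rng = np.random.default_rng(1)
sources = rng.random((1000, 3))
bands = sources @ rng.random((3, 20)) + 0.01 * rng.random((1000, 20))

# Eigendecomposition of the band covariance matrix
cov = np.cov(bands, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# Fraction of total variance captured by the first 3 principal components
frac = eigvals[:3].sum() / eigvals.sum()
```

Here frac comes out above 0.99: three components carry nearly all the variance of twenty correlated bands, which is exactly why reducing to a few principal components loses so little information.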
pc = principal_components(img_subset)
pc_view = imshow(pc.cov)
xdata = pc.transform(img_subset)
In the covariance matrix display, lighter values indicate strong positive covariance, darker values indicate strong negative covariance, and grey values indicate covariance near zero.
pcdata = pc.reduce(num=10).transform(img_subset)
pc_0999 = pc.reduce(fraction=0.999)

# How many eigenvalues are left?
print(len(pc_0999.eigenvalues))

img_pc = pc_0999.transform(img_subset)
print(img_pc.shape)

v = imshow(img_pc[:, :, :5], stretch_all=True)
Introduction
http://en.wikipedia.org/wiki/Quantum_gate
Gates in QuTiP and their representation
Controlled-PHASE
cphase(pi/2) Image(filename='images/cphase.png')
examples/quantum-gates - Copy.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Rotation about X-axis
rx(pi/2) Image(filename='images/rx.png')
Rotation about Y-axis
ry(pi/2) Image(filename='images/ry.png')
Rotation about Z-axis
rz(pi/2) Image(filename='images/rz.png')
CNOT
cnot() Image(filename='images/cnot.png')
CSIGN
csign() Image(filename='images/csign.png')
Berkeley
berkeley() Image(filename='images/berkeley.png')
SWAPalpha
swapalpha(pi/2) Image(filename='images/swapalpha.png')
FREDKIN
fredkin() Image(filename='images/fredkin.png')
TOFFOLI
toffoli() Image(filename='images/toffoli.png')
SWAP
swap() Image(filename='images/swap.png')
ISWAP
iswap() Image(filename='images/iswap.png')
SQRTiSWAP
sqrtiswap() Image(filename='images/sqrtiswap.png')
SQRTSWAP
sqrtswap() Image(filename='images/sqrtswap.png')
SQRTNOT
sqrtnot() Image(filename='images/sqrtnot.png')
HADAMARD
snot() Image(filename='images/snot.png')
PHASEGATE
phasegate(pi/2) Image(filename='images/phasegate.png')
GLOBALPHASE
globalphase(pi/2) Image(filename='images/globalphase.png')
Mølmer–Sørensen gate
molmer_sorensen(pi/2)
Qubit rotation gate
qrot(pi/2, pi/4)
Expanding gates to larger qubit registers The examples above show how to generate matrix representations of the gates implemented in QuTiP, in their minimal qubit requirements. If the same gate is to be represented in a qubit register of size $N$, the optional keyword argument N can be specified when calling the gate function. For example, to generate the matrix for the CNOT gate for an $N=3$ qubit register:
cnot(N=3) Image(filename='images/cnot310.png')
Furthermore, the control and target qubits (when applicable) can also be similarly specified using keyword arguments control and target (or in some cases controls or targets):
cnot(N=3, control=2, target=0) Image(filename='images/cnot302.png')
Setup of a Qubit Circuit The gates implemented in QuTiP can be used to build any qubit circuit using the class QubitCircuit. The output can be obtained in the form of a unitary matrix or a latex representation. In the following example, we take a SWAP gate. It is known that a SWAP gate is equivalent to a sequence of three CNOT gates with alternating control and target qubits.
N = 2
qc0 = QubitCircuit(N)
qc0.add_gate("SWAP", [0, 1], None)
qc0.png
U_list0 = qc0.propagators()
U0 = gate_sequence_product(U_list0)
U0

qc1 = QubitCircuit(N)
qc1.add_gate("CNOT", 0, 1)
qc1.add_gate("CNOT", 1, 0)
qc1.add_gate("CNOT", 0, 1)
qc1.png
U_list1 = qc1.propagators()
U1 = gate_sequence_product(U_list1)
U1
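The SWAP = CNOT·CNOT·CNOT identity can also be sanity-checked with plain numpy matrices, independent of QuTiP (this is our own standalone check; it assumes the computational-basis ordering |q0 q1⟩):

```python
import numpy as np

# Two-qubit gates in the computational basis |q0 q1>: 00, 01, 10, 11
CNOT_01 = np.array([[1, 0, 0, 0],   # control qubit 0, target qubit 1
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_10 = np.array([[1, 0, 0, 0],   # control qubit 1, target qubit 0
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Applying CNOT_01, then CNOT_10, then CNOT_01 reproduces SWAP
product = CNOT_01 @ CNOT_10 @ CNOT_01
```

Because the first and third gates are identical, the matrix product is the same whichever end of the sequence you read from.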
In place of manually converting the SWAP gate to CNOTs, it can be automatically converted using an inbuilt function of QubitCircuit.
qc2 = qc0.resolve_gates("CNOT") qc2.png U_list2 = qc2.propagators() U2 = gate_sequence_product(U_list2) U2
Example of basis transformation
qc3 = QubitCircuit(3) qc3.add_gate("CNOT", 1, 0) qc3.add_gate("RX", 0, None, pi/2, r"\pi/2") qc3.add_gate("RY", 1, None, pi/2, r"\pi/2") qc3.add_gate("RZ", 2, None, pi/2, r"\pi/2") qc3.add_gate("ISWAP", [1, 2]) qc3.png U3 = gate_sequence_product(qc3.propagators()) U3
The transformation can either be only in terms of 2-qubit gates:
qc4 = qc3.resolve_gates("CNOT") qc4.png U4 = gate_sequence_product(qc4.propagators()) U4 qc5 = qc3.resolve_gates("ISWAP") qc5.png U5 = gate_sequence_product(qc5.propagators()) U5
Or the transformation can be in terms of any two single-qubit rotation gates along with the 2-qubit gate.
qc6 = qc3.resolve_gates(["ISWAP", "RX", "RY"]) qc6.png U6 = gate_sequence_product(qc6.propagators()) U6 qc7 = qc3.resolve_gates(["CNOT", "RZ", "RX"]) qc7.png U7 = gate_sequence_product(qc7.propagators()) U7
Resolving non-adjacent interactions Interactions between non-adjacent qubits can be resolved by QubitCircuit to a series of adjacent interactions, which is useful for systems such as spin chain models.
qc8 = QubitCircuit(3) qc8.add_gate("CNOT", 2, 0) qc8.png U8 = gate_sequence_product(qc8.propagators()) U8 qc9 = qc8.adjacent_gates() qc9.png U9 = gate_sequence_product(qc9.propagators()) U9 qc10 = qc9.resolve_gates("CNOT") qc10.png U10 = gate_sequence_product(qc10.propagators()) U10
User defined gates A user-defined gate can be defined by a Python function that takes at most one parameter and returns a Qobj; the dimensions of the Qobj have to match the qubit system.
import numpy as np

def user_gate1(arg_value):
    # controlled rotation X
    mat = np.zeros((4, 4), dtype=complex)  # np.complex is deprecated; use the builtin complex
    mat[0, 0] = mat[1, 1] = 1.
    mat[2:4, 2:4] = rx(arg_value).full()   # .full() converts the Qobj to a numpy array
    return Qobj(mat, dims=[[2, 2], [2, 2]])

def user_gate2():
    # S gate
    mat = np.array([[1., 0],
                    [0., 1.j]])
    return Qobj(mat, dims=[[2], [2]])
To let the QubitCircuit process those gates, one can modify its QubitCircuit.user_gates attribute, which is a Python dictionary in the form {name: gate_function}.
qc = QubitCircuit(2) qc.user_gates = {"CTRLRX": user_gate1, "S" : user_gate2}
When calling the add_gate method, the target qubits and the argument need to be given.
# qubit 0 controls qubit 1
qc.add_gate("CTRLRX", targets=[0, 1], arg_value=pi/2)
# qubit 1 controls qubit 0
qc.add_gate("CTRLRX", targets=[1, 0], arg_value=pi/2)
# a gate can also be added using the Gate class
g_T = Gate("S", targets=[1])
qc.add_gate("S", targets=[1])
props = qc.propagators()
props[0]  # qubit 0 controls qubit 1
props[1]  # qubit 1 controls qubit 0
props[2]  # S gate acts on qubit 1
Software versions
from qutip.ipynbtools import version_table version_table()
Required: a duplicate-free list of the manufacturers' countries. 3 P
%%sql
-- my solution
select distinct(Land) from Fahrzeughersteller;

%%sql
-- your solution
SELECT DISTINCT Land FROM fahrzeughersteller
jup_notebooks/datenbanken/Nordwind_11FI3_On_Paper.ipynb
steinam/teacher
mit
List all vehicle types and the number of vehicles of each type, but only when more than 2 vehicles of that type exist. Sort the output by vehicle type. 4 P
%%sql
-- my solution
select fahrzeugtyp.Bezeichnung, count(fahrzeug.ID) as Anzahl
from fahrzeugtyp
left join fahrzeug on fahrzeugtyp.ID = fahrzeug.Fahrzeugtyp_ID
group by fahrzeugtyp.Bezeichnung
having count(fahrzeug.ID) > 2
order by fahrzeugtyp.Bezeichnung;

%%sql
-- your solution (reconstructed: the original cell was garbled; a correlated subquery instead of a join)
select f.Bezeichnung,
       (select count(fz.ID)
        from fahrzeug fz
        where fz.Fahrzeugtyp_ID = f.ID) as Anzahl
from fahrzeugtyp f
having Anzahl > 2   -- MySQL allows HAVING on a select alias
order by f.Bezeichnung;
Determine the last names and first names of the employees, including the department name, whose department is located in Dortmund or Bochum.
%%sql
-- my solution
select Name, Vorname, Bezeichnung
from Mitarbeiter
inner join Abteilung on Mitarbeiter.Abteilung_ID = Abteilung.ID
where Abteilung.Ort in ('Dortmund', 'Bochum')

%%sql
select concat(m.Name, ' ', m.Vorname) as Mitarbeiter,  -- combine first and last name
       ab.Bezeichnung as Abteilung
from mitarbeiter m, abteilung ab
where m.Abteilung_ID = ab.ID
  and upper(ab.Ort) in ('DORTMUND', 'BOCHUM');  -- compare the city in upper case to improve the match rate
Required: for each vehicle manufacturer (giving the ID is sufficient) and each year, the smallest and largest damage amount. If possible, also include the difference between the two values in the result set; otherwise, write a separate SQL statement for this task. 5 P
%%sql
-- my solution
select fahrzeughersteller.ID, year(Datum),
       min(zuordnung_sf_fz.Schadenshoehe),
       max(zuordnung_sf_fz.Schadenshoehe),
       (max(zuordnung_sf_fz.Schadenshoehe) - min(zuordnung_sf_fz.Schadenshoehe)) as Differenz
from fahrzeughersteller
left join fahrzeugtyp on fahrzeughersteller.ID = fahrzeugtyp.Hersteller_ID
inner join fahrzeug on fahrzeugtyp.ID = fahrzeug.Fahrzeugtyp_ID
inner join zuordnung_sf_fz on fahrzeug.ID = zuordnung_sf_fz.Fahrzeug_ID
inner join schadensfall on zuordnung_sf_fz.Schadensfall_ID = schadensfall.ID
group by fahrzeughersteller.ID, year(Datum)

%%sql
-- your solution
select f.ID,
       year(s.Datum) as Jahr,  -- use the built-in year() function to convert the date
       (select min(Schadenshoehe) from schadensfall where year(Datum) = Jahr) as Min,  -- subselect for min
       (select max(Schadenshoehe) from schadensfall where year(Datum) = Jahr) as Max   -- subselect for max
       -- the difference (Max - Min) would be computed here
from fahrzeug f, zuordnung_sf_fz z, schadensfall s
where z.Fahrzeug_ID = f.ID
  and s.ID = z.Schadensfall_ID
group by f.ID, year(s.Datum);
Show all employees, and their license plates, who drive an Opel as their company car. 4 P
%%sql
select Mitarbeiter.Name, dienstwagen.Kennzeichen
from Mitarbeiter
inner join dienstwagen on mitarbeiter.ID = dienstwagen.Mitarbeiter_ID
inner join fahrzeugtyp on dienstwagen.Fahrzeugtyp_ID = fahrzeugtyp.ID
inner join fahrzeughersteller on fahrzeugtyp.Hersteller_ID = fahrzeughersteller.ID
where fahrzeughersteller.Name = 'Opel'

%%sql
-- your solution
select concat(m.Name, ' ', m.Vorname) as Mitarbeiter,
       d.Kennzeichen,
       fh.Name as Hersteller
from mitarbeiter m, dienstwagen d, fahrzeugtyp ft, fahrzeughersteller fh
where d.Mitarbeiter_ID = m.ID
  and ft.ID = d.Fahrzeugtyp_ID
  and fh.ID = ft.Hersteller_ID
  and upper(fh.Name) = 'OPEL';
Which vehicles have caused damages whose total damage amount is higher than the average damage amount? 5 P
%%sql
-- my solution
select fahrzeug.Kennzeichen, sum(Schadenshoehe)
from fahrzeug
inner join zuordnung_sf_fz on fahrzeug.ID = zuordnung_sf_fz.Fahrzeug_ID
group by fahrzeug.Kennzeichen
having sum(Schadenshoehe) > (select avg(Schadenshoehe) from zuordnung_sf_fz)

%%sql
-- your solution
select f.ID, f.Kennzeichen, s.Schadenshoehe
from fahrzeug f, zuordnung_sf_fz z, schadensfall s
where z.Fahrzeug_ID = f.ID
  and s.ID = z.Schadensfall_ID
  and s.Schadenshoehe > (select avg(Schadenshoehe) from schadensfall)  -- determine the average with a subselect
group by f.ID;
Which employees are older than the average age of the employees? 4 P
%%sql
-- older means an earlier birth date, so compare with < (the original used >)
select Mitarbeiter.Name, Mitarbeiter.Geburtsdatum
from Mitarbeiter
where Geburtsdatum < (select avg(Geburtsdatum) from Mitarbeiter)
order by Mitarbeiter.Name
-- or alternatively:
-- where (now() - Geburtsdatum) > (select now() - avg(Geburtsdatum) from mitarbeiter)

%%sql
select concat(m.Name, ' ', m.Vorname) as Mitarbeiter, m.Geburtsdatum
from mitarbeiter m
where m.Geburtsdatum < (select avg(Geburtsdatum) from mitarbeiter);
2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed". Tip: "how to join a list Python" might be a helpful search
items = data['artists']['items']
for item in items:
    if len(item['genres']) == 0:
        print("No genres listed")
    else:
        print(', '.join(item['genres']))  # join with ", " per the requested format

# AGGREGATION PROBLEM
all_genres = []

# THE LOOP
for item in items:
    print("All genres so far: ", all_genres)
    # THE CONDITIONAL
    print("Current artist has: ", item['genres'])
    all_genres = all_genres + item['genres']
    print('++++++++++++++++++++++++++++++++')

print("All final genres: ", all_genres)

for genre in all_genres:
    genre_count = all_genres.count(genre)
    print(genre, genre_count)

unique_genres = set(all_genres)
print('+++++++++++++++++++++++')
for genre in unique_genres:
    genre_count = all_genres.count(genre)
    print(genre, genre_count)

print("dirty south rap is the most represented genre")
05/spotify_api.ipynb
juneeseo/foundations-homework
mit
3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
# popularity
items = data['artists']['items']

# AGGREGATION PROBLEM
most_popular_name = ""
most_popular_score = 0

for item in items:
    print("Looking at", item['name'], "who has popularity of", item['popularity'])
    print("Comparing", item['popularity'], "to", most_popular_score)
    # THE CONDITIONAL
    if item['popularity'] > most_popular_score:
        if item['name'] == 'Lil Wayne':
            pass
        else:
            most_popular_name = item['name']
            most_popular_score = item['popularity']

print('++++++++++++++++++++')
print(most_popular_name, most_popular_score)

data['artists']['items'][0]['followers']['total']

# Followers
artist_most_followers = ""
most_followers = 0

for item in items:
    print("Looking at", item['name'], "who has", item['followers']['total'], "followers")
    print("Comparing", item['followers']['total'], "to", most_followers)
    # THE CONDITIONAL
    if item['followers']['total'] > most_followers:
        if item['name'] == 'Lil Wayne':
            pass
        else:
            artist_most_followers = item['name']
            most_followers = item['followers']['total']

print('++++++++++++++++++++')
print(artist_most_followers, most_followers)
4) Print a list of Lils that are more popular than Lil' Kim.
items = data['artists']['items']
for item in items:
    if item['name'] == "Lil' Kim":
        print(item['name'], item['popularity'])

lil_kim_popularity = 62

# AGGREGATION PROBLEM
more_popular_than_lil_kim = []

# THE LOOP
for item in items:
    if item['popularity'] > lil_kim_popularity:
        more_popular_than_lil_kim.append(item['name'])

for item in more_popular_than_lil_kim:
    print(item)

more_popular_string = ", ".join(more_popular_than_lil_kim)
print(more_popular_string)
5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
#5
# Lil Wayne and Lil June
items = data['artists']['items']
for item in items:
    print(item['name'], item['id'])

import requests

lil_wayne_id = '55Aa2cqylxrFIXC767Z865'
lil_june_id = '3GH3KD2078kLPpEkN1UN26'

# The URL must be a string, with the artist ID interpolated into it
lil_wayne_response = requests.get('https://api.spotify.com/v1/artists/' + lil_wayne_id + '/top-tracks?country=US')
lil_wayne_data = lil_wayne_response.json()
lil_june_response = requests.get('https://api.spotify.com/v1/artists/' + lil_june_id + '/top-tracks?country=US')
lil_june_data = lil_june_response.json()
lil_wayne_data

# Lil Wayne Top Tracks
wayne_response = requests.get('https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks')
wayne_data = wayne_response.json()
wayne_data
wayne_data.keys()
CONTENTS Simple MDP State dependent reward function State and action dependent reward function State, action and next state dependent reward function Grid MDP Pathfinding problem POMDP Two state POMDP SIMPLE MDP State dependent reward function Markov Decision Processes are formally described as processes that follow the Markov property which states that "The future is independent of the past given the present". MDPs formally describe environments for reinforcement learning and we assume that the environment is fully observable. Let us take a toy example MDP and solve it using the functions in mdp.py. This is a simple example adapted from a similar problem by Dr. David Silver, tweaked to fit the limitations of the current functions. Let's say you're a student attending lectures in a university. There are three lectures you need to attend on a given day. <br> Attending the first lecture gives you 4 points of reward. After the first lecture, you have a 0.6 probability to continue into the second one, yielding 6 more points of reward. <br> But, with a probability of 0.4, you get distracted and start using Facebook instead and get a reward of -1. From then onwards, you really can't let go of Facebook and there's just a 0.1 probability that you will concentrate back on the lecture. <br> After the second lecture, you have an equal chance of attending the next lecture or just falling asleep. Falling asleep is the terminal state and yields you no reward, but continuing on to the final lecture gives you a big reward of 10 points. <br> From there on, you have a 40% chance of going to study and reach the terminal state, but a 60% chance of going to the pub with your friends instead. You end up drunk and don't know which lecture to attend, so you go to one of the lectures according to the probabilities given above. <br> We now have an outline of our stochastic environment and we need to maximize our reward by solving this MDP.
<br> <br> We first have to define our Transition Matrix as a nested dictionary to fit the requirements of the MDP class.
t = {
    'leisure': {
        'facebook': {'leisure': 0.9, 'class1': 0.1},
        'quit': {'leisure': 0.1, 'class1': 0.9},
        'study': {}, 'sleep': {}, 'pub': {}
    },
    'class1': {
        'study': {'class2': 0.6, 'leisure': 0.4},
        'facebook': {'class2': 0.4, 'leisure': 0.6},
        'quit': {}, 'sleep': {}, 'pub': {}
    },
    'class2': {
        'study': {'class3': 0.5, 'end': 0.5},
        'sleep': {'end': 0.5, 'class3': 0.5},
        'facebook': {}, 'quit': {}, 'pub': {}
    },
    'class3': {
        'study': {'end': 0.6, 'class1': 0.08, 'class2': 0.16, 'class3': 0.16},
        'pub': {'end': 0.4, 'class1': 0.12, 'class2': 0.24, 'class3': 0.24},
        'facebook': {}, 'quit': {}, 'sleep': {}
    },
    'end': {}
}
mdp_apps.ipynb
Chipe1/aima-python
mit
We now need to define the reward for each state.
rewards = { 'class1': 4, 'class2': 6, 'class3': 10, 'leisure': -1, 'end': 0 }
This MDP has only one terminal state.
terminals = ['end']
Let's now set the initial state to Class 1.
init = 'class1'
We will write a CustomMDP class extending the MDP class for the problem at hand. This class implements the T method for the transition model. It is the exact same class as the one given in mdp.ipynb.
class CustomMDP(MDP):

    def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):
        # All possible actions.
        actlist = []
        for state in transition_matrix.keys():
            actlist.extend(transition_matrix[state])
        actlist = list(set(actlist))
        print(actlist)
        MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)
        self.t = transition_matrix
        self.reward = rewards
        for state in self.t:
            self.states.add(state)

    def T(self, state, action):
        if action is None:
            return [(0.0, state)]
        else:
            return [(prob, new_state) for new_state, prob in self.t[state][action].items()]
We now need an instance of this class.
mdp = CustomMDP(t, rewards, terminals, init, gamma=.9)
The utility of each state can be found by value_iteration.
value_iteration(mdp)
Now that we can compute the utility values, we can find the best policy.
pi = best_policy(mdp, value_iteration(mdp, .01))
pi stores the best action for each state.
print(pi)
We can confirm that this is the best policy by verifying this result against policy_iteration.
policy_iteration(mdp)
Everything looks perfect, but let us look at another possibility for an MDP.

Until now we have only dealt with rewards that the agent gets while it is in a particular state. What if we want different rewards for a state depending on the action that the agent takes next? The agent then receives the reward during its transition to the next state.

For the sake of clarity, we will call this the transition reward, and we will call this kind of MDP a dynamic MDP. This is not a conventional term; we just use it to minimize confusion between the two.

This next section deals with how to create and solve a dynamic MDP.

State and action dependent reward function

Let us consider a very similar problem, but this time we do not have rewards on states; instead, we have rewards on the transitions between states. This state diagram will make it clearer. It is a very similar scenario to the previous problem, but we have different rewards for the same state depending on the action taken.

To deal with this, we just need to change the R method of the MDP class, but to prevent confusion, we will write a new similar class, DMDP.
class DMDP:
    """A Markov Decision Process, defined by an initial state, transition model,
    and reward model. We also keep track of a gamma value, for use by algorithms.
    The transition model is represented somewhat differently from the text.
    Instead of P(s' | s, a) being a probability number for each
    state/state/action triplet, we instead have T(s, a) return a list of
    (p, s') pairs. The reward function is very similar. We also keep track of
    the possible states, terminal states, and actions for each state."""

    def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9):
        if not (0 < gamma <= 1):
            raise ValueError("An MDP must have 0 < gamma <= 1")
        if states:
            self.states = states
        else:
            self.states = set()
        self.init = init
        self.actlist = actlist
        self.terminals = terminals
        self.transitions = transitions
        self.rewards = rewards
        self.gamma = gamma

    def R(self, state, action):
        """Return a numeric reward for this state and this action."""
        if self.rewards == {}:
            raise ValueError('Reward model is missing')
        else:
            return self.rewards[state][action]

    def T(self, state, action):
        """Transition model. From a state and an action, return a list
        of (probability, result-state) pairs."""
        if self.transitions == {}:
            raise ValueError("Transition model is missing")
        else:
            return self.transitions[state][action]

    def actions(self, state):
        """Set of actions that can be performed in this state. By default, a
        fixed list of actions, except for terminal states. Override this
        method if you need to specialize by state."""
        if state in self.terminals:
            return [None]
        else:
            return self.actlist
The transition model will be the same as before.
t = {
    'leisure': {
        'facebook': {'leisure': 0.9, 'class1': 0.1},
        'quit': {'leisure': 0.1, 'class1': 0.9},
        'study': {},
        'sleep': {},
        'pub': {}
    },
    'class1': {
        'study': {'class2': 0.6, 'leisure': 0.4},
        'facebook': {'class2': 0.4, 'leisure': 0.6},
        'quit': {},
        'sleep': {},
        'pub': {}
    },
    'class2': {
        'study': {'class3': 0.5, 'end': 0.5},
        'sleep': {'end': 0.5, 'class3': 0.5},
        'facebook': {},
        'quit': {},
        'pub': {}
    },
    'class3': {
        'study': {'end': 0.6, 'class1': 0.08, 'class2': 0.16, 'class3': 0.16},
        'pub': {'end': 0.4, 'class1': 0.12, 'class2': 0.24, 'class3': 0.24},
        'facebook': {},
        'quit': {},
        'sleep': {}
    },
    'end': {}
}
The reward model will be a dictionary very similar to the transition dictionary, with a reward for every action in every state.
r = {
    'leisure': {'facebook': -1, 'quit': 0, 'study': 0, 'sleep': 0, 'pub': 0},
    'class1': {'study': -2, 'facebook': -1, 'quit': 0, 'sleep': 0, 'pub': 0},
    'class2': {'study': -2, 'sleep': 0, 'facebook': 0, 'quit': 0, 'pub': 0},
    'class3': {'study': 10, 'pub': 1, 'facebook': 0, 'quit': 0, 'sleep': 0},
    'end': {'study': 0, 'pub': 0, 'facebook': 0, 'quit': 0, 'sleep': 0}
}
The MDP has only one terminal state.
terminals = ['end']
We will write a CustomDMDP class to extend the DMDP class for the problem at hand. This class will implement everything that the previous CustomMDP class implements along with a new reward model.
class CustomDMDP(DMDP):

    def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):
        actlist = []
        for state in transition_matrix.keys():
            actlist.extend(transition_matrix[state])
        actlist = list(set(actlist))
        print(actlist)
        DMDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)
        self.t = transition_matrix
        self.rewards = rewards
        for state in self.t:
            self.states.add(state)

    def T(self, state, action):
        if action is None:
            return [(0.0, state)]
        else:
            return [(prob, new_state) for new_state, prob in self.t[state][action].items()]

    def R(self, state, action):
        if action is None:
            return 0
        else:
            return self.rewards[state][action]
One thing we haven't thought about yet is that the value_iteration algorithm won't work now that the reward model has changed, although the new version will be quite similar to the one we currently have. The Bellman update equation is now defined as follows:

$$U(s)=\max_{a\in A(s)}\bigg[R(s, a) + \gamma\sum_{s'}P(s'\ |\ s,a)U(s')\bigg]$$

It is not difficult to see that the update equation we have been using till now is just a special case of this more generalized equation. We also need to take the max over the reward function now, as the reward is action-dependent as well.

We will use this to write a function that carries out value iteration, very similar to the one we are familiar with.
def value_iteration_dmdp(dmdp, epsilon=0.001):
    U1 = {s: 0 for s in dmdp.states}
    R, T, gamma = dmdp.R, dmdp.T, dmdp.gamma
    while True:
        U = U1.copy()
        delta = 0
        for s in dmdp.states:
            U1[s] = max([(R(s, a) + gamma*sum([(p*U[s1]) for (p, s1) in T(s, a)]))
                         for a in dmdp.actions(s)])
            delta = max(delta, abs(U1[s] - U[s]))
        if delta < epsilon * (1 - gamma) / gamma:
            return U
We're all set. Let's instantiate our class.
dmdp = CustomDMDP(t, r, terminals, init, gamma=.9)
Calculate utility values by calling value_iteration_dmdp.
value_iteration_dmdp(dmdp)
These are the expected utility values for our new MDP.

As you might have guessed, we cannot use the old best_policy function to find the best policy, so we will write our own. But before that, we need a helper function to calculate the expected utility value given a state and an action.
def expected_utility_dmdp(a, s, U, dmdp):
    return dmdp.R(s, a) + dmdp.gamma*sum([(p*U[s1]) for (p, s1) in dmdp.T(s, a)])
Now we write our modified best_policy function.
from utils import argmax

def best_policy_dmdp(dmdp, U):
    pi = {}
    for s in dmdp.states:
        pi[s] = argmax(dmdp.actions(s), key=lambda a: expected_utility_dmdp(a, s, U, dmdp))
    return pi
Find the best policy.
pi = best_policy_dmdp(dmdp, value_iteration_dmdp(dmdp, .01))
print(pi)
From this, we can infer that value_iteration_dmdp tries to minimize the negative reward. Since we don't have rewards on states now, the algorithm takes the action that avoids negative rewards where possible, and picks the lesser of two evils if all rewards are negative. You might also want to have state rewards alongside transition rewards. Perhaps you can do that yourself now that the difficult part has been done.

State, action and next-state dependent reward function

For truly stochastic environments, we have noticed that taking an action from a particular state doesn't always do what we want it to. Instead, for every action taken from a particular state, it might be possible to reach a different state each time, depending on the transition probabilities. What if we want different rewards for each state, action and next-state triplet? Mathematically, we now want a reward function of the form R(s, a, s') for our MDP. This section shows how we can tweak the MDP class to achieve this.

Let's now take a different problem statement, as the one we have been working with is a bit too simple. Consider a taxi that serves three adjacent towns A, B, and C. Each time the taxi discharges a passenger, the driver must choose from three possible actions:
1. Cruise the streets looking for a passenger.
2. Go to the nearest taxi stand.
3. Wait for a radio call from the dispatcher with instructions.

The taxi driver cannot take the third action in town B because of distance and poor reception.

Let's model our MDP. The MDP has three states, namely A, B and C, and three actions, namely 1, 2 and 3.
Action sets:

$K_{A} = \{1, 2, 3\}$
$K_{B} = \{1, 2\}$
$K_{C} = \{1, 2, 3\}$

We have the following transition probability matrices:

Action 1: Cruising the streets

$$P^{1} = \left[\begin{array}{ccc} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \end{array}\right]$$

Action 2: Waiting at the taxi stand

$$P^{2} = \left[\begin{array}{ccc} \frac{1}{16} & \frac{3}{4} & \frac{3}{16} \\ \frac{1}{16} & \frac{7}{8} & \frac{1}{16} \\ \frac{1}{8} & \frac{3}{4} & \frac{1}{8} \end{array}\right]$$

Action 3: Waiting for dispatch

$$P^{3} = \left[\begin{array}{ccc} \frac{1}{4} & \frac{1}{8} & \frac{5}{8} \\ 0 & 1 & 0 \\ \frac{3}{4} & \frac{1}{16} & \frac{3}{16} \end{array}\right]$$

For the sake of readability, we will call the states A, B and C and the actions 'cruise', 'stand' and 'dispatch'. We will now build the transition model as a dictionary using these matrices.
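Rather than typing the nested dictionary out by hand, the matrices can also be converted into dictionary form programmatically. The sketch below restates the matrices as plain lists and builds the same nested structure; row i of each matrix holds the probabilities of moving from the i-th state.

```python
states = ['A', 'B', 'C']
P = {
    'cruise':   [[0.5, 0.25, 0.25], [0.5, 0, 0.5], [0.25, 0.25, 0.5]],
    'stand':    [[0.0625, 0.75, 0.1875], [0.0625, 0.875, 0.0625], [0.125, 0.75, 0.125]],
    'dispatch': [[0.25, 0.125, 0.625], [0, 1, 0], [0.75, 0.0625, 0.1875]],
}
# entry t[s][a][s1] is the probability of reaching s1 from s under action a
t = {s: {a: {s1: P[a][i][j] for j, s1 in enumerate(states)} for a in P}
     for i, s in enumerate(states)}
print(t['A']['cruise'])  # {'A': 0.5, 'B': 0.25, 'C': 0.25}
```

Either way of building the dictionary is fine; the hand-written version is used in what follows.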
t = {
    'A': {
        'cruise': {'A': 0.5, 'B': 0.25, 'C': 0.25},
        'stand': {'A': 0.0625, 'B': 0.75, 'C': 0.1875},
        'dispatch': {'A': 0.25, 'B': 0.125, 'C': 0.625}
    },
    'B': {
        'cruise': {'A': 0.5, 'B': 0, 'C': 0.5},
        'stand': {'A': 0.0625, 'B': 0.875, 'C': 0.0625},
        'dispatch': {'A': 0, 'B': 1, 'C': 0}
    },
    'C': {
        'cruise': {'A': 0.25, 'B': 0.25, 'C': 0.5},
        'stand': {'A': 0.125, 'B': 0.75, 'C': 0.125},
        'dispatch': {'A': 0.75, 'B': 0.0625, 'C': 0.1875}
    }
}
The reward matrices for the problem are as follows:

Action 1: Cruising the streets

$$R^{1} = \left[\begin{array}{ccc} 10 & 4 & 8 \\ 14 & 0 & 18 \\ 10 & 2 & 8 \end{array}\right]$$

Action 2: Waiting at the taxi stand

$$R^{2} = \left[\begin{array}{ccc} 8 & 2 & 4 \\ 8 & 16 & 8 \\ 6 & 4 & 2 \end{array}\right]$$

Action 3: Waiting for dispatch

$$R^{3} = \left[\begin{array}{ccc} 4 & 6 & 4 \\ 0 & 0 & 0 \\ 4 & 0 & 8 \end{array}\right]$$

We now build the reward model as a dictionary using these matrices.
r = {
    'A': {
        'cruise': {'A': 10, 'B': 4, 'C': 8},
        'stand': {'A': 8, 'B': 2, 'C': 4},
        'dispatch': {'A': 4, 'B': 6, 'C': 4}
    },
    'B': {
        'cruise': {'A': 14, 'B': 0, 'C': 18},
        'stand': {'A': 8, 'B': 16, 'C': 8},
        'dispatch': {'A': 0, 'B': 0, 'C': 0}
    },
    'C': {
        'cruise': {'A': 10, 'B': 2, 'C': 18},
        'stand': {'A': 6, 'B': 4, 'C': 2},
        'dispatch': {'A': 4, 'B': 0, 'C': 8}
    }
}
The Bellman update equation is now defined as follows:

$$U(s)=\max_{a\in A(s)}\sum_{s'}P(s'\ |\ s,a)\big(R(s'\ |\ s,a) + \gamma U(s')\big)$$

It is not difficult to see that all the update equations we have used till now are just special cases of this more generalized equation. If we did not have next-state-dependent rewards, the first term inside the summation would sum up exactly to R(s, a), the reward for a particular state and action, and we would get the update equation used in the previous problem. If we did not have action-dependent rewards either, the first term inside the summation would sum up to R(s), the state reward, and we would get the first update equation used in mdp.ipynb.

For example, suppose the reward for leaving some state is r units regardless of the action taken or the state reached, and suppose a particular action from that state can lead to 4 possible next states with probabilities 0.1, 0.2, 0.3 and 0.4. The first term inside the summation then evaluates to (0.1 + 0.2 + 0.3 + 0.4)r = r, which is exactly R(s) from the first update equation.

There are many ways to write value iteration for this situation, but we will go with the most intuitive one, which can be implemented with minor alterations to the existing value_iteration algorithm.

Our new class will be slightly different from DMDP. More specifically, the R method now takes one more argument, as we have three levels of nesting in the reward model. We will call the new class DMDP2, as I have run out of creative names.
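Before defining the new class, the special-case claim above can be checked numerically. The probabilities, reward and utility values below are made up purely for illustration.

```python
import math

gamma = 0.9
r = 5.0                                                # reward independent of the next state
T = [(0.1, 'a'), (0.2, 'b'), (0.3, 'c'), (0.4, 'd')]   # (probability, next state) pairs
U = {'a': 1.0, 'b': 2.0, 'c': 3.0, 'd': 4.0}

# generalized backup: sum over next states of p * (r + gamma * U[s'])
general = sum(p * (r + gamma * U[s1]) for p, s1 in T)
# simpler backup: r + gamma * expected utility of the next state
simple = r + gamma * sum(p * U[s1] for p, s1 in T)
print(math.isclose(general, simple))  # True
```

Because the probabilities sum to 1, the reward term factors out of the summation, which is exactly the argument made in the paragraph above.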
class DMDP2:
    """A Markov Decision Process, defined by an initial state, transition model,
    and reward model. We also keep track of a gamma value, for use by algorithms.
    The transition model is represented somewhat differently from the text.
    Instead of P(s' | s, a) being a probability number for each
    state/state/action triplet, we instead have T(s, a) return a list of
    (p, s') pairs. The reward function is very similar. We also keep track of
    the possible states, terminal states, and actions for each state."""

    def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9):
        if not (0 < gamma <= 1):
            raise ValueError("An MDP must have 0 < gamma <= 1")
        if states:
            self.states = states
        else:
            self.states = set()
        self.init = init
        self.actlist = actlist
        self.terminals = terminals
        self.transitions = transitions
        self.rewards = rewards
        self.gamma = gamma

    def R(self, state, action, state_):
        """Return a numeric reward for this state, this action and the next state_."""
        if self.rewards == {}:
            raise ValueError('Reward model is missing')
        else:
            return self.rewards[state][action][state_]

    def T(self, state, action):
        """Transition model. From a state and an action, return a list
        of (probability, result-state) pairs."""
        if self.transitions == {}:
            raise ValueError("Transition model is missing")
        else:
            return self.transitions[state][action]

    def actions(self, state):
        """Set of actions that can be performed in this state. By default, a
        fixed list of actions, except for terminal states. Override this
        method if you need to specialize by state."""
        if state in self.terminals:
            return [None]
        else:
            return self.actlist
Only the R method is different from the previous DMDP class.

As before, our custom subclass will be required to implement the transition model and the reward model. We call this class CustomDMDP2.
class CustomDMDP2(DMDP2):

    def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):
        actlist = []
        for state in transition_matrix.keys():
            actlist.extend(transition_matrix[state])
        actlist = list(set(actlist))
        print(actlist)
        DMDP2.__init__(self, init, actlist, terminals=terminals, gamma=gamma)
        self.t = transition_matrix
        self.rewards = rewards
        for state in self.t:
            self.states.add(state)

    def T(self, state, action):
        if action is None:
            return [(0.0, state)]
        else:
            return [(prob, new_state) for new_state, prob in self.t[state][action].items()]

    def R(self, state, action, state_):
        if action is None:
            return 0
        else:
            return self.rewards[state][action][state_]
We can finally write value iteration for this problem. The latest update equation will be used.
def value_iteration_taxi_mdp(dmdp2, epsilon=0.001):
    U1 = {s: 0 for s in dmdp2.states}
    R, T, gamma = dmdp2.R, dmdp2.T, dmdp2.gamma
    while True:
        U = U1.copy()
        delta = 0
        for s in dmdp2.states:
            U1[s] = max([sum([(p*(R(s, a, s1) + gamma*U[s1])) for (p, s1) in T(s, a)])
                         for a in dmdp2.actions(s)])
            delta = max(delta, abs(U1[s] - U[s]))
        if delta < epsilon * (1 - gamma) / gamma:
            return U
These algorithms can be made more pythonic by using cleverer list comprehensions. We can also write the variants of value iteration in such a way that all problems are solved using the same base class, regardless of the reward function and the number of arguments it takes. Quite a few things can be done to refactor the code and reduce repetition, but we have done it this way for the sake of clarity. Perhaps you can try this as an exercise.

We now need to define the terminals and the initial state. Note that this taxi MDP has no actual terminal state: no state is named 'end', so the terminals list never matches and every state keeps its full action list.
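As a starting point for that exercise, here is one possible unified sketch: a single value-iteration routine that takes an R(s, a, s') callable, with state-only or state-action rewards handled by callables that simply ignore the extra arguments. The tiny two-state MDP at the bottom is hypothetical, made up just to exercise the function.

```python
def value_iteration_general(states, actions, T, R, gamma=0.9, epsilon=0.001):
    """Value iteration with a reward callable R(s, a, s')."""
    U1 = {s: 0 for s in states}
    while True:
        U = U1.copy()
        delta = 0
        for s in states:
            U1[s] = max(sum(p * (R(s, a, s1) + gamma * U[s1]) for p, s1 in T(s, a))
                        for a in actions(s))
            delta = max(delta, abs(U1[s] - U[s]))
        if delta < epsilon * (1 - gamma) / gamma:
            return U

# hypothetical smoke test: one decision state feeding an absorbing 'end' state
states = ['s', 'end']
actions = lambda s: [None] if s == 'end' else ['go']
T = lambda s, a: [(0.0, s)] if a is None else [(1.0, 'end')]
R = lambda s, a, s1: 0 if a is None else 1   # a state-action reward that ignores s1
U = value_iteration_general(states, actions, T, R)
print(U)  # {'s': 1.0, 'end': 0.0}
```

The same routine would solve the student MDP, the transition-reward variant and the taxi problem, given suitable wrappers around their reward models.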
terminals = ['end']  # placeholder: no state in this taxi MDP is actually named 'end'
init = 'A'
Let's instantiate our class.
dmdp2 = CustomDMDP2(t, r, terminals, init, gamma=.9)
value_iteration_taxi_mdp(dmdp2)
These are the expected utility values for the states of our MDP. Let's proceed to write a helper function to find the expected utility and another to find the best policy.
def expected_utility_dmdp2(a, s, U, dmdp2):
    return sum([(p*(dmdp2.R(s, a, s1) + dmdp2.gamma*U[s1])) for (p, s1) in dmdp2.T(s, a)])

from utils import argmax

def best_policy_dmdp2(dmdp2, U):
    pi = {}
    for s in dmdp2.states:
        pi[s] = argmax(dmdp2.actions(s), key=lambda a: expected_utility_dmdp2(a, s, U, dmdp2))
    return pi
Find the best policy.
pi = best_policy_dmdp2(dmdp2, value_iteration_taxi_mdp(dmdp2, .01))
print(pi)
We have successfully adapted the existing code to a different scenario yet again. The takeaway from this section is that you can convert the vast majority of reinforcement learning problems into MDPs and solve for the best policy using simple yet efficient tools.

GRID MDP

Pathfinding problem

Markov Decision Processes can be used to find the best path through a maze. Let us consider this simple maze. This environment can be formulated as a GridMDP.

To make the grid matrix, we will consider the state reward to be -0.1 for every state.
State (1, 1) will have a reward of -5 to signify that this state is to be prohibited.
State (9, 9) will have a reward of +5. This will be the terminal state.

The matrix can be generated using the GridMDP editor or we can write it ourselves.
grid = [
    [None, None, None, None, None, None, None, None, None, None, None],
    [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None, +5.0, None],
    [None, -0.1, None, None, None, None, None, None, None, -0.1, None],
    [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None],
    [None, -0.1, None, None, None, None, None, None, None, None, None],
    [None, -0.1, None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None],
    [None, -0.1, None, None, None, None, None, -0.1, None, -0.1, None],
    [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None, -0.1, None],
    [None, None, None, None, None, -0.1, None, -0.1, None, -0.1, None],
    [None, -5.0, -0.1, -0.1, -0.1, -0.1, None, -0.1, None, -0.1, None],
    [None, None, None, None, None, None, None, None, None, None, None]
]
We have only one terminal state, (9, 9).
terminals = [(9, 9)]
We define our maze environment below.
maze = GridMDP(grid, terminals)
To solve the maze, we can use the best_policy function along with value_iteration.
pi = best_policy(maze, value_iteration(maze))
This is the heatmap generated by the GridMDP editor using value_iteration on this environment.

Let's print out the best policy.
from utils import print_table

print_table(maze.to_arrows(pi))
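Beyond printing arrows, a policy can also be used to trace an explicit path. The sketch below follows a policy deterministically, which ignores the stochastic transition noise of GridMDP, and toy_pi is a made-up 2x2 example policy rather than the maze's actual one.

```python
def trace_path(pi, start, goal, max_steps=100):
    """Follow a policy's direction vectors from start until goal (or the step limit)."""
    path = [start]
    s = start
    for _ in range(max_steps):
        if s == goal:
            return path
        dx, dy = pi[s]                    # pi maps each state to a (dx, dy) direction
        s = (s[0] + dx, s[1] + dy)
        path.append(s)
    return path

# hypothetical 2x2 policy leading from (0, 0) to (1, 1)
toy_pi = {(0, 0): (1, 0), (1, 0): (0, 1)}
print(trace_path(toy_pi, (0, 0), (1, 1)))  # [(0, 0), (1, 0), (1, 1)]
```

In a real GridMDP the agent only moves in the intended direction with high probability, so such a trace is a best-case path, not a guaranteed trajectory.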