NaNoWriMo blog post generator: a script to create blog post templates for November (National Novel Writing Month). Creates 30 posts, one for each day of the month, as .rst files plus metadata. Word counts are saved to a JSON file with a timestamp attached. Each .rst file is named nanowrimo(year)-day(day).
import os

for novran in range(1, 31):
    print('nanowrimo15day' + str(novran))

# Word-count each day's post into a numbered file
for worcu in range(1, 31):
    print(worcu)
    os.system('wc -w /home/wcmckee/Downloads/writersden/posts/nanowrimo15-day'
              + str(worcu) + '.rst > /home/wcmckee/nano/' + str(worcu))

mylisit = []
for worcu in range(1, 31):
    with open('/home/wcmckee/nano/' + str(worcu), 'r') as opfilz:
        mylisit.append(opfilz.read())

# Build the comma-separated word-count strings first, then slice them
# (the original iterated over these lists while they were still empty).
bahg = []
for myli in mylisit:
    print(myli.replace(' ', ', '))
    bahg.append(myli.replace(' ', ', '))

behp = []
for bah in bahg:
    print(bah[0:10])
    behp.append(bah[0:10])

for beh in behp:
    print(beh)

os.system('wc -w /home/wcmckee/Downloads/writersden/posts/nanowrimo15-day*.rst')
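The post notes mention saving word counts to a timestamped JSON file, which the script above does not yet do. A minimal sketch of that step, with hypothetical file paths and counts:

```python
import json
import os
import tempfile
import time

# Save word counts with a timestamp to JSON. Paths and counts are hypothetical.
def save_wordcounts(counts, path):
    payload = {'timestamp': time.time(), 'counts': counts}
    with open(path, 'w') as f:
        json.dump(payload, f)
    return payload

json_path = os.path.join(tempfile.mkdtemp(), 'nanowrimo15-counts.json')
save_wordcounts({'day1': 1672, 'day2': 1714}, json_path)

with open(json_path) as f:
    loaded = json.load(f)
print(loaded['counts']['day1'])  # 1672
```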
_____no_output_____
MIT
nanowrimoPosts.ipynb
wcmckee/wcmckee
Interpolating
=============
Interpolate one mesh's point/cell arrays onto another mesh's nodes using a Gaussian kernel.
# sphinx_gallery_thumbnail_number = 4
import pyvista as pv
from pyvista import examples
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Simple Surface Interpolation
============================
Resample the points' arrays onto a surface.
# Download sample data
surface = examples.download_saddle_surface()
points = examples.download_sparse_points()

p = pv.Plotter()
p.add_mesh(points, point_size=30.0, render_points_as_spheres=True)
p.add_mesh(surface)
p.show()
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Run the interpolation
interpolated = surface.interpolate(points, radius=12.0)

p = pv.Plotter()
p.add_mesh(points, point_size=30.0, render_points_as_spheres=True)
p.add_mesh(interpolated, scalars="val")
p.show()
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Complex Interpolation
=====================
In this example, we will interpolate sparse points in 3D space into a volume. These data are from temperature probes in the subsurface, and the goal is to create an approximate 3D model of the temperature field in the subsurface. This approach is great for back-of-the-hand estimations but pales in comparison to kriging.
# Download the sparse data
probes = examples.download_thermal_probes()
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Create the interpolation grid around the sparse data
grid = pv.UniformGrid()
grid.origin = (329700, 4252600, -2700)
grid.spacing = (250, 250, 50)
grid.dimensions = (60, 75, 100)

dargs = dict(cmap="coolwarm", clim=[0, 300], scalars="temperature (C)")
cpos = [(364280.5723737897, 4285326.164400684, 14093.431895014139),
        (337748.7217949739, 4261154.45054595, -637.1092549935128),
        (-0.29629216102673206, -0.23840196609932093, 0.9248651025279784)]

p = pv.Plotter()
p.add_mesh(grid.outline(), color='k')
p.add_mesh(probes, render_points_as_spheres=True, **dargs)
p.show(cpos=cpos)
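As a sanity check, the origin, spacing, and dimensions above fully determine the grid's point count and physical extent:

```python
# Sanity check: the grid parameters fix the point count and the far corner.
origin = (329700, 4252600, -2700)
spacing = (250, 250, 50)
dims = (60, 75, 100)

n_points = dims[0] * dims[1] * dims[2]
far_corner = tuple(o + (d - 1) * s for o, s, d in zip(origin, spacing, dims))
print(n_points)     # 450000
print(far_corner)   # (344450, 4271100, 2250)
```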
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Run an interpolation
interp = grid.interpolate(probes, radius=15000, sharpness=10, strategy='mask_points')
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Visualize the results
vol_opac = [0, 0, .2, 0.2, 0.5, 0.5]

p = pv.Plotter(shape=(1, 2), window_size=[1024*3, 768*2])
p.enable_depth_peeling()
p.add_volume(interp, opacity=vol_opac, **dargs)
p.add_mesh(probes, render_points_as_spheres=True, point_size=10, **dargs)
p.subplot(0, 1)
p.add_mesh(interp.contour(5), opacity=0.5, **dargs)
p.add_mesh(probes, render_points_as_spheres=True, point_size=10, **dargs)
p.link_views()
p.show(cpos=cpos)
_____no_output_____
MIT
locale/examples/01-filter/interpolate.ipynb
tkoyama010/pyvista-doc-translations
Document embeddings in BigQuery

This notebook shows how to use a pre-trained embedding as a vector representation of a natural-language text column. Given this embedding, we can use it in machine learning models.

Embedding model for documents

We're going to use a model that has been pretrained on Google News. Here's an example of how it works in Python. We will use it directly in BigQuery, however.
import tensorflow as tf
import tensorflow_hub as tfhub

model = tf.keras.Sequential()
model.add(tfhub.KerasLayer("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1",
                           output_shape=[20], input_shape=[], dtype=tf.string))
model.summary()
model.predict(["""
Long years ago, we made a tryst with destiny; and now the time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially. At the stroke of the midnight hour, when the world sleeps, India will awake to life and freedom. A moment comes, which comes but rarely in history, when we step out from the old to the new -- when an age ends, and when the soul of a nation, long suppressed, finds utterance.
"""])
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
keras_layer_3 (KerasLayer)   (None, 20)                400020
=================================================================
Total params: 400,020
Trainable params: 0
Non-trainable params: 400,020
_________________________________________________________________
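The reported parameter count is consistent with a 20-dimensional embedding over a table of 20,001 rows; reading that as 20,000 vocabulary tokens plus one out-of-vocabulary bucket is an assumption, not something the summary states. A quick arithmetic check:

```python
# Back out the implied embedding-table size from the reported parameter count.
embedding_dim = 20
total_params = 400_020  # from model.summary() above
vocab_size = total_params // embedding_dim
print(vocab_size)  # 20001
```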
Apache-2.0
02_data_representation/text_embeddings.ipynb
modoai/ml-design-patterns
Loading model into BigQuery

The Swivel model above is already available in SavedModel format. But we need it on Google Cloud Storage before we can load it into BigQuery.
%%bash
BUCKET=ai-analytics-solutions-kfpdemo  # CHANGE AS NEEDED
rm -rf tmp
mkdir tmp
FILE=swivel.tar.gz
# Quote the URL so the shell does not interpret the '?'
wget --quiet -O tmp/swivel.tar.gz 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1?tf-hub-format=compressed'
cd tmp
tar xvfz swivel.tar.gz
cd ..
mv tmp swivel
gsutil -m cp -R swivel gs://${BUCKET}/swivel
rm -rf swivel
echo "Model artifacts are now at gs://${BUCKET}/swivel/*"
assets/
assets/tokens.txt
saved_model.pb
variables/
variables/variables.data-00000-of-00001
variables/variables.index
Model artifacts are now at gs://ai-analytics-solutions-kfpdemo/swivel/*
Apache-2.0
02_data_representation/text_embeddings.ipynb
modoai/ml-design-patterns
Let's load the model into a BigQuery dataset named advdata (create it if necessary)
%%bigquery
CREATE OR REPLACE MODEL advdata.swivel_text_embed
OPTIONS(model_type='tensorflow',
        model_path='gs://ai-analytics-solutions-kfpdemo/swivel/*')
_____no_output_____
Apache-2.0
02_data_representation/text_embeddings.ipynb
modoai/ml-design-patterns
From the BigQuery web console, click on the "Schema" tab for the newly loaded model. We see that the input is called `sentences` and the output is called `output_0`:
%%bigquery
SELECT output_0 FROM
ML.PREDICT(MODEL advdata.swivel_text_embed, (
  SELECT "Long years ago, we made a tryst with destiny; and now the time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially." AS sentences
))
_____no_output_____
Apache-2.0
02_data_representation/text_embeddings.ipynb
modoai/ml-design-patterns
Create lookup table

Let's create a lookup table of embeddings. We'll use the comments field of a storm-reports table from NOAA. This is an example of the Feature Store design pattern.
%%bigquery
CREATE OR REPLACE TABLE advdata.comments_embedding AS
SELECT
  output_0 AS comments_embedding,
  comments
FROM ML.PREDICT(MODEL advdata.swivel_text_embed, (
  SELECT comments, LOWER(comments) AS sentences
  FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
))
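The point of such a lookup table is nearest-neighbour retrieval: to find the stored comment closest in meaning to a query, compare embeddings by cosine similarity. A minimal NumPy sketch with toy vectors (not real Swivel embeddings, and outside the BigQuery flow):

```python
import numpy as np

def most_similar(query, table):
    """Return the row index in `table` with the highest cosine similarity to `query`."""
    q = query / np.linalg.norm(query)
    t = table / np.linalg.norm(table, axis=1, keepdims=True)
    return int(np.argmax(t @ q))

# Toy embedding table: three 2-d "document" vectors
table = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.7, 0.7]])
print(most_similar(np.array([0.9, 0.1]), table))  # 0
```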
_____no_output_____
Apache-2.0
02_data_representation/text_embeddings.ipynb
modoai/ml-design-patterns
Pumpkin Pricing

Load up required libraries and dataset. Convert the data to a dataframe containing a subset of the data:
- Only get pumpkins priced by the bushel
- Convert the date to a month
- Calculate the price to be an average of high and low prices
- Convert the price to reflect the pricing by bushel quantity
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.linear_model import LogisticRegression
import sklearn
import numpy as np
import calendar
import seaborn as sns

pumpkins = pd.read_csv('../data/US-pumpkins.csv')
pumpkins.head()

# Keep only pumpkins priced by the bushel
pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]

new_columns = ['Package', 'Variety', 'City Name', 'Month', 'Low Price', 'High Price', 'Date', 'City Num', 'Variety Num']
pumpkins = pumpkins.drop([c for c in pumpkins.columns if c not in new_columns], axis=1)

price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
month = pd.DatetimeIndex(pumpkins['Date']).month

new_pumpkins = pd.DataFrame({'Month': month, 'Variety': pumpkins['Variety'], 'City': pumpkins['City Name'],
                             'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],
                             'High Price': pumpkins['High Price'], 'Price': price})

# Normalize the price to a per-bushel basis
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price / 1.1
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price * 2
new_pumpkins.head()
_____no_output_____
MIT
2-Regression/3-Linear/notebook.ipynb
GDaglio/ML-For-Beginners
A basic scatterplot reminds us that we only have month data from August through December. We probably need more data to be able to draw conclusions in a linear fashion.
new_pumpkins["Month_str"] = new_pumpkins['Month'].apply(lambda x: calendar.month_abbr[x])
plt.scatter('Month_str', 'Price', data=new_pumpkins)
plt.scatter("City", "Price", data=new_pumpkins)

new_pumpkins.iloc[:, 0:-1] = new_pumpkins.iloc[:, 0:-1].apply(LabelEncoder().fit_transform)
new_pumpkins.head(10)

print(new_pumpkins["City"].corr(new_pumpkins["Price"]))
print(new_pumpkins["Package"].corr(new_pumpkins["Price"]))

new_pumpkins.dropna(inplace=True)
new_columns = ["Package", "Price"]
lil_pumpkins = new_pumpkins.drop([c for c in new_pumpkins.columns if c not in new_columns], axis="columns")
X = lil_pumpkins.values[:, :1]
y = lil_pumpkins.values[:, 1:2]
X

from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
lil_reg = LinearRegression()
lil_reg.fit(X_train, y_train)
pred = lil_reg.predict(X_test)

# score() returns R^2 for a regressor; use a distinct name so we don't
# shadow sklearn's accuracy_score function imported above.
r2 = lil_reg.score(X_train, y_train)
print(f"Model R^2: {r2}")

plt.scatter(X_test, y_test, color="black")
plt.plot(X_test, pred, color="blue", linewidth=3)
plt.xlabel("Package")
plt.ylabel("Price")
lil_reg.predict([[2.75]])

new_columns = ['Variety', 'Package', 'City', 'Month', 'Price']
poly_pumpkins = new_pumpkins.drop([c for c in new_pumpkins.columns if c not in new_columns], axis="columns")
corr = poly_pumpkins.corr()
corr.style.background_gradient(cmap="coolwarm")

X = poly_pumpkins.iloc[:, 3:4].values
y = poly_pumpkins.iloc[:, 4:5].values

from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(PolynomialFeatures(4), LinearRegression())
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)

df = pd.DataFrame({"x": X_test[:, 0], "y": y_pred[:, 0]})
df.sort_values(by="x", inplace=True)
points = pd.DataFrame(df).to_numpy()  # call pd.DataFrame to create a new df

plt.plot(points[:, 0], points[:, 1], color="blue", linewidth=3)
plt.xlabel("Package")
plt.ylabel("Price")
plt.scatter(X, y, color="black")

r2 = pipeline.score(X_train, y_train)
r2
pipeline.predict([[2.75]])
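For a single feature, the PolynomialFeatures(4) + LinearRegression pipeline is equivalent to fitting a degree-4 polynomial, which `np.polyfit` does directly. A self-contained sketch on synthetic data (not the pumpkin dataset):

```python
import numpy as np

# Fit a degree-4 polynomial to noisy quadratic data.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x**2 - 3 * x + 1 + rng.normal(0, 0.1, x.size)

coeffs = np.polyfit(x, y, 4)       # highest-degree coefficient first
y_hat = np.polyval(coeffs, x)
print(np.allclose(y, y_hat, atol=1.0))  # True: the fit tracks the data closely
```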
_____no_output_____
MIT
2-Regression/3-Linear/notebook.ipynb
GDaglio/ML-For-Beginners
Lecture 1-4 - Binary classification
pumpkins = pd.read_csv('../data/US-pumpkins.csv')

new_columns = ['Color', 'Origin', 'Item Size', 'Variety', 'City Name', 'Package']
new_pumpkins = pumpkins[new_columns]
new_pumpkins.dropna(inplace=True)
new_pumpkins = new_pumpkins.apply(LabelEncoder().fit_transform)
new_pumpkins

g = sns.PairGrid(new_pumpkins)
g.map(sns.scatterplot)
sns.swarmplot(x="Color", y="Item Size", data=new_pumpkins)
sns.catplot(x="Color", y="Item Size", kind="violin", data=new_pumpkins)

Selected_features = ['Origin', 'Item Size', 'Variety', 'City Name', 'Package']
X = new_pumpkins[Selected_features]
y = new_pumpkins["Color"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

print(classification_report(y_test, predictions))
# print(f"Predicted labels: {predictions}")
print(f"Accuracy: {sklearn.metrics.accuracy_score(y_test, predictions)}")

from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, predictions)

y_scores = model.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:, 1])
# Pass data as keyword arguments (positional x/y are deprecated in newer seaborn)
sns.lineplot(x=[0, 1], y=[0, 1])
sns.lineplot(x=fpr, y=tpr)
auc = roc_auc_score(y_test, y_scores[:, 1])
print(auc)
0.6976998904709748
MIT
2-Regression/3-Linear/notebook.ipynb
GDaglio/ML-For-Beginners
prepared by Abuzer Yakaryilmaz (QLatvia)
updated by Melis Pahalı | December 5, 2019
updated by Özlem Salehi | September 17, 2020

This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros.

$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $

Solutions for Quantum Teleportation

Task 1

Calculate the new quantum state after this CNOT operator.
Solution

The state before CNOT is $ \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{100} + b \ket{111} \big) $. CNOT(first_qubit, second_qubit) is applied: if the value of the first qubit is 1, then the value of the second qubit is flipped. Thus, the new quantum state after this CNOT is

$$ \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{110} + b \ket{101} \big). $$

Task 2

Calculate the new quantum state after this Hadamard operator. Verify that the resulting quantum state can be written as follows:

$$ \frac{1}{2} \ket{00} \big( a\ket{0}+b\ket{1} \big) + \frac{1}{2} \ket{01} \big( a\ket{1}+b\ket{0} \big) + \frac{1}{2} \ket{10} \big( a\ket{0}-b\ket{1} \big) + \frac{1}{2} \ket{11} \big( a\ket{1}-b\ket{0} \big) .$$

Solution

The state before Hadamard is $ \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{110} + b \ket{101} \big). $ The effect of Hadamard on the first qubit is given below:

$ H \ket{0yz} \rightarrow \sqrttwo \ket{0yz} + \sqrttwo \ket{1yz} $

$ H \ket{1yz} \rightarrow \sqrttwo \ket{0yz} - \sqrttwo \ket{1yz} $

For each triple $ \ket{xyz} $ in the quantum state, we apply this transformation:

$ \frac{1}{2} \big( a\ket{000} + a\ket{100} \big) + \frac{1}{2} \big( a\ket{011} + a\ket{111} \big) + \frac{1}{2} \big( b\ket{010} - b\ket{110} \big) + \frac{1}{2} \big( b\ket{001} - b\ket{101} \big) .$

We can rearrange the summation so that we can separate Asja's qubits from Balvis' qubit:

$ \frac{1}{2} \big( a\ket{000}+b\ket{001} \big) + \frac{1}{2} \big( a\ket{011}+b\ket{010} \big) + \frac{1}{2} \big( a\ket{100} - b\ket{101} \big) + \frac{1}{2} \big( a\ket{111}- b\ket{110} \big) $.

This is equivalent to

$$ \frac{1}{2} \ket{00} \big( a\ket{0}+b\ket{1} \big) + \frac{1}{2} \ket{01} \big( a\ket{1}+b\ket{0} \big) + \frac{1}{2} \ket{10} \big( a\ket{0}-b\ket{1} \big) + \frac{1}{2} \ket{11} \big( a\ket{1}-b\ket{0} \big) .$$

Task 3

Asja sends the measurement outcomes to Balvis by using two classical bits: $ x $ and $ y $.
For each $ (x,y) $ pair, determine the quantum operator(s) that Balvis can apply to obtain $ \ket{v} = a\ket{0}+b\ket{1} $ exactly.

Solution

Measurement outcome "00": The state of Balvis' qubit is $ a\ket{0}+b\ket{1} $. Balvis does not need to apply any extra operation.

Measurement outcome "01": The state of Balvis' qubit is $ a\ket{1}+b\ket{0} $. If Balvis applies the NOT operator, then the state becomes $ a\ket{0}+b\ket{1} $.

Measurement outcome "10": The state of Balvis' qubit is $ a\ket{0}-b\ket{1} $. If Balvis applies the Z operator, then the state becomes $ a\ket{0}+b\ket{1} $.

Measurement outcome "11": The state of Balvis' qubit is $ a\ket{1}-b\ket{0} $. If Balvis applies the NOT operator and then the Z operator, then the state becomes $ a\ket{0}+b\ket{1} $.

Task 4

Create a quantum circuit with three qubits and two classical bits. Assume that Asja has the first two qubits and Balvis has the third qubit. Implement the protocol given above until Balvis makes the measurement.

- Create entanglement between Asja's second qubit and Balvis' qubit.
- The state of Asja's first qubit can be initialized to a randomly picked angle.
- Asja applies CNOT and Hadamard operators to her qubits.
- Asja measures her own qubits and the results are stored in the classical registers. At this point, read the state vector of the circuit by using "statevector_simulator". When a circuit having a measurement is simulated by "statevector_simulator", the simulator picks one of the outcomes, and so we see one of the states after the measurement.
- Verify that the state of Balvis' qubit is one of $ \ket{v_{00}}$, $ \ket{v_{01}}$, $ \ket{v_{10}}$, and $ \ket{v_{11}}$.

Follow the Qiskit order. That is, let qreg[2] be Asja's first qubit, qreg[1] be Asja's second qubit, and let qreg[0] be Balvis' qubit.

Solution
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer
from random import randrange
from math import sin, cos, pi

# We start with 3 quantum registers
# qreg[2]: Asja's first qubit - qubit to be teleported
# qreg[1]: Asja's second qubit
# qreg[0]: Balvis' qubit
qreg = QuantumRegister(3)
creg = ClassicalRegister(2)  # a classical register with 2 bits is enough
qcir = QuantumCircuit(qreg, creg)

# Generation of the entangled state:
# Asja's second qubit is entangled with Balvis' qubit.
qcir.h(qreg[1])
qcir.cx(qreg[1], qreg[0])
qcir.barrier()

# We create a random qubit to teleport by picking a random angle.
d = randrange(360)
r = 2 * pi * d / 360
print("Picked angle is " + str(d) + " degrees, " + str(round(r, 2)) + " radians.")

# The amplitudes of the angle.
x = cos(r)
y = sin(r)
print("cos component of the angle: " + str(round(x, 2)) + ", sin component of the angle: " + str(round(y, 2)))
print("So the state to be teleported is " + str(round(x, 2)) + "|0>+" + str(round(y, 2)) + "|1>.")

# Asja's qubit to be teleported:
# generate a random qubit by rotating the quantum register by the picked angle.
qcir.ry(2 * r, qreg[2])
qcir.barrier()

# CNOT operator by Asja, where the first qubit is the control and the second qubit is the target
qcir.cx(qreg[2], qreg[1])
qcir.barrier()

# Hadamard operator by Asja on her first qubit
qcir.h(qreg[2])
qcir.barrier()

# Measurement by Asja, stored in classical registers
qcir.measure(qreg[1], creg[0])
qcir.measure(qreg[2], creg[1])
print()

result = execute(qcir, Aer.get_backend('statevector_simulator'), optimization_level=0).result()
print("When you use statevector_simulator, one of the possible outcomes is picked randomly. Classical registers contain:")
print(result.get_counts())
print()
print("The final statevector:")
v = result.get_statevector()
for i in range(len(v)):
    print(v[i].real)
print()
qcir.draw(output='mpl')
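The state evolution worked out in Tasks 1 and 2 can also be cross-checked with plain NumPy, independently of Qiskit. The sketch below applies Task 1's CNOT (leftmost qubit as control, middle qubit as target) to the three-qubit state, using arbitrary example amplitudes:

```python
import numpy as np

def ket(bits):
    """Basis state |q2 q1 q0> of a 3-qubit register as a length-8 vector."""
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

a, b = 0.6, 0.8  # arbitrary example amplitudes with a^2 + b^2 = 1
state = (a * ket('000') + a * ket('011') + b * ket('100') + b * ket('111')) / np.sqrt(2)

# CNOT with the leftmost qubit as control and the middle qubit as target:
# a permutation that flips bit 1 whenever bit 2 is set.
CNOT = np.zeros((8, 8))
for i in range(8):
    j = i ^ 0b010 if i & 0b100 else i
    CNOT[j, i] = 1.0

after = CNOT @ state
expected = (a * ket('000') + a * ket('011') + b * ket('110') + b * ket('101')) / np.sqrt(2)
print(np.allclose(after, expected))  # True
```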
_____no_output_____
Apache-2.0
bronze/B54_Quantum_Teleportation_Solutions.ipynb
ozlemsalehi/bronze-boun
Task 5

Implement the protocol above by including the post-processing part done by Balvis, i.e., the measurement results by Asja are sent to Balvis and then he may apply $ X $ or $ Z $ gates depending on the measurement results.

We use classically controlled quantum operators. Since we do not make a measurement on $ q[2] $, we define only 2 classical bits, each of which can also be defined separately.

```python
q = QuantumRegister(3)
c2 = ClassicalRegister(1, 'c2')
c1 = ClassicalRegister(1, 'c1')
qc = QuantumCircuit(q, c1, c2)
...
qc.measure(q[1], c1)
...
qc.x(q[0]).c_if(c1, 1)  # x-gate is applied to q[0] if the classical bit c1 is equal to 1
```

Read the state vector and verify that Balvis' state is $ \myvector{a \\ b} $ after the post-processing.

Solution

Classically controlled recovery operations are added as follows. Below, the state vector is used to confirm that quantum teleportation is completed.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer
from random import randrange
from math import sin, cos, pi

# We start with 3 quantum registers
# qreg[2]: Asja's first qubit - qubit to be teleported
# qreg[1]: Asja's second qubit
# qreg[0]: Balvis' qubit
qreg = QuantumRegister(3)
c1 = ClassicalRegister(1)
c2 = ClassicalRegister(1)
qcir = QuantumCircuit(qreg, c1, c2)

# Generation of the entangled state:
# Asja's second qubit is entangled with Balvis' qubit.
qcir.h(qreg[1])
qcir.cx(qreg[1], qreg[0])
qcir.barrier()

# We create a random qubit to teleport by picking a random angle.
d = randrange(360)
r = 2 * pi * d / 360
print("Picked angle is " + str(d) + " degrees, " + str(round(r, 2)) + " radians.")

# The amplitudes of the angle.
x = cos(r)
y = sin(r)
print("cos component of the angle: " + str(round(x, 2)) + ", sin component of the angle: " + str(round(y, 2)))
print("So the state to be teleported is " + str(round(x, 2)) + "|0>+" + str(round(y, 2)) + "|1>.")

# Asja's qubit to be teleported:
# generate a random qubit by rotating the quantum register by the picked angle.
qcir.ry(2 * r, qreg[2])
qcir.barrier()

# CNOT operator by Asja, where the first qubit is the control and the second qubit is the target
qcir.cx(qreg[2], qreg[1])
qcir.barrier()

# Hadamard operator by Asja on her first qubit
qcir.h(qreg[2])
qcir.barrier()

# Measurement by Asja, stored in classical registers
qcir.measure(qreg[1], c1)
qcir.measure(qreg[2], c2)
print()

# Post-processing by Balvis
qcir.x(qreg[0]).c_if(c1, 1)
qcir.z(qreg[0]).c_if(c2, 1)

result2 = execute(qcir, Aer.get_backend('statevector_simulator'), optimization_level=0).result()
print("When you use statevector_simulator, one of the possible outcomes is picked randomly. Classical registers contain:")
print(result2.get_counts())
print()
print("The final statevector:")
v = result2.get_statevector()
for i in range(len(v)):
    print(v[i].real)
print()
qcir.draw(output='mpl')
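The recovery rules from Task 3, implemented above with `c_if`, can be checked with plain linear algebra. The sketch below uses arbitrary example amplitudes and verifies that each correction maps Balvis' post-measurement state back to $a\ket{0}+b\ket{1}$:

```python
import numpy as np

a, b = 0.6, 0.8  # arbitrary real example amplitudes with a^2 + b^2 = 1
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
target = np.array([a, b])  # the teleported state a|0> + b|1>

# Balvis' qubit after each of Asja's measurement outcomes (from Task 2's expansion)
states = {
    '00': np.array([a, b]),    # no correction needed
    '01': np.array([b, a]),    # apply X
    '10': np.array([a, -b]),   # apply Z
    '11': np.array([-b, a]),   # apply X, then Z
}
corrected = {
    '00': states['00'],
    '01': X @ states['01'],
    '10': Z @ states['10'],
    '11': Z @ (X @ states['11']),
}
print(all(np.allclose(v, target) for v in corrected.values()))  # True
```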
_____no_output_____
Apache-2.0
bronze/B54_Quantum_Teleportation_Solutions.ipynb
ozlemsalehi/bronze-boun
DLISIO in a Nutshell

Importing
%matplotlib inline
import os
import pandas as pd
import dlisio
import matplotlib.pyplot as plt
import numpy as np
import numpy.lib.recfunctions as rfn
import hvplot.pandas
import holoviews as hv
from holoviews import opts, streams
from holoviews.plotting.links import DataLink

hv.extension('bokeh', logo=None)
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
You can work with a single file using the cell below, or, by adding an additional for loop, you can work through a list of files. Another option is to use os.walk to get all .dlis files in a parent folder:

```python
for (root, dirs, files) in os.walk(folderpath):
    for f in files:
        filepath = os.path.join(root, f)
        if filepath.endswith('.dlis'):
            print(filepath)
```

But for this example, we will work with a single .dlis file specified in the cell below. Note that some .dlis file formats are not yet supported by dlisio, so it is good to catch them in a try/except block if you are reading files en masse.

We will load a .dlis file from the open-source Volve dataset available here: https://data.equinor.com/dataset/Volve
filepath = r""
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Query for a specific curve

You can quickly use regex to find certain curves in a file (helpful if you are scanning a lot of files for certain curves).
with dlisio.dlis.load(filepath) as file:
    for d in file:
        depth_channels = d.find('CHANNEL', 'DEPT')
        for channel in depth_channels:
            print(channel.name)
            print(channel.curves())
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Examining internal files and frames

Keep in mind that dlis files can contain multiple files and multiple frames. You can quickly get a numpy array of the curves in each frame below.
with dlisio.dlis.load(filepath) as file:
    print(file.describe())

with dlisio.dlis.load(filepath) as file:
    for d in file:
        for fram in d.frames:
            print(d.channels)
            print(fram.curves())
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Metadata including Origin information (well name and header)
with dlisio.dlis.load(filepath) as file:
    for d in file:
        print(d.describe())
        for fram in d.frames:
            print(fram.describe())
        for channel in d.channels:
            print(channel.describe())

with dlisio.dlis.load(filepath) as file:
    for d in file:
        for origin in d.origins:
            print(origin.describe())
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Reading a full dlis file

But most likely we want a single data frame of every curve, no matter which frame it came from. So we write a bit more code to look through each frame, then look at each channel and get the curve name and unit information along with it. We will also save the information about which internal file and which frame each curve resides in.
curves_L = []
curves_name = []
longs = []
unit = []
files_L = []
files_num = []
frames = []
frames_num = []

with dlisio.dlis.load(filepath) as file:
    for d in file:
        files_L.append(d)
        frame_count = 0
        for fram in d.frames:
            if frame_count == 0:
                frames.append(fram)
                frame_count = frame_count + 1
            for channel in d.channels:
                curves_name.append(channel.name)
                longs.append(channel.long_name)
                unit.append(channel.units)
                files_num.append(len(files_L))
                frames_num.append(len(frames))
                curves = channel.curves()
                curves_L.append(curves)

curve_index = pd.DataFrame({'Curve': curves_name,
                            'Long': longs,
                            'Unit': unit,
                            'Internal_File': files_num,
                            'Frame_Number': frames_num})
curve_index
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Creating a Pandas dataframe for the entire .dlis file

We have to be careful creating a dataframe for the whole .dlis file, as often there are some curves that represent multiple values (numpy arrays of list values). You can use something like df = pd.DataFrame(data=curves_L, index=curves_name).T to view the full dlis file with lists as some of the curve values. Instead, we will use the code below to process each curve's 2D numpy array, stacking it if the curve contains multiple values per sample. Then we convert each curve into its own dataframe (uniquifying the column names by adding _1, _2, _3, etc.). Then, to preserve the order with the curve index above, we append the data frames together in order to build the final full dataframe.
def df_column_uniquify(df):
    """Make duplicate column names unique by appending _1, _2, ..."""
    df_columns = df.columns
    new_columns = []
    for item in df_columns:
        counter = 0
        newitem = item
        while newitem in new_columns:
            counter += 1
            newitem = "{}_{}".format(item, counter)
        new_columns.append(newitem)
    df.columns = new_columns
    return df

curve_df = pd.DataFrame()
name_index = 0
for c in curves_L:
    name = curves_name[name_index]
    try:
        # Multi-valued curve: one column per sample value
        num_col = c.shape[1]
        col_name = [name] * num_col
        df = pd.DataFrame(data=c, columns=col_name)
        name_index = name_index + 1
        df = df_column_uniquify(df)
        curve_df = pd.concat([curve_df, df], axis=1)
    except Exception:
        # Single-valued curve (1D array)
        num_col = 0
        df = pd.DataFrame(data=c, columns=[name])
        name_index = name_index + 1
        curve_df = pd.concat([curve_df, df], axis=1)
        continue
curve_df.head()

## For a simpler dlis file with a single logical file, a single frame,
## and single data values in each channel:
with dlisio.dlis.load(filepath) as file:
    logical_count = 0
    for d in file:
        frame_count = 0
        for fram in d.frames:
            if frame_count == 0 and logical_count == 0:
                curves = fram.curves()
                curve_df = pd.DataFrame(curves, index=curves[fram.index])
curve_df.head()
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Then we can set the index and start making some plots.
curve_df = df_column_uniquify(curve_df)

# TDEP is recorded in 0.1-inch increments: 0.1 inch / 12 inches per foot
curve_df['DEPTH_Calc_ft'] = curve_df.loc[:, 'TDEP'] * 0.0083333
curve_df['DEPTH_ft'] = curve_df['DEPTH_Calc_ft']
curve_df = curve_df.set_index("DEPTH_Calc_ft")
curve_df.index.names = [None]
curve_df = curve_df.replace(-999.25, np.nan)

min_val = curve_df['DEPTH_ft'].min()
max_val = curve_df['DEPTH_ft'].max()
curve_list = list(curve_df.columns)
curve_list.remove('DEPTH_ft')
curve_df.head()

def curve_plot(log, df, depthname):
    aplot = df.hvplot(x=depthname, y=log, invert=True, flip_yaxis=True, shared_axes=True,
                      height=600, width=300).opts(fontsize={'labels': 16, 'xticks': 14, 'yticks': 14})
    return aplot

plotlist = [curve_plot(x, df=curve_df, depthname='DEPTH_ft') for x in curve_list]
well_section = hv.Layout(plotlist).cols(len(curve_list))
well_section
_____no_output_____
MIT
DLISIO_Simple_Reading.ipynb
dcslagel/DLISIO_Notebooks
Example 5: Quantum-to-quantum transfer learning

This is an example of a continuous-variable (CV) quantum network for state classification, developed according to the *quantum-to-quantum transfer learning* scheme presented in [1].

Introduction

In this proof-of-principle demonstration we consider two distinct toy datasets of Gaussian and non-Gaussian states. Such datasets can be generated according to the following simple prescriptions:

**Dataset A**:
- Class 0 (Gaussian): random Gaussian layer applied to the vacuum.
- Class 1 (non-Gaussian): random non-Gaussian layer applied to the vacuum.

**Dataset B**:
- Class 0 (Gaussian): random Gaussian layer applied to a coherent state with amplitude $\alpha=1$.
- Class 1 (non-Gaussian): random Gaussian layer applied to a single-photon Fock state $|1\rangle$.

**Variational Circuit A**: Our starting point is a single-mode variational circuit [2] (a non-Gaussian layer), pre-trained on _Dataset A_. We assume that after the circuit is applied, the output mode is measured with an _on/off_ detector. By averaging over many shots, one can estimate the vacuum probability:

$$p_0 = | \langle \psi_{\rm out} |0 \rangle|^2. $$

We use _Dataset A_ and train the circuit to rotate Gaussian states towards the vacuum and non-Gaussian states away from the vacuum. For the final classification we use the simple decision rule:

$$p_0 \ge 1/2 \longrightarrow {\rm Class=0,} \\ p_0 < 1/2 \longrightarrow {\rm Class=1.}$$

**Variational Circuit B**: Once _Circuit A_ has been optimized, we can use it as a pre-trained block applicable also to the different _Dataset B_.
In other words, we implement a _quantum-to-quantum_ transfer learning model:_Circuit B_ = _Circuit A_ (pre-trained) followed by a sequence of _variational layers_ (to be trained). Also in this case, after the application of _Circuit B_, we assume that the single mode is measured with an _on/off_ detector, and we apply a similar classification rule:$$p_0 \ge 0.5 \longrightarrow {\rm Class=1.} \\p_0 < 0.5 \longrightarrow {\rm Class=0.}$$The motivation for this transfer learning approach is that, even if _Circuit A_ is optimized on a different dataset, it can still act as a good pre-processing block also for _Dataset B_. Indeed, as we are going to show, the application of _Circuit A_ can significantly improve the training efficiency of _Circuit B_. General setup The main imported modules are: the `tensorflow` machine learning framework, the quantum CV software `strawberryfields` [3] and the python plotting library `matplotlib`. All modules should be correctly installed in the system before running this notebook.
# Plotting %matplotlib inline import matplotlib.pyplot as plt # TensorFlow import tensorflow as tf # Strawberryfields (simulation of CV quantum circuits) import strawberryfields as sf from strawberryfields.ops import Dgate, Kgate, Sgate, Rgate, Vgate, Fock, Ket # Other modules import numpy as np import time # System variables import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # avoid warning messages os.environ['OMP_NUM_THREADS'] = '1' # set number of threads. os.environ['CUDA_VISIBLE_DEVICES'] = '1' # select the GPU unit. # Path with pre-trained parameters weights_path = 'results/weights/'
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Setting of the main parameters of the network model and of the training process.
# Hilbert space cutoff cutoff = 15 # Normalization cutoff (must be equal or smaller than cutoff dimension) target_cutoff = 15 # Normalization weight norm_weight = 0 # Batch size batch_size = 8 # Number of batches (i.e. number training iterations) num_batches = 500 # Number of state generation layers g_depth = 1 # Number of pre-trained layers (for transfer learning) pre_depth = 1 # Number of state classification layers q_depth = 3 # Standard deviation of random state generation parameters rot_sd = np.math.pi * 2 dis_sd = 0 sq_sd = 0.5 non_lin_sd = 0.5 # this is used as fixed non-linear constant. # Standard deviation of initial trainable weights active_sd = 0.001 passive_sd = 0.001 # Magnitude limit for trainable active parameters clip = 1 # Learning rate lr = 0.01 # Random seeds tf.set_random_seed(0) rng_data = np.random.RandomState(1) # Reset TF graph tf.reset_default_graph()
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Variational circuits for state generation and classification Input states: _Dataset B_ The dataset is introduced by defining the corresponding random variational circuit that generates input Gaussian and non-Gaussian states.
# Placeholders for class labels batch_labels = tf.placeholder(dtype=tf.int64, shape = [batch_size]) batch_labels_fl = tf.to_float(batch_labels) # State generation parameters # Squeezing gate sq_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth]) # Rotation gates r1_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth]) r2_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth]) r3_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth]) # Explicit definitions of the ket tensors of |0> and |1> np_ket0, np_ket1 = np.zeros((2, batch_size, cutoff)) np_ket0[:,0] = 1.0 np_ket1[:,1] = 1.0 ket0 = tf.constant(np_ket0, dtype = tf.float32, shape = [batch_size, cutoff]) ket1 = tf.constant(np_ket1, dtype = tf.float32, shape = [batch_size, cutoff]) # Ket of the quantum states associated to the label: i.e. |batch_labels> ket_init = ket0 * (1.0 - tf.expand_dims(batch_labels_fl, 1)) + ket1 * tf.expand_dims(batch_labels_fl, 1) # State generation layer def layer_gen(i, qmode): # If label is 0 (Gaussian) prepare a coherent state with alpha=1 otherwise prepare fock |1> Ket(ket_init) | qmode Dgate((1.0 - batch_labels_fl) * 1.0, 0) | qmode # Random Gaussian operation (without displacement) Rgate(r1_gen[:, i]) | qmode Sgate(sq_gen[:, i], 0) | qmode Rgate(r2_gen[:, i]) | qmode return qmode
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Loading of pre-trained block (_Circuit A_)We assume that _Circuit A_ has already been pre-trained (e.g. by running a dedicated Python script) and that the associated optimal weights have been saved to a NumPy file. Here we first load these parameters and then define _Circuit A_ as a constant pre-processing block.
# Loading of pre-trained weights trained_params_npy = np.load('pre_trained/circuit_A.npy') if trained_params_npy.shape[1] < pre_depth: print("Error: circuit q_depth > trained q_depth.") raise SystemExit(0) # Convert numpy arrays to TF tensors trained_params = tf.constant(trained_params_npy) sq_pre = trained_params[0] d_pre = trained_params[1] r1_pre = trained_params[2] r2_pre = trained_params[3] r3_pre = trained_params[4] kappa_pre = trained_params[5] # Definition of the pre-trained Circuit A (single layer) def layer_pre(i, qmode): # Rotation gate Rgate(r1_pre[i]) | qmode # Squeezing gate Sgate(tf.clip_by_value(sq_pre[i], -clip, clip), 0) | qmode # Rotation gate Rgate(r2_pre[i]) | qmode # Displacement gate Dgate(tf.clip_by_value(d_pre[i], -clip, clip) , 0) | qmode # Rotation gate Rgate(r3_pre[i]) | qmode # Cubic gate Vgate(tf.clip_by_value(kappa_pre[i], -clip, clip) ) | qmode return qmode
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Addition of trainable layers (_Circuit B_)As discussed in the introduction, _Circuit B_ is obtained by adding some additional layers that we are going to train on _Dataset B_.
# Trainable variables with tf.name_scope('variables'): # Squeeze gate sq_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=active_sd)) # Displacement gate d_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=active_sd)) # Rotation gates r1_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=passive_sd)) r2_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=passive_sd)) r3_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=passive_sd)) # Kerr gate kappa_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=active_sd)) # 0-depth parameter (just to generate a gradient) x_var = tf.Variable(0.0) parameters = [sq_var, d_var, r1_var, r2_var, r3_var, kappa_var] # Definition of a single trainable variational layer def layer_var(i, qmode): Rgate(r1_var[i]) | qmode Sgate(tf.clip_by_value(sq_var[i], -clip, clip), 0) | qmode Rgate(r2_var[i]) | qmode Dgate(tf.clip_by_value(d_var[i], -clip, clip) , 0) | qmode Rgate(r3_var[i]) | qmode Vgate(tf.clip_by_value(kappa_var[i], -clip, clip) ) | qmode return qmode
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Symbolic evaluation of the full network We first instantiate a _StrawberryFields_ quantum simulator, tailored for simulating a single-mode quantum optical system. Then we symbolically evaluate a batch of output states.
prog = sf.Program(1) eng = sf.Engine('tf', backend_options={'cutoff_dim': cutoff, 'batch_size': batch_size}) # Circuit B with prog.context as q: # State generation network for k in range(g_depth): layer_gen(k, q[0]) # Pre-trained network (Circuit A) for k in range(pre_depth): layer_pre(k, q[0]) # State classification network for k in range(q_depth): layer_var(k, q[0]) # Special case q_depth==0 if q_depth == 0: Dgate(0.001, x_var ) | q[0] # almost identity operation just to generate a gradient. # Symbolic computation of the output state results = eng.run(prog, run_options={"eval": False}) out_state = results.state # Batch state norms out_norm = tf.to_float(out_state.trace()) # Batch mean energies mean_n = out_state.mean_photon(0)
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Loss function, accuracy and optimizer.As usual in machine learning, we need to define a loss function that we are going to minimize during the training phase.As discussed in the introduction, we assume that only the vacuum state probability `p_0` is measured. Ideally, `p_0` should be large for non-Gaussian states (_label 1_), while it should be small for Gaussian states (_label 0_). The circuit can be trained for this task by minimizing the _cross entropy_ loss function defined in the next cell.Moreover, if `norm_weight` is different from zero, a regularization term is also added to the full cost function in order to reduce quantum amplitudes beyond the target Hilbert space dimension `target_cutoff`.
# Batch vacuum probabilities p0 = out_state.fock_prob([0]) # Complementary probabilities q0 = 1.0 - p0 # Cross entropy loss function eps = 0.0000001 main_loss = tf.reduce_mean(-batch_labels_fl * tf.log(p0 + eps) - (1.0 - batch_labels_fl) * tf.log(q0 + eps)) # Decision function predictions = tf.sign(p0 - 0.5) * 0.5 + 0.5 # Accuracy between predictions and labels accuracy = tf.reduce_mean((predictions + batch_labels_fl - 1.0) ** 2) # Norm loss. This is monitored but not minimized. norm_loss = tf.reduce_mean((out_norm - 1.0) ** 2) # Cutoff loss regularization. This is monitored and minimized if norm_weight is nonzero. c_in = out_state.all_fock_probs() cut_probs = c_in[:, :target_cutoff] cut_norms = tf.reduce_sum(cut_probs, axis=1) cutoff_loss = tf.reduce_mean((cut_norms - 1.0) ** 2 ) # Full regularized loss function full_loss = main_loss + norm_weight * cutoff_loss # Optimization algorithm optim = tf.train.AdamOptimizer(learning_rate=lr) training = optim.minimize(full_loss)
_____no_output_____
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
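The cross-entropy loss and the 0.5-threshold decision rule computed symbolically above can be written out in plain Python for a single sample. This is an illustrative sketch only; the function names here are not part of the notebook:

```python
import math

def cross_entropy(p0, label, eps=1e-7):
    """-label*log(p0) - (1-label)*log(1-p0), with eps guarding against log(0)."""
    return -label * math.log(p0 + eps) - (1.0 - label) * math.log(1.0 - p0 + eps)

def predict(p0):
    """Mirrors tf.sign(p0 - 0.5) * 0.5 + 0.5 (up to the p0 == 0.5 edge case)."""
    return 1 if p0 > 0.5 else 0

# A label-1 state with high vacuum probability gives a small loss...
print(predict(0.9), cross_entropy(0.9, 1))
# ...while the same label with low vacuum probability gives a large loss.
print(predict(0.1), cross_entropy(0.1, 1))
```

Minimizing this loss therefore pushes `p0` towards 1 for label-1 states and towards 0 for label-0 states, which is exactly what the decision rule exploits.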
Training and testing Up to now we have only defined the symbolic graph of the quantum network without numerically evaluating it. Now, after initializing a _TensorFlow_ session, we can finally run the actual training and testing phases.
# Function generating a dictionary of random parameters for a batch of states. def random_dict(): param_dict = { # Labels (0 = Gaussian, 1 = non-Gaussian) batch_labels: rng_data.randint(2, size=batch_size), # Squeezing and rotation parameters sq_gen: rng_data.uniform(low=-sq_sd, high=sq_sd, size=[batch_size, g_depth]), r1_gen: rng_data.uniform(low=-rot_sd, high=rot_sd, size=[batch_size, g_depth]), r2_gen: rng_data.uniform(low=-rot_sd, high=rot_sd, size=[batch_size, g_depth]), r3_gen: rng_data.uniform(low=-rot_sd, high=rot_sd, size=[batch_size, g_depth]), } return param_dict # TensorFlow session with tf.Session() as session: session.run(tf.global_variables_initializer()) train_loss = 0.0 train_loss_sum = 0.0 train_acc = 0.0 train_acc_sum = 0.0 test_loss = 0.0 test_loss_sum = 0.0 test_acc = 0.0 test_acc_sum = 0.0 # ========================================================= # Training Phase # ========================================================= if q_depth > 0: for k in range(num_batches): rep_time = time.time() # Training step [_training, _full_loss, _accuracy, _norm_loss] = session.run([ training, full_loss, accuracy, norm_loss], feed_dict=random_dict()) train_loss_sum += _full_loss train_acc_sum += _accuracy train_loss = train_loss_sum / (k + 1) train_acc = train_acc_sum / (k + 1) # Training log if ((k + 1) % 100) == 0: print('Train batch: {:d}, Running loss: {:.4f}, Running acc {:.4f}, Norm loss {:.4f}, Batch time {:.4f}' .format(k + 1, train_loss, train_acc, _norm_loss, time.time() - rep_time)) # ========================================================= # Testing Phase # ========================================================= num_test_batches = min(num_batches, 1000) for i in range(num_test_batches): rep_time = time.time() # Evaluation step [_full_loss, _accuracy, _norm_loss, _cutoff_loss, _mean_n, _parameters] = session.run([full_loss, accuracy, norm_loss, cutoff_loss, mean_n, parameters], feed_dict=random_dict()) test_loss_sum += _full_loss test_acc_sum += 
_accuracy test_loss = test_loss_sum / (i + 1) test_acc = test_acc_sum / (i + 1) # Testing log if ((i + 1) % 100) == 0: print('Test batch: {:d}, Running loss: {:.4f}, Running acc {:.4f}, Norm loss {:.4f}, Batch time {:.4f}' .format(i + 1, test_loss, test_acc, _norm_loss, time.time() - rep_time)) # Compute mean photon number of the last batch of states mean_fock = np.mean(_mean_n) print('Training and testing phases completed.') print('RESULTS:') print('{:>11s}{:>11s}{:>11s}{:>11s}{:>11s}{:>11s}'.format('train_loss', 'train_acc', 'test_loss', 'test_acc', 'norm_loss', 'mean_n')) print('{:11f}{:11f}{:11f}{:11f}{:11f}{:11f}'.format(train_loss, train_acc, test_loss, test_acc, _norm_loss, mean_fock))
Train batch: 100, Running loss: 0.6885, Running acc 0.3700, Norm loss 0.0460, Batch time 0.0494 Train batch: 200, Running loss: 0.6673, Running acc 0.3750, Norm loss 0.0599, Batch time 0.0498 Train batch: 300, Running loss: 0.6575, Running acc 0.3825, Norm loss 0.0495, Batch time 0.0502 Train batch: 400, Running loss: 0.6463, Running acc 0.3975, Norm loss 0.1070, Batch time 0.0497 Train batch: 500, Running loss: 0.6113, Running acc 0.4765, Norm loss 0.1387, Batch time 0.0862 Test batch: 100, Running loss: 0.4438, Running acc 0.8762, Norm loss 0.0613, Batch time 0.0206 Test batch: 200, Running loss: 0.4376, Running acc 0.8825, Norm loss 0.0801, Batch time 0.0202 Test batch: 300, Running loss: 0.4345, Running acc 0.8808, Norm loss 0.0391, Batch time 0.0199 Test batch: 400, Running loss: 0.4359, Running acc 0.8775, Norm loss 0.0698, Batch time 0.0260 Test batch: 500, Running loss: 0.4354, Running acc 0.8755, Norm loss 0.1015, Batch time 0.0256 Training and testing phases completed. RESULTS: train_loss train_acc test_loss test_acc norm_loss mean_n 0.611272 0.476500 0.435440 0.875500 0.101467 5.336512
Apache-2.0
q2q_transfer_learning.ipynb
Jerry2001Qu/quantum-transfer-learning
Borehole lithology logs viewerInteractive view of borehole data used for [exploratory lithology analysis](https://github.com/csiro-hydrogeology/pyela)Powered by [Voila](https://github.com/QuantStack/voila), [ipysheet](https://github.com/QuantStack/ipysheet) and [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) Data The sample borehole data around Canberra, Australia is derived from the Australian Bureau of Meteorology [National Groundwater Information System](http://www.bom.gov.au/water/groundwater/ngis/index.shtml). You can check the licensing for these data; the short version is that use for demo and learning purposes is fine.
import os import sys import pandas as pd import numpy as np # from bqplot import Axis, Figure, Lines, LinearScale # from bqplot.interacts import IndexSelector # from ipyleaflet import basemaps, FullScreenControl, LayerGroup, Map, MeasureControl, Polyline, Marker, MarkerCluster, CircleMarker, WidgetControl # from ipywidgets import Button, HTML, HBox, VBox, Checkbox, FileUpload, Label, Output, IntSlider, Layout, Image, link from ipywidgets import Output, HTML from ipyleaflet import Map, Marker, MarkerCluster, basemaps import ipywidgets as widgets import ipysheet example_folder = "./examples" # classified_logs_filename = os.path.join(cbr_datadir_out,'classified_logs.pkl') # with open(classified_logs_filename, 'rb') as handle: # df = pickle.load(handle) # geoloc_filename = os.path.join(cbr_datadir_out,'geoloc.pkl') # with open(geoloc_filename, 'rb') as handle: # geoloc = pickle.load(handle) df = pd.read_csv(os.path.join(example_folder,'classified_logs.csv')) geoloc = pd.read_csv(os.path.join(example_folder,'geoloc.csv')) DEPTH_FROM_COL = 'FromDepth' DEPTH_TO_COL = 'ToDepth' TOP_ELEV_COL = 'TopElev' BOTTOM_ELEV_COL = 'BottomElev' LITHO_DESC_COL = 'Description' HYDRO_CODE_COL = 'HydroCode' HYDRO_ID_COL = 'HydroID' BORE_ID_COL = 'BoreID' # if we want to keep vboreholes that have more than one row x = df[HYDRO_ID_COL].values unique, counts = np.unique(x, return_counts=True) multiple_counts = unique[counts > 1] # len(multiple_counts), len(unique) keep = set(df[HYDRO_ID_COL].values) keep = set(multiple_counts) s = geoloc[HYDRO_ID_COL] geoloc = geoloc[s.isin(keep)] class GlobalThing: def __init__(self, bore_data, displayed_colnames = None): self.marker_info = dict() self.bore_data = bore_data if displayed_colnames is None: displayed_colnames = [BORE_ID_COL, DEPTH_FROM_COL, DEPTH_TO_COL, LITHO_DESC_COL] # 'Lithology_1', 'MajorLithCode']] self.displayed_colnames = displayed_colnames def add_marker_info(self, lat, lon, code): self.marker_info[(lat, lon)] = code def 
get_code(self, lat, lon): return self.marker_info[(lat, lon)] def data_for_hydroid(self, ident): df_sub = self.bore_data.loc[df[HYDRO_ID_COL] == ident] return df_sub[self.displayed_colnames] def register_geolocations(self, geoloc): for index, row in geoloc.iterrows(): self.add_marker_info(row.Latitude, row.Longitude, row.HydroID) globalthing = GlobalThing(df, displayed_colnames = [BORE_ID_COL, DEPTH_FROM_COL, DEPTH_TO_COL, LITHO_DESC_COL, 'Lithology_1']) globalthing.register_geolocations(geoloc) def plot_map(geoloc, click_handler): """ Plot the markers for each borehole, and register a custom click_handler """ mean_lat = geoloc.Latitude.mean() mean_lng = geoloc.Longitude.mean() # create the map m = Map(center=(mean_lat, mean_lng), zoom=12, basemap=basemaps.Stamen.Terrain) m.layout.height = '600px' # show trace markers = [] for index, row in geoloc.iterrows(): message = HTML() message.value = str(row.HydroID) message.placeholder = "" message.description = "HydroID" marker = Marker(location=(row.Latitude, row.Longitude)) marker.on_click(click_handler) marker.popup = message markers.append(marker) marker_cluster = MarkerCluster( markers=markers ) # not sure whether we could register once instead of each marker: # marker_cluster.on_click(click_handler) m.add_layer(marker_cluster); # m.add_control(FullScreenControl()) return m # If printing a data frame straight to an output widget def raw_print(out, ident): bore_data = globalthing.data_for_hydroid(ident) out.clear_output() with out: print(ident) print(bore_data) def click_handler_rawprint(**kwargs): blah = dict(**kwargs) xy = blah['coordinates'] ident = globalthing.get_code(xy[0], xy[1]) raw_print(out, ident) # to display using an ipysheet def mk_sheet(d): return ipysheet.pandas_loader.from_dataframe(d) def upate_display_df(ident): bore_data = globalthing.data_for_hydroid(ident) out.clear_output() with out: display(mk_sheet(bore_data)) def click_handler_ipysheet(**kwargs): blah = dict(**kwargs) xy = blah['coordinates'] 
ident = globalthing.get_code(xy[0], xy[1]) upate_display_df(ident) out = widgets.Output(layout={'border': '1px solid black'})
_____no_output_____
BSD-3-Clause
app.ipynb
csiro-hydrogeology/lithology-viewer
Note: it may take a minute or two for the display to first appear. Select a marker:
plot_map(geoloc, click_handler_ipysheet) # plot_map(geoloc, click_handler_rawprint)
_____no_output_____
BSD-3-Clause
app.ipynb
csiro-hydrogeology/lithology-viewer
Descriptive lithology:
out ## Appendix A : qgrid, but at best ended up with "Model not available". May not work yet with Jupyter lab 1.0.x # import qgrid # d = data_for_hydroid(10062775) # d # import ipywidgets as widgets # def build_qgrid(): # qgrid.set_grid_option('maxVisibleRows', 10) # col_opts = { # 'editable': False, # } # qgrid_widget = qgrid.show_grid(d, show_toolbar=False, column_options=col_opts) # qgrid_widget.layout = widgets.Layout(width='920px') # return qgrid_widget, qgrid # qgrid_widget, qgrid = build_qgrid() # display(qgrid_widget) # pitch_app = widgets.VBox(qgrid_widget) # display(pitch_app) # def click_handler(**kwargs): # blah = dict(**kwargs) # xy = blah['coordinates'] # ident = globalthing.get_code(xy[0], xy[1]) # bore_data = data_for_hydroid(ident) # grid.df = bore_data ## Appendix B: using striplog # from striplog import Striplog, Interval, Component, Legend, Decor # import matplotlib as mpl # lithologies = ['shale', 'clay','granite','soil','sand', 'porphyry','siltstone','gravel', ''] # lithology_color_names = ['lightslategrey', 'olive', 'dimgray', 'chocolate', 'gold', 'tomato', 'teal', 'lavender', 'black'] # lithology_colors = [mpl.colors.cnames[clr] for clr in lithology_color_names] # clrs = dict(zip(lithologies, lithology_colors)) # def mk_decor(lithology, component): # dcor = {'color': clrs[lithology], # 'component': component, # 'width': 2} # return Decor(dcor) # def create_striplog_itvs(d): # itvs = [] # dcrs = [] # for index, row in d.iterrows(): # litho = row.Lithology_1 # c = Component({'description':row.Description,'lithology': litho}) # decor = mk_decor(litho, c) # itvs.append(Interval(row.FromDepth, row.ToDepth, components=[c]) ) # dcrs.append(decor) # return itvs, dcrs # def click_handler(**kwargs): # blah = dict(**kwargs) # xy = blah['coordinates'] # ident = globalthing.get_code(xy[0], xy[1]) # bore_data = data_for_hydroid(ident) # itvs, dcrs = create_striplog_itvs(bore_data) # s = Striplog(itvs) # with out: # print(ident) # print(s.plot(legend = 
Legend(dcrs))) # def plot_striplog(bore_data, ax=None): # itvs, dcrs = create_striplog_itvs(bore_data) # s = Striplog(itvs) # s.plot(legend = Legend(dcrs), ax=ax) # def plot_evaluation_metrics(bore_data): # fig, ax = plt.subplots(figsize=(12, 3)) # # actual plotting # plot_striplog(bore_data, ax=ax) # # finalize # fig.suptitle("Evaluation metrics with cutoff\n", va='bottom') # plt.show() # plt.close(fig) # %matplotlib inline # from ipywidgets import interactive # import matplotlib.pyplot as plt # import numpy as np # def f(m, b): # plt.figure(2) # x = np.linspace(-10, 10, num=1000) # plt.plot(x, m * x + b) # plt.ylim(-5, 5) # plt.show() # interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5)) # output = interactive_plot.children[-1] # output.layout.height = '350px' # interactive_plot # def update_sheet(s, d): # print("before: %s"%(s.rows)) # s.rows = len(d) # for i in range(len(d.columns)): # s.cells[i].value = d[d.columns[i]].values
_____no_output_____
BSD-3-Clause
app.ipynb
csiro-hydrogeology/lithology-viewer
Series Inelastic Cantilever This notebook verifies the `elle.beam2dseries` element against an analysis run with the FEDEASLab `Inel2dFrm_wOneComp` element.
import anon import anon as ana import elle.beam2d import elle.solvers import elle.sections import anon.ops as anp
_____no_output_____
Apache-2.0
src/archive/main.ipynb
claudioperez/elle-0004
Model Definition
from elle.beam2dseries import no_3, no_4, no_5, no_6 from elle.sections import aisc L = 72.0 E = 29e3 fy = 60.0 Hi = 1.0e-6 Hk = 1e-9 #1.0e-9 sect = aisc.load('W14x426','A, I, Zx') Np = fy*sect['A'] Mp = fy*sect['Zx'] Hi = Hi * 6.* E*sect['I']/L * anp.ones((2,1)) Hk = Hk * 6.* E*sect['I']/L * anp.ones((2,1)) xyz = anp.array([[0.0, 0.0],[0.0, L]]) Mp_vector = anp.array([Mp,Mp])[:,None] u = anp.zeros(3) limit_surface = elle.beam2dseries.no_6 geometry = elle.beam2d.geom_no1 transform = elle.beam2d.transform(geometry) basic_response = elle.beam2d.resp_no1 BeamResponse = transform( # <u>, p, state -> u, <p>, state limit_surface( basic_response(E=E,**sect), Mp=Mp_vector,Hi=Hi,Hk=Hk,tol=1e-7 ), xyz=xyz ) BeamResponse ag = anon.autodiff.jacfwd(geometry(xyz), 1, 0) BeamModel = elle.solvers.invert_no2(BeamResponse, nr=3, maxiter=20, tol=1e-6) BeamModel
_____no_output_____
Apache-2.0
src/archive/main.ipynb
claudioperez/elle-0004
Loading
# Np = 0.85*Np q_ref = anp.array([[ 0.0*Np, 0.0, 0.000*Mp], [-0.4*Np, 0.0, 0.000*Mp], [-0.4*Np, 0.0, 0.400*Mp], [-0.4*Np, 0.0, 0.700*Mp], [-0.32*Np, 0.0, 0.600*Mp], [-0.2*Np, 0.0, 0.400*Mp], [-0.1*Np, 0.0, 0.200*Mp], [-0.0*Np, 0.0, 0.000*Mp]]) # steps = [5,8,15,15,15,15,10] steps = [5,5,5,5,5,5,5] load_history = elle.sections.load_hist(q_ref, steps)
_____no_output_____
Apache-2.0
src/archive/main.ipynb
claudioperez/elle-0004
Model Initialization
u, q = [], [] u0, p0 = anp.zeros((6,1)), anp.zeros((6,1)) # vi = U_to_V(U[0]) BeamResponse(u0,p0) pi, ui, state = BeamModel(p0, u0) u.append(ui) q.append(state[0]) # [print(s) for s in state] # print(ui)
_____no_output_____
Apache-2.0
src/archive/main.ipynb
claudioperez/elle-0004
Analysis Procedure
for i in range(len(load_history)): pi = ag(ui, pi).T @ load_history[i][:, None] pi, ui, state = BeamModel(pi, ui, state) u.append(ui) q.append(state[0])
/mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. 
warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. 
warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. 
warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.") /mnt/c/Users/claud/git/elle/elle-solvers/elle/solvers/inv_solve.py:113: UserWarning: Failed to converge on function <function transform.<locals>.geom.<locals>.f at 0x7f5af443a8b0> inversion. warnings.warn(f"Failed to converge on function {f} inversion.")
Apache-2.0
src/archive/main.ipynb
claudioperez/elle-0004
Post Processing and Validation
import matplotlib.pyplot as plt # plt.style.use('trois-pas'); # %config InlineBackend.figure_format = 'svg' fig, ax = plt.subplots() ax.plot([ ui[1] for ui in u ],[ qi[2] for qi in q ], '.'); # ax.plot([ ui[1] for ui in u ], # [ pi['Elem'][0]['q'][1] for pi in data['Post'] ], '.', label='FEDEASLab'); # plt.legend() fig.savefig('../img/no1-analysis.svg') plt.plot([ i for i in range(len(q)) ], [ qi[0] for qi in q ], '.') # plt.plot([ i for i in range(len(q)) ], [ pi['Elem'][0]['q'][0] for pi in post ], '.', label='FEDEASLab') # plt.legend(); plt.plot([ i for i in range(len(q)) ], [ qi[1] for qi in q ], 'x'); # plt.plot([ i for i in range(len(q)) ], # [ pi['Elem'][0]['q'][1] for pi in data['Post'] ], '.', label='FEDEASLab'); # plt.legend(); plt.plot([ i for i in range(len(q)) ], [ qi[2] for qi in q ], '.'); # plt.plot([ i for i in range(len(q)) ], # [ pi['Elem'][0]['q'][2] for pi in post ], '.', label='FEDEASLab'); # plt.legend(); plt.plot([i for i in range(len(u))],[ ui[1] for ui in u ], '.');
_____no_output_____
Apache-2.0
src/archive/main.ipynb
claudioperez/elle-0004
Classification

This notebook aims at giving an overview of the classification metrics that can be used to evaluate the predictive model generalization performance. We can recall that in a classification setting, the vector `target` is categorical rather than continuous.

We will load the blood transfusion dataset.
import pandas as pd

blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.

Let's start by checking the classes present in the target vector `target`.
import matplotlib.pyplot as plt

target.value_counts().plot.barh()
plt.xlabel("Number of samples")
_ = plt.title("Number of samples per class present\n in the target")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
We can see that the vector `target` contains two classes corresponding to whether a subject gave blood. We will use a logistic regression classifier to predict this outcome.

To focus on the metrics presentation, we will only use a single split instead of cross-validation.
from sklearn.model_selection import train_test_split

data_train, data_test, target_train, target_test = train_test_split(
    data, target, shuffle=True, random_state=0, test_size=0.5)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
We will use a logistic regression classifier as a base model. We will train the model on the train set, and later use the test set to compute the different classification metrics.
from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression()
classifier.fit(data_train, target_train)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
Classifier predictions

Before we go into details regarding the metrics, we will recall what type of predictions a classifier can provide.

For this reason, we will create a synthetic sample for a new potential donor: he/she donated blood twice in the past (1000 c.c. each time). The last time was 6 months ago, and the first time goes back to 20 months ago.
new_donor = [[6, 2, 1000, 20]]
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
We can get the class predicted by the classifier by calling the method `predict`.
classifier.predict(new_donor)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
With this information, our classifier predicts that this synthetic subject is more likely to not donate blood again.

However, we cannot check whether the prediction is correct (we do not know the true target value). That's the purpose of the testing set. First, we predict whether a subject will give blood with the help of the trained classifier.
target_predicted = classifier.predict(data_test)
target_predicted[:5]
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
Accuracy as a baseline

Now that we have these predictions, we can compare them with the true predictions (sometimes called ground-truth) which we did not use until now.
target_test == target_predicted
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
In the comparison above, a `True` value means that the value predicted by our classifier is identical to the real value, while a `False` means that our classifier made a mistake. One way of getting an overall rate representing the generalization performance of our classifier would be to compute how many times our classifier was right and divide it by the number of samples in our set.
import numpy as np

np.mean(target_test == target_predicted)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
This measure is called the accuracy. Here, our classifier is 78% accurate at classifying if a subject will give blood. `scikit-learn` provides a function that computes this metric in the module `sklearn.metrics`.
from sklearn.metrics import accuracy_score

accuracy = accuracy_score(target_test, target_predicted)
print(f"Accuracy: {accuracy:.3f}")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
`LogisticRegression` also has a method named `score` (part of the standard scikit-learn API), which computes the accuracy score.
classifier.score(data_test, target_test)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
Confusion matrix and derived metrics

The comparison that we did above and the accuracy that we calculated did not take into account the type of error our classifier was making. Accuracy is an aggregate of the errors made by the classifier. We may be interested in finer granularity - to know independently what the error is for each of the two following cases:

- we predicted that a person will give blood but she/he did not;
- we predicted that a person will not give blood but she/he did.
from sklearn.metrics import ConfusionMatrixDisplay

_ = ConfusionMatrixDisplay.from_estimator(classifier, data_test, target_test)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
The in-diagonal numbers are related to predictions that were correct while off-diagonal numbers are related to incorrect predictions (misclassifications). We now know the four types of correct and erroneous predictions:

* the top left corner are true positives (TP) and corresponds to people who gave blood and were predicted as such by the classifier;
* the bottom right corner are true negatives (TN) and correspond to people who did not give blood and were predicted as such by the classifier;
* the top right corner are false negatives (FN) and correspond to people who gave blood but were predicted to not have given blood;
* the bottom left corner are false positives (FP) and correspond to people who did not give blood but were predicted to have given blood.

Once we have split this information, we can compute metrics to highlight the generalization performance of our classifier in a particular setting. For instance, we could be interested in the fraction of people who really gave blood when the classifier predicted so, or the fraction of people predicted to have given blood out of the total population that actually did so.

The former metric, known as the precision, is defined as TP / (TP + FP) and represents how likely the person actually gave blood when the classifier predicted that they did.

The latter, known as the recall, defined as TP / (TP + FN), assesses how well the classifier is able to correctly identify people who did give blood.

We could, similarly to accuracy, manually compute these values; however, scikit-learn provides functions to compute these statistics.
from sklearn.metrics import precision_score, recall_score

precision = precision_score(target_test, target_predicted, pos_label="donated")
recall = recall_score(target_test, target_predicted, pos_label="donated")

print(f"Precision score: {precision:.3f}")
print(f"Recall score: {recall:.3f}")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
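To make the TP/FP/FN bookkeeping above concrete, here is a small hand computation of precision and recall. It uses tiny made-up labels rather than the notebook's transfusion data, so the numbers are purely illustrative:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Tiny invented labels, just to exercise the formulas:
y_true = np.array(["donated", "donated", "donated", "not donated",
                   "not donated", "not donated", "not donated", "donated"])
y_pred = np.array(["donated", "not donated", "donated", "not donated",
                   "donated", "not donated", "not donated", "not donated"])

# With labels=["donated", "not donated"], rows are true labels and
# columns are predictions, so the "donated" row holds TP and FN.
cm = confusion_matrix(y_true, y_pred, labels=["donated", "not donated"])
tp, fn = cm[0, 0], cm[0, 1]
fp, tn = cm[1, 0], cm[1, 1]

precision_manual = tp / (tp + fp)   # TP / (TP + FP)
recall_manual = tp / (tp + fn)      # TP / (TP + FN)
print(precision_manual, recall_manual)
```

The manual values agree with `precision_score` and `recall_score` using `pos_label="donated"`.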
These results are in line with what was seen in the confusion matrix. Looking at the left column, more than half of the "donated" predictions were correct, leading to a precision above 0.5. However, our classifier mislabeled a lot of people who gave blood as "not donated", leading to a very low recall of around 0.1.

The issue of class imbalance

At this stage, we could ask ourselves a reasonable question. While the accuracy did not look bad (i.e. 77%), the recall score is relatively low (i.e. 12%). As we mentioned, precision and recall only focus on samples predicted to be positive, while accuracy takes both into account. In addition, we did not look at the ratio of classes (labels). We could check this ratio in the training set.
target_train.value_counts(normalize=True).plot.barh()
plt.xlabel("Class frequency")
_ = plt.title("Class frequency in the training set")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
We observe that the positive class, `'donated'`, comprises only 24% of the samples. The good accuracy of our classifier is then linked to its ability to correctly predict the negative class `'not donated'`, which may or may not be relevant, depending on the application. We can illustrate the issue using a dummy classifier as a baseline.
from sklearn.dummy import DummyClassifier

dummy_classifier = DummyClassifier(strategy="most_frequent")
dummy_classifier.fit(data_train, target_train)
print(f"Accuracy of the dummy classifier: "
      f"{dummy_classifier.score(data_test, target_test):.3f}")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
With the dummy classifier, which always predicts the negative class `'not donated'`, we obtain an accuracy score of 76%. Therefore, it means that this classifier, without learning anything from the data `data`, is capable of predicting as accurately as our logistic regression model.

The problem illustrated above is also known as the class imbalance problem. When the classes are imbalanced, accuracy should not be used. In this case, one should either use the precision and recall as presented above, or the balanced accuracy score instead of accuracy.
from sklearn.metrics import balanced_accuracy_score

balanced_accuracy = balanced_accuracy_score(target_test, target_predicted)
print(f"Balanced accuracy: {balanced_accuracy:.3f}")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
The balanced accuracy is equivalent to accuracy in the context of balanced classes. It is defined as the average recall obtained on each class.

Evaluation and different probability thresholds

All statistics that we presented up to now rely on `classifier.predict`, which outputs the most likely label. We haven't made use of the probability associated with this prediction, which gives the confidence of the classifier in this prediction. By default, the prediction of a classifier corresponds to a threshold of 0.5 probability in a binary classification problem. We can quickly check this relationship with the classifier that we trained.
target_proba_predicted = pd.DataFrame(classifier.predict_proba(data_test),
                                      columns=classifier.classes_)
target_proba_predicted[:5]

target_predicted = classifier.predict(data_test)
target_predicted[:5]
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
Since probabilities sum to 1, we can get the class with the highest probability without using the threshold 0.5.
equivalence_pred_proba = (
    target_proba_predicted.idxmax(axis=1).to_numpy() == target_predicted)
np.all(equivalence_pred_proba)
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
The default decision threshold (0.5) might not be the best threshold that leads to optimal generalization performance of our classifier. In this case, one can vary the decision threshold, and therefore the underlying prediction, and compute the same statistics presented earlier. Usually, the two metrics recall and precision are computed and plotted on a graph. Each metric is plotted on a graph axis, and each point on the graph corresponds to a specific decision threshold. Let's start by computing the precision-recall curve.
from sklearn.metrics import PrecisionRecallDisplay

disp = PrecisionRecallDisplay.from_estimator(
    classifier, data_test, target_test, pos_label='donated', marker="+")
_ = disp.ax_.set_title("Precision-recall curve")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
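As a sketch of the threshold sweep behind the precision-recall curve, the snippet below recomputes precision and recall at a few hand-picked thresholds. It runs on synthetic data (not the transfusion set), so the printed values are only illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Synthetic binary problem, just to demonstrate the mechanics.
X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X)[:, 1]           # P(class 1) for each sample
for threshold in (0.3, 0.5, 0.7):
    pred = (proba >= threshold).astype(int)  # vary the decision threshold
    print(f"threshold={threshold}: "
          f"precision={precision_score(y, pred):.2f}, "
          f"recall={recall_score(y, pred):.2f}")
```

At 0.5 this reproduces `clf.predict`; raising the threshold can only remove predicted positives, trading recall for (usually) higher precision.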
Tip: Scikit-learn will return a display containing all plotting elements. Notably, displays will expose a matplotlib axis, named `ax_`, that can be used to add new elements on the axis. You can refer to the documentation to have more information regarding the visualizations in scikit-learn.

On this curve, each blue cross corresponds to a level of probability which we used as a decision threshold. We can see that, by varying this decision threshold, we get different precision vs. recall values.

A perfect classifier would have a precision of 1 for all recall values. A metric characterizing the curve is linked to the area under the curve (AUC) and is named average precision (AP). With an ideal classifier, the average precision would be 1.

The precision and recall metric focuses on the positive class; however, one might be interested in the compromise between accurately discriminating the positive class and accurately discriminating the negative classes. The statistics used for this are sensitivity and specificity. Sensitivity is just another name for recall. However, specificity measures the proportion of correctly classified samples in the negative class, defined as TN / (TN + FP). Similar to the precision-recall curve, sensitivity and specificity are generally plotted as a curve called the receiver operating characteristic (ROC) curve. Below is such a curve:
from sklearn.metrics import RocCurveDisplay

disp = RocCurveDisplay.from_estimator(
    classifier, data_test, target_test, pos_label='donated', marker="+")
disp = RocCurveDisplay.from_estimator(
    dummy_classifier, data_test, target_test, pos_label='donated',
    color="tab:orange", linestyle="--", ax=disp.ax_)
_ = disp.ax_.set_title("ROC AUC curve")
_____no_output_____
CC-BY-4.0
notebooks/metrics_classification.ipynb
lesteve/scikit-learn-mooc
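The sensitivity/specificity definitions above can be checked by hand. This sketch uses tiny made-up 0/1 labels rather than the notebook's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

# Invented labels, only to illustrate the formulas.
y_true = np.array([1, 1, 0, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])

# For binary 0/1 labels, ravel() yields the counts in this fixed order:
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # another name for recall
specificity = tn / (tn + fp)   # TN / (TN + FP)
print(sensitivity, specificity)
```

Sensitivity here equals `recall_score(y_true, y_pred)`, confirming the two names refer to the same quantity.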
To enter presentation mode, run the following cell and press `-`
%reload_ext slide
_____no_output_____
MIT
resultados/4.Proxy.ipynb
jacsonrbinf/minicurso-mineracao-interativa
Proxy

This notebook covers the following topics:

- [Introduction](#Introduction)
- [Proxy server](#Proxy-server)

Introduction

A lot of information is available in software repositories. Below is a *screenshot* of the `gems-uff/sapos` repository. In this image we see: the organization and name of the repository; stars, forks, watchers; the number of issues and pull requests; the number of commits, branches, releases, contributors, and the license; the files; and the message and date of the commits that last changed those files.

We can extract information from software repositories in three ways:

- crawling the repository's website
- APIs that provide data
- directly from the version control system

This mini-course covers all three approaches, but gives most attention to the GitHub API and direct extraction from Git.

Proxy server

Repository servers usually limit the number of requests we can make. In general, this limitation does not much affect sporadic use of the services for mining. However, while we are developing something, repeated requests may push us past the limit.

To avoid this problem, we will set up a simple proxy server in Flask. When we use a proxy server, instead of making requests directly to the target site, we make requests to the proxy server, which then forwards them to the target site. Upon receiving the response, the proxy caches the result and returns it to us. If a request has already been made through the proxy server, it simply returns the cached result.

Proxy implementation

The proxy server implementation is in the file `proxy.py`. Since we want to run the proxy in parallel with the notebook, the server must be started externally. Nevertheless, the proxy's code is explained here. We start the file with the necessary imports.

```python
import hashlib
import requests
import simplejson
import os
import sys

from flask import Flask, request, Response
```

The `hashlib` library is used to hash the requests. The `requests` library is used to make requests to GitHub. The `simplejson` library is used to turn requests and responses into JSON. The `os` library is used to manipulate directory paths and check for file existence. The `sys` library is used to read the command-line arguments. Finally, `flask` is used as the server.

Next, we define the site we will proxy for, the headers excluded from the received response, and create a Flask `app`. Note that `SITE` is set to the first command-line argument of the program, or to https://github.com/ when there is no argument.

```python
if len(sys.argv) > 1:
    SITE = sys.argv[1]
else:
    SITE = "https://github.com/"

EXCLUDED_HEADERS = ['content-encoding', 'content-length',
                    'transfer-encoding', 'connection']

app = Flask(__name__)
```

Then we define a function to handle every route and method the server may receive.

```python
METHODS = ['GET', 'POST', 'PATCH', 'PUT', 'DELETE']

@app.route('/', defaults={'path': ''}, methods=METHODS)
@app.route('/<path:path>', methods=METHODS)
def catch_all(path):
```

Inside this function, we build a request dictionary based on the request received by Flask.

```python
    request_dict = {
        "method": request.method,
        "url": request.url.replace(request.host_url, SITE),
        "headers": {key: value for (key, value) in request.headers
                    if key != 'Host'},
        "data": request.get_data(),
        "cookies": request.cookies,
        "allow_redirects": False
    }
```

In this request, we replace the host with the target site.

Next, we convert the dictionary to JSON and compute the SHA1 hash of the result.

```python
    request_json = simplejson.dumps(request_dict, sort_keys=True)
    sha1 = hashlib.sha1(request_json.encode("utf-8")).hexdigest()
    path_req = os.path.join("cache", sha1 + ".req")
    path_resp = os.path.join("cache", sha1 + ".resp")
```

In the `cache` directory we store files `{sha1}.req` and `{sha1}.resp` with the cached request and response. With that, upon receiving a request, we can check whether `{sha1}.req` exists. If it does, we compare it with our request (to avoid collisions). Finally, if they are equal, we can return the cached response.

```python
    if os.path.exists(path_req):
        with open(path_req, "r") as req:
            req_read = req.read()
        if req_read == request_json:
            with open(path_resp, "r") as dump:
                response = simplejson.load(dump)
            return Response(
                response["content"],
                response["status_code"],
                response["headers"]
            )
```

If the request is not cached, we turn the request dictionary into a `requests` request to GitHub, exclude the headers populated by Flask, and build a JSON object for the response.

```python
    resp = requests.request(**request_dict)
    headers = [(name, value) for (name, value) in resp.raw.headers.items()
               if name.lower() not in EXCLUDED_HEADERS]
    response = {
        "content": resp.content,
        "status_code": resp.status_code,
        "headers": headers
    }
    response_json = simplejson.dumps(response, sort_keys=True)
```

After that, we save the response in the cache and return it to the original client.

```python
    with open(path_resp, "w") as dump:
        dump.write(response_json)
    with open(path_req, "w") as req:
        req.write(request_json)
    return Response(
        response["content"],
        response["status_code"],
        response["headers"]
    )
```

At the end of the script, we start the server.

```python
if __name__ == '__main__':
    app.run(debug=True)
```

Using the proxy

Run the following line in a terminal:

```bash
python proxy.py
```

Now, every request we would make to github.com we will instead make to localhost:5000. For example, instead of visiting https://github.com/gems-uff/sapos, we visit http://localhost:5000/gems-uff/sapos

Requests with requests

Below we make a request with requests through the proxy.
SITE = "http://localhost:5000/"  # If not using the proxy, change to https://github.com/

import requests

response = requests.get(SITE + "gems-uff/sapos")
response.headers['server'], response.status_code
_____no_output_____
MIT
resultados/4.Proxy.ipynb
jacsonrbinf/minicurso-mineracao-interativa
Implement function ToLowerCase() that has a string parameter str, and returns the same string in lowercase.Example 1:```javascriptInput: "Hello"Output: "hello"```Example 2:```javascriptInput: "here"Output: "here"```Example 3:```javascriptInput: "LOVELY"Output: "lovely"```
class Solution(object):
    def toLowerCase(self, str):
        """
        :type str: str
        :rtype: str
        """
        return str.lower()


if __name__ == '__main__':
    sol = Solution()
    print(sol.toLowerCase("Hello"))
    print(sol.toLowerCase("here"))
    print(sol.toLowerCase("LOVELY"))


# Time: O(n)
# Space: O(1)
#
# Implement function ToLowerCase() that has a string parameter str,
# and returns the same string in lowercase.
class Solution(object):
    def toLowerCase(self, str):
        """
        :type str: str
        :rtype: str
        """
        return "".join(
            [chr(ord('a') + ord(c) - ord('A')) if 'A' <= c <= 'Z' else c
             for c in str])
_____no_output_____
Apache-2.0
709.ToLowerCase.ipynb
charlesxu90/leetcode_py
BRAIN IMAGING DATA STRUCTURE

The dataset for this tutorial is structured according to the [Brain Imaging Data Structure (BIDS)](http://bids.neuroimaging.io/). BIDS is a simple and intuitive way to organize and describe your neuroimaging and behavioral data. Neuroimaging experiments result in complicated data that can be arranged in many different ways. So far there is no consensus on how to organize and share data obtained in neuroimaging experiments. BIDS tackles this problem by suggesting a new standard for the arrangement of neuroimaging datasets. The idea of BIDS is that the file and folder names follow a strict set of rules:

![](../static/images/bids.png)

Using the same structure for all of your studies will allow you to easily reuse all of your scripts between studies. Additionally, it has the advantage that sharing code with, and using scripts from, other researchers will be much easier.

Tutorial Dataset

For this tutorial, we will be using a subset of the [fMRI dataset (ds000114)](https://openfmri.org/dataset/ds000114/) publicly available on [openfmri.org](https://openfmri.org). **If you're using the suggested Docker image you probably have all data needed to run the tutorial within the Docker container.** If you want to have the data locally, you can use [Datalad](http://datalad.org/) to download a subset of the dataset, via the [datalad repository](http://datasets.datalad.org/?dir=/workshops/nih-2017/ds000114). To install the dataset with all subrepositories, you can run:
%%bash
cd /data
datalad install -r ///workshops/nih-2017/ds000114
_____no_output_____
BSD-3-Clause
notebooks/introduction_dataset.ipynb
siyunb/Nipype_tutorial
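To illustrate the file-naming rules described above, here is a tiny hypothetical helper (not part of any BIDS tooling) that assembles a BIDS-style functional image path from its entities:

```python
def bids_func_path(sub, ses, task, suffix="bold", ext=".nii.gz"):
    """Build a BIDS-style path: sub-<label>/ses-<label>/func/<filename>."""
    name = f"sub-{sub}_ses-{ses}_task-{task}_{suffix}{ext}"
    return f"sub-{sub}/ses-{ses}/func/{name}"

print(bids_func_path("01", "test", "fingerfootlips"))
# sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz
```

The same entity-ordering convention (`sub-`, then `ses-`, then `task-`, then the suffix) appears throughout the dataset tree below.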
In order to download data, you can use the ``datalad get foldername`` command to download all files in the folder ``foldername``. For this tutorial we only want to download part of the dataset, i.e. the anatomical and the functional `fingerfootlips` images:
%%bash
cd /data/ds000114
datalad get -J 4 derivatives/fmriprep/sub-*/anat/*preproc.nii.gz \
    sub-01/ses-test/anat \
    sub-*/ses-test/func/*fingerfootlips*
_____no_output_____
BSD-3-Clause
notebooks/introduction_dataset.ipynb
siyunb/Nipype_tutorial
So let's have a look at the tutorial dataset.
!tree -L 4 /data/ds000114/
_____no_output_____
BSD-3-Clause
notebooks/introduction_dataset.ipynb
siyunb/Nipype_tutorial
As you can see, for every subject we have one anatomical T1w image, five functional images, and one diffusion weighted image.

**Note**: If you used `datalad` or `git annex` to get the dataset, you can see symlinks for the image files.

Behavioral Task

Subjects from the ds000114 dataset did five behavioral tasks. In our dataset two of them are included. The **motor task** consisted of ***finger tapping***, ***foot twitching*** and ***lip pouching*** interleaved with fixation at a cross. The **landmark task** was designed to mimic the ***line bisection task*** used in neurological practice to diagnose spatial hemineglect. Two conditions were contrasted, specifically judging if a horizontal line had been bisected exactly in the middle, versus judging if a horizontal line was bisected at all. You can find more about the dataset and studies [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3641991/).

For each of the functional images above, we therefore also have a tab-separated values file (``tsv``), containing information such as stimuli onset, duration, type, etc. So let's have a look at one of them:
%%bash
cd /data/ds000114
datalad get sub-01/ses-test/func/sub-01_ses-test_task-linebisection_events.tsv

!cat /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-linebisection_events.tsv
_____no_output_____
BSD-3-Clause
notebooks/introduction_dataset.ipynb
siyunb/Nipype_tutorial
STAT 453: Deep Learning (Spring 2021) Instructor: Sebastian Raschka (sraschka@wisc.edu) Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/ GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21---
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
Author: Sebastian Raschka Python implementation: CPython Python version : 3.9.2 IPython version : 7.21.0 torch: 1.9.0a0+d819a21
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
- Runs on CPU or GPU (if available)

A Convolutional ResNet and Residual Blocks

Please note that this example does not implement a really deep ResNet as described in the literature but rather illustrates how the residual blocks described in He et al. [1] can be implemented in PyTorch.

- [1] He, Kaiming, et al. "Deep residual learning for image recognition." *Proceedings of the IEEE conference on computer vision and pattern recognition*. 2016.

Imports
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms
_____no_output_____
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
Settings and Dataset
##########################
### SETTINGS
##########################

# Device
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")

# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 10
batch_size = 128

# Architecture
num_classes = 10


##########################
### MNIST DATASET
##########################

# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

test_dataset = datasets.MNIST(root='data',
                              train=False,
                              transform=transforms.ToTensor())

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)

test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=False)

# Checking the dataset
for images, labels in train_loader:
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break
Image batch dimensions: torch.Size([128, 1, 28, 28]) Image label dimensions: torch.Size([128])
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
ResNet with identity blocks The following code implements the residual blocks with skip connections such that the input passed via the shortcut matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is illustrated below:![](./2-resnet-ex/resnet-ex-1-1.png)
##########################
### MODEL
##########################

class ConvNet(torch.nn.Module):

    def __init__(self, num_classes):
        super(ConvNet, self).__init__()

        #########################
        ### 1st residual block
        #########################
        self.block_1 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=1, out_channels=4, kernel_size=(1, 1),
                            stride=(1, 1), padding=0),
            torch.nn.BatchNorm2d(4),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(in_channels=4, out_channels=1, kernel_size=(3, 3),
                            stride=(1, 1), padding=1),
            torch.nn.BatchNorm2d(1)
        )

        self.block_2 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=1, out_channels=4, kernel_size=(1, 1),
                            stride=(1, 1), padding=0),
            torch.nn.BatchNorm2d(4),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(in_channels=4, out_channels=1, kernel_size=(3, 3),
                            stride=(1, 1), padding=1),
            torch.nn.BatchNorm2d(1)
        )

        #########################
        ### Fully connected
        #########################
        self.linear_1 = torch.nn.Linear(1*28*28, num_classes)

    def forward(self, x):

        #########################
        ### 1st residual block
        #########################
        shortcut = x
        x = self.block_1(x)
        x = torch.nn.functional.relu(x + shortcut)

        #########################
        ### 2nd residual block
        #########################
        shortcut = x
        x = self.block_2(x)
        x = torch.nn.functional.relu(x + shortcut)

        #########################
        ### Fully connected
        #########################
        logits = self.linear_1(x.view(-1, 1*28*28))
        return logits


torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
_____no_output_____
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
Training
def compute_accuracy(model, data_loader):
    correct_pred, num_examples = 0, 0
    for i, (features, targets) in enumerate(data_loader):
        features = features.to(device)
        targets = targets.to(device)
        logits = model(features)
        _, predicted_labels = torch.max(logits, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float()/num_examples * 100


start_time = time.time()
for epoch in range(num_epochs):
    model = model.train()
    for batch_idx, (features, targets) in enumerate(train_loader):

        features = features.to(device)
        targets = targets.to(device)

        ### FORWARD AND BACK PROP
        logits = model(features)
        cost = torch.nn.functional.cross_entropy(logits, targets)
        optimizer.zero_grad()

        cost.backward()

        ### UPDATE MODEL PARAMETERS
        optimizer.step()

        ### LOGGING
        if not batch_idx % 250:
            print('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
                  % (epoch+1, num_epochs, batch_idx,
                     len(train_loader), cost))

    model = model.eval()  # eval mode to prevent upd. batchnorm params during inference
    with torch.set_grad_enabled(False):  # save memory during inference
        print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
              epoch+1, num_epochs,
              compute_accuracy(model, train_loader)))

    print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))

print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))

print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
Test accuracy: 92.16%
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
ResNet with convolutional blocks for resizing

The following code implements the residual blocks with skip connections such that the input passed via the shortcut is resized to match the dimensions of the main path's output. Such a residual block is illustrated below:

![](./2-resnet-ex/resnet-ex-1-2.png)
class ResidualBlock(torch.nn.Module):
    """ Helper Class """

    def __init__(self, channels):
        super(ResidualBlock, self).__init__()

        self.block = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=channels[0], out_channels=channels[1],
                            kernel_size=(3, 3), stride=(2, 2), padding=1),
            torch.nn.BatchNorm2d(channels[1]),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(in_channels=channels[1], out_channels=channels[2],
                            kernel_size=(1, 1), stride=(1, 1), padding=0),
            torch.nn.BatchNorm2d(channels[2])
        )

        self.shortcut = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=channels[0], out_channels=channels[2],
                            kernel_size=(1, 1), stride=(2, 2), padding=0),
            torch.nn.BatchNorm2d(channels[2])
        )

    def forward(self, x):
        block = self.block(x)
        shortcut = self.shortcut(x)
        x = torch.nn.functional.relu(block + shortcut)
        return x


##########################
### MODEL
##########################

class ConvNet(torch.nn.Module):

    def __init__(self, num_classes):
        super(ConvNet, self).__init__()

        self.residual_block_1 = ResidualBlock(channels=[1, 4, 8])
        self.residual_block_2 = ResidualBlock(channels=[8, 16, 32])
        self.linear_1 = torch.nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.residual_block_1(x)
        out = self.residual_block_2(out)
        logits = self.linear_1(out.view(-1, 7*7*32))
        return logits


torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
_____no_output_____
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
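As a quick sanity check on the resizing above, the standard convolution output-size formula shows why the stride-2 main path and the stride-2 shortcut land on the same spatial size (assuming 28×28 MNIST-style inputs, which the `7*7*32` flattening in the linear layer implies):

```python
def conv_out(size, kernel, stride, padding):
    # Standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# Main path:  3x3 conv, stride 2, padding 1
# Shortcut:   1x1 conv, stride 2, padding 0
print(conv_out(28, 3, 2, 1))  # 14 -- both branches agree
print(conv_out(28, 1, 2, 0))  # 14
print(conv_out(14, 3, 2, 1))  # 7  -- hence 7*7*32 going into the linear layer
```

Because both branches halve the spatial dimensions identically, the element-wise addition in `forward` is well defined.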
Training
for epoch in range(num_epochs): model = model.train() for batch_idx, (features, targets) in enumerate(train_loader): features = features.to(device) targets = targets.to(device) ### FORWARD AND BACK PROP logits = model(features) cost = torch.nn.functional.cross_entropy(logits, targets) optimizer.zero_grad() cost.backward() ### UPDATE MODEL PARAMETERS optimizer.step() ### LOGGING if not batch_idx % 50: print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f' %(epoch+1, num_epochs, batch_idx, len(train_dataset)//batch_size, cost)) model = model.eval() # eval mode to prevent upd. batchnorm params during inference with torch.set_grad_enabled(False): # save memory during inference print('Epoch: %03d/%03d training accuracy: %.2f%%' % ( epoch+1, num_epochs, compute_accuracy(model, train_loader))) print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
Test accuracy: 98.13%
MIT
L14/2-resnet-example.ipynb
sum-coderepo/stat453-deep-learning-ss21
Hidden Markov Model What is a Hidden Markov Model?A Hidden Markov Model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with **hidden** states. An HMM allows us to talk about both observed events (like words that we see in the input) and hidden events (like Part-Of-Speech tags). An HMM is specified by the following components:![image.png](attachment:image.png)**State Transition Probabilities** are the probabilities of moving from state i to state j.![image-2.png](attachment:image-2.png)**Observation Probability Matrix**, also called emission probabilities, expresses the probability of an observation Ot being generated from a state i.![image-4.png](attachment:image-4.png)**Initial State Distribution** $\pi$i is the probability that the Markov chain will start in state i. A state j with $\pi$j = 0 cannot be an initial state. Hence, the entire Hidden Markov Model can be described as,![image-3.png](attachment:image-3.png)
# Inorder to get the notebooks running in current directory import os, sys, inspect currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) parentdir = os.path.dirname(currentdir) sys.path.insert(0, parentdir) import hmm
_____no_output_____
MIT
notebooks/HiddenMarkovModel.ipynb
kad99kev/HiddenMarkovModel
Let us take a simple example with two hidden states and two observable states.The **Hidden states** will be **Rainy** and **Sunny**.The **Observable states** will be **Sad** and **Happy**.The transition and emission matrices are given below.The initial probabilities are obtained by computing the stationary distribution of the transition matrix.This means that for a given matrix A, the stationary distribution would be given as,$\pi$A = $\pi$
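To see where those initial probabilities come from, the stationary distribution can be computed directly as the left eigenvector of A with eigenvalue 1. A quick sketch using this example's transition matrix:

```python
import numpy as np

A = np.array([[0.5, 0.5],   # transition matrix for (Rainy, Sunny)
              [0.3, 0.7]])

# pi A = pi  <=>  pi is a left eigenvector of A with eigenvalue 1
evals, evecs = np.linalg.eig(A.T)
pi = np.real(evecs[:, np.argmax(np.isclose(evals, 1.0))])
pi = pi / pi.sum()          # normalize so the probabilities sum to 1
print(pi)                   # [0.375 0.625] -- matches the model's initial probabilities
```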
# Hidden hidden_states = ["Rainy", "Sunny"] transition_matrix = [[0.5, 0.5], [0.3, 0.7]] # Observable observable_states = ["Sad", "Happy"] emission_matrix = [[0.8, 0.2], [0.4, 0.6]] # Inputs input_seq = [0, 0, 1] model = hmm.HiddenMarkovModel( observable_states, hidden_states, transition_matrix, emission_matrix ) model.print_model_info() model.visualize_model(output_dir="simple_demo", notebook=True)
************************************************** Observable States: ['Sad', 'Happy'] Emission Matrix: Sad Happy Rainy 0.8 0.2 Sunny 0.4 0.6 Hidden States: ['Rainy', 'Sunny'] Transition Matrix: Rainy Sunny Rainy 0.5 0.5 Sunny 0.3 0.7 Initial Probabilities: [0.375 0.625]
MIT
notebooks/HiddenMarkovModel.ipynb
kad99kev/HiddenMarkovModel
Here the blue lines indicate the hidden transitions and the red lines indicate the emission transitions. Problem 1:Computing Likelihood: Given an HMM $\lambda$ = (A, B) and an observation sequence O, determine the likelihood P(O | $\lambda$) How It Is Calculated?For our example, for the given **observed** sequence (Sad, Sad, Happy) the probabilities will be calculated as,P(Sad, Sad, Happy) = P(Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Happy | Rainy)+P(Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Happy | Sunny)+P(Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Happy | Rainy)+P(Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Happy | Sunny)+P(Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Happy | Rainy)+P(Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Happy | Sunny)+P(Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Happy | Rainy)+P(Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Happy | Sunny) The Problems With This MethodThis, however, is a naive way of computing the likelihood. The number of multiplications is on the order of 2T·N^T, where T is the length of the observed sequence and N is the number of hidden states. This means that the cost grows exponentially with the length of the observed sequence.
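The naive sum over all hidden-state paths can be written directly. A sketch that enumerates the 2³ = 8 paths spelled out above, using this example's matrices:

```python
import itertools
import numpy as np

pi = np.array([0.375, 0.625])            # initial probabilities (Rainy, Sunny)
A  = np.array([[0.5, 0.5], [0.3, 0.7]])  # transition matrix
B  = np.array([[0.8, 0.2], [0.4, 0.6]])  # emission matrix
obs = [0, 0, 1]                          # (Sad, Sad, Happy)

total = 0.0
for path in itertools.product(range(2), repeat=len(obs)):
    p = pi[path[0]] * B[path[0], obs[0]]
    for t in range(1, len(obs)):
        p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
    total += p

print(total)  # 0.1344 -- the same likelihood the forward algorithm returns
```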
Forward Algorithm We are computing *P(Rainy) * P(Sad | Rainy)* and *P(Sunny) * P(Sad | Sunny)* a total of 4 times. Even parts like*P(Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Sad | Rainy)*, *P(Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Sad | Sunny)*, *P(Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Sad | Rainy)* and *P(Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Sad | Sunny)* are repeated. We can avoid these repeated computations by using recurrence relations, with the help of **Dynamic Programming**.![ForwardHMM](../assets/ForwardHMM.png)In code, it can be written as:```alpha[:, 0] = self.pi * emission_matrix[:, input_seq[0]] Initializefor t in range(1, T): for s in range(n_states): alpha[s, t] = emission_matrix[s, input_seq[t]] * np.sum( alpha[:, t - 1] * transition_matrix[:, s] )```This will lead to the following computations:![Computation](../assets/Computation.png)
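Made self-contained with this example's numbers, the recursion above reproduces the α table that the model prints:

```python
import numpy as np

pi = np.array([0.375, 0.625])
A  = np.array([[0.5, 0.5], [0.3, 0.7]])  # transition matrix
B  = np.array([[0.8, 0.2], [0.4, 0.6]])  # emission matrix
obs = [0, 0, 1]                          # (Sad, Sad, Happy)

n_states, T = A.shape[0], len(obs)
alpha = np.zeros((n_states, T))
alpha[:, 0] = pi * B[:, obs[0]]          # initialize
for t in range(1, T):
    for s in range(n_states):
        alpha[s, t] = B[s, obs[t]] * np.sum(alpha[:, t - 1] * A[:, s])

print(alpha)                 # [[0.3  0.18 0.0258] [0.25 0.13 0.1086]]
print(alpha[:, -1].sum())    # 0.1344
```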
alpha, a_probs = model.forward(input_seq) hmm.print_forward_result(alpha, a_probs)
************************************************** Alpha: [[0.3 0.18 0.0258] [0.25 0.13 0.1086]] Probability of sequence: 0.13440000000000002
MIT
notebooks/HiddenMarkovModel.ipynb
kad99kev/HiddenMarkovModel
Backward Algorithm The Backward Algorithm is the time-reversed version of the Forward Algorithm.
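In the same self-contained style, the time-reversed recursion initializes β to 1 at the final time step and rolls backwards; it yields the same sequence likelihood:

```python
import numpy as np

pi = np.array([0.375, 0.625])
A  = np.array([[0.5, 0.5], [0.3, 0.7]])
B  = np.array([[0.8, 0.2], [0.4, 0.6]])
obs = [0, 0, 1]

n_states, T = A.shape[0], len(obs)
beta = np.ones((n_states, T))            # beta[:, T-1] = 1 starts the recursion
for t in range(T - 2, -1, -1):
    for s in range(n_states):
        beta[s, t] = np.sum(A[s, :] * B[:, obs[t + 1]] * beta[:, t + 1])

prob = np.sum(pi * B[:, obs[0]] * beta[:, 0])
print(beta)   # [[0.256  0.4  1.] [0.2304 0.48 1.]]
print(prob)   # 0.1344 -- agrees with the forward algorithm
```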
beta, b_probs = model.backward(input_seq) hmm.print_backward_result(beta, b_probs)
************************************************** Beta: [[0.256 0.4 1. ] [0.2304 0.48 1. ]] Probability of sequence: 0.13440000000000002
MIT
notebooks/HiddenMarkovModel.ipynb
kad99kev/HiddenMarkovModel
Problem 2: Given an observation sequence O and an HMM λ = (A,B), discover the best hidden state sequence Q. Viterbi AlgorithmThe Viterbi Algorithm increments over each time step, finding the maximum probability of any path that reaches state i at time t while also matching the observation sequence up to time t. The algorithm also keeps track of the state with the highest probability at each stage. At the end of the sequence, the algorithm iterates backwards, selecting the winning state at each step, which recovers the most likely path, i.e. the sequence of hidden states that led to the sequence of observations.In code, it is written as:```delta[:, 0] = self.pi * emission_matrix[:, input_seq[0]] Initializefor t in range(1, T): for s in range(n_states): delta[s, t] = ( np.max(delta[:, t - 1] * transition_matrix[:, s]) * emission_matrix[s, input_seq[t]] ) phi[s, t] = np.argmax(delta[:, t - 1] * transition_matrix[:, s])```The Viterbi Algorithm is identical to the forward algorithm except that it takes the **max** over the previous path probabilities whereas the forward algorithm takes the **sum**.The code for the Backtrace is written as:```path[T - 1] = np.argmax(delta[:, T - 1]) Initializefor t in range(T - 2, -1, -1): path[t] = phi[path[t + 1], [t + 1]]```
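A self-contained sketch of the recursion and backtrace, with this example's parameters, recovers the same best path the model reports:

```python
import numpy as np

pi = np.array([0.375, 0.625])
A  = np.array([[0.5, 0.5], [0.3, 0.7]])
B  = np.array([[0.8, 0.2], [0.4, 0.6]])
obs = [0, 0, 1]                              # (Sad, Sad, Happy)
hidden = ["Rainy", "Sunny"]

n_states, T = A.shape[0], len(obs)
delta = np.zeros((n_states, T))              # best path probability ending in each state
phi   = np.zeros((n_states, T), dtype=int)   # argmax back-pointers
delta[:, 0] = pi * B[:, obs[0]]
for t in range(1, T):
    for s in range(n_states):
        delta[s, t] = np.max(delta[:, t - 1] * A[:, s]) * B[s, obs[t]]
        phi[s, t]   = np.argmax(delta[:, t - 1] * A[:, s])

# Backtrace from the most likely final state
path = np.zeros(T, dtype=int)
path[T - 1] = np.argmax(delta[:, T - 1])
for t in range(T - 2, -1, -1):
    path[t] = phi[path[t + 1], t + 1]

print([hidden[s] for s in path])  # ['Rainy', 'Rainy', 'Sunny']
```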
path, delta, phi = model.viterbi(input_seq) hmm.print_viterbi_result(input_seq, observable_states, hidden_states, path, delta, phi)
************************************************** Starting Forward Walk State=0 : Sequence=1 | phi[0, 1]=0.0 State=1 : Sequence=1 | phi[1, 1]=1.0 State=0 : Sequence=2 | phi[0, 2]=0.0 State=1 : Sequence=2 | phi[1, 2]=0.0 ************************************************** Start Backtrace Path[1]=0 Path[0]=0 ************************************************** Viterbi Result Delta: [[0.3 0.12 0.012] [0.25 0.07 0.036]] Phi: [[0. 0. 0.] [0. 1. 0.]] Result: Observation BestPath 0 Sad Rainy 1 Sad Rainy 2 Happy Sunny
MIT
notebooks/HiddenMarkovModel.ipynb
kad99kev/HiddenMarkovModel
Deep Reinforcement Learning in Action by Alex Zai and Brandon Brown Chapter 3 Listing 3.1
from Gridworld import Gridworld game = Gridworld(size=4, mode='static') import sys game.display() game.makeMove('d') game.makeMove('d') game.makeMove('d') game.display() game.reward() game.board.render_np() game.board.render_np().shape
_____no_output_____
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.2
import numpy as np import torch from Gridworld import Gridworld import random from matplotlib import pylab as plt l1 = 64 l2 = 150 l3 = 100 l4 = 4 model = torch.nn.Sequential( torch.nn.Linear(l1, l2), torch.nn.ReLU(), torch.nn.Linear(l2, l3), torch.nn.ReLU(), torch.nn.Linear(l3,l4) ) loss_fn = torch.nn.MSELoss() learning_rate = 1e-3 optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) gamma = 0.9 epsilon = 1.0 action_set = { 0: 'u', 1: 'd', 2: 'l', 3: 'r', }
_____no_output_____
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.3
epochs = 1000 losses = [] for i in range(epochs): game = Gridworld(size=4, mode='static') state_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0 state1 = torch.from_numpy(state_).float() status = 1 while(status == 1): qval = model(state1) qval_ = qval.data.numpy() if (random.random() < epsilon): action_ = np.random.randint(0,4) else: action_ = np.argmax(qval_) action = action_set[action_] game.makeMove(action) state2_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0 state2 = torch.from_numpy(state2_).float() reward = game.reward() #-1 for lose, +1 for win, 0 otherwise with torch.no_grad(): newQ = model(state2.reshape(1,64)) maxQ = torch.max(newQ) if reward == -1: # if game still in play Y = reward + (gamma * maxQ) else: Y = reward Y = torch.Tensor([Y]).detach().squeeze() X = qval.squeeze()[action_] loss = loss_fn(X, Y) optimizer.zero_grad() loss.backward() losses.append(loss.item()) optimizer.step() state1 = state2 if reward != -1: #game lost status = 0 if epsilon > 0.1: epsilon -= (1/epochs) plt.plot(losses) m = torch.Tensor([2.0]) m.requires_grad=True b = torch.Tensor([1.0]) b.requires_grad=True def linear_model(x,m,b): y = m @ x + b return y y = linear_model(torch.Tensor([4.]), m,b) y y.grad_fn with torch.no_grad(): y = linear_model(torch.Tensor([4]),m,b) y y.grad_fn y = linear_model(torch.Tensor([4.]), m,b) y.backward() m.grad b.grad
_____no_output_____
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.4
def test_model(model, mode='static', display=True): i = 0 test_game = Gridworld(mode=mode) state_ = test_game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0 state = torch.from_numpy(state_).float() if display: print("Initial State:") print(test_game.display()) status = 1 while(status == 1): qval = model(state) qval_ = qval.data.numpy() action_ = np.argmax(qval_) action = action_set[action_] if display: print('Move #: %s; Taking action: %s' % (i, action)) test_game.makeMove(action) state_ = test_game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0 state = torch.from_numpy(state_).float() if display: print(test_game.display()) reward = test_game.reward() if reward != -1: #if game is over if reward > 0: #if game won status = 2 if display: print("Game won! Reward: %s" % (reward,)) else: #game is lost status = 0 if display: print("Game LOST. Reward: %s" % (reward,)) i += 1 if (i > 15): if display: print("Game lost; too many moves.") break win = True if status == 2 else False return win test_model(model, 'static')
Initial State: [['+' '-' ' ' 'P'] [' ' 'W' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' ' ']] Move #: 0; Taking action: l [['+' '-' 'P' ' '] [' ' 'W' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' ' ']] Move #: 1; Taking action: d [['+' '-' ' ' ' '] [' ' 'W' 'P' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' ' ']] Move #: 2; Taking action: d [['+' '-' ' ' ' '] [' ' 'W' ' ' ' '] [' ' ' ' 'P' ' '] [' ' ' ' ' ' ' ']] Move #: 3; Taking action: l [['+' '-' ' ' ' '] [' ' 'W' ' ' ' '] [' ' 'P' ' ' ' '] [' ' ' ' ' ' ' ']] Move #: 4; Taking action: l [['+' '-' ' ' ' '] [' ' 'W' ' ' ' '] ['P' ' ' ' ' ' '] [' ' ' ' ' ' ' ']] Move #: 5; Taking action: u [['+' '-' ' ' ' '] ['P' 'W' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' ' ']] Move #: 6; Taking action: u [['+' '-' ' ' ' '] [' ' 'W' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' ' ']] Game won! Reward: 10
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.5
from collections import deque epochs = 5000 losses = [] mem_size = 1000 batch_size = 200 replay = deque(maxlen=mem_size) max_moves = 50 h = 0 for i in range(epochs): game = Gridworld(size=4, mode='random') state1_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0 state1 = torch.from_numpy(state1_).float() status = 1 mov = 0 while(status == 1): mov += 1 qval = model(state1) qval_ = qval.data.numpy() if (random.random() < epsilon): action_ = np.random.randint(0,4) else: action_ = np.argmax(qval_) action = action_set[action_] game.makeMove(action) state2_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0 state2 = torch.from_numpy(state2_).float() reward = game.reward() done = True if reward > 0 else False exp = (state1, action_, reward, state2, done) replay.append(exp) state1 = state2 if len(replay) > batch_size: minibatch = random.sample(replay, batch_size) state1_batch = torch.cat([s1 for (s1,a,r,s2,d) in minibatch]) action_batch = torch.Tensor([a for (s1,a,r,s2,d) in minibatch]) reward_batch = torch.Tensor([r for (s1,a,r,s2,d) in minibatch]) state2_batch = torch.cat([s2 for (s1,a,r,s2,d) in minibatch]) done_batch = torch.Tensor([d for (s1,a,r,s2,d) in minibatch]) Q1 = model(state1_batch) with torch.no_grad(): Q2 = model(state2_batch) Y = reward_batch + gamma * ((1 - done_batch) * torch.max(Q2,dim=1)[0]) X = \ Q1.gather(dim=1,index=action_batch.long().unsqueeze(dim=1)).squeeze() loss = loss_fn(X, Y.detach()) optimizer.zero_grad() loss.backward() losses.append(loss.item()) optimizer.step() if reward != -1 or mov > max_moves: status = 0 mov = 0 losses = np.array(losses) plt.plot(losses) test_model(model,mode='random')
Initial State: [['P' ' ' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' 'W'] ['-' '+' ' ' ' ']] Move #: 0; Taking action: r [[' ' 'P' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' 'W'] ['-' '+' ' ' ' ']] Move #: 1; Taking action: d [[' ' ' ' ' ' ' '] [' ' 'P' ' ' ' '] [' ' ' ' ' ' 'W'] ['-' '+' ' ' ' ']] Move #: 2; Taking action: d [[' ' ' ' ' ' ' '] [' ' ' ' ' ' ' '] [' ' 'P' ' ' 'W'] ['-' '+' ' ' ' ']] Move #: 3; Taking action: d [[' ' ' ' ' ' ' '] [' ' ' ' ' ' ' '] [' ' ' ' ' ' 'W'] ['-' '+' ' ' ' ']] Game won! Reward: 10
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.6
max_games = 1000 wins = 0 for i in range(max_games): win = test_model(model, mode='random', display=False) if win: wins += 1 win_perc = float(wins) / float(max_games) print("Games played: {0}, # of wins: {1}".format(max_games,wins)) print("Win percentage: {}".format(100.0*win_perc))
Games played: 1000, # of wins: 908 Win percentage: 90.8
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.7
import copy

model = torch.nn.Sequential(
    torch.nn.Linear(l1, l2),
    torch.nn.ReLU(),
    torch.nn.Linear(l2, l3),
    torch.nn.ReLU(),
    torch.nn.Linear(l3,l4)
)

model2 = copy.deepcopy(model)
model2.load_state_dict(model.state_dict())
sync_freq = 50

loss_fn = torch.nn.MSELoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
_____no_output_____
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
Listing 3.8
from IPython.display import clear_output from collections import deque epochs = 5000 losses = [] mem_size = 1000 batch_size = 200 replay = deque(maxlen=mem_size) max_moves = 50 h = 0 sync_freq = 500 j=0 for i in range(epochs): game = Gridworld(size=4, mode='random') state1_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0 state1 = torch.from_numpy(state1_).float() status = 1 mov = 0 while(status == 1): j+=1 mov += 1 qval = model(state1) qval_ = qval.data.numpy() if (random.random() < epsilon): action_ = np.random.randint(0,4) else: action_ = np.argmax(qval_) action = action_set[action_] game.makeMove(action) state2_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0 state2 = torch.from_numpy(state2_).float() reward = game.reward() done = True if reward > 0 else False exp = (state1, action_, reward, state2, done) replay.append(exp) state1 = state2 if len(replay) > batch_size: minibatch = random.sample(replay, batch_size) state1_batch = torch.cat([s1 for (s1,a,r,s2,d) in minibatch]) action_batch = torch.Tensor([a for (s1,a,r,s2,d) in minibatch]) reward_batch = torch.Tensor([r for (s1,a,r,s2,d) in minibatch]) state2_batch = torch.cat([s2 for (s1,a,r,s2,d) in minibatch]) done_batch = torch.Tensor([d for (s1,a,r,s2,d) in minibatch]) Q1 = model(state1_batch) with torch.no_grad(): Q2 = model2(state2_batch) Y = reward_batch + gamma * ((1-done_batch) * \ torch.max(Q2,dim=1)[0]) X = Q1.gather(dim=1,index=action_batch.long() \ .unsqueeze(dim=1)).squeeze() loss = loss_fn(X, Y.detach()) print(i, loss.item()) clear_output(wait=True) optimizer.zero_grad() loss.backward() losses.append(loss.item()) optimizer.step() if j % sync_freq == 0: model2.load_state_dict(model.state_dict()) if reward != -1 or mov > max_moves: status = 0 mov = 0 losses = np.array(losses) plt.plot(losses) test_model(model,mode='random')
Initial State: [[' ' ' ' ' ' ' '] [' ' ' ' ' ' 'W'] ['+' ' ' ' ' ' '] ['-' 'P' ' ' ' ']] Move #: 0; Taking action: u [[' ' ' ' ' ' ' '] [' ' ' ' ' ' 'W'] ['+' 'P' ' ' ' '] ['-' ' ' ' ' ' ']] Move #: 1; Taking action: l [[' ' ' ' ' ' ' '] [' ' ' ' ' ' 'W'] ['+' ' ' ' ' ' '] ['-' ' ' ' ' ' ']] Game won! Reward: 10
MIT
Errata/Chapter 3.ipynb
karshtharyani/DeepReinforcementLearningInAction
0. General note * This notebook produces figures and calculations presented in [Ye et al. 2017, JGR](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2016JB013811).* This notebook demonstrates how to correct pressure scales for the existing phase boundary data. 1. Global setup
import matplotlib.pyplot as plt import numpy as np from uncertainties import unumpy as unp import pytheos as eos
_____no_output_____
Apache-2.0
Mantle_Boundaries.ipynb
SHDShim/cider2018_tutorial
2. Pressure calculations for PPv * Data from Tateno2009

T (K) | Au-Tsuchiya | Pt-Holmes | MgO-Speziale
------|-------------|-----------|--------------
3500  | 120.4       | 137.7     | 135.6
2000  | 110.5       | 126.8     | 115.8

* Dorogokupets2007

T (K) | Au    | Pt    | MgO
------|-------|-------|------
3500  | 119.7 | 135.2 | 129.6
2000  | 108.9 | 123.2 | 113.2

* In conclusion, the PPv boundary discrepancy is not likely due to a pressure-scale problem.
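The boundary slopes implied by the tabulated values can be checked with a few lines of plain ΔP/ΔT arithmetic on the numbers above (an illustrative sketch, independent of the pytheos calculations below):

```python
T_hi, T_lo = 3500.0, 2000.0  # K

# Boundary pressures (GPa) at T_hi and T_lo, from the Tateno 2009 table above
P = {
    "Au-Tsuchiya":  (120.4, 110.5),
    "Pt-Holmes":    (137.7, 126.8),
    "MgO-Speziale": (135.6, 115.8),
}

for scale, (p_hi, p_lo) in P.items():
    slope = (p_hi - p_lo) / (T_hi - T_lo)   # GPa/K
    print(f"{scale}: {slope * 1000:.1f} MPa/K")
```

The spread between the scales' implied slopes is what the pytheos recalculations below quantify with proper equations of state.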
t_ppv = np.asarray([3500., 2000.]) Au_T = eos.gold.Tsuchiya2003() Au_D = eos.gold.Dorogokupets2007() v = np.asarray([51.58,51.7]) p_Au_T_ppv = Au_T.cal_p(v, t_ppv) p_Au_D_ppv = Au_D.cal_p(v, t_ppv) print(p_Au_T_ppv, p_Au_D_ppv) print('slopes: ', (p_Au_T_ppv[0]-p_Au_T_ppv[1])/(t_ppv[0]-t_ppv[1]),\ (p_Au_D_ppv[0]-p_Au_D_ppv[1])/(t_ppv[0]-t_ppv[1]) ) Pt_H = eos.platinum.Holmes1989() Pt_D = eos.platinum.Dorogokupets2007() v = np.asarray([48.06, 48.09]) p_Pt_H_ppv = Pt_H.cal_p(v, t_ppv) p_Pt_D_ppv = Pt_D.cal_p(v, t_ppv) print(p_Pt_H_ppv, p_Pt_D_ppv) print('slopes: ', (p_Pt_H_ppv[0]-p_Pt_H_ppv[1])/(t_ppv[0]-t_ppv[1]),\ (p_Pt_D_ppv[0]-p_Pt_D_ppv[1])/(t_ppv[0]-t_ppv[1]) ) MgO_S = eos.periclase.Speziale2001() MgO_D = eos.periclase.Dorogokupets2007() v = np.asarray([52.87, 53.6]) p_MgO_S_ppv = MgO_S.cal_p(v, t_ppv) p_MgO_D_ppv = MgO_D.cal_p(v, t_ppv) print(p_MgO_S_ppv, p_MgO_D_ppv) print('slopes: ', (p_MgO_S_ppv[0]-p_MgO_S_ppv[1])/(t_ppv[0]-t_ppv[1]), \ (p_MgO_D_ppv[0]-p_MgO_D_ppv[1])/(t_ppv[0]-t_ppv[1]) )
[135.56997269894586+/-0.9530902142955003 115.8281642808288+/-0.49022460541552476] [129.57114762303976+/-0.0071388388100554765 113.23868528253495+/-0.006849434801332905] slopes: 0.01316+/-0.00032 0.01088831+/-0.00000020
Apache-2.0
Mantle_Boundaries.ipynb
SHDShim/cider2018_tutorial
3. Post-spinel Fei2004

Scales | P, T       | P, T
-------|------------|------------
MgO-S  | 23.6, 1573 | 22.8, 2173
MgO-D  | 23.1, 1573 | 22.0, 2173

Ye2014

Scales | P, T       | P, T
-------|------------|------------
Pt-F   | 25.2, 1550 | 23.2, 2380
Pt-D   | 24.6, 1550 | 22.5, 2380
Au-F   | 28.3, 1650 | 27.1, 2150
Au-D   | 27.0, 1650 | 25.6, 2150
MgO_S = eos.periclase.Speziale2001() MgO_D = eos.periclase.Dorogokupets2007() v = np.asarray([68.75, 70.3]) t_MgO = np.asarray([1573.,2173.]) p_MgO_S = MgO_S.cal_p(v, t_MgO) p_MgO_D = MgO_D.cal_p(v, t_MgO) print(p_MgO_S, p_MgO_D) print('slopes: ', (p_MgO_S[0]-p_MgO_S[1])/(t_MgO[0]-t_MgO[1]), (p_MgO_D[0]-p_MgO_D[1])/(t_MgO[0]-t_MgO[1]) ) Pt_F = eos.platinum.Fei2007bm3() Pt_D = eos.platinum.Dorogokupets2007() v = np.asarray([57.43, 58.85]) t_Pt = np.asarray([1550., 2380.]) p_Pt_F = Pt_F.cal_p(v, t_Pt) p_Pt_D = Pt_D.cal_p(v, t_Pt) print(p_Pt_F, p_Pt_D) print('slopes: ', (p_Pt_F[0]-p_Pt_F[1])/(t_Pt[0]-t_Pt[1]), (p_Pt_D[0]-p_Pt_D[1])/(t_Pt[0]-t_Pt[1]) ) Au_F = eos.gold.Fei2007bm3() Au_D = eos.gold.Dorogokupets2007() v = np.asarray([62.33,63.53]) t_Au = np.asarray([1650., 2150.]) p_Au_F = Au_F.cal_p(v, t_Au) p_Au_D = Au_D.cal_p(v, t_Au) print(p_Au_F, p_Au_D) print('slopes: ', (p_Au_F[0]-p_Au_F[1])/(t_Au[0]-t_Au[1]), (p_Au_D[0]-p_Au_D[1])/(t_Au[0]-t_Au[1]) ) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,3.5)) #ax.plot(unp.nominal_values(p_Au_T), t, c='b', ls='--', label='Au-Tsuchiya') lw = 4 l_alpha = 0.3 ax1.plot(unp.nominal_values(p_Au_D), t_Au, c='b', ls='-', alpha=l_alpha, label='Au-D07', lw=lw) ax1.annotate('Au-D07', xy=(25.7, 2100), xycoords='data', xytext=(26.9, 2100), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='right', verticalalignment='center') ax1.plot(unp.nominal_values(p_Au_D-2.5), t_Au, c='b', ls='-', label='Au-mD07', lw=lw) ax1.annotate('Au-D07,\n corrected', xy=(24.35, 1700), xycoords='data', xytext=(24.8, 1700), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='left', verticalalignment='center') #ax.plot(unp.nominal_values(p_Pt_H), t, c='r', ls='--', label='Pt-Holmes') ax1.plot(unp.nominal_values(p_Pt_D), t_Pt, c='r', ls='-', label='Pt-D07', lw=lw) ax1.annotate('Pt-D07', xy=(22.7, 2300), xycoords='data', 
xytext=(23.1, 2300), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='left', verticalalignment='center') ax1.plot(unp.nominal_values(p_MgO_S), t_MgO, c='k', ls='-', alpha=l_alpha, label='MgO-S01', lw=lw) ax1.annotate('MgO-S01', xy=(22.9, 2150), xycoords='data', xytext=(22.5, 2250), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='right', verticalalignment='top') ax1.plot(unp.nominal_values(p_MgO_D), t_MgO, c='k', ls='-', label='MgO-D07', lw=lw) ax1.annotate('MgO-D07', xy=(22.7, 1800), xycoords='data', xytext=(22.3, 1800), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='right', verticalalignment='center') ax1.fill([23.5,24,24,23.5], [1700,1700,2000,2000], 'k', alpha=0.2) ax1.set_xlabel("Pressure (GPa)"); ax1.set_ylabel("Temperature (K)") #l = ax1.legend(loc=3, fontsize=10, handlelength=2.5); l.get_frame().set_linewidth(0.5) ax2.plot(unp.nominal_values(p_Au_T_ppv), t_ppv, c='b', ls='-', alpha=l_alpha, label='Au-T04', lw=lw) ax2.annotate('Au-T04', xy=(120, 3400), xycoords='data', xytext=(122, 3400), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='left', verticalalignment='center') ax2.plot(unp.nominal_values(p_Au_D_ppv), t_ppv, c='b', ls='-', label='Au-D07', lw=lw) ax2.annotate('Au-D07', xy=(119, 3400), xycoords='data', xytext=(117, 3400), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='right', verticalalignment='center') ax2.plot(unp.nominal_values(p_Pt_H_ppv), t_ppv, c='r', ls='-', alpha=l_alpha, label='Pt-H89', lw=lw) ax2.annotate('Pt-H89', xy=(129, 2300), xycoords='data', xytext=(132, 2300), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='left', 
verticalalignment='center') ax2.plot(unp.nominal_values(p_Pt_D_ppv), t_ppv, c='r', ls='-', label='Pt-D07', lw=lw) ax2.annotate('Pt-D07', xy=(124, 2150), xycoords='data', xytext=(123.7, 2300), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='center', verticalalignment='bottom') ax2.plot(unp.nominal_values(p_MgO_S_ppv), t_ppv, c='k', ls='-', alpha=l_alpha, label='MgO-S01', lw=lw) ax2.annotate('MgO-S01', xy=(132, 3250), xycoords='data', xytext=(132.2, 3550), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='left', verticalalignment='bottom') ax2.plot(unp.nominal_values(p_MgO_D_ppv), t_ppv, c='k', ls='-', label='MgO-D07', lw=lw) ax2.annotate('MgO-D07', xy=(128, 3400), xycoords='data', xytext=(128, 3550), textcoords='data', arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5), horizontalalignment='center', verticalalignment='bottom') ax2.set_xlabel("Pressure (GPa)"); ax2.set_ylabel("Temperature (K)") ax2.set_ylim(1900, 3700.) #l = ax2.legend(loc=0, fontsize=10, handlelength=2.5); l.get_frame().set_linewidth(0.5) ax1.text(0.05, 0.03, 'a', horizontalalignment='center',\ verticalalignment='bottom', transform = ax1.transAxes,\ fontsize = 32) ax2.text(0.05, 0.03, 'b', horizontalalignment='center',\ verticalalignment='bottom', transform = ax2.transAxes,\ fontsize = 32) ax1.set_yticks(ax1.get_yticks()[::2]) #ax2.set_yticks(ax2.get_yticks()[::2]) plt.tight_layout(pad=0.6) plt.savefig('f-boundaries.pdf', bbox_inches='tight', \ pad_inches=0.1)
_____no_output_____
Apache-2.0
Mantle_Boundaries.ipynb
SHDShim/cider2018_tutorial
Generators > Here we'll take a deeper dive into Python generators, including *generator expressions* and *generator functions*. Generator Expressions > The difference between list comprehensions and generator expressions is sometimes confusing; here we'll quickly outline the differences between them: List comprehensions use square brackets, while generator expressions use parentheses > This is a representative list comprehension:
[n ** 2 for n in range(12)]
_____no_output_____
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
> While this is a representative generator expression:
(n ** 2 for n in range(12))
_____no_output_____
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
> Notice that printing the generator expression does not print the contents; one way to print the contents of a generator expression is to pass it to the ``list`` constructor:
G = (n ** 2 for n in range(12)) list(G)
_____no_output_____
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
A list is a collection of values, while a generator is a recipe for producing values > When you create a list, you are actually building a collection of values, and there is some memory cost associated with that. When you create a generator, you are not building a collection of values, but a recipe for producing those values. Both expose the same iterator interface, as we can see here:
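One way to make the memory point concrete is to compare object sizes. Note that `sys.getsizeof` reports only the container or generator object itself, not the elements, but the contrast is still telling:

```python
import sys

L = [n ** 2 for n in range(100000)]
G = (n ** 2 for n in range(100000))

print(sys.getsizeof(L))  # hundreds of kilobytes for the list object
print(sys.getsizeof(G))  # a couple hundred bytes, regardless of the range size
```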
L = [n ** 2 for n in range(12)] for val in L: print(val, end=' ') G = (n ** 2 for n in range(12)) for val in G: print(val, end=' ')
0 1 4 9 16 25 36 49 64 81 100 121
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
> The difference is that a generator expression does not actually compute the values until they are needed. This not only leads to memory efficiency, but to computational efficiency as well! This also means that while the size of a list is limited by available memory, the size of a generator expression is unlimited! An example of an infinite generator expression can be created using the ``count`` iterator defined in ``itertools``:
from itertools import count count() for i in count(): print(i, end=' ') if i >= 10: break
0 1 2 3 4 5 6 7 8 9 10
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
> The ``count`` iterator will go on happily counting forever until you tell it to stop; this makes it convenient to create generators that will also go on forever:
factors = [2, 3, 5, 7] G = (i for i in count() if all(i % n > 0 for n in factors)) for val in G: print(val, end=' ') if val > 40: break
1 11 13 17 19 23 29 31 37 41
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
> You might see what we're getting at here: if we were to expand the list of factors appropriately, what we would have the beginnings of is a prime number generator, using the Sieve of Eratosthenes algorithm. We'll explore this more momentarily. A list can be iterated multiple times; a generator expression is single-use > This is one of those potential gotchas of generator expressions. With a list, we can straightforwardly do this:
L = [n ** 2 for n in range(12)] for val in L: print(val, end=' ') print() for val in L: print(val, end=' ')
0 1 4 9 16 25 36 49 64 81 100 121 0 1 4 9 16 25 36 49 64 81 100 121
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
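Trying the same thing with a generator expression shows the single-use behavior; a quick sketch:

```python
G = (n ** 2 for n in range(12))
print(list(G))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
print(list(G))  # [] -- the generator is already exhausted
```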