*Source: fix_conversion_issues_ptq_tf2.ipynb, google-coral/tutorials (Apache-2.0)*

The following creates a TFLite file that will fail in the Edge TPU Compiler:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = quantize_model(converter)
with open('mobilenet_quant_before.tflite', 'wb') as f:
    f.write(tflite_model)

! edgetpu_compiler mobilenet_quant_before.tflite
```

It won't compile because the model has a dynamic batch size, as shown here (None means it's dynamic):

```python
model.input.shape
```

So to fix it, we need to set that to 1:

```python
model.input.set_shape((1,) + model.input.shape[1:])
model.input.shape
```

Now we can convert it again and it will compile:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = quantize_model(converter)
with open('mobilenet_quant_after.tflite', 'wb') as f:
    f.write(tflite_model)

! edgetpu_compiler mobilenet_quant_after.tflite
```
## Solution for a SavedModel

If you're loading a SavedModel file, the fix looks a little different. Say you saved a model like this:

```python
model = tf.keras.applications.MobileNet()
save_path = os.path.join("mobilenet/1/")
tf.saved_model.save(model, save_path)
```

Ideally, you could later load the model like this:

```python
converter = tf.lite.TFLiteConverter.from_saved_model(save_path)
```

But the saved model's input still has a dynamic batch size, so you need to instead load the model with saved_model.load(), modify the input's concrete function so it has a batch size of 1, and then load it into TFLiteConverter using the concrete function:

```python
imported = tf.saved_model.load(save_path)
concrete_func = imported.signatures["serving_default"]
concrete_func.inputs[0].set_shape([1, 224, 224, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], imported)
```

Now you can convert to TFLite and it will compile for the Edge TPU:

```python
tflite_model = quantize_model(converter)
with open('mobilenet_imported_quant.tflite', 'wb') as f:
    f.write(tflite_model)

! edgetpu_compiler mobilenet_imported_quant.tflite
```
## Can't import a SavedModel without a signature

Sometimes a SavedModel does not include a signature (such as when the model was built with a custom tf.Module), making it impossible to load with TFLiteConverter. In that case you can add the batch size as follows.

Note: If you created the model yourself, see how to specify...

```python
# First get the Inception SavedModel, which is lacking a signature
!wget -O imagenet_inception_v2_classification_4.tar.gz https://tfhub.dev/google/imagenet/inception_v2/classification/4?tf-hub-format=compressed
!mkdir -p imagenet_inception_v2_classification_4
!tar -xvzf imagenet_inception_v2_classification_4.tar.gz --d...
```

For example, this fails because the model has no signature:

```python
#converter = tf.lite.TFLiteConverter.from_saved_model("imagenet_inception_v2_classification_4")
```

Whereas the code above gets the input's concrete function from the "serving_default" signature, we can't do that when the model has no signature. So we instead get the concrete function by specifying its known input tensor shape:

```python
imported = tf.saved_model.load("imagenet_inception_v2_classification_4")
concrete_func = imported.__call__.get_concrete_function(
    tf.TensorSpec([1, 224, 224, 3]))
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], imported)
```

Now we can convert and compile:

```python
tflite_model = quantize_model(converter)
with open('inceptionv2_imported_quant.tflite', 'wb') as f:
    f.write(tflite_model)

! edgetpu_compiler inceptionv2_imported_quant.tflite
```
## Load Data and set Hyperparameters

*Source: examples/Delay_Embedding/.ipynb_checkpoints/Delay_Embedding-checkpoint.ipynb, ehthiede/PyEDGAR (MIT)*

We first load in the pre-sampled data. The data consists of 400 short trajectories, each with 30 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset tha...

```python
ntraj = 700
trajectory_length = 40
lag_values = np.arange(1, 37, 2)
embedding_values = lag_values[1:] - 1
```
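For concreteness, here is what those scan parameters evaluate to (a quick check, not part of the original notebook):

```python
import numpy as np

# Odd lag times 1, 3, ..., 35 (18 values)
lag_values = np.arange(1, 37, 2)
# Even delay-embedding lengths 2, 4, ..., 34 (17 values)
embedding_values = lag_values[1:] - 1

print(lag_values[:4])        # [1 3 5 7]
print(embedding_values[:4])  # [2 4 6 8]
```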
## Load and format the data

```python
trajs_2d = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length]  # Raw trajectory
trajs = trajs_2d[:, :, 1]  # Only keep y coordinate
stateA = (trajs > 1.15).astype('float')
stateB = (trajs < 0.15).astype('float')

# Convert to list of trajectories format
trajs = [traj_i.reshape(-1, 1) for traj_i in trajs]
...
```
We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the tr...

```python
flattened_trajs, traj_edges = tlist_to_flat(trajs)
flattened_stateA = np.hstack(stateA)
flattened_stateB = np.hstack(stateB)
print("Flattened Shapes are: ", flattened_trajs.shape, flattened_stateA.shape, flattened_stateB.shape)
```
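The flattened format itself is easy to sketch. The following is not PyEDGAR's implementation, just an illustration of the idea (assuming tlist_to_flat stacks the trajectories vertically and records each trajectory's start/stop index):

```python
import numpy as np

def tlist_to_flat_sketch(trajs):
    """Stack a list of (n_i, d) trajectories into one 2D array,
    recording the start/stop index of each trajectory."""
    flat = np.concatenate(trajs, axis=0)
    lengths = [len(t) for t in trajs]
    edges = np.concatenate([[0], np.cumsum(lengths)])
    return flat, edges

# Two toy trajectories of 30 points each, one coordinate per point
trajs = [np.zeros((30, 1)), np.ones((30, 1))]
flat, edges = tlist_to_flat_sketch(trajs)
print(flat.shape)  # (60, 1)
print(edges)       # [ 0 30 60]
```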
## Construct DGA MFPT by increasing lag times

We first construct the MFPT with increasing lag times.

```python
# Build the basis set
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous')
diff_atlas.fit(flattened_trajs)
flat_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flattened_stateA))
basis = flat_to_tlist(flat_basis, traj_edges)
flat_basis_no_bou...
```
## Construct DGA MFPT with increasing Delay Embedding

We now construct the MFPT using delay embedding. To accelerate the process, we will only use every fifth value of the delay length.

```python
mfpt_BA_embeddings = []
for lag in embedding_values:
    # Perform delay embedding
    debbed_traj = delay_embed(trajs, n_embed=lag)
    lifted_A = lift_function(stateA, n_embed=lag)
    lifted_B = lift_function(stateB, n_embed=lag)
    flat_debbed_traj, embed_edges = tlist_to_flat(debbed_traj)
    flat_lifted_A =...
```
## Plot the Results

We plot the results of our calculation against the true value (black line, with the standard deviation in stateB given by the dotted lines). We see that increasing the lag time causes the mean-first-passage time to grow unboundedly. In contrast, with delay embedding the mean-first-passage time conve...

```python
plt.plot(embedding_values, mfpt_BA_embeddings, label="Delay Embedding")
plt.plot(lag_values, mfpt_BA_lags, label="Lags")
plt.axhline(true_mfpt[0] * 10, color='k', label='True')
plt.axhline((true_mfpt[0] + true_mfpt[1]) * 10., color='k', linestyle=':')
plt.axhline((true_mfpt[0] - true_mfpt[1]) * 10., color='k', linestyl...
```
## Connections to DB2

*Source: Notebooks/DB2 Jupyter Extensions Tutorial.ipynb, DB2-Samples/db2odata (Apache-2.0)*

Before any SQL commands can be issued, a connection needs to be made to the DB2 database that you will be using. The connection can be done manually (through the use of the CONNECT command), or automatically when the first %sql command is issued. The DB2 magic command tracks whether or not a connecti...

```python
%sql CONNECT
```

## Line versus Cell Command

The DB2 extension is made up of one magic command that works either at the LINE level (%sql) or at the CELL level (%%sql). If you only want to execute a SQL command on one line in your script, use the %sql form of the command. If you want to run a larger block of SQL, then use the %%sql form. N...

```python
%sql VALUES 'HELLO THERE'
```
The %sql syntax allows you to pass local variables to the script. There are five predefined variables defined in the program:

- db2_database - the name of the database you are connected to
- db2_uid - the userid that you connected with
- db2_host - the IP address of the host system
- db2_port - the port number of the host system...

```python
%sql VALUES '{db2_database}'
```
You cannot use variable substitution with the CELL version of the %sql command. If your SQL statement extends beyond one line, and you want to use variable substitution, you can use a couple of techniques to make it look like one line. The simplest way is to add the backslash character (\) at the end of every line. Th...

```python
%sql VALUES \
'{db2_database}'
```
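Under the hood, this {name}-style substitution behaves much like Python string formatting on the predefined variable names. A minimal sketch of the idea (the dictionary values here are hypothetical, not the magic's actual internals):

```python
# Hypothetical values for two of the predefined variables
predefined = {"db2_database": "SAMPLE", "db2_uid": "db2inst1"}

# The magic replaces {db2_database} etc. before sending the SQL to DB2
statement = "VALUES '{db2_database}'"
expanded = statement.format(**predefined)
print(expanded)  # VALUES 'SAMPLE'
```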
If you have SQL that requires multiple lines, or if you need to execute many lines of SQL, then you should be using the CELL version of the %sql command. To start a block of SQL, start the cell with %%sql and do not place any SQL following the command. Subsequent lines can contain SQL code, with each SQL statement del...

```
%%sql
VALUES
  1,
  2,
  3
```
If you are using a single statement, there is no need to use a delimiter. However, if you are combining a number of commands, you must use the semicolon:

```
%%sql
DROP TABLE STUFF;
CREATE TABLE STUFF (A INT);
INSERT INTO STUFF VALUES
  1,2,3;
SELECT * FROM STUFF;
```
The script will generate messages and output as it executes. Each SQL statement that generates results will have a table displayed with the result set. If a command is executed, the results of the execution get listed as well. The script you just ran probably generated an error on the DROP TABLE command.

## Options

Both f...

```
%%sql -d
DROP TABLE STUFF
@
CREATE TABLE STUFF (A INT)
@
INSERT INTO STUFF VALUES
  1,2,3
@
SELECT * FROM STUFF
@
```
The delimiter change will only take place for the statements following the %%sql command. Subsequent cells in the notebook will still use the semicolon. You must use the -d option for every cell that needs to use the semicolon in the script.

## Limiting Result Sets

The default number of rows displayed for any result set i...

```python
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
```
You will notice that the displayed result splits the visible rows into the first 5 rows and the last 5 rows. Using the -a option will display all values:

```python
%sql -a values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
```
To change the default number of rows displayed, you can either do a CONNECT RESET (discussed later) or set the DB2 control variable db2_max to a different value. A value of -1 will display all rows.

```python
# Save previous version of maximum rows
last_max = db2_max
db2_max = 5
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
# Set the maximum back
db2_max = last_max
```
## Quiet Mode

Every SQL statement will result in some output: either an answer set (SELECT) or an indication of whether the command worked. For instance, the following set of SQL will generate some error messages, since the tables probably do not exist:

```
%%sql
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
```
If you know that these errors may occur, you can silence them with the -q option.

```
%%sql -q
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
```
SQL output will not be suppressed, so the following command will still show the results.

```
%%sql -q
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
VALUES 1,2,3;
```
To have the messages returned as text only, set the db2_error_highlight variable to False. This change affects all messages in the notebook.

```python
db2_error_highlight = False
%sql DROP TABLE TABLE_NOT_FOUND;
```
To set the messages back to being formatted, set db2_error_highlight to True.

```python
db2_error_highlight = True
%sql DROP TABLE TABLE_NOT_FOUND;
```
## SQL and Command Mode

The %sql command looks at the beginning of your SQL to determine whether to run it as a SELECT statement or as a command. Three statements trigger SELECT mode: SELECT, WITH, and VALUES. Aside from CONNECT, all other statements will be considered to be an SQL statement that does not ...

```python
%sql -t VALUES 1,2,3,4,5,6,7,8,9
```
When timing a statement, no output will be displayed. If your SQL statement takes longer than one second, you will need to modify the db2_runtime variable. This variable must be set to the number of seconds that you want to run the statement.

```python
db2_runtime = 5
%sql -t VALUES 1,2,3,4,5,6,7,8,9
```
## JSON Formatting

DB2 supports querying JSON that is stored in a column within a table. Standard output would just display the JSON as a string. For instance, the following statement would just return a large string of output.

```
%%sql
VALUES
  '{
  "empno":"000010",
  "firstnme":"CHRISTINE",
  "midinit":"I",
  "lastname":"HAAS",
  "workdept":"A00",
  "phoneno":[3978],
  "hiredate":"01/01/1995",
  "job":"PRES",
  "edlevel":18,
  "sex":"F",
  "birthdate":"08/24/1963",
  "pay" : {
    "sala...
```
Adding the -j option to the %sql (or %%sql) command will format the first column of a result set to better display the structure of the document. Note that if your answer set has additional columns associated with it, they will not be displayed in this format.

```
%%sql -j
VALUES
  '{
  "empno":"000010",
  "firstnme":"CHRISTINE",
  "midinit":"I",
  "lastname":"HAAS",
  "workdept":"A00",
  "phoneno":[3978],
  "hiredate":"01/01/1995",
  "job":"PRES",
  "edlevel":18,
  "sex":"F",
  "birthdate":"08/24/1963",
  "pay" : {
    "s...
```
## Plotting

Sometimes it is useful to display a result set as a bar, pie, or line chart. The first one or two columns of a result set need to contain the values to plot. The three possible plot options are:

- -pb - bar chart (x,y)
- -pp - pie chart (y)
- -pl - line chart (x,y)

The following d...

```python
%sql values 1,2,3,4,5
```

Since the results only have one column, the pie, line, and bar charts will not have any labels associated with them. The first example is a bar chart.

```python
%sql -pb values 1,2,3,4,5
```
The same data as a pie chart.

```python
%sql -pp values 1,2,3,4,5
```

And finally a line chart.

```python
%sql -pl values 1,2,3,4,5
```
If you retrieve two columns of information, the first column is used for the labels (X axis or pie slices) and the second column contains the data.

```python
%sql -pb values ('A',1),('B',2),('C',3),('D',4),('E',5)
```

For a pie chart, the first column is used to label the slices, while the data comes from the second column.

```python
%sql -pp values ('A',1),('B',2),('C',3),('D',4),('E',5)
```
Finally, for a line chart, the first column supplies the x-axis labels and the second column supplies the y values.

```python
%sql -pl values ('A',1),('B',2),('C',3),('D',4),('E',5)
```
The following SQL will plot the number of employees per department.

```
%%sql -pb
SELECT WORKDEPT, COUNT(*)
  FROM EMPLOYEE
GROUP BY WORKDEPT
```
## Sample Data

Many of the DB2 notebooks depend on two of the tables that are found in the SAMPLE database. Rather than having to create the entire SAMPLE database, this option will create and populate the EMPLOYEE and DEPARTMENT tables in your database. Note that if you already have these tables defined, they will not b...

```python
%sql -sampledata
```
## Result Set

By default, any %sql block will return the contents of a result set as a table that is displayed in the notebook. If you want to capture the results from the SQL into a variable, use the -r option:

<pre>
var = %sql -r select * from employee
</pre>

The result is a list of rows. Each row is a list it...

```python
rows = %sql -r select * from employee
print(rows[0][0])
```

The number of rows in the result set can be determined by using the len function.

```python
print(len(rows))
```
If you want to iterate over all of the rows and columns, you can use the following Python syntax instead of creating a for loop that goes from 0 to 41.

```python
for row in rows:
    line = ""
    for col in row:
        line = line + str(col) + ","
    print(line)
```
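As a side note, the inner loop can be written more idiomatically with str.join, which also avoids the trailing comma. The rows below are hypothetical, shaped like the list of lists that %sql -r returns:

```python
# Hypothetical result rows: each row is a list of column values
rows = [["000010", "CHRISTINE", 3978], ["000020", "MICHAEL", 3476]]
for row in rows:
    # str() handles non-string columns such as integers and dates
    print(",".join(str(col) for col in row))
# 000010,CHRISTINE,3978
# 000020,MICHAEL,3476
```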
Since the data may be returned in different formats (like integers), you should use the str() function to convert the values to strings. Otherwise, the concatenation used in the above example will fail. For instance, the 6th field is a birthdate field. If you retrieve it as an individual value and try and conc...

```python
print("Birth Date="+rows[0][6])
```

You can fix this problem by adding the str function to convert the date.

```python
print("Birth Date="+str(rows[0][6]))
```
## DB2 CONNECT Statement

As mentioned at the beginning of this notebook, connecting to DB2 is automatically done when you issue your first %sql statement. Usually the program will prompt you with what options you want when connecting to a database. The other option is to use the CONNECT statement directly. The CONNECT sta...

```python
%sql CONNECT RESET
%sql CONNECT
```
## Run the Demo

*Source: recipes-multimedia/gstreamer/gstreamer-vcu-notebooks/vcu-demo-camera-encode-file.ipynb, Xilinx/meta-petalinux (MIT)*

```python
from ipywidgets import interact
import ipywidgets as widgets
from common import common_vcu_demo_camera_encode_file
import os
from ipywidgets import HBox, VBox, Text, Layout
```
## Video

```python
video_capture_device = widgets.Text(value='',
                                    placeholder='"/dev/video1"',
                                    description='Camera Dev Id:',
                                    style={'description_width': 'initial'},
                                    # layout=Layout(width='35%', height='30px'),
                                    disabled=False)
video_capture_device

codec_type = widgets.RadioButtons(
    options=['avc', 'hevc'],
    descript...
```
## Audio

```python
device_id = Text(value='',
                 placeholder='(optional) "hw:1"',
                 description='Input Dev:',
                 description_tooltip='To select the values, please refer to the Determine Audio Device Names section',
                 disabled=False)
device_id

audio_sink = {'none': ['none'], 'aac': ['auto', 'alsasink', 'pulsesink'], 'vorbis': ['auto', 'alsasink', 'pu...
```
## Advanced options

```python
frame_rate = widgets.Text(value='',
                          placeholder='(optional) 15, 30, 60',
                          description='Frame Rate:',
                          disabled=False)
bit_rate = widgets.Text(value='',
                        placeholder='(optional) 1000, 20000',
                        description='Bit Rate(Kbps):',
                        style={'description_width': 'initial'},
                        disabled=False)
gop_length = widgets.Te...
```
*Source: posts/Pair-Distances.ipynb, JQGoh/jqlearning (GPL-3.0)*

Before checking for the closest stations, we can understand the composition of stations found in the different data sets. The sets_grps function from the preprocess.py file is useful for this purpose.

```python
from script.preprocess import sets_grps

sets_grps(df1.station_id, df2.station_id)
```

The summary, as shown above, suggests that both data sets have 9 common stations. To evaluate the distances between the selected features of two data sets, the pair_dist function from the preprocess.py file can be handy. One can provide the dataframes he/she is interested to work with, and the selected features i...

```python
from script.preprocess import pair_dist

pair_dist(df1,
          df2,
          {"station_id": ["latitude", "longitude"]},
          {"station_id": ["latitude", "longitude"]})
```
It is straightforward to find the closest stations by using the idxmin function. This Jupyter notebook is available at my GitHub page: Pair-Distances.ipynb, and it is part of the repository jqlearning.

```python
station_pairs_df = pair_dist(df1,
                             df2,
                             {"station_id": ["latitude", "longitude"]},
                             {"station_id": ["latitude", "longitude"]})
station_pairs_df.idxmin(axis=1)
```
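The distance-matrix-plus-idxmin pattern can be illustrated with a self-contained toy example. This is not pair_dist itself (which works on latitude/longitude and may use a great-circle metric); the sketch below uses plain Euclidean distance and hypothetical station labels to show how idxmin(axis=1) picks the closest column per row:

```python
import numpy as np
import pandas as pd

# Hypothetical coordinates for two small sets of stations
coords1 = pd.DataFrame({"latitude": [0.0, 10.0], "longitude": [0.0, 10.0]},
                       index=["a", "b"])
coords2 = pd.DataFrame({"latitude": [0.1, 9.0], "longitude": [0.0, 9.0]},
                       index=["x", "y"])

# Euclidean distance matrix: rows are coords1 stations, columns coords2
diff = coords1.values[:, None, :] - coords2.values[None, :, :]
dist = pd.DataFrame(np.sqrt((diff ** 2).sum(-1)),
                    index=coords1.index, columns=coords2.index)

# For each row station, the label of the nearest column station
print(dist.idxmin(axis=1))  # a -> x, b -> y
```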
*Source: notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb, GoogleCloudPlatform/asl-ml-immersion (Apache-2.0)*

Note: this lab was developed and tested with the following TF ecosystem package versions:

- Tensorflow Version: 2.3.1
- TFX Version: 0.25.0
- TFDV Version: 0.25.0
- TFMA Version: 0.25.0

If you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below.

```python
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
print("TFDV Version:", tfdv.__version__)
print("TFMA Version:", tfma.VERSION_STRING)
absl.logging.set_verbosity(absl.logging.INFO)
```

If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart...

```python
os.environ["PATH"] += os.pathsep + "/home/jupyter/.local/bin"
```
## Configure lab settings

Set constants, location paths, and other environment settings.

```python
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "./data"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
!mkdir -p $DATA_ROOT
```
## Preparing the dataset

```python
data = pd.read_csv("./data/titles_sample.csv")
data.head()

LABEL_MAPPING = {"github": 0, "nytimes": 1, "techcrunch": 2}
data["source"] = data["source"].apply(lambda label: LABEL_MAPPING[label])
data.head()

data.to_csv(f"{DATA_ROOT}/dataset.csv", index=None)
!head $DATA_ROOT/*.csv
```
## Interactive Context

TFX Interactive Context allows you to create and run TFX components in an interactive mode. It is designed to support experimentation and development in a Jupyter notebook environment. It is an experimental feature, and major changes to interface and functionality are expected. When creating the inte...

```python
PIPELINE_NAME = "tfx-title-classifier"
PIPELINE_ROOT = os.path.join(
    ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
os.makedirs(PIPELINE_ROOT, exist_ok=True)

context = InteractiveContext(
    pipeline_name=PIPELINE_NAME,
    pipeline_root=PIPELINE_ROOT,
    metadata_connection_config=None,
)
```
## Ingesting data using ExampleGen

In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It c...

```python
output_config = example_gen_pb2.Output(
    split_config=example_gen_pb2.SplitConfig(
        splits=[
            example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=4),
            example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=1),
        ]
    )
)
example_gen = tfx.components.CsvExampleGen(
    ...
```
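The hash_buckets values set the split ratio: with 4 train buckets and 1 eval bucket, each example is hashed into one of 5 buckets, so the expected split is 80% train and 20% eval. A quick sanity check of the arithmetic:

```python
# Expected split fractions implied by hash_buckets=4 (train) and 1 (eval)
train_buckets, eval_buckets = 4, 1
total = train_buckets + eval_buckets
print(f"train fraction: {train_buckets / total}")  # train fraction: 0.8
print(f"eval fraction: {eval_buckets / total}")    # eval fraction: 0.2
```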
## Examine the ingested data

```python
examples_uri = example_gen.outputs["examples"].get()[0].uri
tfrecord_filenames = [
    os.path.join(examples_uri, "train", name)
    for name in os.listdir(os.path.join(examples_uri, "train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
    exampl...
```
## Generating statistics using StatisticsGen

The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits: train and eval.

```python
statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs["examples"]
)
context.run(statistics_gen)
```

## Visualize statistics

The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext...

```python
context.show(statistics_gen.outputs["statistics"])
```
## Inferring data schema using SchemaGen

Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generat...

```python
schema_gen = SchemaGen(
    statistics=statistics_gen.outputs["statistics"], infer_feature_shape=False
)
context.run(schema_gen)
```

## Visualize the inferred schema

```python
context.show(schema_gen.outputs["schema"])
```
## Updating the auto-generated schema

In most cases the auto-generated schema must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using the 1-7 range) and that the value of the...

```python
schema_proto_path = "{}/{}".format(
    schema_gen.outputs["schema"].get()[0].uri, "schema.pbtxt"
)
schema = tfdv.load_schema_text(schema_proto_path)
```
## Modify the schema

You can use the protocol buffer APIs to modify the schema using tfdv.set_domain. Review the TFDV library API documentation on setting a feature's domain, and the Tensorflow Metadata proto definition for configuration options.

Save the updat...

```python
schema_dir = os.path.join(ARTIFACT_STORE, "schema")
tf.io.gfile.makedirs(schema_dir)
schema_file = os.path.join(schema_dir, "schema.pbtxt")
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
```
## Importing the updated schema using ImporterNode

The ImporterNode component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.

Configure and run the ImporterNode component:

```python
schema_importer = ImporterNode(
    instance_name="Schema_Importer",
    source_uri=schema_dir,
    artifact_type=tfx.types.standard_artifacts.Schema,
    reimport=False,
)
context.run(schema_importer)
```

## Visualize the imported schema

```python
context.show(schema_importer.outputs["result"])
```
## Validating data with ExampleValidator

The ExampleValidator component identifies anomalies in data. It does so by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by ImporterNode. ExampleValidator can detect different classes of anomalies...

```python
example_validator = ExampleValidator(
    instance_name="Data_Validation",
    statistics=statistics_gen.outputs["statistics"],
    schema=schema_importer.outputs["result"],
)
context.run(example_validator)
```
Examine the output of ExampleValidator
The output artifact of the ExampleValidator is the anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf. | train_uri = example_validator.outputs["anomalies"].get()[0].uri
train_anomalies_filename = os.path.join(train_uri, "train/anomalies.pbtxt")
!cat $train_anomalies_filename | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Visualize validation results
The file anomalies.pbtxt can be visualized using context.show. | context.show(example_validator.outputs["anomalies"])
In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.... | %%writefile config.py
FEATURE_KEY = "title"
LABEL_KEY = "source"
N_CLASSES = 3
HUB_URL = "https://tfhub.dev/google/nnlm-en-dim50/2"
HUB_DIM = 50
N_NEURONS = 16
TRAIN_BATCH_SIZE = 5
EVAL_BATCH_SIZE = 5
MODEL_NAME = "tfx_title_classifier"
def transformed_name(key):
return key + "_xf"
%%writefile preprocessing.py
i... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Configure and run the Transform component. | transform = Transform(
examples=example_gen.outputs["examples"],
schema=schema_importer.outputs["result"],
module_file=TRANSFORM_MODULE,
)
context.run(transform) | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Examine the Transform component's outputs
The Transform component has 2 outputs:
transform_graph - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
transformed_examples - contains the preprocessed training and evaluation data.
Take a ... | os.listdir(transform.outputs["transform_graph"].get()[0].uri) | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
And the transform.examples artifact | os.listdir(transform.outputs["transformed_examples"].get()[0].uri)
transform_uri = transform.outputs["transformed_examples"].get()[0].uri
tfrecord_filenames = [
os.path.join(transform_uri, "train", name)
for name in os.listdir(os.path.join(transform_uri, "train"))
]
dataset = tf.data.TFRecordDataset(tfrecord... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes:
tf.Examples used for training and eval.
A user provided module file that defines the trainer logic.
A data schema created by SchemaGen or imported by ImporterNode.
A proto definition of train ar... | %%writefile model.py
import tensorflow as tf
import tensorflow_transform as tft
from config import (
EVAL_BATCH_SIZE,
HUB_DIM,
HUB_URL,
LABEL_KEY,
MODEL_NAME,
N_CLASSES,
N_NEURONS,
TRAIN_BATCH_SIZE,
transformed_name,
)
from tensorflow.keras.callbacks import TensorBoard
from tensorflo... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Create and run the Trainer component
As of the 0.25.0 release of TFX, the Trainer component only supports passing a single field - num_steps - through the train_args and eval_args arguments. | trainer = Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=TRAINER_MODULE_FILE,
transformed_examples=transform.outputs.transformed_examples,
schema=schema_importer.outputs.result,
transform_graph=transform.outputs.transform_gr... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to th... | logs_path = trainer.outputs["model_run"].get()[0].uri
print(logs_path) | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is a URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google accou... | model_resolver = ResolverNode(
instance_name="latest_blessed_model_resolver",
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing),
)
context.run(model_resolver) | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Configure evaluation metrics and slices. | accuracy_threshold = tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={"value": 0.30}, upper_bound={"value": 0.99}
)
)
metrics_specs = tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(
class_name="SparseCategoricalAccuracy", threshold=accuracy_threshold... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Check the model performance validation status | model_blessing_uri = model_analyzer.outputs.blessing.get()[0].uri
!ls -l {model_blessing_uri} | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
Configure and run the Pusher component | trainer.outputs["model"]
pusher = Pusher(
model=trainer.outputs["model"],
model_blessing=model_analyzer.outputs["blessing"],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=SERVING_MODEL_DIR
)
),
)
context.run(pus... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Examine the output of Pusher | pusher.outputs
# Set `PATH` to include the directory containing `saved_model_cli`.
PATH = get_ipython().run_line_magic("env", "PATH")
%env PATH=/opt/conda/envs/tfx/bin:{PATH}
latest_pushed_model = os.path.join(
SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))
)
!saved_model_cli show --dir {latest_pushed_model} ... | notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Create a grid to plot (we choose a 2D plane for visualisation cutting through the charge centers, but the calculation is correct in 3D) | xpoints=512 #nr of grid points in 1 direction
xmax=1 #extension of grid [m]
pref=9e9 # 1/(4pi eps0)
x=np.linspace(-xmax,xmax,xpoints)
y=x
[x2d,y2d]=np.meshgrid(x,y,indexing='ij') #2D matrices holding x or y coordinate for each point on the grid
#define multipole
npoles=6 #number of poles, needs to be even
fradius=0.5*... | multipole.ipynb | joverbee/electromagnetism_course | gpl-3.0 |
calculate the potential of the set of spheres (use a function that we can reuse later) | def multipolepotential(x,y,z,npoles,v,fradius,sradius):
#assume a set of n conducting spheres of radius on a circle of radius fradius (field radius)
#npoles is number of poles and needs to be even >0
#the spheres are positioned in the xy plane and have a potential of V for the even spheres and -V for the od... | multipole.ipynb | joverbee/electromagnetism_course | gpl-3.0 |
And now its showtime! | #show vector plot, but limit number of points to keep the number of vector reasonable
skippts=20
skip=(slice(None,None,skippts),slice(None,None,skippts)) #don't plot all points in a quiver as this becomes unreadable
plt.quiver(x2d[skip],y2d[skip],ex[skip],ey[skip])
plt.title('electric field')
plt.xlabel('x')
plt.ylabel(... | multipole.ipynb | joverbee/electromagnetism_course | gpl-3.0 |
Note how the field emanates from the positive charges and sinks into the negative charges | plt.imshow(e,extent=[-xmax, xmax, -xmax, xmax])
plt.title('electric field and fieldlines')
plt.xlabel('x');
plt.ylabel('y');
plt.streamplot(x, y, ex.T, ey.T)  # transpose: our meshgrid uses 'ij' indexing, streamplot expects 'xy' layout
plt.axis('square')
plt.colorbar()
plt.show() | multipole.ipynb | joverbee/electromagnetism_course | gpl-3.0 |
Note the interesting npoles/2 fold symmetry of the field | plt.imshow(v,extent=[-xmax, xmax, -xmax, xmax])
plt.title('electrostatic potential V')
plt.xlabel('x')
plt.ylabel('y')
plt.axis('square')
plt.colorbar()
plt.show()
nlines=50;
plt.contour(x2d,y2d,v,nlines)
plt.title('equipotential surfaces')
plt.xlabel('x')
plt.ylabel('y')
plt.axis('square')
plt.colorbar()
plt.show() | multipole.ipynb | joverbee/electromagnetism_course | gpl-3.0 |
<b>Exercise</b>: Extending a Shift-Reduce Parser
In this exercise your task is to extend the shift-reduce parser
that has been discussed in the lecture so that it returns an abstract syntax tree. You should test it with the program sum-for.sl that is given in the directory Examples. | cat Examples/sum-for.sl
The grammar that should be used to parse this program is given in the file
Examples/simple.g. It is very similar to the grammar that we have developed previously for our interpreter. I have simplified this grammar at various places to make it more suitable
for the current task. | cat Examples/simple.g | ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Exercise 1: Generate both the action-table and the goto table for this grammar using the notebook SLR-Table-Generator.ipynb.
Implementing a Scanner | import re | ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Exercise 2: The function tokenize(s) transforms the string s into a list of tokens.
Given the program sum-for.sl it should produce the list of tokens shown further below. Note that a number n is stored as a pair of the form
('NUMBER', n)
and an identifier v is stored as the pair
('ID', v).
You have to take care of ... | def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
"Edit the code below!"
lexSpec = r'''([ \t\n]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # par... | ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The cell below tests your tokenizer. Your task is to compare the output with the output shown above. | with open('Examples/sum-for.sl', 'r', encoding='utf-8') as file:
program = file.read()
tokenize(program)
class ShiftReduceParser():
def __init__(self, actionTable, gotoTable):
self.mActionTable = actionTable
self.mGotoTable = gotoTable | ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function parse(self, TL) is called with two arguments:
- self is an object of class ShiftReduceParser that maintains both an action table
and a goto table.
- TL is a list of tokens. Tokens are either
- literals, i.e. strings enclosed in single quote characters,
- pairs of the form ('NUMBER', n) where n ... | %run parse-table.py | ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb | Danghor/Formal-Languages | gpl-2.0 |
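The parse method then implements the classic shift-reduce loop: shift tokens onto the stack while the action table says shift, and pop the right-hand side to build an AST node when it says reduce. A toy, self-contained version for the grammar S → S '+' 'n' | 'n', with hand-written tables for illustration (they are not generated from the grammar file):

```python
def shift_reduce(tokens, action, goto):
    """Toy shift-reduce parser returning a nested-tuple AST."""
    stack  = [0]          # state stack
    asts   = []           # parallel stack of AST fragments
    tokens = tokens + ['$']
    i = 0
    while True:
        state, tok = stack[-1], tokens[i]
        act = action[(state, tok)]
        if act == 'accept':
            return asts[-1]
        if act[0] == 'shift':
            stack.append(act[1])
            asts.append(tok)
            i += 1
        else:                         # act == ('reduce', head, body_length)
            _, head, n = act
            children = asts[-n:]
            del stack[-n:], asts[-n:]
            asts.append((head, *children))          # build the AST node
            stack.append(goto[(stack[-1], head)])   # goto on the head symbol

# hand-written SLR tables for:  S -> S '+' 'n' | 'n'
action = {(0, 'n'): ('shift', 2),
          (1, '+'): ('shift', 3), (1, '$'): 'accept',
          (2, '+'): ('reduce', 'S', 1), (2, '$'): ('reduce', 'S', 1),
          (3, 'n'): ('shift', 4),
          (4, '+'): ('reduce', 'S', 3), (4, '$'): ('reduce', 'S', 3)}
goto = {(0, 'S'): 1}

print(shift_reduce(['n', '+', 'n'], action, goto))  # ('S', ('S', 'n'), '+', 'n')
```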