markdown | code | path | repo_name | license
|---|---|---|---|---|
Next, you will create a validation and test dataset. You will use the remaining 5,000 reviews from the training set for validation.
Note: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap. | raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/test',
batch_size=batch_size) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Note: The Preprocessing APIs used in the following section are experimental in TensorFlow 2.3 and subject to change.
Prepare the dataset for training
Next, you will standardize, tokenize, and vectorize the data using the helpful preprocessing.TextVectorization layer.
Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements in order to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network ... | def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation),
'') | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Next, you will create a TextVectorization layer. You will use this layer to standardize, tokenize, and vectorize the data. You set output_mode to int to create unique integer indices for each token.
Note that you're using the default split function, and the custom standardization function you defined above. You'll also define some constants for the model, like an explicit maximum sequence_length, which will cause the layer to pad or truncate sequences to exactly sequence_length values. | max_features = 10000
sequence_length = 250
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode='int',
output_sequence_length=sequence_length) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Next, you will call adapt to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers.
Note: It's important to only use your training data when calling adapt (using the test set would leak information). | # Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Let's create a function to see the result of using this layer to preprocess some data. | def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label... | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
As you can see above, each token has been replaced by an integer. You can look up the token (string) that each integer corresponds to by calling .get_vocabulary() on the layer. | print("1287 ---> ",vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ",vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary()))) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
You are nearly ready to train your model. As a final preprocessing step, you will apply the TextVectorization layer you created earlier to the train, validation, and test datasets. | train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it's loaded off disk. This ensures the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
.prefetch() overlaps data preprocessing and model execution while training.
In the data performance guide, you can learn more about both ... | AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Create the model
It's time to create your neural network. | embedding_dim = 16
model = tf.keras.Sequential([
layers.Embedding(max_features + 1, embedding_dim),
layers.Dropout(0.2),
layers.GlobalAveragePooling1D(),
layers.Dropout(0.2),
layers.Dense(1)])
model.summary() | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
The layers are stacked sequentially to build the classifier:
The first layer is an Embedding layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word index. These vectors are learned as the model trains. The vectors add a new dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This is the simplest way to handle input of variable length.
... | model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.BinaryAccuracy(threshold=0.0)) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
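What GlobalAveragePooling1D computes can be sketched in plain numpy (not part of the tutorial, just an illustration): it is a mean over the sequence axis, collapsing a (batch, sequence, embedding) tensor to (batch, embedding).

```python
import numpy as np

# Toy batch: 2 examples, sequence length 3, embedding dimension 4.
x = np.arange(24, dtype=float).reshape(2, 3, 4)

# GlobalAveragePooling1D averages over the sequence axis (axis 1),
# producing one fixed-length vector per example.
pooled = x.mean(axis=1)

print(pooled.shape)  # (2, 4)
```

Because the average ignores position, every example yields a vector of the same size regardless of its sequence length, which is why this layer handles variable-length input.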
Train the model
You will train the model by passing the dataset object to the fit method. | epochs = 10
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number that represents the error; lower values are better) and accuracy. | loss, accuracy = model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: ", accuracy) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
This fairly naive approach achieves an accuracy of about 86%.
Create a plot of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training. | history_dict = history.history
history_dict.keys() | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
There are four entries: one for each monitored metric during training and validation. You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy: | acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', ... | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy: they seem to peak before the training accuracy. This is an example of overfitting: the model performs better on the training data than on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test ... | export_model = tf.keras.Sequential([
vectorize_layer,
model,
layers.Activation('sigmoid')
])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_tes... | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Inference on new data
To get predictions for new examples, you can simply call model.predict(). | examples = [
"The movie was great!",
"The movie was okay.",
"The movie was terrible..."
]
export_model.predict(examples) | site/ko/tutorials/keras/text_classification.ipynb | tensorflow/docs-l10n | apache-2.0 |
Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment spe... | %%writefile ./pipeline_vertex/pipeline_vertex_automl.py
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless req... | notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_online_predictions.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Use the CLI compiler to compile the pipeline
We compile the pipeline from the Python file we generated into a JSON description using the following command: | PIPELINE_JSON = "covertype_automl_vertex_pipeline.json"
!dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl.py --output $PIPELINE_JSON | notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_online_predictions.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Deploy the pipeline package | aiplatform.init(project=PROJECT, location=REGION)
pipeline = aiplatform.PipelineJob(
display_name="automl_covertype_kfp_pipeline",
template_path=PIPELINE_JSON,
enable_caching=True,
)
pipeline.run() | notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_online_predictions.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
More detailed information about installing Tensorflow can be found at https://www.tensorflow.org/install/. | #@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
module_url = "https://tfhub.dev/google/universal-sentence-encoder/... | .ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb | mathnathan/notebooks | mit |
Similarity Visualized
Here we show the similarity in a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentence i and j. | messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"E... | .ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb | mathnathan/notebooks | mit |
Evaluation: STS (Semantic Textual Similarity) Benchmark
The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearso... | import pandas
import scipy
import math
import csv
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = pandas.read_table(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.cs... | .ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb | mathnathan/notebooks | mit |
Evaluate Sentence Embeddings | sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
def run_sts_benchmark(batch):
sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
cosine_similarities = tf.reduce_sum(tf.multip... | .ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb | mathnathan/notebooks | mit |
The Run Engine processes messages
A message has four parts: a command string, an object, a tuple of positional arguments, and a dictionary of keyword arguments. | Msg('set', motor, {'pos': 5})
Msg('trigger', motor)
Msg('read', motor)
RE = RunEngine()
def simple_scan(motor):
"Set, trigger, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('read', motor)
RE.run(simple_scan(motor)) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Moving a motor and reading it back is boring. Let's add a detector. | def simple_scan2(motor, det):
"Set, trigger motor, trigger detector, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('trigger', det)
yield Msg('read', det)
RE.run(simple_scan2(motor, det)) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
There is two-way communication between the message generator and the Run Engine
Above we see the three messages with the responses they generated from the RunEngine. We can use these responses to make our scan adaptive. | def adaptive_scan(motor, det, threshold):
"""Set, trigger, read until the detector reads intensity < threshold"""
i = 0
while True:
print("LOOP %d" % i)
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
reading = yield Msg('read... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Control timing with 'sleep' and 'wait'
The 'sleep' command is as simple as it sounds. | def sleepy_scan(motor, det):
"Set, trigger motor, sleep for a fixed time, trigger detector, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('sleep', None, 2) # units: seconds
yield Msg('trigger', det)
yield Msg('read', det)
RE.run(sleepy_scan(motor, det)) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
The 'wait' command is more powerful. It watches for Movers (e.g., motor) to report being done.
Wait for one motor to be done moving | def wait_one(motor, det):
"Set, trigger, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor, block_group='A') # Add motor to group 'A'.
yield Msg('wait', None, 'A') # Wait for everything in group 'A' to report done.
yield Msg('trigger', det)
yield Msg('read', det)
RE.run... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Notice, in the log, that the response to wait is the set of Movers the scan was waiting on.
Wait for two motors to both be done moving | def wait_multiple(motors, det):
"Set motors, trigger all motors, wait for all motors to move."
for motor in motors:
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor, block_group='A') # Trigger each motor and add it to group 'A'.
yield Msg('wait', None, 'A') # Wait for everyth... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Advanced Example: Wait for different groups of motors at different points in the run
If the 'A' bit seems pointless, the payoff is here. We trigger all the motors at once, wait for the first two, read, wait for the last one, and read again. This is merely meant to show that complex control flow is possible. | def wait_complex(motors, det):
"Set motors, trigger motors, wait for all motors to move."
# Same as above...
for motor in motors[:-1]:
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor, block_group='A')
# ...but put the last motor is separate group.
yield Msg('s... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Runs can be paused and safely resumed or aborted
"Hard Pause": Stop immediately. On resume, rerun messages from last 'checkpoint' command.
The Run Engine does not guess where it is safe to resume. The 'pause' command must follow a 'checkpoint' command, indicating a safe point to go back to in the event of a hard pause. | def conditional_hard_pause(motor, det):
for i in range(5):
yield Msg('checkpoint')
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
reading = yield Msg('read', det)
if reading['det']['value'] < 0.2:
yield Msg('pause... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
The scan thread sleeps and waits for more user input to resume or abort. (On resume, this example will obviously hit the same pause condition again --- nothing has changed.) | RE.state
RE.resume()
RE.state
RE.abort()
def conditional_soft_pause(motor, det):
for i in range(5):
yield Msg('checkpoint')
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
reading = yield Msg('read', det)
if reading['det']... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Other threads can request a pause
Calling RE.request_pause(hard=True) or RE.request_pause(hard=False) has the same effect as a 'pause' command.
SIGINT (Ctrl+C) is reliably caught before each message is processed, even across threads.
SIGINT triggers a hard pause. If no checkpoint commands have been issued, CTRL+C cause... | RE.run(sleepy_scan(motor, det)) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
If the scan contains checkpoints, it's possible to resume after Ctrl+C. | def sleepy_scan_checkpoints(motor, det):
"Set, trigger motor, sleep for a fixed time, trigger detector, read"
yield Msg('checkpoint')
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('sleep', None, 2) # units: seconds
yield Msg('trigger', det)
yield Msg('read', det)... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Threading is optional -- switch it off for easier debugging
Again, we'll interrupt the scan. We get exactly the same result, but this time we see a full Traceback. | RE.run(simple_scan(motor), use_threading=False) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Any functions can subscribe to the live data stream (e.g., live plotting)
In the examples above, the runs have been emitting RunStart and RunStop Documents, but no Events or Event Descriptors. We will add those now.
Emitting Events and Event Descriptors
The 'create' and 'save' commands collect all the reads between the... | def simple_scan_saving(motor, det):
"Set, trigger, read"
yield Msg('create')
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('read', motor)
yield Msg('read', det)
yield Msg('save')
RE.run(simple_scan_saving(motor, det)) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Very Simple Example
Any user function that accepts a Python dictionary can be registered as a "consumer" of these Event Documents. Here's a toy example. | def print_event_time(doc):
print('===== EVENT TIME:', doc['time'], '=====') | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
To use this consumer function during a run: | RE.run(simple_scan_saving(motor, det), subscriptions={'event': print_event_time}) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
To use it by default on every run for this instance of the Run Engine: | token = RE.subscribe('event', print_event_time)
token | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
The output token, an integer, can be used to unsubscribe later. | RE.unsubscribe(token) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Live Plotting
First, we'll create some axes. The code below updates the plot while the run is ongoing. | %matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
def stepscan(motor, detector):
for i in range(-5, 5):
yield Msg('create')
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
yield Msg('read', motor)
... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Saving Documents to metadatastore
Mission-critical consumers can be run on the scan thread, where they will block the scan until they return from processing the emitted Documents. This should not be used for computationally heavy tasks like visualization. Its only intended use is for saving data to metadatastore, but u... | %run register_mds.py
register_mds(RE) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
We can verify that this worked by loading this one-point scan from the DataBroker and displaying the data using DataMuxer. | RE.run(simple_scan_saving(motor, det))
from dataportal import DataBroker as db
header = db[-1]
header
from dataportal import DataMuxer as dm
dm.from_events(db.fetch_events(header)).to_sparse_dataframe() | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Flyscan prototype
Assumes that flyscans are managed by an object which has three methods:
- describe : same as for everything else
- kickoff : method which starts the flyscan. This should be a fast-to-execute function that is assumed to just poke at some external hardware.
- collect : coll...
- collect : coll... | flyer = FlyMagic('flyer', 'theta', 'sin')
def fly_scan(flyer):
yield Msg('kickoff', flyer)
yield Msg('collect', flyer)
yield Msg('kickoff', flyer)
yield Msg('collect', flyer)
# Note that there is no 'create'/'save' here. That is managed by 'collect'.
RE.run(fly_scan(flyer), use_threading=False) | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
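A minimal sketch of an object honoring the three-method contract described above (a hypothetical toy, not the actual FlyMagic implementation):

```python
class ToyFlyer:
    """Hypothetical flyer honoring the describe/kickoff/collect contract."""

    def __init__(self, name):
        self.name = name
        self._events = []

    def describe(self):
        # Same role as for everything else: metadata about the data keys.
        return {self.name: {'source': 'simulated', 'dtype': 'number'}}

    def kickoff(self):
        # Fast to execute: pretend to poke external hardware and return.
        self._events = [{'time': float(i), 'data': {self.name: i}}
                        for i in range(3)]

    def collect(self):
        # Hand back everything accumulated since kickoff.
        yield from self._events
```

After `kickoff`, a `collect` message would drain the accumulated readings, which the Run Engine bundles into Event Documents on the caller's behalf.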
The fly scan results are in metadatastore.... | header = db[-1]
header
res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
res
fig, ax = plt.subplots()
ax.cla()
res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
ax.plot(res['sin'], label='sin')
ax.plot(res['theta'], label='theta')
ax.legend()
fig.canvas.draw() | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Fly scan + stepscan
Do a step scan with one motor and a fly scan with another | def fly_step(flyer, motor):
for x in range(-5, 5):
# step
yield Msg('create')
yield Msg('set', motor, {'pos': x})
yield Msg('trigger', motor)
yield Msg('read', motor)
yield Msg('save')
# fly
yield Msg('kickoff', flyer)
yield Msg('collect', flye... | examples/Blue Sky Demo.ipynb | sameera2004/bluesky | bsd-3-clause |
Matrices | A = [[1, 2, 3], # A has 2 rows and 3 columns
[4, 5, 6]]
B = [[1, 2], # B has 3 rows and 2 columns
[3, 4],
[5, 6]]
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] f... | Chapter4.ipynb | nifannn/data-science-from-scratch | mit |
Defining Terms
All definitions taken from CIRI data documentation.
The indicators included in this project rank each country from 0 to 3, with 0 being the least respect for rights and 3 being the most. This level of respect is measured by the extent to which rights are guaranteed within law and enforced by the governmen... | centamciridf['Women Economic Rights'][centamciridf['Year']==2003].plot(kind='bar',
title="Respect for Women's Economic Rights in Central America (2003)", color="mediumslateblue")
centamciridf['Women Political Rights'][centamciridf['Year']==2003].plot(kind='bar',
title="Respect for Women's Polit... | MBA_S16/Chao-WomenRightsGDP.ipynb | NYUDataBootcamp/Projects | mit |
Looking at these indicators for the same countries in 2010, one sees that Costa Rica again leads the pack, with a full score on both economic and political rights for women. Honduras and Nicaragua fell in their respect for women's political rights, leaving them on par with the region. Belize and Panama increased their ... | centamciridf['Women Economic Rights'][centamciridf['Year']==2010].plot(kind='bar',
title="Respect for Women's Economic Rights in Central America (2010)", color="mediumslateblue")
centamciridf['Women Political Rights'][centamciridf['Year']==2010].plot(kind='bar',
title="Respect for Women's Polit... | MBA_S16/Chao-WomenRightsGDP.ipynb | NYUDataBootcamp/Projects | mit |
Based on the findings above, let's drill down on the shining star in the pack (Costa Rica), a country that regressed (Nicaragua) and a middle of the road performer (El Salvador) to see what their performance over time has been on these indicators.
Costa Rica
Costa Rica has been consistently high over time for both respe... | costaricaciridf = centamciridf.reset_index()
costaricaciridf = costaricaciridf.drop(costaricaciridf.columns[0],axis=1)
costaricaciridf = costaricaciridf[costaricaciridf['Country']=='Costa Rica']
costaricaciridf['Women Political Rights'].plot(kind='bar',
title="Respect for Women's Political Rights in Costa R... | MBA_S16/Chao-WomenRightsGDP.ipynb | NYUDataBootcamp/Projects | mit |
Nicaragua
Looking at Nicaragua over time, one sees that its drop in respect for political rights from 2003 stayed consistent until 2008 when it spiked again and then dropped through the end of the period covered by this data. Interestingly, its spike in economic rights over the 8 year period took place in 2007, the yea... | nicciridf = centamciridf.reset_index()
nicciridf = nicciridf.drop(nicciridf.columns[0],axis=1)
nicciridf = nicciridf[nicciridf['Country']=='Nicaragua']
nicciridf['Women Political Rights'].plot(kind='bar',
title="Respect for Women's Political Rights in Nicaragua(2003-2011)", color="deepskyblue")
nicciridf['... | MBA_S16/Chao-WomenRightsGDP.ipynb | NYUDataBootcamp/Projects | mit |
El Salvador | salvciridf = centamciridf.reset_index()
salvciridf = salvciridf.drop(salvciridf.columns[0],axis=1)
salvciridf = salvciridf[salvciridf['Country']=='El Salvador']
salvciridf['Women Political Rights'].plot(kind='bar',
title="Respect for Women's Political Rights in El Salvador(2003-2011)", color="deepskyblue")
... | MBA_S16/Chao-WomenRightsGDP.ipynb | NYUDataBootcamp/Projects | mit |
GDP Growth: Costa Rica, Nicaragua, El Salvador
Now, let's turn to the question of whether a high level of respect for women's rights has translated into economic wellbeing at a country level. To do this, we look at GDP growth rather than absolute GDP in order to more easily compare countries despite differences in GDP ... | centamgdpwbdf = centamgdpwbdf.reset_index() # getting rid of multiindex for ease of graphing and comparison
centamgdpwbdf = centamgdpwbdf.rename(columns={'NY.GDP.MKTP.KD.ZG':'Annual GDP Growth'})
costaricawbdf = centamgdpwbdf[centamgdpwbdf['country']=='Costa Rica']
costaricawbdf = costaricawbdf.drop(costaricawbdf.col... | MBA_S16/Chao-WomenRightsGDP.ipynb | NYUDataBootcamp/Projects | mit |
Reconstructions in the two bit system
I would expect (and hope) for all possible hidden configurations when the visible in in state [1,1] the reconstructions produced would be either v_a = [1,0] v_b = [0,1] or visa versa. | results = performance(np.array([1,1]), a) | Max/ORBM-Inference-XOR.ipynb | garibaldu/multicauseRBM | mit |
Excellent!##
In all case it falls into a stable visible configuration and successfully separates the visibles. | def plot_avg_results_for_visible_pattern(v, sampler):
results = performance(v, sampler)
avgd_results = {}
for key in results:
results[key]
for inner_key in results[key]:
if inner_key not in avgd_results:
avgd_results[inner_key] = results[key][inner_key]
... | Max/ORBM-Inference-XOR.ipynb | garibaldu/multicauseRBM | mit |
Yussssssssss
This is perfect, over all cases for the visible pattern it can separate it. Creating excellent reconstructions the majority of the time. We already know the vanilla RBM will fail to do this. Still i graphed the output of on of the visible reconstructions of the vanilla. | plot_avg_results_for_visible_pattern(np.array([1,0]), a)
plot_avg_results_for_visible_pattern(np.array([0,1]), a)
plot_avg_results_for_visible_pattern(np.array([1,0]), WrappedVanillaSampler(dot))
plot_avg_results_for_visible_pattern(np.array([0,1]), WrappedVanillaSampler(dot)) | Max/ORBM-Inference-XOR.ipynb | garibaldu/multicauseRBM | mit |
Below
In this cell below I check the code still holds together given more hidden nodes. Than visibles. It does. | training_set = np.eye(2)
dot = RBM(3,2,1)
s = VanillaSampler(dot)
t = VanillaTrainier(dot, s)
t.train(10000, training_set)
h_a = np.array([1,0,0])
h_b = np.array([0,1,0])
v = np.array([1,1])
plot_avg_results_for_hidden_pattern(h_a, h_b,np.array([1,1]) , ApproximatedSampler(dot.weights,dot.weights,0,0))
plot_avg_results... | Max/ORBM-Inference-XOR.ipynb | garibaldu/multicauseRBM | mit |
Climate Data Time-Series
We will be using Jena Climate dataset recorded by the
Max Planck Institute for Biogeochemistry.
The dataset consists of 14 features such as temperature, pressure, humidity, etc., recorded once per
10 minutes.
Location: Weather Station, Max Planck Institute for Biogeochemistry
in Jena, Germany
Tim... | from zipfile import ZipFile
import os
uri = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip"
zip_path = keras.utils.get_file(origin=uri, fname="jena_climate_2009_2016.csv.zip")
zip_file = ZipFile(zip_path)
zip_file.extractall()
csv_path = "jena_climate_2009_2016.csv"
df = p... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
Raw Data Visualization
To give us a sense of the data we are working with, each feature has been plotted below.
This shows the distinct pattern of each feature over the time period from 2009 to 2016.
It also shows where anomalies are present, which will be addressed during normalization. | titles = [
"Pressure",
"Temperature",
"Temperature in Kelvin",
"Temperature (dew point)",
"Relative Humidity",
"Saturation vapor pressure",
"Vapor pressure",
"Vapor pressure deficit",
"Specific humidity",
"Water vapor concentration",
"Airtight",
"Wind speed",
"Maximum... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
Data Preprocessing
Here we are picking ~300,000 data points for training. Observations are recorded every
10 minutes, which means 6 times per hour. We will resample to one point per hour since no
drastic change is expected within 60 minutes. We do this via the sampling_rate
argument in timeseries_dataset_from_array utility.
We ... | split_fraction = 0.715
train_split = int(split_fraction * int(df.shape[0]))
step = 6
past = 720
future = 72
learning_rate = 0.001
batch_size = 256
epochs = 10
def normalize(data, train_split):
data_mean = data[:train_split].mean(axis=0)
data_std = data[:train_split].std(axis=0)
return (data - data_mean) ... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
Training dataset
The training dataset labels start from the 792nd observation (720 + 72). | start = past + future
end = start + train_split
x_train = train_data[[i for i in range(7)]].values
y_train = features.iloc[start:end][[1]]
sequence_length = int(past / step) | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
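The windowing that timeseries_dataset_from_array performs on these arrays can be sketched in plain numpy (a simplified stand-in that ignores batching; the names and toy values are illustrative): each sample holds sequence_length observations spaced sampling_rate steps apart, paired with the label at the window's start index.

```python
import numpy as np

def make_windows(data, targets, sequence_length, sampling_rate):
    """Simplified stand-in for keras timeseries_dataset_from_array:
    each sample takes sequence_length points spaced sampling_rate apart,
    and is paired with the target at the window's start index."""
    span = sequence_length * sampling_rate
    xs, ys = [], []
    for start in range(len(targets)):
        if start + span > len(data):
            break
        xs.append(data[start:start + span:sampling_rate])
        ys.append(targets[start])
    return np.array(xs), np.array(ys)

data = np.arange(20).reshape(20, 1)   # toy feature series
targets = np.arange(100, 120)         # toy labels, assumed already offset
x, y = make_windows(data, targets, sequence_length=4, sampling_rate=2)
print(x.shape)  # (13, 4, 1)
```

In the tutorial, the past + future offset has already been applied to the label array, so pairing each window with the target at its start index yields the temperature 72 observations ahead of the window.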
Validation dataset
The validation dataset must not contain the last 792 rows as we won't have label data for
those records, hence 792 must be subtracted from the end of the data.
The validation label dataset must start from 792 after train_split, hence we must add
past + future (792) to label_start. | x_end = len(val_data) - past - future
label_start = train_split + past + future
x_val = val_data.iloc[:x_end][[i for i in range(7)]].values
y_val = features.iloc[label_start:][[1]]
dataset_val = keras.preprocessing.timeseries_dataset_from_array(
x_val,
y_val,
sequence_length=sequence_length,
sampling... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
Training | inputs = keras.layers.Input(shape=(inputs.shape[1], inputs.shape[2]))
lstm_out = keras.layers.LSTM(32)(inputs)
outputs = keras.layers.Dense(1)(lstm_out)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss="mse")
model.summary() | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
We'll use the ModelCheckpoint callback to regularly save checkpoints, and
the EarlyStopping callback to interrupt training when the validation loss
is no longer improving. | path_checkpoint = "model_checkpoint.h5"
es_callback = keras.callbacks.EarlyStopping(monitor="val_loss", min_delta=0, patience=5)
modelckpt_callback = keras.callbacks.ModelCheckpoint(
monitor="val_loss",
filepath=path_checkpoint,
verbose=1,
save_weights_only=True,
save_best_only=True,
)
history = m... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
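After training, the best-epoch weights saved by ModelCheckpoint can be restored with load_weights. A minimal self-contained sketch of that pattern (a tiny stand-in model, not the notebook's LSTM):

```python
import os
import numpy as np
from tensorflow import keras

# Stand-in model and data, just to demonstrate the checkpoint round-trip.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 3).astype("float32")
y = np.random.rand(64, 1).astype("float32")

ckpt = keras.callbacks.ModelCheckpoint(
    "best.weights.h5", monitor="val_loss",
    save_weights_only=True, save_best_only=True)
model.fit(x, y, validation_split=0.25, epochs=2,
          callbacks=[ckpt], verbose=0)

model.load_weights("best.weights.h5")   # restore the best epoch
print(os.path.exists("best.weights.h5"))
```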
We can visualize the loss with the function below. After a certain point, the loss stops
decreasing. |
def visualize_loss(history, title):
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, "b", label="Training loss")
plt.plot(epochs, val_loss, "r", label="Validation loss")
plt.title(title)
plt.xlabel("Epoch... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
Prediction
The trained model above is now able to make predictions for 5 sets of values from
the validation set. |
def show_plot(plot_data, delta, title):
labels = ["History", "True Future", "Model Prediction"]
marker = [".-", "rx", "go"]
time_steps = list(range(-(plot_data[0].shape[0]), 0))
if delta:
future = delta
else:
future = 0
plt.title(title)
for i, val in enumerate(plot_data):
... | examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb | keras-team/keras-io | apache-2.0 |
TRY IT
Open and close the san-francisco-2013.csv file
Text files and lines
A text file is just a sequence of lines; in fact, if you read it in all at once with readlines(), it returns a list of strings.
Each line is separated by the new line character "\n". This is the special character that is inserted into text files when you hit en... | print("Golden\nGate\nBridge") | Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Print your name on two lines using only one print statement
Reading from files
There are two common ways to read through the file, the first (and usually better way) is to loop through the lines in the file.
for line in file_handle:
    print(line)
The second is to read all the lines at once and store as a str... | fh = open('thingstodo.txt')
for line in fh:
print(line.rstrip())
fh.close()
fh = open('thingstodo.txt')
contents = fh.read()
fh.close()
print(contents)
print(type(contents))
fh = open('thingstodo.txt')
lines = fh.readlines()
fh.close()
print(lines)
print(type(lines)) | Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Open 'san-francisco-2013.csv' and print out the first line. You can use either method. If you are using the loop method, you can 'break' after printing the first line.
Searching through a file
When searching through a file, you can use string methods to discover and parse the contents.
Let's look at a few exampl... | # Looking for a line that starts with something
# I want to see salary data of women with my first name
fh = open('san-francisco-2014.csv')
for line in fh:
if line.startswith('Charlotte'):
print(line)
fh.close()
# Looking for lines that contain a specific string
fh = open('san-francisco-2014.csv')
# Looki... | Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
* Note that sometimes you get a quoted line, instead of the title and salary. If a csv file has a comma inside a cell, the line is quoted. Thus, splitting is not the proper way to read a csv file, but it will work in a pinch. We'll learn about the csv module as well as other ways to read in tabular (excel-like) data in... | # Skipping lines
fh = open('thingstodo.txt')
for line in fh:
if line.startswith('Golden'):
continue
print(line)
fh.close() | Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Try, except with open
If you are worried that the file might not exist, you can wrap the open in a try block
try:
fh = open('i_dont_exist.txt')
except:
    print("File does not exist")
exit() | # Opening a non-existent file
try:
fh = open('i_dont_exist.txt')
print(fh)
fh.close()
except:
print("File does not exist")
#exit()
| Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
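A bare except also hides unrelated problems (typos, permission errors). Catching the specific FileNotFoundError is usually safer:

```python
# Catch only the error we expect; anything else still surfaces.
try:
    fh = open('i_dont_exist.txt')
    contents = fh.read()
    fh.close()
except FileNotFoundError:
    contents = None
    print("File does not exist")
```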
Writing to files
You can write to files very easily. You need to give open a second parameter 'w' to indicate you want to open the file in write mode.
fh_write = open('new_file.txt', 'w')
Then you call the write method on the file handle. You give it the string you want to write to the file. Be careful, write doesn't... | fh = open('numbers.txt', 'w')
for i in range(10):
fh.write(str(i) + '\n')
fh.close()
# Now let's prove that we actually made a file
fh = open('numbers.txt')
lines = fh.readlines()
print(lines)
fh.close()
| Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
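Opening with 'w' replaces the file's contents each time. To add to the end of an existing file instead, open it in append mode with 'a':

```python
# 'w' creates/truncates; 'a' appends to the end (creating the file if missing).
fh = open('more_numbers.txt', 'w')
fh.write('first\n')
fh.close()

fh = open('more_numbers.txt', 'a')
fh.write('second\n')
fh.close()

fh = open('more_numbers.txt')
lines = fh.readlines()
fh.close()
print(lines)   # ['first\n', 'second\n']
```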
TRY IT
Create a file called 'my_favorite_cities.txt' and put your top 3 favorite cities each on its own line.
Bonus check that you did it correctly by reading the lines in python
With statement and opening files
You can use with to open a file and it will automatically close the file at the end of the with block. This ... | with open('thingstodo.txt') as fh:
for line in fh:
print((line.rstrip())) | Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
You can also use with statements to write files | with open('numbers2.txt', 'w') as fh:
for i in range(5):
fh.write(str(i) + '\n')
with open('numbers2.txt') as fh:
for line in fh:
print((line.rstrip())) | Lesson06_Files/Files.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Set global flags | PROJECT = 'your-project'
BUCKET = 'your-project-babyweight'
REGION = 'us-central1'
ROOT_DIR = 'babyweight_tft'
import os
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['ROOT_DIR'] = ROOT_DIR
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET,ROOT_DIR)
TRANSFORM_ARTEFACTS_... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Import required packages and modules | import math, os
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow.keras import layers, models | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
2. Define deep and wide regression model
Check features in the transformed data
You can use these features (except the target feature) as an input to the model. | transformed_metadata = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR).transformed_metadata
transformed_metadata.schema | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Define wide and deep feature columns
This is a feature engineering layer that creates new features from the transformed data. | def create_wide_and_deep_feature_columns():
deep_feature_columns = []
wide_feature_columns = []
inputs = {}
categorical_columns = {}
# Select features you've checked from the metadata
# - Categorical features are associated with the vocabulary size (starting from 0)
numeric_features = ['mo... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
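The pattern truncated above can be sketched as follows; the feature names and bucket sizes here are illustrative, not the notebook's actual metadata:

```python
import tensorflow as tf

# A numeric feature goes straight into the deep part.
mother_age = tf.feature_column.numeric_column('mother_age')

# A transformed categorical feature is already an integer id, so
# categorical_column_with_identity fits; the vocabulary size is assumed.
mother_race = tf.feature_column.categorical_column_with_identity(
    'mother_race', num_buckets=10)

deep_feature_columns = [
    mother_age,
    tf.feature_column.embedding_column(mother_race, dimension=4),
]
wide_feature_columns = [
    tf.feature_column.indicator_column(mother_race),
]
print(len(deep_feature_columns), len(wide_feature_columns))
```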
Define a regression model | def create_model():
wide_feature_columns, deep_feature_columns, inputs = create_wide_and_deep_feature_columns()
feature_layer_wide = layers.DenseFeatures(wide_feature_columns, name='wide_features')
feature_layer_deep = layers.DenseFeatures(deep_feature_columns, name='deep_features')
wide_model = featu... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Define tfrecords_input_fn
This function creates a batched dataset from the transformed dataset. | def tfrecords_input_fn(files_name_pattern, batch_size=512):
tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR)
TARGET_FEATURE_NAME = 'weight_pounds'
batched_dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=files_name_pattern,
batch_size=batch... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
3. Train and export the model | def train_and_evaluate(train_pattern, eval_pattern):
train_dataset = tfrecords_input_fn(train_pattern, batch_size=BATCH_SIZE)
validation_dataset = tfrecords_input_fn(eval_pattern, batch_size=BATCH_SIZE)
model = create_model()
print(model.summary())
print('Now training the model... hang on')
h... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Visualize the training result | from pandas import DataFrame
DataFrame({'loss': history.history['loss'], 'val_loss': history.history['val_loss']}).plot(ylim=(0, 2.5))
print('Final RMSE for the validation set: {:f}'.format(math.sqrt(history.history['val_loss'][-1]))) | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Export the trained model
The serving function serveing_fn receives raw input data and applies the transformation before making predictions with the trained model.
You can also add some pre/post-processing within the serving function. In this example, it accepts a unique identifier for each instance and passes it through... | def export_serving_model(model, output_dir):
tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR)
# The layer has to be saved to the model for keras tracking purposes.
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serveing_fn(uid, is_male, mother... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
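The shape of that serving pattern — a @tf.function with an explicit input signature that preprocesses raw inputs before calling the model — can be sketched independently of TFT. Here the division by 50 stands in for the real transform layer, and all names are illustrative:

```python
import tensorflow as tf

class ServingWrapper(tf.Module):
    def __init__(self):
        self.scale = tf.Variable(2.0)   # stand-in for the trained model

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def serve(self, mother_age):
        transformed = mother_age / 50.0          # stand-in preprocessing
        return {'weight_pounds': transformed * self.scale}

wrapper = ServingWrapper()
out = wrapper.serve(tf.constant([25.0, 50.0]))
print(out['weight_pounds'].numpy())   # [1. 2.]
```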
Explore the exported model | %%bash
gsutil ls -lR $EXPORT_DIR
saved_model_cli show --all --dir=$EXPORT_DIR | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
4. Use the exported model
Load the exported model | #tf.keras.backend.clear_session()
model = tf.keras.models.load_model(EXPORT_DIR) | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Define a local prediction function
The serving function of the exported model accepts input data as named arguments. You can use this wrapper to pass input data as a standard Python dictionary.
When you deploy the model to Vertex AI using the pre-built container, you can use the Python client library to make online pre... | def predict(requests):
uid, is_male, mother_race, mother_age, plurality, gestation_weeks = [], [], [], [], [], []
for instance in requests:
uid.append(instance['uid'])
is_male.append(instance['is_male'])
mother_race.append(instance['mother_race'])
mother_age.append(instance['mot... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Make local predictions | instance1 = {
'uid': 'instance1',
'is_male': 'True',
'mother_age': 26.0,
'mother_race': 'Asian Indian',
'plurality': 1.0,
'gestation_weeks': 39
}
instance2 = {
'uid': 'instance2',
'is_male': 'False',
'mother_age': 40.0,
'mother_race': 'Japanese',
'plurality': 2.0,
'gesta... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Deploy model to Vertex AI
Deployment of the model may take several minutes. Please hang on. | %%bash
ts=$(date +%y%m%d-%H%M%S)
MODEL_NAME="babyweight-$ts"
ENDPOINT_NAME="babyweight-endpoint-$ts"
gcloud ai models upload --region=$REGION --display-name=$MODEL_NAME \
--container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest \
--artifact-uri=$EXPORT_DIR
MODEL_ID=$(gcloud ai models list --... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Make online predictions with the Python client library | from google.cloud import aiplatform
# Use the latest endpoint that starts with 'babyweight-endpoint'
endpoints = aiplatform.Endpoint.list(
project=PROJECT, location=REGION, order_by='create_time desc')
for item in endpoints:
if item.display_name.startswith('babyweight-endpoint'):
endpoint = item
... | blogs/babyweight_tft/babyweight_tft_keras_02.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Are the features predictive? | scatter_matrix(energy, alpha=0.2, figsize=(18,18), diagonal='kde')
plt.show() | archive/notebook/energy_efficiency.ipynb | georgetown-analytics/machine-learning | mit |
Let's focus on predicting heating load | energy_features = energy.iloc[:,0:8]
heat_labels = energy.iloc[:,8]
X_train, X_test, y_train, y_test = train_test_split(energy_features, heat_labels, test_size=0.2)
model = LinearRegression()
model.fit(X_train, y_train)
expected = y_test
predicted = model.predict(X_test)
print('Linear Regression model')
print('Mean... | archive/notebook/energy_efficiency.ipynb | georgetown-analytics/machine-learning | mit |
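The truncated metrics can be reproduced on made-up numbers; the arrays below are purely illustrative, not actual model output:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

expected = np.array([20.0, 15.5, 30.2, 25.0])    # made-up heating loads
predicted = np.array([19.0, 16.0, 29.0, 26.5])

mse = mean_squared_error(expected, predicted)
rmse = np.sqrt(mse)
r2 = r2_score(expected, predicted)
print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```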
Capture NOAA Weather Radio
Using rtlsdr_helper, record a short complex baseband array using capture(). | # From the docstring
#x = sdr.capture(Tc, fo=88700000.0, fs=2400000.0, gain=40, device_index=0)
x = sdr.capture(Tc=5,fo=162.4e6,fs=2.4e6,gain=40,device_index=0)
sdr.complex2wav('capture_162475.wav',2400000,x)
fs, x = sdr.wav2complex('capture_162475.wav')
psd(x,2**10,2400); | tutorial_part3/RTL_SDR2.ipynb | mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm | bsd-2-clause |
Narrowband FM Demodulator | def NBFM_demod(x,fs=2.4e6,file_name='test.wav',B1=50e3,N1=10,B2=5e3,N2=5):
"""
Narrowband FM Demodulator
"""
b = signal.firwin(64,2*B1/float(fs))
# Filter and decimate (should be polyphase)
y = signal.lfilter(b,1,x)
z = ss.downsample(y,N1)
# Apply complex baseband discriminator
z_bb ... | tutorial_part3/RTL_SDR2.ipynb | mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm | bsd-2-clause |
To obtain these curves, we sort the predictions made by the classifier from smallest to largest for each group and put them on a $[0, 1]$ scale on the x-axis. The value corresponding to $x=0.5$ is the median of the distribution. Similarly, for each quantile level in $[0,1]$ we obtain the corresponding quantile o... | N = 24
rng = jax.random.PRNGKey(1)
rng, *rngs = jax.random.split(rng, 3)
y_pred = 3 * jax.random.uniform(rngs[0], (N,))
groups = jax.random.uniform(rngs[1], (N,)) < 0.25
support_0 = jnp.linspace(0, 1, N - jnp.sum(groups))
support_1 = jnp.linspace(0, 1, jnp.sum(groups))
quantiles_0 = jnp.sort(y_pred[jnp.logical_not(gr... | docs/notebooks/fairness.ipynb | google-research/ott | apache-2.0 |
We can see on this figure that the supports of the two quantile functions are different, since the number of points in the two groups is different. In order to compute the gap between the two curves, we first interpolate the two curves on the union of the supports. The Wasserstein distance corresponds to the gap between t... | import scipy
kinds = ['linear', 'nearest']
fig, axes = plt.subplots(1, len(kinds), figsize=(8 * len(kinds), 5))
for ax, kind in zip(axes, kinds):
q0 = scipy.interpolate.interp1d(support_0, quantiles_0, kind=kind)
q1 = scipy.interpolate.interp1d(support_1, quantiles_1, kind=kind)
support_01 = jnp.sort(jnp.concat... | docs/notebooks/fairness.ipynb | google-research/ott | apache-2.0 |
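For 1D samples, this gap between quantile functions is exactly what scipy.stats.wasserstein_distance computes, which gives a quick (non-differentiable) reference value; the sample sizes below are arbitrary:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
group0 = rng.normal(0.0, 1.0, size=200)   # arbitrary synthetic groups
group1 = rng.normal(0.5, 1.0, size=60)

d = wasserstein_distance(group0, group1)
print(d)   # roughly reflects the 0.5 mean shift
```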
Soft Wasserstein
Computing the Wasserstein distance involves complex operations such as sorting and interpolating. Fortunately, regularized optimal transport and its implementation in OTT provide accelerator-friendly differentiable approaches to sort according to a group (setting the weights of the outsiders to zero... | import functools
@functools.partial(jax.jit, static_argnums=(2,))
def sort_group(inputs: jnp.ndarray, group: jnp.ndarray, target_size: int = 16):
a = group / jnp.sum(group)
b = jnp.ones(target_size) / target_size
ot = ott.tools.soft_sort.transport_for_sort(inputs, a, b, dict(epsilon=1e-3))
return 1.0 / b * ot.... | docs/notebooks/fairness.ipynb | google-research/ott | apache-2.0 |
It is noteworthy that the obtained interpolation corresponds to a smooth version of the 'nearest' interpolation. | target_sizes = [4, 16, 64]
_, axes = plt.subplots(1, len(target_sizes), figsize=(len(target_sizes * 8), 5))
for ax, target_size in zip(axes, target_sizes):
ax.plot(sort_group(y_pred, jnp.logical_not(groups), target_size),
lw=3, marker='o', markersize=10, markeredgecolor='k', label='group 0')
ax.plot(sort_... | docs/notebooks/fairness.ipynb | google-research/ott | apache-2.0 |
Training a network
In order to train our classifier with a fairness regularizer, we first turn the categorical features $x$ of the adult dataset into dense ones (using 16 dimensions) and pass the obtained vector to an MLP $f_\theta$ with 2 hidden layers of 64 neurons.
We optimize a loss which is the sum of the binary c... | import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 2, figsize=(16, 10))
for weight, curves in result.items():
for ax_row, metric in zip(axes, ['loss', 'accuracy']):
for ax, phase in zip(ax_row, ['train', 'eval']):
arr = np.array(curves[f'{phase}_{metric}'])
ax.plot(arr[:, 0], arr[:, 1], lab... | docs/notebooks/fairness.ipynb | google-research/ott | apache-2.0 |
We can see that when we increase the fairness regularization factor $\lambda$, the training accuracy slightly decreases, but this does not impact the eval accuracy too much. The fairness regularizer is a rather good regularizer. For $\lambda = 1000$ the training metrics are a bit more degraded, as are the eval ones, bu... | num_rows = 2
num_cols = len(weights[1:]) // 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(7 * num_cols, 5 * num_rows))
for ax, w in zip(axes.ravel(), weights[1:]):
logits, groups = get_predictions(ds_test, config, states[w])
plot_quantiles(logits, groups, ax)
ax.set_title(f'$\lambda = {w:.0f}$', fontsiz... | docs/notebooks/fairness.ipynb | google-research/ott | apache-2.0 |
Loading the tokenizing the corpus | from glob import glob
import re
import string
import funcy as fp
from gensim import models
from gensim.corpora import Dictionary, MmCorpus
import nltk
import pandas as pd
# quick and dirty....
EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*")
FILTER_REGEX = re.compile(r"[^a-z '#]")
TOKEN_MAPPINGS = [(... | notebooks/Gensim Newsgroup.ipynb | codingafuture/pyLDAvis | bsd-3-clause |
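The two regexes above drive a simple cleaning pipeline: lowercase, drop e-mail addresses, then strip every character outside [a-z '#] before splitting. A self-contained sketch of that pipeline:

```python
import re

EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*")
FILTER_REGEX = re.compile(r"[^a-z '#]")

def tokenize(text):
    text = text.lower()
    text = EMAIL_REGEX.sub(' ', text)    # remove e-mail addresses
    text = FILTER_REGEX.sub(' ', text)   # keep only letters, quotes, '#'
    return text.split()

tokens = tokenize("Contact me at foo@bar.com about the Shuttle launch!")
print(tokens)
```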