๋‹ค์Œ์œผ๋กœ, ๊ฒ€์ฆ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๊ฒ€์ฆ์„ ์œ„ํ•ด ํ›ˆ๋ จ ์„ธํŠธ์˜ ๋‚˜๋จธ์ง€ 5,000๊ฐœ ๋ฆฌ๋ทฐ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ฐธ๊ณ : validation_split ๋ฐ subset ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๊ฒ€์ฆ ๋ฐ ํ›ˆ๋ จ ๋ถ„ํ• ์ด ๊ฒน์น˜์ง€ ์•Š๋„๋ก ์ž„์˜ ์‹œ๋“œ๋ฅผ ์ง€์ •ํ•˜๊ฑฐ๋‚˜ shuffle=False๋ฅผ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”.
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'aclImdb/train',
    batch_size=batch_size,
    validation_split=0.2,
    subset='validation',
    seed=seed)

raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'aclImdb/test',
    batch_size=batch_size)
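The note above can be illustrated without TensorFlow: a deterministic shuffle driven by a fixed seed yields the same ordering on every call, so carving the shuffled indices into a head and a tail gives disjoint training and validation subsets. A minimal pure-Python sketch (the helper name is hypothetical):

```python
import random

def split_indices(n, validation_split=0.2, seed=42):
    """Shuffle indices deterministically, then carve off the validation tail.

    Because the shuffle is driven by the same seed on every call, the
    training and validation subsets are guaranteed not to overlap.
    """
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    n_val = int(n * validation_split)
    return indices[n_val:], indices[:n_val]  # train, validation

train_idx, val_idx = split_indices(25000)
assert set(train_idx).isdisjoint(val_idx)
assert len(train_idx) + len(val_idx) == 25000
```

With a different seed on each call, the two 80%/20% slices would come from differently shuffled orderings, and examples could land in both splits.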
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
์ฐธ๊ณ : ๋‹ค์Œ ์„น์…˜์—์„œ ์‚ฌ์šฉ๋˜๋Š” Preprocessing API๋Š” TensorFlow 2.3์—์„œ ์‹คํ—˜์ ์ด๋ฉฐ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ ๋‹ค์Œ์œผ๋กœ, ์œ ์ตํ•œ preprocessing.TextVectorization ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ‘œ์ค€ํ™”, ํ† ํฐํ™” ๋ฐ ๋ฒกํ„ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ํ‘œ์ค€ํ™”๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๊ตฌ๋‘์ ์ด๋‚˜ HTML ์š”์†Œ๋ฅผ ์ œ๊ฑฐํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋‹จ์ˆœํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™”๋Š” ๋ฌธ์ž์—ด์„ ์—ฌ๋Ÿฌ ํ† ํฐ์œผ๋กœ ๋ถ„ํ• ํ•˜๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค(์˜ˆ: ํ™”์ดํŠธ์ŠคํŽ˜์ด์Šค์—์„œ ๋ถ„ํ• ํ•˜์—ฌ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„ ๋‹จ์–ด๋กœ ๋ถ„ํ• ). ๋ฒกํ„ฐํ™”๋Š” ํ† ํฐ์„ ์ˆซ์ž๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ์‹ ๊ฒฝ๋ง์— ...
def custom_standardization(input_data):
  lowercase = tf.strings.lower(input_data)
  stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
  return tf.strings.regex_replace(stripped_html,
                                  '[%s]' % re.escape(string.punctuation),
                                  '')
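To see concretely what the three stages mean, here is a plain-Python sketch of standardization, tokenization, and vectorization — a simplified stand-in for illustration, not the TextVectorization implementation (the vocabulary and helper names are made up):

```python
import re
import string

def standardize(text):
    """Lowercase, strip <br /> HTML tags, and remove punctuation."""
    text = text.lower().replace('<br />', ' ')
    return re.sub('[%s]' % re.escape(string.punctuation), '', text)

def tokenize(text):
    """Split on whitespace into individual word tokens."""
    return text.split()

def vectorize(tokens, vocab):
    """Map each token to its integer index; unknown words map to 1 ([UNK])."""
    return [vocab.get(t, 1) for t in tokens]

vocab = {'': 0, '[UNK]': 1, 'the': 2, 'movie': 3, 'was': 4, 'great': 5}
tokens = tokenize(standardize("The movie was GREAT!<br />"))
print(vectorize(tokens, vocab))  # [2, 3, 4, 5]
```

The real layer additionally builds the vocabulary for you (via adapt, below) and pads or truncates each sequence to a fixed length.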
๋‹ค์Œ์œผ๋กœ TextVectorization ๋ ˆ์ด์–ด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ด ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ‘œ์ค€ํ™”, ํ† ํฐํ™” ๋ฐ ๋ฒกํ„ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ํ† ํฐ์— ๋Œ€ํ•ด ๊ณ ์œ ํ•œ ์ •์ˆ˜ ์ธ๋ฑ์Šค๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก output_mode๋ฅผ int๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ๋ถ„ํ•  ํ•จ์ˆ˜์™€ ์œ„์—์„œ ์ •์˜ํ•œ ์‚ฌ์šฉ์ž ์ง€์ • ํ‘œ์ค€ํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ช…์‹œ์  ์ตœ๋Œ€๊ฐ’์ธ sequence_length์™€ ๊ฐ™์ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ƒ์ˆ˜๋ฅผ ์ •์˜ํ•˜์—ฌ ๋ ˆ์ด์–ด๊ฐ€ ์‹œํ€€์Šค๋ฅผ ์ •ํ™•ํžˆ sequence_length ๊ฐ’์œผ๋กœ ์ฑ„์šฐ๊ฑฐ๋‚˜ ์ž๋ฅด๋„๋ก ํ•ฉ๋‹ˆ๋‹ค.
max_features = 10000
sequence_length = 250

vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=max_features,
    output_mode='int',
    output_sequence_length=sequence_length)
๋‹ค์Œ์œผ๋กœ, ์ „์ฒ˜๋ฆฌ ๋ ˆ์ด์–ด์˜ ์ƒํƒœ๋ฅผ ๋ฐ์ดํ„ฐ์„ธํŠธ์— ๋งž์ถ”๊ธฐ ์œ„ํ•ด adapt๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฉด ๋ชจ๋ธ์ด ๋ฌธ์ž์—ด ์ธ๋ฑ์Šค๋ฅผ ์ •์ˆ˜๋กœ ๋นŒ๋“œํ•ฉ๋‹ˆ๋‹ค. ์ฐธ๊ณ : adapt๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค(ํ…Œ์ŠคํŠธ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ •๋ณด๊ฐ€ ๋ˆ„์ถœ๋จ).
# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text)
์ด ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ผ๋ถ€ ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•œ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.
def vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return vectorize_layer(text), label

# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", raw_train_ds.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))
์œ„์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ๊ฐ ํ† ํฐ์€ ์ •์ˆ˜๋กœ ๋Œ€์ฒด๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ ˆ์ด์–ด์—์„œ .get_vocabulary()๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๊ฐ ์ •์ˆ˜์— ํ•ด๋‹นํ•˜๋Š” ํ† ํฐ(๋ฌธ์ž์—ด)์„ ์กฐํšŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
print("1287 ---> ", vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ", vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))
๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๊ฑฐ์˜ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ตœ์ข… ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋กœ ์ด์ „์— ์ƒ์„ฑํ•œ TextVectorization ๋ ˆ์ด์–ด๋ฅผ ํ›ˆ๋ จ, ๊ฒ€์ฆ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค.
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
์„ฑ๋Šฅ์„ ๋†’์ด๋„๋ก ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ตฌ์„ฑํ•˜๊ธฐ ๋‹ค์Œ์€ I/O๊ฐ€ ์ฐจ๋‹จ๋˜์ง€ ์•Š๋„๋ก ๋ฐ์ดํ„ฐ๋ฅผ ๋กœ๋“œํ•  ๋•Œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ๋ฉ”์„œ๋“œ์ž…๋‹ˆ๋‹ค. .cache()๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ๋””์Šคํฌ์—์„œ ๋กœ๋“œ๋œ ํ›„ ๋ฉ”๋ชจ๋ฆฌ์— ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด๊ด€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋™์•ˆ ๋ฐ์ดํ„ฐ์„ธํŠธ๋กœ ์ธํ•ด ๋ณ‘๋ชฉ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ๊ฐ€ ๋„ˆ๋ฌด ์ปค์„œ ๋ฉ”๋ชจ๋ฆฌ์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ์ด ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์„ฑ๋Šฅ์ด ๋›ฐ์–ด๋‚œ ์˜จ ๋””์Šคํฌ ์บ์‹œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŽ์€ ์ž‘์€ ํŒŒ์ผ๋ณด๋‹ค ์ฝ๊ธฐ๊ฐ€ ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. .prefetch()๋Š” ํ›ˆ๋ จ ์ค‘์— ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ฐ ๋ชจ๋ธ ์‹คํ–‰๊ณผ ๊ฒน์นฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ฑ๋Šฅ ๊ฐ€์ด๋“œ์—์„œ ๋‘ ๊ฐ€์ง€...
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
๋ชจ๋ธ ์ƒ์„ฑ ์ด์ œ ์‹ ๊ฒฝ๋ง์„ ๋งŒ๋“ค ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค.
embedding_dim = 16

model = tf.keras.Sequential([
  layers.Embedding(max_features + 1, embedding_dim),
  layers.Dropout(0.2),
  layers.GlobalAveragePooling1D(),
  layers.Dropout(0.2),
  layers.Dense(1)])

model.summary()
์ธต์„ ์ˆœ์„œ๋Œ€๋กœ ์Œ“์•„ ๋ถ„๋ฅ˜๊ธฐ(classifier)๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ์ฒซ ๋ฒˆ์งธ ์ธต์€ Embedding ์ธต์ž…๋‹ˆ๋‹ค. ์ด ์ธต์€ ์ •์ˆ˜๋กœ ์ธ์ฝ”๋”ฉ๋œ ๋‹จ์–ด๋ฅผ ์ž…๋ ฅ ๋ฐ›๊ณ  ๊ฐ ๋‹จ์–ด ์ธ๋ฑ์Šค์— ํ•ด๋‹นํ•˜๋Š” ์ž„๋ฒ ๋”ฉ ๋ฒกํ„ฐ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ์ด ๋ฒกํ„ฐ๋Š” ๋ชจ๋ธ์ด ํ›ˆ๋ จ๋˜๋ฉด์„œ ํ•™์Šต๋ฉ๋‹ˆ๋‹ค. ์ด ๋ฒกํ„ฐ๋Š” ์ถœ๋ ฅ ๋ฐฐ์—ด์— ์ƒˆ๋กœ์šด ์ฐจ์›์œผ๋กœ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ตœ์ข… ์ฐจ์›์€ (batch, sequence, embedding)์ด ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋‹ค์Œ GlobalAveragePooling1D ์ธต์€ sequence ์ฐจ์›์— ๋Œ€ํ•ด ํ‰๊ท ์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ ์ƒ˜ํ”Œ์— ๋Œ€ํ•ด ๊ณ ์ •๋œ ๊ธธ์ด์˜ ์ถœ๋ ฅ ๋ฒกํ„ฐ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ์ž…๋ ฅ์„ ๋‹ค๋ฃจ๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์ž…...
model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
              optimizer='adam',
              metrics=tf.metrics.BinaryAccuracy(threshold=0.0))
๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ dataset ๊ฐœ์ฒด๋ฅผ fit ๋ฉ”์„œ๋“œ์— ์ „๋‹ฌํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•ฉ๋‹ˆ๋‹ค.
epochs = 10
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs)
๋ชจ๋ธ ํ‰๊ฐ€ํ•˜๊ธฐ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ™•์ธํ•ด ๋ณด์ฃ . ๋‘ ๊ฐœ์˜ ๊ฐ’์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค. ์†์‹ค(์˜ค์ฐจ๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” ์ˆซ์ž์ด๋ฏ€๋กœ ๋‚ฎ์„์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค)๊ณผ ์ •ํ™•๋„์ž…๋‹ˆ๋‹ค.
loss, accuracy = model.evaluate(test_ds)

print("Loss: ", loss)
print("Accuracy: ", accuracy)
์ด ์ƒ๋‹นํžˆ ๋‹จ์ˆœํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์€ ์•ฝ 86%์˜ ์ •ํ™•๋„๋ฅผ ๋‹ฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•๋„์™€ ์†์‹ค ๊ทธ๋ž˜ํ”„ ๊ทธ๋ฆฌ๊ธฐ model.fit()์€ ํ›ˆ๋ จ ์ค‘์— ๋ฐœ์ƒํ•œ ๋ชจ๋“  ๊ฒƒ์„ ๊ฐ€์ง„ ์‚ฌ์ „์„ ํฌํ•จํ•˜๋Š” History ๊ฐ์ฒด๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
history_dict = history.history
history_dict.keys()
๋„ค ๊ฐœ์˜ ํ•ญ๋ชฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ๊ณผ ๊ฒ€์ฆ ๋‹จ๊ณ„์—์„œ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋Š” ์ง€ํ‘œ๋“ค์ž…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์†์‹ค๊ณผ ๊ฒ€์ฆ ์†์‹ค์„ ๊ทธ๋ž˜ํ”„๋กœ ๊ทธ๋ ค ๋ณด๊ณ , ํ›ˆ๋ จ ์ •ํ™•๋„์™€ ๊ฒ€์ฆ ์ •ํ™•๋„๋„ ๊ทธ๋ž˜ํ”„๋กœ ๊ทธ๋ ค์„œ ๋น„๊ตํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:
import matplotlib.pyplot as plt

acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
์ด ๊ทธ๋ž˜ํ”„์—์„œ ์ ์„ ์€ ํ›ˆ๋ จ ์†์‹ค๊ณผ ํ›ˆ๋ จ ์ •ํ™•๋„๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์‹ค์„ ์€ ๊ฒ€์ฆ ์†์‹ค๊ณผ ๊ฒ€์ฆ ์ •ํ™•๋„์ž…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์†์‹ค์€ ๊ฐ epoch๋งˆ๋‹ค ๊ฐ์†Œํ•˜๊ณ  ํ›ˆ๋ จ ์ •ํ™•์„ฑ์€ ๊ฐ epoch๋งˆ๋‹ค ์ฆ๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ๊ฒฝ์‚ฌ ํ•˜๊ฐ• ์ตœ์ ํ™”๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ์ด์™€ ๊ฐ™์ด ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ฐ˜๋ณต์—์„œ ์›ํ•˜๋Š” ์ˆ˜๋Ÿ‰์„ ์ตœ์†Œํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๊ฒ€์ฆ ์†์‹ค๊ณผ ๊ฒ€์ฆ ์ •ํ™•๋„์—์„œ๋Š” ๊ทธ๋ ‡์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ •ํ™•๋„ ์ด์ „์ด ํ”ผํฌ์ธ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๊ณผ๋Œ€์ ํ•ฉ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด์ „์— ๋ณธ ์  ์—†๋Š” ๋ฐ์ดํ„ฐ๋ณด๋‹ค ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ๋ชจ๋ธ์ด ๋” ์ž˜ ๋™์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ง€์ ๋ถ€ํ„ฐ๋Š” ๋ชจ๋ธ์ด ๊ณผ๋„ํ•˜๊ฒŒ ์ตœ์ ํ™”๋˜์–ด ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์—์„œ ์ผ๋ฐ˜ํ™”๋˜์ง€ ์•Š๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ...
export_model = tf.keras.Sequential([
  vectorize_layer,
  model,
  layers.Activation('sigmoid')
])

export_model.compile(
    loss=losses.BinaryCrossentropy(from_logits=False),
    optimizer="adam",
    metrics=['accuracy']
)

# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print(accuracy)
์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋กœ ์ถ”๋ก ํ•˜๊ธฐ ์ƒˆ๋กœ์šด ์˜ˆ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ์–ป์œผ๋ ค๋ฉด ๊ฐ„๋‹จํžˆ model.predict()๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.
examples = [
  "The movie was great!",
  "The movie was okay.",
  "The movie was terrible..."
]

export_model.predict(examples)
Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl.py file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment spe...
%%writefile ./pipeline_vertex/pipeline_vertex_automl.py # Copyright 2021 Google LLC # Licensed under the Apache License, Version 2.0 (the "License"); you may not # use this file except in compliance with the License. You may obtain a copy of # the License at # https://www.apache.org/licenses/LICENSE-2.0 # Unless req...
notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_online_predictions.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Use the CLI compiler to compile the pipeline We compile the pipeline from the Python file we generated into a JSON description using the following command:
PIPELINE_JSON = "covertype_automl_vertex_pipeline.json"

!dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl.py --output $PIPELINE_JSON
Deploy the pipeline package
aiplatform.init(project=PROJECT, location=REGION)

pipeline = aiplatform.PipelineJob(
    display_name="automl_covertype_kfp_pipeline",
    template_path=PIPELINE_JSON,
    enable_caching=True,
)

pipeline.run()
More detailed information about installing TensorFlow can be found at https://www.tensorflow.org/install/.
#@title Load the Universal Sentence Encoder's TF Hub module from absl import logging import tensorflow as tf import tensorflow_hub as hub import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import re import seaborn as sns module_url = "https://tfhub.dev/google/universal-sentence-encoder/...
.ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb
mathnathan/notebooks
mit
Similarity Visualized Here we show the similarity in a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentence i and j.
messages = [ # Smartphones "I like my phone", "My phone is not good.", "Your cellphone looks great.", # Weather "Will it snow tomorrow?", "Recently a lot of hurricanes have hit the US", "Global warming is real", # Food and health "An apple a day, keeps the doctors away", "E...
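The heat map's entries are just pairwise inner products of the unit-normalized sentence encodings. With NumPy the similarity matrix can be sketched as follows (random vectors stand in for the real encoder outputs, and the helper name is made up):

```python
import numpy as np

def similarity_matrix(encodings):
    """Unit-normalize each row, then take all pairwise inner products.

    For unit vectors the inner product equals the cosine similarity,
    so the diagonal is all ones.
    """
    norms = np.linalg.norm(encodings, axis=1, keepdims=True)
    unit = encodings / norms
    return unit @ unit.T

# Random stand-ins for the 9 sentence encodings.
rng = np.random.default_rng(0)
sim = similarity_matrix(rng.normal(size=(9, 512)))
print(sim.shape)  # (9, 9)
```

Plotting `sim` with e.g. seaborn's heatmap, as the notebook does, colors each entry [i, j] by this inner product.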
Evaluation: STS (Semantic Textual Similarity) Benchmark The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearso...
import pandas import scipy import math import csv sts_dataset = tf.keras.utils.get_file( fname="Stsbenchmark.tar.gz", origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz", extract=True) sts_dev = pandas.read_table( os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.cs...
Evaluate Sentence Embeddings
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"} def run_sts_benchmark(batch): sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1) sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1) cosine_similarities = tf.reduce_sum(tf.multip...
The Run Engine processes messages A message has four parts: a command string, an object, a tuple of positional arguments, and a dictionary of keyword arguments.
Msg('set', motor, {'pos': 5})
Msg('trigger', motor)
Msg('read', motor)

RE = RunEngine()

def simple_scan(motor):
    "Set, trigger, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('read', motor)

RE.run(simple_scan(motor))
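The four-part message structure described above can be modeled as a small namedtuple-style class. This is an illustrative sketch of the idea, not bluesky's actual Msg implementation:

```python
from collections import namedtuple

class Msg(namedtuple('Msg', ['command', 'obj', 'args', 'kwargs'])):
    """A message: command string, target object, positional args, keyword args."""
    def __new__(cls, command, obj=None, *args, **kwargs):
        return super().__new__(cls, command, obj, args, kwargs)

m = Msg('set', 'motor', {'pos': 5})
print(m.command)  # 'set'
print(m.args)     # ({'pos': 5},)
print(m.kwargs)   # {}
```

Keyword arguments like block_group (used below for 'wait' groups) land in the kwargs slot.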
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
Moving a motor and reading it back is boring. Let's add a detector.
def simple_scan2(motor, det):
    "Set, trigger motor, trigger detector, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(simple_scan2(motor, det))
There is two-way communication between the message generator and the Run Engine. Above we see the three messages with the responses they generated from the RunEngine. We can use these responses to make our scan adaptive.
def adaptive_scan(motor, det, threshold):
    """Set, trigger, read until the detector reads intensity < threshold"""
    i = 0
    while True:
        print("LOOP %d" % i)
        yield Msg('set', motor, {'pos': i})
        yield Msg('trigger', motor)
        yield Msg('trigger', det)
        reading = yield Msg('read', det)
        if reading['det']['value'] < threshold:
            break
        i += 1
Control timing with 'sleep' and 'wait' The 'sleep' command is as simple as it sounds.
def sleepy_scan(motor, det):
    "Set, trigger motor, sleep for a fixed time, trigger detector, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('sleep', None, 2)  # units: seconds
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(sleepy_scan(motor, det))
The 'wait' command is more powerful. It watches for Movers (e.g., motor) to report being done. Wait for one motor to be done moving
def wait_one(motor, det):
    "Set, trigger, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor, block_group='A')  # Add motor to group 'A'.
    yield Msg('wait', None, 'A')  # Wait for everything in group 'A' to report done.
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(wait_one(motor, det))
Notice, in the log, that the response to wait is the set of Movers the scan was waiting on. Wait for two motors to both be done moving
def wait_multiple(motors, det):
    "Set motors, trigger all motors, wait for all motors to move."
    for motor in motors:
        yield Msg('set', motor, {'pos': 5})
        yield Msg('trigger', motor, block_group='A')  # Trigger each motor and add it to group 'A'.
    yield Msg('wait', None, 'A')  # Wait for everything in group 'A' to report done.
    yield Msg('trigger', det)
    yield Msg('read', det)
Advanced Example: Wait for different groups of motors at different points in the run If the 'A' bit seems pointless, the payoff is here. We trigger all the motors at once, wait for the first two, read, wait for the last one, and read again. This is merely meant to show that complex control flow is possible.
def wait_complex(motors, det):
    "Set motors, trigger motors, wait for all motors to move."
    # Same as above...
    for motor in motors[:-1]:
        yield Msg('set', motor, {'pos': 5})
        yield Msg('trigger', motor, block_group='A')
    # ...but put the last motor in a separate group.
    yield Msg('set', motors[-1], {'pos': 5})
    yield Msg('trigger', motors[-1], block_group='B')
    # Wait for the first group, read, then wait for the last motor and read again.
    yield Msg('wait', None, 'A')
    yield Msg('trigger', det)
    yield Msg('read', det)
    yield Msg('wait', None, 'B')
    yield Msg('trigger', det)
    yield Msg('read', det)
Runs can be paused and safely resumed or aborted "Hard Pause": Stop immediately. On resume, rerun messages from last 'checkpoint' command. The Run Engine does not guess where it is safe to resume. The 'pause' command must follow a 'checkpoint' command, indicating a safe point to go back to in the event of a hard pause.
def conditional_hard_pause(motor, det):
    for i in range(5):
        yield Msg('checkpoint')
        yield Msg('set', motor, {'pos': i})
        yield Msg('trigger', motor)
        yield Msg('trigger', det)
        reading = yield Msg('read', det)
        if reading['det']['value'] < 0.2:
            yield Msg('pause')
The scan thread sleeps and waits for more user input, to resume or abort. (On resume, this example will obviously hit the same pause condition again --- nothing has changed.)
RE.state RE.resume() RE.state RE.abort() def conditional_soft_pause(motor, det): for i in range(5): yield Msg('checkpoint') yield Msg('set', motor, {'pos': i}) yield Msg('trigger', motor) yield Msg('trigger', det) reading = yield Msg('read', det) if reading['det']...
Other threads can request a pause Calling RE.request_pause(hard=True) or RE.request_pause(hard=False) has the same effect as a 'pause' command. SIGINT (Ctrl+C) is reliably caught before each message is processed, even across threads. SIGINT triggers a hard pause. If no checkpoint commands have been issued, CTRL+C cause...
RE.run(sleepy_scan(motor, det))
If the scan contains checkpoints, it's possible to resume after Ctrl+C.
def sleepy_scan_checkpoints(motor, det):
    "Set, trigger motor, sleep for a fixed time, trigger detector, read"
    yield Msg('checkpoint')
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('sleep', None, 2)  # units: seconds
    yield Msg('trigger', det)
    yield Msg('read', det)
Threading is optional -- switch it off for easier debugging Again, we'll interrupt the scan. We get exactly the same result, but this time we see a full Traceback.
RE.run(simple_scan(motor), use_threading=False)
Any functions can subscribe to the live data stream (e.g., live plotting) In the examples above, the runs have been emitting RunStart and RunStop Documents, but no Events or Event Descriptors. We will add those now. Emitting Events and Event Descriptors The 'create' and 'save' commands collect all the reads between the...
def simple_scan_saving(motor, det):
    "Set, trigger, read"
    yield Msg('create')
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('read', motor)
    yield Msg('read', det)
    yield Msg('save')

RE.run(simple_scan_saving(motor, det))
Very Simple Example Any user function that accepts a Python dictionary can be registered as a "consumer" of these Event Documents. Here's a toy example.
def print_event_time(doc):
    print('===== EVENT TIME:', doc['time'], '=====')
To use this consumer function during a run:
RE.run(simple_scan_saving(motor, det), subscriptions={'event': print_event_time})
To use it by default on every run for this instance of the Run Engine:
token = RE.subscribe('event', print_event_time)
token
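The subscribe/unsubscribe token mechanism can be sketched as a tiny registry. This is a hypothetical simplification for illustration, not the Run Engine's internal dispatcher:

```python
import itertools

class Dispatcher:
    """Minimal pub/sub registry: subscribe returns an integer token."""
    def __init__(self):
        self._subs = {}  # token -> (event_name, func)
        self._counter = itertools.count()

    def subscribe(self, name, func):
        token = next(self._counter)
        self._subs[token] = (name, func)
        return token

    def unsubscribe(self, token):
        del self._subs[token]

    def emit(self, name, doc):
        # Call every consumer registered for this event name.
        for sub_name, func in list(self._subs.values()):
            if sub_name == name:
                func(doc)

d = Dispatcher()
seen = []
token = d.subscribe('event', seen.append)
d.emit('event', {'time': 1.0})
d.unsubscribe(token)
d.emit('event', {'time': 2.0})
print(seen)  # [{'time': 1.0}]
```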
The output token, an integer, can be used to unsubscribe later.
RE.unsubscribe(token)
Live Plotting First, we'll create some axes. The code below updates the plot while the run is ongoing.
%matplotlib notebook import matplotlib.pyplot as plt fig, ax = plt.subplots() def stepscan(motor, detector): for i in range(-5, 5): yield Msg('create') yield Msg('set', motor, {'pos': i}) yield Msg('trigger', motor) yield Msg('trigger', det) yield Msg('read', motor) ...
Saving Documents to metadatastore Mission-critical consumers can be run on the scan thread, where they will block the scan until they return from processing the emitted Documents. This should not be used for computationally heavy tasks like visualization. Its only intended use is for saving data to metadatastore, but u...
%run register_mds.py
register_mds(RE)
We can verify that this worked by loading this one-point scan from the DataBroker and displaying the data using DataMuxer.
RE.run(simple_scan_saving(motor, det))

from dataportal import DataBroker as db
header = db[-1]
header

from dataportal import DataMuxer as dm
dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
Flyscan prototype Assumes that flyscans are managed by an object which has three methods:
- describe: same as for everything else
- kickoff: method which starts the flyscan. This should be a fast-to-execute function that is assumed to just poke at some external hardware.
- collect: coll...
flyer = FlyMagic('flyer', 'theta', 'sin')

def fly_scan(flyer):
    yield Msg('kickoff', flyer)
    yield Msg('collect', flyer)
    yield Msg('kickoff', flyer)
    yield Msg('collect', flyer)
    # Note that there is no 'create'/'save' here. That is managed by 'collect'.

RE.run(fly_scan(flyer), use_threading=False)
The fly scan results are in metadatastore....
header = db[-1]
header

res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
res

fig, ax = plt.subplots()
ax.cla()
res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
ax.plot(res['sin'], label='sin')
ax.plot(res['theta'], label='theta')
ax.legend()
fig.canvas.draw()
Fly scan + stepscan Do a step scan with one motor and a fly scan with another
def fly_step(flyer, motor):
    for x in range(-5, 5):
        # step
        yield Msg('create')
        yield Msg('set', motor, {'pos': x})
        yield Msg('trigger', motor)
        yield Msg('read', motor)
        yield Msg('save')
        # fly
        yield Msg('kickoff', flyer)
        yield Msg('collect', flyer)
Matrices
A = [[1, 2, 3],  # A has 2 rows and 3 columns
     [4, 5, 6]]

B = [[1, 2],     # B has 3 rows and 2 columns
     [3, 4],
     [5, 6]]

def shape(A):
    num_rows = len(A)
    num_cols = len(A[0]) if A else 0
    return num_rows, num_cols

def get_row(A, i):
    return A[i]

def get_column(A, j):
    return [A_i[j] for A_i in A]
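These helpers lead naturally to matrix multiplication: entry (i, j) of the product is row i of A dotted with column j of B. A sketch in the same style (not part of the excerpt above; the helper definitions are repeated so the snippet stands alone):

```python
def shape(M):
    return len(M), (len(M[0]) if M else 0)

def get_row(M, i):
    return M[i]

def get_column(M, j):
    return [row[j] for row in M]

def matrix_multiply(A, B):
    """Multiply A (n x k) by B (k x m) via row-column dot products."""
    n, k = shape(A)
    k2, m = shape(B)
    assert k == k2, "inner dimensions must match"
    return [[sum(a * b for a, b in zip(get_row(A, i), get_column(B, j)))
             for j in range(m)]
            for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 2],
     [3, 4],
     [5, 6]]
print(matrix_multiply(A, B))  # [[22, 28], [49, 64]]
```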
Chapter4.ipynb
nifannn/data-science-from-scratch
mit
Defining Terms All definitions taken from CIRI data documentation. The indicators included in this project rank each country from 0 to 3, with 0 being the least respect for rights and 3 being the most. This level of respect is measured by the extent to which rights are guaranteed within law and enforced by the governmen...
centamciridf['Women Economic Rights'][centamciridf['Year']==2003].plot(kind='bar', title="Respect for Women's Economic Rights in Central America (2003)", color="mediumslateblue") centamciridf['Women Political Rights'][centamciridf['Year']==2003].plot(kind='bar', title="Respect for Women's Polit...
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
Looking at these indicators for the same countries in 2010, one sees that Costa Rica again leads the pack, with a full score on both economic and political rights for women. Honduras and Nicaragua fell in their respect for women's political rights, leaving them on par with the region. Belize and Panama increased their ...
centamciridf['Women Economic Rights'][centamciridf['Year']==2010].plot(kind='bar', title="Respect for Women's Economic Rights in Central America (2010)", color="mediumslateblue") centamciridf['Women Political Rights'][centamciridf['Year']==2010].plot(kind='bar', title="Respect for Women's Polit...
Based on the findings above, let's drill down on the shining star in the pack (Costa Rica), a country that regressed (Nicaragua) and a middle-of-the-road performer (El Salvador) to see what their performance over time has been on these indicators. Costa Rica Costa Rica has been consistently high over time for both respe...
costaricaciridf = centamciridf.reset_index() costaricaciridf = costaricaciridf.drop(costaricaciridf.columns[0],axis=1) costaricaciridf = costaricaciridf[costaricaciridf['Country']=='Costa Rica'] costaricaciridf['Women Political Rights'].plot(kind='bar', title="Respect for Women's Political Rights in Costa R...
Nicaragua Looking at Nicaragua over time, one sees that its drop in respect for political rights from 2003 stayed consistent until 2008 when it spiked again and then dropped through the end of the period covered by this data. Interestingly, its spike in economic rights over the 8 year period took place in 2007, the yea...
nicciridf = centamciridf.reset_index() nicciridf = nicciridf.drop(nicciridf.columns[0],axis=1) nicciridf = nicciridf[nicciridf['Country']=='Nicaragua'] nicciridf['Women Political Rights'].plot(kind='bar', title="Respect for Women's Political Rights in Nicaragua(2003-2011)", color="deepskyblue") nicciridf['...
El Salvador
salvciridf = centamciridf.reset_index() salvciridf = salvciridf.drop(salvciridf.columns[0],axis=1) salvciridf = salvciridf[salvciridf['Country']=='El Salvador'] salvciridf['Women Political Rights'].plot(kind='bar', title="Respect for Women's Political Rights in El Salvador(2003-2011)", color="deepskyblue") ...
GDP Growth: Costa Rica, Nicaragua, El Salvador Now, let's turn to the question of whether a high level of respect for women's rights has translated into economic wellbeing at a country level. To do this, we look at GDP growth rather than absolute GDP in order to more easily compare countries despite differences in GDP ...
centamgdpwbdf = centamgdpwbdf.reset_index() # getting rid of multiindex for ease of graphing and comparison centamgdpwbdf = centamgdpwbdf.rename(columns={'NY.GDP.MKTP.KD.ZG':'Annual GDP Growth'}) costaricawbdf = centamgdpwbdf[centamgdpwbdf['country']=='Costa Rica'] costaricawbdf = costaricawbdf.drop(costaricawbdf.col...
Reconstructions in the two bit system I would expect (and hope) that, for all possible hidden configurations, when the visible is in state [1,1] the reconstructions produced would be either v_a = [1,0], v_b = [0,1] or vice versa.
results = performance(np.array([1,1]), a)
Max/ORBM-Inference-XOR.ipynb
garibaldu/multicauseRBM
mit
Excellent! In all cases it falls into a stable visible configuration and successfully separates the visibles.
def plot_avg_results_for_visible_pattern(v, sampler): results = performance(v, sampler) avgd_results = {} for key in results: results[key] for inner_key in results[key]: if inner_key not in avgd_results: avgd_results[inner_key] = results[key][inner_key] ...
Yussssssssss This is perfect: over all cases for the visible pattern it can separate it, creating excellent reconstructions the majority of the time. We already know the vanilla RBM will fail to do this. Still, I graphed the output of one of the visible reconstructions of the vanilla sampler.
plot_avg_results_for_visible_pattern(np.array([1,0]), a)
plot_avg_results_for_visible_pattern(np.array([0,1]), a)
plot_avg_results_for_visible_pattern(np.array([1,0]), WrappedVanillaSampler(dot))
plot_avg_results_for_visible_pattern(np.array([0,1]), WrappedVanillaSampler(dot))
Below In this cell below I check that the code still holds together given more hidden nodes than visibles. It does.
training_set = np.eye(2) dot = RBM(3,2,1) s = VanillaSampler(dot) t = VanillaTrainier(dot, s) t.train(10000, training_set) h_a = np.array([1,0,0]) h_b = np.array([0,1,0]) v = np.array([1,1]) plot_avg_results_for_hidden_pattern(h_a, h_b,np.array([1,1]) , ApproximatedSampler(dot.weights,dot.weights,0,0)) plot_avg_results...
Climate Data Time-Series We will be using Jena Climate dataset recorded by the Max Planck Institute for Biogeochemistry. The dataset consists of 14 features such as temperature, pressure, humidity etc, recorded once per 10 minutes. Location: Weather Station, Max Planck Institute for Biogeochemistry in Jena, Germany Tim...
from zipfile import ZipFile
import os

import pandas as pd

uri = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip"
zip_path = keras.utils.get_file(origin=uri, fname="jena_climate_2009_2016.csv.zip")
zip_file = ZipFile(zip_path)
zip_file.extractall()
csv_path = "jena_climate_2009_2016.csv"

df = pd.read_csv(csv_path)
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
Raw Data Visualization To give us a sense of the data we are working with, each feature has been plotted below. This shows the distinct pattern of each feature over the time period from 2009 to 2016. It also shows where anomalies are present, which will be addressed during normalization.
titles = [ "Pressure", "Temperature", "Temperature in Kelvin", "Temperature (dew point)", "Relative Humidity", "Saturation vapor pressure", "Vapor pressure", "Vapor pressure deficit", "Specific humidity", "Water vapor concentration", "Airtight", "Wind speed", "Maximum...
Data Preprocessing Here we are picking ~300,000 data points for training. An observation is recorded every 10 minutes, i.e. 6 times per hour. We will resample to one point per hour, since no drastic change is expected within 60 minutes. We do this via the sampling_rate argument of the timeseries_dataset_from_array utility. We ...
split_fraction = 0.715
train_split = int(split_fraction * int(df.shape[0]))
step = 6

past = 720
future = 72
learning_rate = 0.001
batch_size = 256
epochs = 10

def normalize(data, train_split):
    data_mean = data[:train_split].mean(axis=0)
    data_std = data[:train_split].std(axis=0)
    return (data - data_mean) / data_std
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
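The resampling-and-windowing step can be sketched in plain NumPy. This mirrors what timeseries_dataset_from_array does with sampling_rate and sequence_length; it is an illustrative stand-in with toy data, not the Keras utility itself:

```python
import numpy as np

past, future, step = 720, 72, 6
sequence_length = past // step   # 120 hourly observations per window

# Toy stand-in for the real data: 1000 timesteps, 7 selected features.
data = np.arange(1000 * 7, dtype=float).reshape(1000, 7)

def make_window(data, start, past, step):
    """One model input: `past` timesteps, keeping every `step`-th row."""
    return data[start:start + past:step]

window = make_window(data, 0, past, step)
print(window.shape)   # (120, 7)

# The label for this window sits `past + future` rows after its start.
label_row = 0 + past + future
print(label_row)      # 792
```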
Training dataset The training dataset labels start from the 792nd observation (720 + 72).
start = past + future end = start + train_split x_train = train_data[[i for i in range(7)]].values y_train = features.iloc[start:end][[1]] sequence_length = int(past / step)
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
Validation dataset The validation dataset must not contain the last 792 rows, as we won't have label data for those records; hence 792 must be subtracted from the end of the data. The validation labels must start 792 rows after train_split, hence past + future (792) is added to train_split to obtain label_start.
x_end = len(val_data) - past - future label_start = train_split + past + future x_val = val_data.iloc[:x_end][[i for i in range(7)]].values y_val = features.iloc[label_start:][[1]] dataset_val = keras.preprocessing.timeseries_dataset_from_array( x_val, y_val, sequence_length=sequence_length, sampling...
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
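A plain-Python sketch of the index bookkeeping described above (the row count here is illustrative; the real CSV has roughly 420k observations):

```python
# Index arithmetic for the validation split (illustrative numbers).
n_rows = 420551
split_fraction = 0.715
past, future = 720, 72

train_split = int(split_fraction * n_rows)
n_val = n_rows - train_split            # rows after the split point
x_end = n_val - past - future           # drop the last 792 input rows
label_start = train_split + past + future

print(past + future)                    # 792
```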
Training
for batch in dataset_train.take(1): inputs, targets = batch inputs = keras.layers.Input(shape=(inputs.shape[1], inputs.shape[2])) lstm_out = keras.layers.LSTM(32)(inputs) outputs = keras.layers.Dense(1)(lstm_out) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss="mse") model.summary()
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
We'll use the ModelCheckpoint callback to regularly save checkpoints, and the EarlyStopping callback to interrupt training when the validation loss is no longer improving.
path_checkpoint = "model_checkpoint.h5" es_callback = keras.callbacks.EarlyStopping(monitor="val_loss", min_delta=0, patience=5) modelckpt_callback = keras.callbacks.ModelCheckpoint( monitor="val_loss", filepath=path_checkpoint, verbose=1, save_weights_only=True, save_best_only=True, ) history = m...
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
We can visualize the loss with the function below. After a certain point, the loss stops decreasing.
def visualize_loss(history, title): loss = history.history["loss"] val_loss = history.history["val_loss"] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, "b", label="Training loss") plt.plot(epochs, val_loss, "r", label="Validation loss") plt.title(title) plt.xlabel("Epoch...
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
Prediction The trained model above is now able to make predictions for 5 sets of values from the validation set.
def show_plot(plot_data, delta, title): labels = ["History", "True Future", "Model Prediction"] marker = [".-", "rx", "go"] time_steps = list(range(-(plot_data[0].shape[0]), 0)) if delta: future = delta else: future = 0 plt.title(title) for i, val in enumerate(plot_data): ...
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
TRY IT Open and close the san-francisco-2013.csv file Text files and lines A text file is just a sequence of lines; in fact, if you read all of its lines in at once, you get back a list of strings. Lines are separated by the newline character "\n". This is the special character that is inserted into text files when you hit en...
print("Golden\nGate\nBridge")
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
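You can see the line structure without touching the filesystem by simulating a file with io.StringIO (a small sketch; a StringIO object iterates line by line just like a real file handle):

```python
import io

# An in-memory "file"; iterating yields each line, newline included.
fake_file = io.StringIO("Golden\nGate\nBridge\n")
lines = list(fake_file)
print(lines)      # ['Golden\n', 'Gate\n', 'Bridge\n']

# rstrip() removes the trailing newline from each line.
stripped = [line.rstrip() for line in lines]
print(stripped)   # ['Golden', 'Gate', 'Bridge']
```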
TRY IT Print your name on two lines using only one print statement Reading from files There are two common ways to read through a file. The first (and usually better) way is to loop through the lines in the file: for line in file_handle: print(line) The second is to read all the lines at once and store them as a str...
fh = open('thingstodo.txt') for line in fh: print(line.rstrip()) fh.close() fh = open('thingstodo.txt') contents = fh.read() fh.close() print(contents) print(type(contents)) fh = open('thingstodo.txt') lines = fh.readlines() fh.close() print(lines) print(type(lines))
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Open 'san-francisco-2013.csv' and print out the first line. You can use either method. If you are using the loop method, you can 'break' after printing the first line. Searching through a file When searching through a file, you can use string methods to discover and parse the contents. Let's look at a few exampl...
# Looking for a line that starts with something # I want to see salary data of women with my first name fh = open('san-francisco-2014.csv') for line in fh: if line.startswith('Charlotte'): print(line) fh.close() # Looking for lines that contain a specific string fh = open('san-francisco-2014.csv') # Looki...
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
* Note that sometimes you get a quoted line, instead of the title and salary. If a csv file has a comma inside a cell, the line is quoted. Thus, splitting is not the proper way to read a csv file, but it will work in a pinch. We'll learn about the csv module as well as other ways to read in tabular (excel-like) data in...
# Skipping lines fh = open('thingstodo.txt') for line in fh: if line.startswith('Golden'): continue print(line) fh.close()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
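Here is a sketch of why naive splitting fails on a quoted cell while the csv module handles it (the line content is made up for illustration):

```python
import csv

# A quoted cell containing a comma: split() breaks it into extra
# fields, while csv.reader respects the quoting.
line = '"Smith, Jane",Analyst,100000'
naive = line.split(',')
parsed = next(csv.reader([line]))
print(naive)    # ['"Smith', ' Jane"', 'Analyst', '100000']
print(parsed)   # ['Smith, Jane', 'Analyst', '100000']
```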
Try, except with open If you are worried that the file might not exist, you can wrap the open in a try block try: fh = open('i_dont_exist.txt') except: print("File does not exist") exit()
# Opening a non-existent file try: fh = open('i_dont_exist.txt') print(fh) fh.close() except: print("File does not exist") #exit()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
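A Python 3 refinement of the pattern above (a sketch): catching the specific FileNotFoundError instead of a bare except, so unrelated errors aren't silenced:

```python
# Catch only the exception we actually expect.
try:
    fh = open('i_dont_exist.txt')
    fh.close()
except FileNotFoundError:
    message = 'File does not exist'
print(message)   # File does not exist
```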
Writing to files You can write to files very easily. You need to give open a second parameter 'w' to indicate you want to open the file in write mode. fh_write = open('new_file.txt', 'w') Then you call the write method on the file handle. You give it the string you want to write to the file. Be careful, write doesn't...
fh = open('numbers.txt', 'w') for i in range(10): fh.write(str(i) + '\n') fh.close() # Now let's prove that we actually made a file fh = open('numbers.txt') lines = fh.readlines() print(lines) fh.close()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Create a file called 'my_favorite_cities.txt' and put your top 3 favorite cities each on its own line. Bonus check that you did it correctly by reading the lines in python With statement and opening files You can use with to open a file and it will automatically close the file at the end of the with block. This ...
with open('thingstodo.txt') as fh: for line in fh: print((line.rstrip()))
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
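The auto-close behavior is easy to verify: a file handle opened in a with block reports itself closed once the block exits (a small sketch using a temporary file):

```python
import os
import tempfile

# Write through a with block, then inspect the handle afterwards.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as fh:
    fh.write('hello\n')
print(fh.closed)   # True -- the with block closed it for us
os.remove(path)
```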
You can also use with statements to write files
with open('numbers2.txt', 'w') as fh: for i in range(5): fh.write(str(i) + '\n') with open('numbers2.txt') as fh: for line in fh: print((line.rstrip()))
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
Set global flags
PROJECT = 'your-project' BUCKET = 'your-project-babyweight' REGION = 'us-central1' ROOT_DIR = 'babyweight_tft' import os os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['ROOT_DIR'] = ROOT_DIR OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET,ROOT_DIR) TRANSFORM_ARTEFACTS_...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Import required packages and modules
import math, os import tensorflow as tf import tensorflow_transform as tft from tensorflow.keras import layers, models
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
2. Define deep and wide regression model Check features in the transformed data You can use these features (except the target feature) as an input to the model.
transformed_metadata = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR).transformed_metadata transformed_metadata.schema
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define wide and deep feature columns This is a feature engineering layer that creates new features from the transformed data.
def create_wide_and_deep_feature_columns(): deep_feature_columns = [] wide_feature_columns = [] inputs = {} categorical_columns = {} # Select features you've checked from the metadata # - Categorical features are associated with the vocabulary size (starting from 0) numeric_features = ['mo...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define a regression model
def create_model(): wide_feature_columns, deep_feature_columns, inputs = create_wide_and_deep_feature_columns() feature_layer_wide = layers.DenseFeatures(wide_feature_columns, name='wide_features') feature_layer_deep = layers.DenseFeatures(deep_feature_columns, name='deep_features') wide_model = featu...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define tfrecords_input_fn This function creates a batched dataset from the transformed dataset.
def tfrecords_input_fn(files_name_pattern, batch_size=512): tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR) TARGET_FEATURE_NAME = 'weight_pounds' batched_dataset = tf.data.experimental.make_batched_features_dataset( file_pattern=files_name_pattern, batch_size=batch...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
3. Train and export the model
def train_and_evaluate(train_pattern, eval_pattern): train_dataset = tfrecords_input_fn(train_pattern, batch_size=BATCH_SIZE) validation_dataset = tfrecords_input_fn(eval_pattern, batch_size=BATCH_SIZE) model = create_model() print(model.summary()) print('Now training the model... hang on') h...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Visualize the training result
from pandas import DataFrame DataFrame({'loss': history.history['loss'], 'val_loss': history.history['val_loss']}).plot(ylim=(0, 2.5)) print('Final RMSE for the validation set: {:f}'.format(math.sqrt(history.history['val_loss'][-1])))
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Export the trained model The serving function serveing_fn receives raw input data and applies the transformation before making predictions with the trained model. You can also add some pre/post-processing within the serving function. In this example, it accepts a unique identifier for each instance and passes it through...
def export_serving_model(model, output_dir): tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR) # The layer has to be saved to the model for keras tracking purposes. model.tft_layer = tf_transform_output.transform_features_layer() @tf.function def serveing_fn(uid, is_male, mother...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Explore the exported model
%%bash gsutil ls -lR $EXPORT_DIR saved_model_cli show --all --dir=$EXPORT_DIR
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
4. Use the exported model Load the exported model
#tf.keras.backend.clear_session() model = tf.keras.models.load_model(EXPORT_DIR)
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define a local prediction function The serving function of the exported model accepts input data as named arguments. You can use this wrapper to pass input data as standard Python dictionaries. When you deploy the model to Vertex AI using the pre-built container, you can use the Python client library to make online pre...
def predict(requests): uid, is_male, mother_race, mother_age, plurality, gestation_weeks = [], [], [], [], [], [] for instance in requests: uid.append(instance['uid']) is_male.append(instance['is_male']) mother_race.append(instance['mother_race']) mother_age.append(instance['mot...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
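The core of the wrapper above is reshaping a list of per-instance dictionaries into per-field lists before calling the serving signature. A stand-alone sketch of just that step (field names follow the snippet; no model involved):

```python
def to_columns(requests, fields):
    """Turn a list of instance dicts into a dict of per-field lists."""
    return {f: [instance[f] for instance in requests] for f in fields}

requests = [
    {'uid': 'instance1', 'is_male': 'True', 'mother_age': 26.0},
    {'uid': 'instance2', 'is_male': 'False', 'mother_age': 40.0},
]
columns = to_columns(requests, ['uid', 'is_male', 'mother_age'])
print(columns['uid'])          # ['instance1', 'instance2']
print(columns['mother_age'])   # [26.0, 40.0]
```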
Make local predictions
instance1 = { 'uid': 'instance1', 'is_male': 'True', 'mother_age': 26.0, 'mother_race': 'Asian Indian', 'plurality': 1.0, 'gestation_weeks': 39 } instance2 = { 'uid': 'instance2', 'is_male': 'False', 'mother_age': 40.0, 'mother_race': 'Japanese', 'plurality': 2.0, 'gesta...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Deploy the model to Vertex AI Deploying the model may take several minutes. Please hang on.
%%bash ts=$(date +%y%m%d-%H%M%S) MODEL_NAME="babyweight-$ts" ENDPOINT_NAME="babyweight-endpoint-$ts" gcloud ai models upload --region=$REGION --display-name=$MODEL_NAME \ --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest \ --artifact-uri=$EXPORT_DIR MODEL_ID=$(gcloud ai models list --...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Make online predictions with the Python client library
from google.cloud import aiplatform # Use the latest endpoint that starts with 'babyweight-endpoint' endpoints = aiplatform.Endpoint.list( project=PROJECT, location=REGION, order_by='create_time desc') for item in endpoints: if item.display_name.startswith('babyweight-endpoint'): endpoint = item ...
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Are the features predictive?
scatter_matrix(energy, alpha=0.2, figsize=(18,18), diagonal='kde') plt.show()
archive/notebook/energy_efficiency.ipynb
georgetown-analytics/machine-learning
mit
Let's focus on predicting heating load
energy_features = energy.iloc[:,0:8] heat_labels = energy.iloc[:,8] X_train, X_test, y_train, y_test = train_test_split(energy_features, heat_labels, test_size=0.2) model = LinearRegression() model.fit(X_train, y_train) expected = y_test predicted = model.predict(X_test) print('Linear Regression model') print('Mean...
archive/notebook/energy_efficiency.ipynb
georgetown-analytics/machine-learning
mit
Capture NOAA Weather Radio Using rtlsdr_helper, record a short complex baseband array with capture().
# From the docstring #x = sdr.capture(Tc, fo=88700000.0, fs=2400000.0, gain=40, device_index=0) x = sdr.capture(Tc=5,fo=162.4e6,fs=2.4e6,gain=40,device_index=0) sdr.complex2wav('capture_162475.wav',2400000,x) fs, x = sdr.wav2complex('capture_162475.wav') psd(x,2**10,2400);
tutorial_part3/RTL_SDR2.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
Narrowband FM Demodulator
def NBFM_demod(x,fs=2.4e6,file_name='test.wav',B1=50e3,N1=10,B2=5e3,N2=5): """ Narrowband FM Demodulator """ b = signal.firwin(64,2*B1/float(fs)) # Filter and decimate (should be polyphase) y = signal.lfilter(b,1,x) z = ss.downsample(y,N1) # Apply complex baseband discriminator z_bb ...
tutorial_part3/RTL_SDR2.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
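The "complex baseband discriminator" at the heart of the demodulator can be sketched independently of scikit-dsp-comm: the phase difference between consecutive samples gives the instantaneous frequency, so a pure tone should demodulate to a constant:

```python
import numpy as np

def discriminator(z):
    """Instantaneous frequency (rad/sample) of a complex baseband signal."""
    return np.angle(z[1:] * np.conj(z[:-1]))

fs, f0 = 2.4e6, 10e3                    # sample rate and tone frequency (Hz)
n = np.arange(1000)
z = np.exp(1j * 2 * np.pi * f0 / fs * n)

freq_hz = discriminator(z) * fs / (2 * np.pi)
print(np.allclose(freq_hz, f0))         # True
```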
To obtain these curves, we sort the predictions made by the classifier from the smallest to the biggest for each group and put them on a $[0, 1]$ scale on the x-axis. The value corresponding to $x=0.5$ is the median of the distribution. Similarly for each quantile level in $[0,1]$ we obtain the corresponding quantile o...
N = 24 rng = jax.random.PRNGKey(1) rng, *rngs = jax.random.split(rng, 3) y_pred = 3 * jax.random.uniform(rngs[0], (N,)) groups = jax.random.uniform(rngs[1], (N,)) < 0.25 support_0 = jnp.linspace(0, 1, N - jnp.sum(groups)) support_1 = jnp.linspace(0, 1, jnp.sum(groups)) quantiles_0 = jnp.sort(y_pred[jnp.logical_not(gr...
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
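The construction of the per-group quantile curves can be sketched in NumPy alone (same idea as the JAX snippet above: sort each group's predictions and lay them out on a [0, 1] support):

```python
import numpy as np

rng = np.random.default_rng(0)
y_pred = 3 * rng.uniform(size=24)
groups = rng.uniform(size=24) < 0.25

# Sort each group's predictions and place them on [0, 1].
q0 = np.sort(y_pred[~groups])
q1 = np.sort(y_pred[groups])
support_0 = np.linspace(0, 1, q0.size)
support_1 = np.linspace(0, 1, q1.size)

# Reading the curve at x = 0.5 recovers the group's median.
median_0 = np.interp(0.5, support_0, q0)
print(np.isclose(median_0, np.median(q0)))   # True
```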
We can see on this figure that the supports of the two quantile functions are different, since the number of points in the two groups differs. In order to compute the gap between the two curves, we first interpolate the two curves on the union of the supports. The Wasserstein distance corresponds to the gap between t...
import scipy kinds = ['linear', 'nearest'] fig, axes = plt.subplots(1, len(kinds), figsize=(8 * len(kinds), 5)) for ax, kind in zip(axes, kinds): q0 = scipy.interpolate.interp1d(support_0, quantiles_0, kind=kind) q1 = scipy.interpolate.interp1d(support_1, quantiles_1, kind=kind) support_01 = jnp.sort(jnp.concat...
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
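The interpolate-on-the-union-of-supports recipe gives the 1-D Wasserstein distance directly. A NumPy-only sketch (trapezoidal integration of the gap replaces the plotting step above):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 between two samples via quantile functions on a common support."""
    qa, qb = np.sort(a), np.sort(b)
    sa = np.linspace(0, 1, qa.size)
    sb = np.linspace(0, 1, qb.size)
    support = np.sort(np.concatenate([sa, sb]))      # union of supports
    gap = np.abs(np.interp(support, sa, qa) - np.interp(support, sb, qb))
    # Trapezoidal rule over the union support.
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(support)))

a = np.array([0.0, 1.0, 2.0])
b = a + 0.5                    # shifting a sample by c gives W1 = c
print(wasserstein_1d(a, b))    # 0.5
```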
Soft Wasserstein Computing the Wasserstein distance involves complex operations such as sorting and interpolating. Fortunately, regularized optimal transport and its implementation in OTT provide accelerator-friendly, differentiable approaches to sort according to a group (setting the weights of the outsiders to zero...
import functools @functools.partial(jax.jit, static_argnums=(2,)) def sort_group(inputs: jnp.ndarray, group: jnp.ndarray, target_size: int = 16): a = group / jnp.sum(group) b = jnp.ones(target_size) / target_size ot = ott.tools.soft_sort.transport_for_sort(inputs, a, b, dict(epsilon=1e-3)) return 1.0 / b * ot....
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
It is noteworthy that the obtained interpolation corresponds to a smooth version of the 'nearest' interpolation.
target_sizes = [4, 16, 64] _, axes = plt.subplots(1, len(target_sizes), figsize=(len(target_sizes * 8), 5)) for ax, target_size in zip(axes, target_sizes): ax.plot(sort_group(y_pred, jnp.logical_not(groups), target_size), lw=3, marker='o', markersize=10, markeredgecolor='k', label='group 0') ax.plot(sort_...
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
Training a network In order to train our classifier with a fairness regularizer, we first turn the categorical features $x$ of the adult dataset into dense ones (using 16 dimensions) and pass the obtained vector to an MLP $f_\theta$ with 2 hidden layers of 64 neurons. We optimize a loss which is the sum of the binary c...
import matplotlib.pyplot as plt fig, axes = plt.subplots(2, 2, figsize=(16, 10)) for weight, curves in result.items(): for ax_row, metric in zip(axes, ['loss', 'accuracy']): for ax, phase in zip(ax_row, ['train', 'eval']): arr = np.array(curves[f'{phase}_{metric}']) ax.plot(arr[:, 0], arr[:, 1], lab...
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
We can see that when we increase the fairness regularization factor $\lambda$, the training accuracy slightly decreases, but the eval accuracy is not impacted much. The fairness regularizer is a rather good regularizer. For $\lambda = 1000$ the training metrics are a bit more degraded, as well as the eval ones, bu...
num_rows = 2 num_cols = len(weights[1:]) // 2 fig, axes = plt.subplots(num_rows, num_cols, figsize=(7 * num_cols, 5 * num_rows)) for ax, w in zip(axes.ravel(), weights[1:]): logits, groups = get_predictions(ds_test, config, states[w]) plot_quantiles(logits, groups, ax) ax.set_title(f'$\lambda = {w:.0f}$', fontsiz...
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
Loading and tokenizing the corpus
from glob import glob import re import string import funcy as fp from gensim import models from gensim.corpora import Dictionary, MmCorpus import nltk import pandas as pd # quick and dirty.... EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*") FILTER_REGEX = re.compile(r"[^a-z '#]") TOKEN_MAPPINGS = [(...
notebooks/Gensim Newsgroup.ipynb
codingafuture/pyLDAvis
bsd-3-clause