๋‹ค์Œ์œผ๋กœ, ๊ฒ€์ฆ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๊ฒ€์ฆ์„ ์œ„ํ•ด ํ›ˆ๋ จ ์„ธํŠธ์˜ ๋‚˜๋จธ์ง€ 5,000๊ฐœ ๋ฆฌ๋ทฐ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ฐธ๊ณ : validation_split ๋ฐ subset ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๊ฒ€์ฆ ๋ฐ ํ›ˆ๋ จ ๋ถ„ํ• ์ด ๊ฒน์น˜์ง€ ์•Š๋„๋ก ์ž„์˜ ์‹œ๋“œ๋ฅผ ์ง€์ •ํ•˜๊ฑฐ๋‚˜ shuffle=False๋ฅผ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”.
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'aclImdb/train',
    batch_size=batch_size,
    validation_split=0.2,
    subset='validation',
    seed=seed)

raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'aclImdb/test',
    batch_size=batch_size)
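To see why the fixed seed matters, here is a small framework-free sketch (plain Python, not the Keras API): shuffling the indices twice with the same seed yields the same permutation, so taking the head for training and the tail for validation produces disjoint splits across two independent calls.

```python
import random

def split_indices(n, val_frac, seed):
    """Shuffle indices deterministically, then slice off the validation tail."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # same seed -> same permutation
    n_val = int(n * val_frac)
    return idx[:-n_val], idx[-n_val:]  # (train, validation)

# Two independent calls (like the two text_dataset_from_directory calls)
# agree because the seed is fixed.
train_a, val_a = split_indices(25000, 0.2, seed=42)
train_b, val_b = split_indices(25000, 0.2, seed=42)
assert val_a == val_b
assert set(train_a).isdisjoint(val_a)  # no overlap between the splits
```

With different seeds (or with reshuffling and no seed) the two calls would disagree on the permutation, and some reviews could land in both the training and validation subsets.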
site/ko/tutorials/keras/text_classification.ipynb
tensorflow/docs-l10n
apache-2.0
์ฐธ๊ณ : ๋‹ค์Œ ์„น์…˜์—์„œ ์‚ฌ์šฉ๋˜๋Š” Preprocessing API๋Š” TensorFlow 2.3์—์„œ ์‹คํ—˜์ ์ด๋ฉฐ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ ๋‹ค์Œ์œผ๋กœ, ์œ ์ตํ•œ preprocessing.TextVectorization ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ‘œ์ค€ํ™”, ํ† ํฐํ™” ๋ฐ ๋ฒกํ„ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ํ‘œ์ค€ํ™”๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๊ตฌ๋‘์ ์ด๋‚˜ HTML ์š”์†Œ๋ฅผ ์ œ๊ฑฐํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋‹จ์ˆœํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™”๋Š” ๋ฌธ์ž์—ด์„ ์—ฌ๋Ÿฌ ํ† ํฐ์œผ๋กœ ๋ถ„ํ• ํ•˜๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค(์˜ˆ: ํ™”์ดํŠธ์ŠคํŽ˜์ด์Šค์—์„œ ๋ถ„ํ• ํ•˜์—ฌ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„ ๋‹จ์–ด๋กœ ๋ถ„ํ• ). ๋ฒกํ„ฐํ™”๋Š” ํ† ํฐ์„ ์ˆซ์ž๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ์‹ ๊ฒฝ๋ง์— ๊ณต๊ธ‰๋  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋“  ์ž‘์—…์„ ์ด ๋ ˆ์ด์–ด์—์„œ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ„์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ๋ฆฌ๋ทฐ์—๋Š” <br />์™€ ๊ฐ™์€ ๋‹ค์–‘ํ•œ HTML ํƒœ๊ทธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํƒœ๊ทธ๋Š” TextVectorization ๋ ˆ์ด์–ด์˜ ๊ธฐ๋ณธ ํ‘œ์ค€ํ™” ๋„๊ตฌ๋กœ ์ œ๊ฑฐ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค(ํ…์ŠคํŠธ๋ฅผ ์†Œ๋ฌธ์ž๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ธฐ๋ณธ์ ์œผ๋กœ ๊ตฌ๋‘์ ์„ ์ œ๊ฑฐํ•˜์ง€๋งŒ HTML์€ ์ œ๊ฑฐํ•˜์ง€ ์•Š์Œ). HTML์„ ์ œ๊ฑฐํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ์ž ์ •์˜ ํ‘œ์ค€ํ™” ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ฐธ๊ณ : ํ›ˆ๋ จ/ํ…Œ์ŠคํŠธ ์™œ๊ณก(ํ›ˆ๋ จ/์ œ๊ณต ์™œ๊ณก์ด๋ผ๊ณ ๋„ ํ•จ)๋ฅผ ๋ฐฉ์ง€ํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์‹œ๊ฐ„์— ๋ฐ์ดํ„ฐ๋ฅผ ๋™์ผํ•˜๊ฒŒ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์šฉ์ดํ•˜๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด TextVectorization ๋ ˆ์ด์–ด๋ฅผ ๋ชจ๋ธ ๋‚ด์— ์ง์ ‘ ํฌํ•จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋‚˜์ค‘์— ์ด ๋‚ด์šฉ์„ ์•Œ์•„๋ด…๋‹ˆ๋‹ค.
def custom_standardization(input_data):
  lowercase = tf.strings.lower(input_data)
  stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
  return tf.strings.regex_replace(stripped_html,
                                  '[%s]' % re.escape(string.punctuation),
                                  '')
๋‹ค์Œ์œผ๋กœ TextVectorization ๋ ˆ์ด์–ด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ด ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ‘œ์ค€ํ™”, ํ† ํฐํ™” ๋ฐ ๋ฒกํ„ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ํ† ํฐ์— ๋Œ€ํ•ด ๊ณ ์œ ํ•œ ์ •์ˆ˜ ์ธ๋ฑ์Šค๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก output_mode๋ฅผ int๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ๋ถ„ํ•  ํ•จ์ˆ˜์™€ ์œ„์—์„œ ์ •์˜ํ•œ ์‚ฌ์šฉ์ž ์ง€์ • ํ‘œ์ค€ํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ช…์‹œ์  ์ตœ๋Œ€๊ฐ’์ธ sequence_length์™€ ๊ฐ™์ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ƒ์ˆ˜๋ฅผ ์ •์˜ํ•˜์—ฌ ๋ ˆ์ด์–ด๊ฐ€ ์‹œํ€€์Šค๋ฅผ ์ •ํ™•ํžˆ sequence_length ๊ฐ’์œผ๋กœ ์ฑ„์šฐ๊ฑฐ๋‚˜ ์ž๋ฅด๋„๋ก ํ•ฉ๋‹ˆ๋‹ค.
max_features = 10000
sequence_length = 250

vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=max_features,
    output_mode='int',
    output_sequence_length=sequence_length)
๋‹ค์Œ์œผ๋กœ, ์ „์ฒ˜๋ฆฌ ๋ ˆ์ด์–ด์˜ ์ƒํƒœ๋ฅผ ๋ฐ์ดํ„ฐ์„ธํŠธ์— ๋งž์ถ”๊ธฐ ์œ„ํ•ด adapt๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฉด ๋ชจ๋ธ์ด ๋ฌธ์ž์—ด ์ธ๋ฑ์Šค๋ฅผ ์ •์ˆ˜๋กœ ๋นŒ๋“œํ•ฉ๋‹ˆ๋‹ค. ์ฐธ๊ณ : adapt๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค(ํ…Œ์ŠคํŠธ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ •๋ณด๊ฐ€ ๋ˆ„์ถœ๋จ).
# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text)
์ด ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ผ๋ถ€ ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•œ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.
def vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return vectorize_layer(text), label

# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", raw_train_ds.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))
์œ„์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ๊ฐ ํ† ํฐ์€ ์ •์ˆ˜๋กœ ๋Œ€์ฒด๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ ˆ์ด์–ด์—์„œ .get_vocabulary()๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๊ฐ ์ •์ˆ˜์— ํ•ด๋‹นํ•˜๋Š” ํ† ํฐ(๋ฌธ์ž์—ด)์„ ์กฐํšŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
print("1287 ---> ", vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ", vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))
๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๊ฑฐ์˜ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ตœ์ข… ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋กœ ์ด์ „์— ์ƒ์„ฑํ•œ TextVectorization ๋ ˆ์ด์–ด๋ฅผ ํ›ˆ๋ จ, ๊ฒ€์ฆ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค.
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
์„ฑ๋Šฅ์„ ๋†’์ด๋„๋ก ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ตฌ์„ฑํ•˜๊ธฐ ๋‹ค์Œ์€ I/O๊ฐ€ ์ฐจ๋‹จ๋˜์ง€ ์•Š๋„๋ก ๋ฐ์ดํ„ฐ๋ฅผ ๋กœ๋“œํ•  ๋•Œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ๋ฉ”์„œ๋“œ์ž…๋‹ˆ๋‹ค. .cache()๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ๋””์Šคํฌ์—์„œ ๋กœ๋“œ๋œ ํ›„ ๋ฉ”๋ชจ๋ฆฌ์— ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด๊ด€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋™์•ˆ ๋ฐ์ดํ„ฐ์„ธํŠธ๋กœ ์ธํ•ด ๋ณ‘๋ชฉ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ๊ฐ€ ๋„ˆ๋ฌด ์ปค์„œ ๋ฉ”๋ชจ๋ฆฌ์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ์ด ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์„ฑ๋Šฅ์ด ๋›ฐ์–ด๋‚œ ์˜จ ๋””์Šคํฌ ์บ์‹œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŽ์€ ์ž‘์€ ํŒŒ์ผ๋ณด๋‹ค ์ฝ๊ธฐ๊ฐ€ ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. .prefetch()๋Š” ํ›ˆ๋ จ ์ค‘์— ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ฐ ๋ชจ๋ธ ์‹คํ–‰๊ณผ ๊ฒน์นฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ฑ๋Šฅ ๊ฐ€์ด๋“œ์—์„œ ๋‘ ๊ฐ€์ง€ ๋ฉ”์„œ๋“œ์™€ ๋ฐ์ดํ„ฐ๋ฅผ ๋””์Šคํฌ์— ์บ์‹ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๊ด€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
๋ชจ๋ธ ์ƒ์„ฑ ์ด์ œ ์‹ ๊ฒฝ๋ง์„ ๋งŒ๋“ค ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค.
embedding_dim = 16

model = tf.keras.Sequential([
  layers.Embedding(max_features + 1, embedding_dim),
  layers.Dropout(0.2),
  layers.GlobalAveragePooling1D(),
  layers.Dropout(0.2),
  layers.Dense(1)])

model.summary()
์ธต์„ ์ˆœ์„œ๋Œ€๋กœ ์Œ“์•„ ๋ถ„๋ฅ˜๊ธฐ(classifier)๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ์ฒซ ๋ฒˆ์งธ ์ธต์€ Embedding ์ธต์ž…๋‹ˆ๋‹ค. ์ด ์ธต์€ ์ •์ˆ˜๋กœ ์ธ์ฝ”๋”ฉ๋œ ๋‹จ์–ด๋ฅผ ์ž…๋ ฅ ๋ฐ›๊ณ  ๊ฐ ๋‹จ์–ด ์ธ๋ฑ์Šค์— ํ•ด๋‹นํ•˜๋Š” ์ž„๋ฒ ๋”ฉ ๋ฒกํ„ฐ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ์ด ๋ฒกํ„ฐ๋Š” ๋ชจ๋ธ์ด ํ›ˆ๋ จ๋˜๋ฉด์„œ ํ•™์Šต๋ฉ๋‹ˆ๋‹ค. ์ด ๋ฒกํ„ฐ๋Š” ์ถœ๋ ฅ ๋ฐฐ์—ด์— ์ƒˆ๋กœ์šด ์ฐจ์›์œผ๋กœ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ตœ์ข… ์ฐจ์›์€ (batch, sequence, embedding)์ด ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋‹ค์Œ GlobalAveragePooling1D ์ธต์€ sequence ์ฐจ์›์— ๋Œ€ํ•ด ํ‰๊ท ์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ ์ƒ˜ํ”Œ์— ๋Œ€ํ•ด ๊ณ ์ •๋œ ๊ธธ์ด์˜ ์ถœ๋ ฅ ๋ฒกํ„ฐ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ์ž…๋ ฅ์„ ๋‹ค๋ฃจ๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ๊ณ ์ • ๊ธธ์ด์˜ ์ถœ๋ ฅ ๋ฒกํ„ฐ๋Š” 16๊ฐœ์˜ ์€๋‹‰ ์œ ๋‹›์„ ๊ฐ€์ง„ ์™„์ „ ์—ฐ๊ฒฐ(fully-connected) ์ธต(Dense)์„ ๊ฑฐ์นฉ๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰ ์ธต์€ ํ•˜๋‚˜์˜ ์ถœ๋ ฅ ๋…ธ๋“œ(node)๋ฅผ ๊ฐ€์ง„ ์™„์ „ ์—ฐ๊ฒฐ ์ธต์ž…๋‹ˆ๋‹ค. sigmoid ํ™œ์„ฑํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 0๊ณผ 1 ์‚ฌ์ด์˜ ์‹ค์ˆ˜๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ ํ™•๋ฅ  ๋˜๋Š” ์‹ ๋ขฐ๋„๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์†์‹ค ํ•จ์ˆ˜์™€ ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ชจ๋ธ์ด ํ›ˆ๋ จํ•˜๋ ค๋ฉด ์†์‹ค ํ•จ์ˆ˜(loss function)๊ณผ ์˜ตํ‹ฐ๋งˆ์ด์ €(optimizer)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋Š” ์ด์ง„ ๋ถ„๋ฅ˜ ๋ฌธ์ œ์ด๊ณ  ๋ชจ๋ธ์ด ํ™•๋ฅ ์„ ์ถœ๋ ฅํ•˜๋ฏ€๋กœ(์ถœ๋ ฅ์ธต์˜ ์œ ๋‹›์ด ํ•˜๋‚˜์ด๊ณ  sigmoid ํ™œ์„ฑํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค), binary_crossentropy ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์„ ํƒํ•  ์ˆ˜ ์—†๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด mean_squared_error๋ฅผ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ binary_crossentropy๊ฐ€ ํ™•๋ฅ ์„ ๋‹ค๋ฃจ๋Š”๋ฐ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” ํ™•๋ฅ  ๋ถ„ํฌ ๊ฐ„์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์ธก์ •ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” ์ •๋‹ต์ธ ํƒ€๊นƒ ๋ถ„ํฌ์™€ ์˜ˆ์ธก ๋ถ„ํฌ ์‚ฌ์ด์˜ ๊ฑฐ๋ฆฌ์ž…๋‹ˆ๋‹ค.
model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
              optimizer='adam',
              metrics=tf.metrics.BinaryAccuracy(threshold=0.0))
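As a quick numeric illustration of what binary cross-entropy measures (plain Python, independent of Keras): the loss is small when the predicted probability is close to the true label and grows sharply as the prediction becomes confidently wrong.

```python
import math

def binary_crossentropy(y_true, p):
    """BCE for a single example: -(y*log(p) + (1-y)*log(1-p))."""
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# Confident correct prediction -> small loss
assert binary_crossentropy(1, 0.99) < 0.02
# Confident wrong prediction -> large loss
assert binary_crossentropy(1, 0.01) > 4.0
```

Keras applies the same formula per example and averages over the batch; with from_logits=True it first passes the raw logit through a sigmoid internally.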
๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ dataset ๊ฐœ์ฒด๋ฅผ fit ๋ฉ”์„œ๋“œ์— ์ „๋‹ฌํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•ฉ๋‹ˆ๋‹ค.
epochs = 10
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs)
๋ชจ๋ธ ํ‰๊ฐ€ํ•˜๊ธฐ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ™•์ธํ•ด ๋ณด์ฃ . ๋‘ ๊ฐœ์˜ ๊ฐ’์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค. ์†์‹ค(์˜ค์ฐจ๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” ์ˆซ์ž์ด๋ฏ€๋กœ ๋‚ฎ์„์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค)๊ณผ ์ •ํ™•๋„์ž…๋‹ˆ๋‹ค.
loss, accuracy = model.evaluate(test_ds)

print("Loss: ", loss)
print("Accuracy: ", accuracy)
์ด ์ƒ๋‹นํžˆ ๋‹จ์ˆœํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์€ ์•ฝ 86%์˜ ์ •ํ™•๋„๋ฅผ ๋‹ฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•๋„์™€ ์†์‹ค ๊ทธ๋ž˜ํ”„ ๊ทธ๋ฆฌ๊ธฐ model.fit()์€ ํ›ˆ๋ จ ์ค‘์— ๋ฐœ์ƒํ•œ ๋ชจ๋“  ๊ฒƒ์„ ๊ฐ€์ง„ ์‚ฌ์ „์„ ํฌํ•จํ•˜๋Š” History ๊ฐ์ฒด๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
history_dict = history.history
history_dict.keys()
๋„ค ๊ฐœ์˜ ํ•ญ๋ชฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ๊ณผ ๊ฒ€์ฆ ๋‹จ๊ณ„์—์„œ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋Š” ์ง€ํ‘œ๋“ค์ž…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์†์‹ค๊ณผ ๊ฒ€์ฆ ์†์‹ค์„ ๊ทธ๋ž˜ํ”„๋กœ ๊ทธ๋ ค ๋ณด๊ณ , ํ›ˆ๋ จ ์ •ํ™•๋„์™€ ๊ฒ€์ฆ ์ •ํ™•๋„๋„ ๊ทธ๋ž˜ํ”„๋กœ ๊ทธ๋ ค์„œ ๋น„๊ตํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
์ด ๊ทธ๋ž˜ํ”„์—์„œ ์ ์„ ์€ ํ›ˆ๋ จ ์†์‹ค๊ณผ ํ›ˆ๋ จ ์ •ํ™•๋„๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์‹ค์„ ์€ ๊ฒ€์ฆ ์†์‹ค๊ณผ ๊ฒ€์ฆ ์ •ํ™•๋„์ž…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์†์‹ค์€ ๊ฐ epoch๋งˆ๋‹ค ๊ฐ์†Œํ•˜๊ณ  ํ›ˆ๋ จ ์ •ํ™•์„ฑ์€ ๊ฐ epoch๋งˆ๋‹ค ์ฆ๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ๊ฒฝ์‚ฌ ํ•˜๊ฐ• ์ตœ์ ํ™”๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ์ด์™€ ๊ฐ™์ด ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ฐ˜๋ณต์—์„œ ์›ํ•˜๋Š” ์ˆ˜๋Ÿ‰์„ ์ตœ์†Œํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๊ฒ€์ฆ ์†์‹ค๊ณผ ๊ฒ€์ฆ ์ •ํ™•๋„์—์„œ๋Š” ๊ทธ๋ ‡์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ •ํ™•๋„ ์ด์ „์ด ํ”ผํฌ์ธ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๊ณผ๋Œ€์ ํ•ฉ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด์ „์— ๋ณธ ์  ์—†๋Š” ๋ฐ์ดํ„ฐ๋ณด๋‹ค ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ๋ชจ๋ธ์ด ๋” ์ž˜ ๋™์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ง€์ ๋ถ€ํ„ฐ๋Š” ๋ชจ๋ธ์ด ๊ณผ๋„ํ•˜๊ฒŒ ์ตœ์ ํ™”๋˜์–ด ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์—์„œ ์ผ๋ฐ˜ํ™”๋˜์ง€ ์•Š๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ํŠน์ • ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” ๊ณผ๋Œ€์ ํ•ฉ์„ ๋ง‰๊ธฐ ์œ„ํ•ด ๋‹จ์ˆœํžˆ ๊ฒ€์ฆ ์ •ํ™•๋„๊ฐ€ ๋” ์ด์ƒ ์ฆ๊ฐ€ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์— ํ›ˆ๋ จ์„ ์ค‘๋‹จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•œ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ tf.keras.callbacks.EarlyStopping ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ ์œ„์˜ ์ฝ”๋“œ์—์„œ๋Š” ๋ชจ๋ธ์— ํ…์ŠคํŠธ๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ์ „์— TextVectorization ๋ ˆ์ด์–ด๋ฅผ ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์›์‹œ ๋ฌธ์ž์—ด์„ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋ ค๋ฉด(์˜ˆ: ๋ฐฐํฌ๋ฅผ ๋‹จ์ˆœํ™”ํ•˜๊ธฐ ์œ„ํ•ด) ๋ชจ๋ธ ๋‚ด๋ถ€์— TextVectorization ๋ ˆ์ด์–ด๋ฅผ ํฌํ•จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋ฐฉ๊ธˆ ํ›ˆ๋ จํ•œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒˆ ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
export_model = tf.keras.Sequential([
  vectorize_layer,
  model,
  layers.Activation('sigmoid')
])

export_model.compile(
    loss=losses.BinaryCrossentropy(from_logits=False),
    optimizer="adam",
    metrics=['accuracy']
)

# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print(accuracy)
์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋กœ ์ถ”๋ก ํ•˜๊ธฐ ์ƒˆ๋กœ์šด ์˜ˆ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ์–ป์œผ๋ ค๋ฉด ๊ฐ„๋‹จํžˆ model.predict()๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.
examples = [
  "The movie was great!",
  "The movie was okay.",
  "The movie was terrible..."
]

export_model.predict(examples)
Understanding the pipeline design. The workflow implemented by the pipeline is defined using a Python-based domain-specific language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl.py file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables. Building and deploying the pipeline. Let us write the pipeline to disk:
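The environment-variable pattern described above can be sketched as follows (the variable names mirror those in the generated pipeline file; the setup lines are illustrative, in practice the launcher exports the values): each setting is read with os.getenv, with a default where one makes sense, so the same DSL file can be reused across projects.

```python
import os

# The launcher (e.g. a notebook cell or CI job) exports the settings:
os.environ["PIPELINE_NAME"] = "covertype"
os.environ.pop("MODEL_DISPLAY_NAME", None)  # left unset on purpose

# The DSL file then reads them back, with defaults where sensible:
PIPELINE_NAME = os.getenv("PIPELINE_NAME", "covertype")
DISPLAY_NAME = os.getenv("MODEL_DISPLAY_NAME", PIPELINE_NAME)  # falls back

assert (PIPELINE_NAME, DISPLAY_NAME) == ("covertype", "covertype")
```

Settings with no sensible default (like PIPELINE_ROOT or PROJECT) are read without one, so a missing variable surfaces as None rather than silently pointing at the wrong environment.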
%%writefile ./pipeline_vertex/pipeline_vertex_automl.py
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
"""Kubeflow Covertype Pipeline."""
import os

from google_cloud_pipeline_components.aiplatform import (
    AutoMLTabularTrainingJobRunOp,
    EndpointCreateOp,
    ModelDeployOp,
    TabularDatasetCreateOp,
)
from kfp.v2 import dsl

PIPELINE_ROOT = os.getenv("PIPELINE_ROOT")
PROJECT = os.getenv("PROJECT")
DATASET_SOURCE = os.getenv("DATASET_SOURCE")
PIPELINE_NAME = os.getenv("PIPELINE_NAME", "covertype")
DISPLAY_NAME = os.getenv("MODEL_DISPLAY_NAME", PIPELINE_NAME)
TARGET_COLUMN = os.getenv("TARGET_COLUMN", "Cover_Type")
SERVING_MACHINE_TYPE = os.getenv("SERVING_MACHINE_TYPE", "n1-standard-16")


@dsl.pipeline(
    name=f"{PIPELINE_NAME}-vertex-automl-pipeline",
    description=f"AutoML Vertex Pipeline for {PIPELINE_NAME}",
    pipeline_root=PIPELINE_ROOT,
)
def create_pipeline():
    dataset_create_task = TabularDatasetCreateOp(
        display_name=DISPLAY_NAME,
        bq_source=DATASET_SOURCE,
        project=PROJECT,
    )

    automl_training_task = AutoMLTabularTrainingJobRunOp(
        project=PROJECT,
        display_name=DISPLAY_NAME,
        optimization_prediction_type="classification",
        dataset=dataset_create_task.outputs["dataset"],
        target_column=TARGET_COLUMN,
    )

    endpoint_create_task = EndpointCreateOp(
        project=PROJECT,
        display_name=DISPLAY_NAME,
    )

    model_deploy_task = ModelDeployOp(  # pylint: disable=unused-variable
        model=automl_training_task.outputs["model"],
        endpoint=endpoint_create_task.outputs["endpoint"],
        deployed_model_display_name=DISPLAY_NAME,
        dedicated_resources_machine_type=SERVING_MACHINE_TYPE,
        dedicated_resources_min_replica_count=1,
        dedicated_resources_max_replica_count=1,
    )
notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_online_predictions.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Use the CLI compiler to compile the pipeline We compile the pipeline from the Python file we generated into a JSON description using the following command:
PIPELINE_JSON = "covertype_automl_vertex_pipeline.json"

!dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl.py --output $PIPELINE_JSON
Deploy the pipeline package
aiplatform.init(project=PROJECT, location=REGION)

pipeline = aiplatform.PipelineJob(
    display_name="automl_covertype_kfp_pipeline",
    template_path=PIPELINE_JSON,
    enable_caching=True,
)

pipeline.run()
More detailed information about installing TensorFlow can be found at https://www.tensorflow.org/install/.
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging

import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
model = hub.load(module_url)
print("module %s loaded" % module_url)

def embed(input):
  return model(input)

#@title Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
    "Universal Sentence Encoder embeddings also support short paragraphs. "
    "There is no hard limit on how long the paragraph is. Roughly, the longer "
    "the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]

# Reduce logging output.
logging.set_verbosity(logging.ERROR)

message_embeddings = embed(messages)

for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
  print("Message: {}".format(messages[i]))
  print("Embedding size: {}".format(len(message_embedding)))
  message_embedding_snippet = ", ".join(
      (str(x) for x in message_embedding[:3]))
  print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
.ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb
mathnathan/notebooks
mit
Similarity Visualized. Here we show the similarity in a heat map. The final graph is an 11x11 matrix (one row and column per message below) where each entry [i, j] is colored based on the inner product of the encodings for sentence i and j.
messages = [
    # Smartphones
    "I like my phone",
    "My phone is not good.",
    "Your cellphone looks great.",

    # Weather
    "Will it snow tomorrow?",
    "Recently a lot of hurricanes have hit the US",
    "Global warming is real",

    # Food and health
    "An apple a day, keeps the doctors away",
    "Eating strawberries is healthy",
    "Is paleo better than keto?",

    # Asking about age
    "How old are you?",
    "How many years have you been alive?",
]

run_and_plot(messages)
Evaluation: STS (Semantic Textual Similarity) Benchmark. The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements. Download data
import pandas
import scipy
import math
import csv

sts_dataset = tf.keras.utils.get_file(
    fname="Stsbenchmark.tar.gz",
    origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
    extract=True)
sts_dev = pandas.read_table(
    os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"),
    error_bad_lines=False,
    skip_blank_lines=True,
    usecols=[4, 5, 6],
    names=["sim", "sent_1", "sent_2"])
sts_test = pandas.read_table(
    os.path.join(
        os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"),
    error_bad_lines=False,
    quoting=csv.QUOTE_NONE,
    skip_blank_lines=True,
    usecols=[4, 5, 6],
    names=["sim", "sent_1", "sent_2"])

# cleanup some NaN values in sts_dev
sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]]
Evaluate Sentence Embeddings
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}

def run_sts_benchmark(batch):
  """Returns the similarity scores."""
  sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
  sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
  cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
  clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
  scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi
  return scores

dev_scores = sts_data['sim'].tolist()
scores = []
for batch in np.array_split(sts_data, 10):
  scores.extend(run_sts_benchmark(batch))

pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
    pearson_correlation[0], pearson_correlation[1]))
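The score used above maps cosine similarity to angular similarity, 1 - arccos(cos_sim)/pi, which spreads out scores near 1 better than raw cosine. A small TensorFlow-free sketch of the same computation for a single pair of vectors:

```python
import math

def angular_similarity(u, v):
    """1 - arccos(cos_sim)/pi for two vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / norm))  # clip, like tf.clip_by_value above
    return 1.0 - math.acos(cos) / math.pi

assert angular_similarity([1.0, 0.0], [1.0, 0.0]) == 1.0          # identical
assert abs(angular_similarity([1.0, 0.0], [0.0, 1.0]) - 0.5) < 1e-9  # orthogonal
```

Opposite-direction vectors score 0.0, so the measure is bounded in [0, 1].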
The Run Engine processes messages A message has four parts: a command string, an object, a tuple of positional arguments, and a dictionary of keyword arguments.
Msg('set', motor, {'pos': 5})
Msg('trigger', motor)
Msg('read', motor)

RE = RunEngine()

def simple_scan(motor):
    "Set, trigger, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('read', motor)

RE.run(simple_scan(motor))
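The four-part message structure can be sketched as a namedtuple (an illustration only; bluesky's real Msg class may differ in detail):

```python
from collections import namedtuple

# command string, target object, positional args, keyword args
Msg = namedtuple('Msg', ['command', 'obj', 'args', 'kwargs'])

def msg(command, obj=None, *args, **kwargs):
    """Convenience constructor that bundles extra arguments into the tuple."""
    return Msg(command, obj, args, kwargs)

m = msg('set', 'motor', {'pos': 5})
assert m.command == 'set'
assert m.args == ({'pos': 5},)
assert m.kwargs == {}
```

Because a message is just data, the Run Engine can dispatch on the command string and hand the object and arguments to the appropriate handler.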
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
Moving a motor and reading it back is boring. Let's add a detector.
def simple_scan2(motor, det):
    "Set, trigger motor, trigger detector, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(simple_scan2(motor, det))
There is two-way communication between the message generator and the Run Engine. Above we see the three messages with the responses they generated from the RunEngine. We can use these responses to make our scan adaptive.
def adaptive_scan(motor, det, threshold):
    """Set, trigger, read until the detector reads intensity < threshold"""
    i = 0
    while True:
        print("LOOP %d" % i)
        yield Msg('set', motor, {'pos': i})
        yield Msg('trigger', motor)
        yield Msg('trigger', det)
        reading = yield Msg('read', det)
        if reading['det']['value'] < threshold:
            print('DONE')
            break
        i += 1

RE.run(adaptive_scan(motor, det, 0.2))
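The two-way communication that makes the adaptive scan possible is just Python's generator send() protocol. A toy run-engine loop (not the real RunEngine) can make the mechanism concrete:

```python
def toy_run_engine(plan, detector_values):
    """Drive a plan, answering each 'read' message with the next reading."""
    readings = iter(detector_values)
    response = None  # first send() into a fresh generator must be None
    n_msgs = 0
    while True:
        try:
            msg = plan.send(response)  # resume the plan, delivering a response
        except StopIteration:
            break
        n_msgs += 1
        response = next(readings) if msg == 'read' else None
    return n_msgs

def threshold_plan(threshold):
    """Keep reading until a value below the threshold comes back."""
    while True:
        reading = yield 'read'  # value arrives via send()
        if reading < threshold:
            return  # generator exit -> StopIteration ends the run

# Three reads: the first value below 0.2 is the third one.
assert toy_run_engine(threshold_plan(0.2), [0.9, 0.5, 0.1]) == 3
```

The real RunEngine does the same thing with Msg objects instead of bare strings, routing each command to hardware and sending the result back into the plan.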
Control timing with 'sleep' and 'wait' The 'sleep' command is as simple as it sounds.
def sleepy_scan(motor, det):
    "Set, trigger motor, sleep for a fixed time, trigger detector, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('sleep', None, 2)  # units: seconds
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(sleepy_scan(motor, det))
The 'wait' command is more powerful. It watches for Movers (e.g., motor) to report being done. Wait for one motor to be done moving
def wait_one(motor, det):
    "Set, trigger, read"
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor, block_group='A')  # Add motor to group 'A'.
    yield Msg('wait', None, 'A')  # Wait for everything in group 'A' to report done.
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(wait_one(motor, det))
Notice, in the log, that the response to wait is the set of Movers the scan was waiting on. Wait for two motors to both be done moving
def wait_multiple(motors, det):
    "Set motors, trigger all motors, wait for all motors to move."
    for motor in motors:
        yield Msg('set', motor, {'pos': 5})
        yield Msg('trigger', motor, block_group='A')  # Trigger each motor and add it to group 'A'.
    yield Msg('wait', None, 'A')  # Wait for everything in group 'A' to report done.
    yield Msg('trigger', det)
    yield Msg('read', det)

motor1 = Mover('motor1', ['pos'])
motor2 = Mover('motor2', ['pos'])

RE.run(wait_multiple([motor1, motor2], det))
Advanced Example: Wait for different groups of motors at different points in the run If the 'A' bit seems pointless, the payoff is here. We trigger all the motors at once, wait for the first two, read, wait for the last one, and read again. This is merely meant to show that complex control flow is possible.
def wait_complex(motors, det):
    "Set motors, trigger motors, wait for all motors to move."
    # Same as above...
    for motor in motors[:-1]:
        yield Msg('set', motor, {'pos': 5})
        yield Msg('trigger', motor, block_group='A')
    # ...but put the last motor in a separate group.
    yield Msg('set', motors[-1], {'pos': 5})
    yield Msg('trigger', motors[-1], block_group='B')
    yield Msg('wait', None, 'A')  # Wait for everything in group 'A' to report done.
    yield Msg('trigger', det)
    yield Msg('read', det)
    yield Msg('wait', None, 'B')  # Wait for everything in group 'B' to report done.
    yield Msg('trigger', det)
    yield Msg('read', det)

motor3 = Mover('motor3', ['pos'])
RE.run(wait_complex([motor1, motor2, motor3], det))
Runs can be paused and safely resumed or aborted "Hard Pause": Stop immediately. On resume, rerun messages from last 'checkpoint' command. The Run Engine does not guess where it is safe to resume. The 'pause' command must follow a 'checkpoint' command, indicating a safe point to go back to in the event of a hard pause.
def conditional_hard_pause(motor, det):
    for i in range(5):
        yield Msg('checkpoint')
        yield Msg('set', motor, {'pos': i})
        yield Msg('trigger', motor)
        yield Msg('trigger', det)
        reading = yield Msg('read', det)
        if reading['det']['value'] < 0.2:
            yield Msg('pause', hard=True)

RE.run(conditional_hard_pause(motor, det))
The scan thread sleeps and waits for more user input, to resume or abort. (On resume, this example will obviously hit the same pause condition again --- nothing has changed.)
RE.state
RE.resume()
RE.state
RE.abort()

def conditional_soft_pause(motor, det):
    for i in range(5):
        yield Msg('checkpoint')
        yield Msg('set', motor, {'pos': i})
        yield Msg('trigger', motor)
        yield Msg('trigger', det)
        reading = yield Msg('read', det)
        if reading['det']['value'] < 0.2:
            yield Msg('pause', hard=False)
            # If a soft pause is requested, the Run Engine will
            # still execute these messages before pausing.
            yield Msg('set', motor, {'pos': i + 0.5})
            yield Msg('trigger', motor)

RE.run(conditional_soft_pause(motor, det))
Other threads can request a pause. Calling RE.request_pause(hard=True) or RE.request_pause(hard=False) has the same effect as a 'pause' command. SIGINT (Ctrl+C) is reliably caught before each message is processed, even across threads. SIGINT triggers a hard pause. If no checkpoint commands have been issued, Ctrl+C causes the Run Engine to abort.
RE.run(sleepy_scan(motor, det))
If the scan contains checkpoints, it's possible to resume after Ctrl+C.
def sleepy_scan_checkpoints(motor, det):
    "Set, trigger motor, sleep for a fixed time, trigger detector, read"
    yield Msg('checkpoint')
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('sleep', None, 2)  # units: seconds
    yield Msg('trigger', det)
    yield Msg('read', det)

RE.run(sleepy_scan_checkpoints(motor, det))
RE.resume()
Threading is optional -- switch it off for easier debugging Again, we'll interrupt the scan. We get exactly the same result, but this time we see a full Traceback.
RE.run(simple_scan(motor), use_threading=False)
Any functions can subscribe to the live data stream (e.g., live plotting) In the examples above, the runs have been emitting RunStart and RunStop Documents, but no Events or Event Descriptors. We will add those now. Emitting Events and Event Descriptors The 'create' and 'save' commands collect all the reads between them into one Event. If that particular set of objects has never been bundled into an Event during this run, then an Event Descriptor is also created. All four Documents -- RunStart, RunStop, Event, and EventDescriptor -- are simply Python dictionaries.
def simple_scan_saving(motor, det):
    "Set, trigger, read"
    yield Msg('create')
    yield Msg('set', motor, {'pos': 5})
    yield Msg('trigger', motor)
    yield Msg('read', motor)
    yield Msg('read', det)
    yield Msg('save')

RE.run(simple_scan_saving(motor, det))
Very Simple Example Any user function that accepts a Python dictionary can be registered as a "consumer" of these Event Documents. Here's a toy example.
def print_event_time(doc): print('===== EVENT TIME:', doc['time'], '=====')
To use this consumer function during a run:
RE.run(simple_scan_saving(motor, det), subscriptions={'event': print_event_time})
To use it by default on every run for this instance of the Run Engine:
token = RE.subscribe('event', print_event_time)
token
The output token, an integer, can be used to unsubscribe later.
RE.unsubscribe(token)
Live Plotting First, we'll create some axes. The code below updates the plot while the run is ongoing.
%matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()

def stepscan(motor, detector):
    for i in range(-5, 5):
        yield Msg('create')
        yield Msg('set', motor, {'pos': i})
        yield Msg('trigger', motor)
        yield Msg('trigger', det)
        yield Msg('read', motor)
        yield Msg('read', detector)
        yield Msg('save')

def live_scalar_plotter(ax, y, x):
    x_data, y_data = [], []
    line, = ax.plot([], [], 'ro', markersize=10)
    def update_plot(doc):
        # Update with the latest data.
        x_data.append(doc['data'][x]['value'])
        y_data.append(doc['data'][y]['value'])
        line.set_data(x_data, y_data)
        # Rescale and redraw.
        ax.relim(visible_only=True)
        ax.autoscale_view(tight=True)
        ax.figure.canvas.draw()
    return update_plot

# Point the function to our axes above, and specify what to plot.
my_plotter = live_scalar_plotter(ax, 'det', 'pos')

RE.run(stepscan(motor, det), subscriptions={'event': my_plotter})
Saving Documents to metadatastore Mission-critical consumers can be run on the scan thread, where they will block the scan until they return from processing the emitted Documents. This should not be used for computationally heavy tasks like visualization. Its only intended use is for saving data to metadatastore, but users can register any consumers they want, at risk of slowing down the scan. RE._register_scan_callback('event', some_critical_func) The convenience function register_mds registers metadatastore's four insert_* functions to consume their four respective Documents. These are registered on the scan thread, so data is guaranteed to be saved in metadatastore.
%run register_mds.py register_mds(RE)
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
We can verify that this worked by loading this one-point scan from the DataBroker and displaying the data using DataMuxer.
RE.run(simple_scan_saving(motor, det)) from dataportal import DataBroker as db header = db[-1] header from dataportal import DataMuxer as dm dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
Flyscan prototype Assumes that flyscans are managed by an object which has three methods: - describe : same as for everything else - kickoff : method which starts the flyscan. This should be a fast-to-execute function that is assumed to just poke at some external hardware. - collect : collects the data from the flyscan. This method yields partial event documents. The 'time' and 'data' fields should be filled in; the rest will be filled in by the run engine.
flyer = FlyMagic('flyer', 'theta', 'sin')

def fly_scan(flyer):
    yield Msg('kickoff', flyer)
    yield Msg('collect', flyer)
    yield Msg('kickoff', flyer)
    yield Msg('collect', flyer)

# Note that there is no 'create'/'save' here. That is managed by 'collect'.
RE.run(fly_scan(flyer), use_threading=False)
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
The fly scan results are in metadatastore....
header = db[-1] header res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe() res fig, ax = plt.subplots() ax.cla() res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe() ax.plot(res['sin'], label='sin') ax.plot(res['theta'], label='theta') ax.legend() fig.canvas.draw()
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
Fly scan + stepscan Do a step scan with one motor and a fly scan with another
def fly_step(flyer, motor): for x in range(-5, 5): # step yield Msg('create') yield Msg('set', motor, {'pos': x}) yield Msg('trigger', motor) yield Msg('read', motor) yield Msg('save') # fly yield Msg('kickoff', flyer) yield Msg('collect', flyer) flyer.reset() RE.run(fly_step(flyer, motor)) header = db[-1] header mux = dm.from_events(db.fetch_events(header)) res = mux.bin_on('sin', interpolation={'pos':'nearest'}) %matplotlib notebook import matplotlib.pyplot as plt fig, ax = plt.subplots() sc = ax.scatter(res.theta.val.values, res.pos.val.values, c=res.sin.values, s=150, cmap='RdBu') cb = fig.colorbar(sc) cb.set_label('I [arb]') ax.set_xlabel(r'$\theta$ [rad]') ax.set_ylabel('pos [arb]') ax.set_title('async flyscan + step scan') fig.canvas.draw() res
examples/Blue Sky Demo.ipynb
sameera2004/bluesky
bsd-3-clause
Matrices
A = [[1, 2, 3],  # A has 2 rows and 3 columns
     [4, 5, 6]]

B = [[1, 2],     # B has 3 rows and 2 columns
     [3, 4],
     [5, 6]]

def shape(A):
    num_rows = len(A)
    num_cols = len(A[0]) if A else 0
    return num_rows, num_cols

def get_row(A, i):
    return A[i]

def get_column(A, j):
    return [A_i[j] for A_i in A]

def make_matrix(num_rows, num_cols, entry_fn):
    """returns a num_rows x num_cols matrix
    whose (i, j)th entry is generated by function entry_fn(i, j)"""
    return [[entry_fn(i, j) for j in range(num_cols)]
            for i in range(num_rows)]

def is_diagonal(i, j):
    return 1 if i == j else 0

identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
Chapter4.ipynb
nifannn/data-science-from-scratch
mit
Defining Terms

All definitions taken from CIRI data documentation. The indicators included in this project rank each country from 0 to 3, with 0 being the least respect for rights and 3 being the most. This level of respect is measured by the extent to which rights are guaranteed within law and enforced by the government. In a country rated 0, one might find systematic discrimination against women built into the law. In a country rated 3, one would find virtually all rights for women guaranteed by law and also enforced by the government.

Women's Economic Rights

Women's economic rights include a number of internationally recognized rights. These rights include:

- Equal pay for equal work
- Free choice of profession or employment without the need to obtain a husband or male relative's consent
- The right to gainful employment without the need to obtain a husband or male relative's consent
- Equality in hiring and promotion practices
- Job security (maternity leave, unemployment benefits, no arbitrary firing or layoffs, etc.)
- Non-discrimination by employers
- The right to be free from sexual harassment in the workplace
- The right to work at night
- The right to work in occupations classified as dangerous
- The right to work in the military and the police force

Women's Political Rights

Women's political rights include a number of internationally recognized rights. These rights include:

- The right to vote
- The right to run for political office
- The right to hold elected and appointed government positions
- The right to join political parties
- The right to petition government officials

Women's Rights in Central America in 2003 vs. 2010

We see below that in 2003, Costa Rica led the region in its respect for women's rights with a score of 2 on economic rights and a score of 3 on political rights. Honduras and Nicaragua score similarly to their neighbors on economic rights, but receive full scores on guaranteeing political rights for their female citizens.
centamciridf['Women Economic Rights'][centamciridf['Year']==2003].plot(kind='bar', title="Respect for Women's Economic Rights in Central America (2003)", color="mediumslateblue") centamciridf['Women Political Rights'][centamciridf['Year']==2003].plot(kind='bar', title="Respect for Women's Political Rights in Central America (2003)", color="deepskyblue")
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
Looking at these indicators for the same countries in 2010, one sees that Costa Rica again leads the pack, with a full score on both economic and political rights for women. Honduras and Nicaragua fell in their respect for women's political rights, leaving them on par with the region. Belize and Panama increased their respect for women's economic rights.
centamciridf['Women Economic Rights'][centamciridf['Year']==2010].plot(kind='bar', title="Respect for Women's Economic Rights in Central America (2010)", color="mediumslateblue") centamciridf['Women Political Rights'][centamciridf['Year']==2010].plot(kind='bar', title="Respect for Women's Political Rights in Central America (2010)", color="deepskyblue")
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
Based on the findings above, let's drill down on the shining star in the pack (Costa Rica), a country that regressed (Nicaragua), and a middle-of-the-road performer (El Salvador) to see what their performance over time has been on these indicators.

Costa Rica

Costa Rica has been consistently high over time in both respect for political and economic rights, with a stronger record on political rights.
costaricaciridf = centamciridf.reset_index() costaricaciridf = costaricaciridf.drop(costaricaciridf.columns[0],axis=1) costaricaciridf = costaricaciridf[costaricaciridf['Country']=='Costa Rica'] costaricaciridf['Women Political Rights'].plot(kind='bar', title="Respect for Women's Political Rights in Costa Rica (2003-2011)", color="deepskyblue") costaricaciridf['Women Economic Rights'].plot(kind='bar', title="Respect for Women's Economic Rights in Costa Rica (2003-2011)", color="mediumslateblue")
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
Nicaragua Looking at Nicaragua over time, one sees that its drop in respect for political rights from 2003 stayed consistent until 2008 when it spiked again and then dropped through the end of the period covered by this data. Interestingly, its spike in economic rights over the 8 year period took place in 2007, the year before its spike in political rights. Whether there is a correlation between women's economic rights and political rights (one facilitating the other) requires further examination.
nicciridf = centamciridf.reset_index() nicciridf = nicciridf.drop(nicciridf.columns[0],axis=1) nicciridf = nicciridf[nicciridf['Country']=='Nicaragua'] nicciridf['Women Political Rights'].plot(kind='bar', title="Respect for Women's Political Rights in Nicaragua(2003-2011)", color="deepskyblue") nicciridf['Women Economic Rights'].plot(kind='bar', title="Respect for Women's Economic Rights in Nicaragua (2003-2011)", color="mediumslateblue")
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
El Salvador
salvciridf = centamciridf.reset_index() salvciridf = salvciridf.drop(salvciridf.columns[0],axis=1) salvciridf = salvciridf[salvciridf['Country']=='El Salvador'] salvciridf['Women Political Rights'].plot(kind='bar', title="Respect for Women's Political Rights in El Salvador(2003-2011)", color="deepskyblue") salvciridf['Women Economic Rights'].plot(kind='bar', title="Respect for Women's Economic Rights in El Salvador (2003-2011)", color="mediumslateblue")
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
GDP Growth: Costa Rica, Nicaragua, El Salvador Now, let's turn to the question of whether a high level of respect for women's rights has translated into economic wellbeing at a country level. To do this, we look at GDP growth rather than absolute GDP in order to more easily compare countries despite differences in GDP size.
centamgdpwbdf = centamgdpwbdf.reset_index() # getting rid of multiindex for ease of graphing and comparison centamgdpwbdf = centamgdpwbdf.rename(columns={'NY.GDP.MKTP.KD.ZG':'Annual GDP Growth'}) costaricawbdf = centamgdpwbdf[centamgdpwbdf['country']=='Costa Rica'] costaricawbdf = costaricawbdf.drop(costaricawbdf.columns[0],axis=1) costaricawbdf['Annual GDP Growth'].plot(kind='bar', title="GDP Growth in Costa Rica (2003-2011)", color="lightgreen") nicwbdf = centamgdpwbdf[centamgdpwbdf['country']=='Nicaragua'] nicwbdf = nicwbdf.drop(nicwbdf.columns[0],axis=1) nicwbdf['Annual GDP Growth'].plot(kind='bar', title="GDP Growth in Nicaragaua (2003-2011)", color="tomato") salvwbdf = centamgdpwbdf[centamgdpwbdf['country']=='El Salvador'] salvwbdf = salvwbdf.drop(salvwbdf.columns[0],axis=1) salvwbdf['Annual GDP Growth'].plot(kind='bar', title="GDP Growth in El Salvador (2003-2011)", color="lemonchiffon")
MBA_S16/Chao-WomenRightsGDP.ipynb
NYUDataBootcamp/Projects
mit
Reconstructions in the two bit system I would expect (and hope) that, for all possible hidden configurations, when the visible is in state [1,1] the reconstructions produced would be either v_a = [1,0], v_b = [0,1] or vice versa.
results = performance(np.array([1,1]), a)
Max/ORBM-Inference-XOR.ipynb
garibaldu/multicauseRBM
mit
Excellent! In all cases it falls into a stable visible configuration and successfully separates the visibles.
def plot_avg_results_for_visible_pattern(v, sampler): results = performance(v, sampler) avgd_results = {} for key in results: results[key] for inner_key in results[key]: if inner_key not in avgd_results: avgd_results[inner_key] = results[key][inner_key] else: avgd_results[inner_key] += results[key][inner_key] keys = [] vals = [] for key in avgd_results: keys.append(key) vals.append(avgd_results[key]) plt.title("Avg reconstruction given v:{}".format(v)) plt.bar(range(len(vals)), vals, align='center') plt.xticks(range(len(keys)), keys, rotation='vertical') plt.show() def plot_avg_results_for_hidden_pattern(h_a, h_b, v, sampler): results = {} base_key = "v_a{} v_b{}" for count in range(100): v_a, v_b = sampler.v_to_v(h_a, h_b, v, num_gibbs= 1000) inner_key = base_key.format(v_a, v_b) if inner_key not in results: results[inner_key] = 1 else: results[inner_key] = results[inner_key] + 1 avgd_results = {} for inner_key in results: if inner_key not in avgd_results: avgd_results[inner_key] = results[inner_key] else: avgd_results[inner_key] += results[inner_key] keys = [] vals = [] for key in avgd_results: keys.append(key) vals.append(avgd_results[key]) plt.title("Avg reconstruction given v:{}".format(v)) plt.bar(range(len(vals)), vals, align='center') plt.xticks(range(len(keys)), keys, rotation='vertical') plt.show() plot_avg_results_for_visible_pattern(np.array([1,1]), a) plot_avg_results_for_visible_pattern(np.array([1,1]), WrappedVanillaSampler(dot))
Max/ORBM-Inference-XOR.ipynb
garibaldu/multicauseRBM
mit
Yussssssssss! This is perfect: over all cases for the visible pattern it can separate it, creating excellent reconstructions the majority of the time. We already know the vanilla RBM will fail to do this. Still, I graphed the output of one of the visible reconstructions of the vanilla sampler.
plot_avg_results_for_visible_pattern(np.array([1,0]), a) plot_avg_results_for_visible_pattern(np.array([0,1]), a) plot_avg_results_for_visible_pattern(np.array([1,0]), WrappedVanillaSampler(dot)) plot_avg_results_for_visible_pattern(np.array([0,1]), WrappedVanillaSampler(dot))
Max/ORBM-Inference-XOR.ipynb
garibaldu/multicauseRBM
mit
In the cell below I check that the code still holds together given more hidden nodes than visibles. It does.
training_set = np.eye(2) dot = RBM(3,2,1) s = VanillaSampler(dot) t = VanillaTrainier(dot, s) t.train(10000, training_set) h_a = np.array([1,0,0]) h_b = np.array([0,1,0]) v = np.array([1,1]) plot_avg_results_for_hidden_pattern(h_a, h_b,np.array([1,1]) , ApproximatedSampler(dot.weights,dot.weights,0,0)) plot_avg_results_for_hidden_pattern(h_a, h_b,np.array([1,0]) , ApproximatedSampler(dot.weights,dot.weights,0,0)) plot_avg_results_for_hidden_pattern(h_a, h_b,np.array([0,1]) , ApproximatedSampler(dot.weights,dot.weights,0,0)) plot_avg_results_for_hidden_pattern(h_a, h_b,np.array([0,0]) , ApproximatedSampler(dot.weights,dot.weights,0,0)) training_set = np.eye(3) dot = RBM(2,3,1) s = VanillaSampler(dot) t = VanillaTrainier(dot, s) t.train(10000, training_set) plot_avg_results_for_visible_pattern(np.array([1,1]), a) plot_avg_results_for_visible_pattern(np.array([1,1]), WrappedVanillaSampler(dot))
Max/ORBM-Inference-XOR.ipynb
garibaldu/multicauseRBM
mit
Climate Data Time-Series

We will be using the Jena Climate dataset recorded by the Max Planck Institute for Biogeochemistry. The dataset consists of 14 features such as temperature, pressure, and humidity, recorded once per 10 minutes.

Location: Weather Station, Max Planck Institute for Biogeochemistry in Jena, Germany

Time-frame Considered: Jan 10, 2009 - December 31, 2016

The table below shows the column names, their value formats, and their description.

Index| Features       |Format             |Description
-----|----------------|-------------------|-----------------------
1    |Date Time       |01.01.2009 00:10:00|Date-time reference
2    |p (mbar)        |996.52             |The pascal SI derived unit of pressure used to quantify internal pressure. Meteorological reports typically state atmospheric pressure in millibars.
3    |T (degC)        |-8.02              |Temperature in Celsius
4    |Tpot (K)        |265.4              |Temperature in Kelvin
5    |Tdew (degC)     |-8.9               |Temperature in Celsius relative to humidity. Dew Point is a measure of the absolute amount of water in the air; the DP is the temperature at which the air cannot hold all the moisture in it and water condenses.
6    |rh (%)          |93.3               |Relative Humidity is a measure of how saturated the air is with water vapor; the %RH determines the amount of water contained within collection objects.
7    |VPmax (mbar)    |3.33               |Saturation vapor pressure
8    |VPact (mbar)    |3.11               |Vapor pressure
9    |VPdef (mbar)    |0.22               |Vapor pressure deficit
10   |sh (g/kg)       |1.94               |Specific humidity
11   |H2OC (mmol/mol) |3.12               |Water vapor concentration
12   |rho (g/m**3)    |1307.75            |Air density
13   |wv (m/s)        |1.03               |Wind speed
14   |max. wv (m/s)   |1.75               |Maximum wind speed
15   |wd (deg)        |152.3              |Wind direction in degrees
from zipfile import ZipFile import os uri = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip" zip_path = keras.utils.get_file(origin=uri, fname="jena_climate_2009_2016.csv.zip") zip_file = ZipFile(zip_path) zip_file.extractall() csv_path = "jena_climate_2009_2016.csv" df = pd.read_csv(csv_path)
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
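For reference, the Date Time format shown in the table above can be parsed with the standard library alone. A small self-contained sketch, separate from the notebook's pandas pipeline:

```python
from datetime import datetime

# Parse a timestamp in the dataset's "dd.mm.yyyy HH:MM:SS" format.
ts = datetime.strptime("01.01.2009 00:10:00", "%d.%m.%Y %H:%M:%S")
print(ts.year, ts.month, ts.day)   # 2009 1 1
print(ts.minute)                   # 10

# Consecutive rows are 10 minutes apart, i.e. 6 observations per hour.
ts2 = datetime.strptime("01.01.2009 00:20:00", "%d.%m.%Y %H:%M:%S")
print((ts2 - ts).total_seconds())  # 600.0
```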
Raw Data Visualization To give us a sense of the data we are working with, each feature has been plotted below. This shows the distinct pattern of each feature over the time period from 2009 to 2016. It also shows where anomalies are present, which will be addressed during normalization.
titles = [ "Pressure", "Temperature", "Temperature in Kelvin", "Temperature (dew point)", "Relative Humidity", "Saturation vapor pressure", "Vapor pressure", "Vapor pressure deficit", "Specific humidity", "Water vapor concentration", "Airtight", "Wind speed", "Maximum wind speed", "Wind direction in degrees", ] feature_keys = [ "p (mbar)", "T (degC)", "Tpot (K)", "Tdew (degC)", "rh (%)", "VPmax (mbar)", "VPact (mbar)", "VPdef (mbar)", "sh (g/kg)", "H2OC (mmol/mol)", "rho (g/m**3)", "wv (m/s)", "max. wv (m/s)", "wd (deg)", ] colors = [ "blue", "orange", "green", "red", "purple", "brown", "pink", "gray", "olive", "cyan", ] date_time_key = "Date Time" def show_raw_visualization(data): time_data = data[date_time_key] fig, axes = plt.subplots( nrows=7, ncols=2, figsize=(15, 20), dpi=80, facecolor="w", edgecolor="k" ) for i in range(len(feature_keys)): key = feature_keys[i] c = colors[i % (len(colors))] t_data = data[key] t_data.index = time_data t_data.head() ax = t_data.plot( ax=axes[i // 2, i % 2], color=c, title="{} - {}".format(titles[i], key), rot=25, ) ax.legend([titles[i]]) plt.tight_layout() show_raw_visualization(df)
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
Data Preprocessing

Here we are picking ~300,000 data points for training. An observation is recorded every 10 minutes, i.e. 6 times per hour. We will resample to one point per hour, since no drastic change is expected within 60 minutes. We do this via the sampling_rate argument of the timeseries_dataset_from_array utility.

We are tracking data from the past 720 timestamps (720/6 = 120 hours). This data will be used to predict the temperature after 72 timestamps (72/6 = 12 hours).

Since every feature has values with varying ranges, we normalize the feature values before training a neural network: we subtract the mean and divide by the standard deviation of each feature, both computed on the training split.

71.5% of the data will be used to train the model, i.e. 300,693 rows. split_fraction can be changed to alter this percentage.

The model is shown data for the first 5 days, i.e. 720 observations, which are sampled at one per hour. The temperature after 72 observations (12 hours * 6 observations per hour) will be used as a label.
split_fraction = 0.715 train_split = int(split_fraction * int(df.shape[0])) step = 6 past = 720 future = 72 learning_rate = 0.001 batch_size = 256 epochs = 10 def normalize(data, train_split): data_mean = data[:train_split].mean(axis=0) data_std = data[:train_split].std(axis=0) return (data - data_mean) / data_std
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
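A plain-Python sketch of the two preprocessing ideas above, hourly resampling and z-score normalization, on toy numbers rather than the real dataset (the notebook itself does this with pandas and timeseries_dataset_from_array):

```python
# 1) Resampling: observations arrive every 10 minutes (6 per hour);
#    keeping every 6th point yields one observation per hour.
ten_minute_readings = list(range(24))   # 24 readings = 4 hours of data
hourly = ten_minute_readings[::6]
print(hourly)  # [0, 6, 12, 18]

# 2) Z-score normalization: subtract the mean and divide by the
#    standard deviation, both computed on the training split only.
def normalize(values, train_split):
    train = values[:train_split]
    mean = sum(train) / len(train)
    std = (sum((v - mean) ** 2 for v in train) / len(train)) ** 0.5
    return [(v - mean) / std for v in values]

normed = normalize([1.0, 2.0, 3.0, 4.0], train_split=3)
print(normed[1])  # 0.0 (the training-split mean is 2.0)
```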
Training dataset

The training dataset labels start from the 792nd observation (720 + 72).
start = past + future end = start + train_split x_train = train_data[[i for i in range(7)]].values y_train = features.iloc[start:end][[1]] sequence_length = int(past / step)
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
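The offset arithmetic above can be sanity-checked with the notebook's own constants (nothing new here, just the numbers already defined):

```python
# Verify the label-offset arithmetic with the notebook's constants.
step = 6      # sample one point per hour from 10-minute data
past = 720    # look back 720 timestamps (120 hours = 5 days)
future = 72   # predict 72 timestamps ahead (12 hours)

start = past + future           # first label index
sequence_length = past // step  # hourly samples per input window

print(start)            # 792: labels begin at the 792nd observation
print(sequence_length)  # 120
```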
Validation dataset

The validation dataset must not contain the last 792 rows, as we won't have label data for those records; hence 792 must be subtracted from the end of the data.

The validation label dataset must start from 792 after train_split; hence we must add past + future (792) to label_start.
x_end = len(val_data) - past - future label_start = train_split + past + future x_val = val_data.iloc[:x_end][[i for i in range(7)]].values y_val = features.iloc[label_start:][[1]] dataset_val = keras.preprocessing.timeseries_dataset_from_array( x_val, y_val, sequence_length=sequence_length, sampling_rate=step, batch_size=batch_size, ) for batch in dataset_train.take(1): inputs, targets = batch print("Input shape:", inputs.numpy().shape) print("Target shape:", targets.numpy().shape)
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
Training
inputs = keras.layers.Input(shape=(inputs.shape[1], inputs.shape[2])) lstm_out = keras.layers.LSTM(32)(inputs) outputs = keras.layers.Dense(1)(lstm_out) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss="mse") model.summary()
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
We'll use the ModelCheckpoint callback to regularly save checkpoints, and the EarlyStopping callback to interrupt training when the validation loss is no longer improving.
path_checkpoint = "model_checkpoint.h5" es_callback = keras.callbacks.EarlyStopping(monitor="val_loss", min_delta=0, patience=5) modelckpt_callback = keras.callbacks.ModelCheckpoint( monitor="val_loss", filepath=path_checkpoint, verbose=1, save_weights_only=True, save_best_only=True, ) history = model.fit( dataset_train, epochs=epochs, validation_data=dataset_val, callbacks=[es_callback, modelckpt_callback], )
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
We can visualize the loss with the function below. After one point, the loss stops decreasing.
def visualize_loss(history, title): loss = history.history["loss"] val_loss = history.history["val_loss"] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, "b", label="Training loss") plt.plot(epochs, val_loss, "r", label="Validation loss") plt.title(title) plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend() plt.show() visualize_loss(history, "Training and Validation Loss")
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
Prediction The trained model above is now able to make predictions for 5 sets of values from the validation set.
def show_plot(plot_data, delta, title): labels = ["History", "True Future", "Model Prediction"] marker = [".-", "rx", "go"] time_steps = list(range(-(plot_data[0].shape[0]), 0)) if delta: future = delta else: future = 0 plt.title(title) for i, val in enumerate(plot_data): if i: plt.plot(future, plot_data[i], marker[i], markersize=10, label=labels[i]) else: plt.plot(time_steps, plot_data[i].flatten(), marker[i], label=labels[i]) plt.legend() plt.xlim([time_steps[0], (future + 5) * 2]) plt.xlabel("Time-Step") plt.show() return for x, y in dataset_val.take(5): show_plot( [x[0][:, 1].numpy(), y[0].numpy(), model.predict(x)[0]], 12, "Single Step Prediction", )
examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb
keras-team/keras-io
apache-2.0
TRY IT Open and close the san-francisco-2013.csv file Text files and lines A text file is just a sequence of lines; in fact, if you read all the lines at once, you get back a list of strings. Lines are separated by the newline character "\n". This is the special character that is inserted into text files when you hit enter (or you can deliberately put it into strings by using the special \n syntax).
print("Golden\nGate\nBridge")
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Print your name on two lines using only one print statement Reading from files There are two common ways to read through a file. The first (and usually better) way is to loop through the lines in the file: for line in file_handle: print(line) The second is to read all the lines at once and store them as a string or list: lines = file_handle.read() # stores as a single string lines = file_handle.readlines() # stores as a list of strings (separates on new lines) Unless you are going to process the lines in a file several times, use the first method. It uses way less memory, which will be useful if you ever have big files.
fh = open('thingstodo.txt') for line in fh: print(line.rstrip()) fh.close() fh = open('thingstodo.txt') contents = fh.read() fh.close() print(contents) print(type(contents)) fh = open('thingstodo.txt') lines = fh.readlines() fh.close() print(lines) print(type(lines))
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Open 'san-francisco-2013.csv' and print out the first line. You can use either method. If you are using the loop method, you can 'break' after printing the first line. Searching through a file When searching through a file, you can use string methods to discover and parse the contents. Let's look at a few examples
# Looking for a line that starts with something # I want to see salary data of women with my first name fh = open('san-francisco-2014.csv') for line in fh: if line.startswith('Charlotte'): print(line) fh.close() # Looking for lines that contain a specific string fh = open('san-francisco-2014.csv') # Looking for all the department heads for line in fh: # Remember if find doesn't find the string, it returns -1 if line.find('Dept Head') != -1: print(line) fh.close() # Counting lines that match criteria fh = open('san-francisco-2014.csv') num_trainees = 0 for line in fh: # Remember if find doesn't find the string, it returns -1 if line.find('Trainee') != -1: num_trainees += 1 fh.close() print("There are {0} trainees".format(num_trainees)) # Splitting lines, this is great for excel like data (tsv, csv) # I want to see salary data of women with my name fh = open('san-francisco-2014.csv') for line in fh: if line.startswith('Emily'): cols = line.split(',') print(cols) # Salary is 3rd column print(cols[1], cols[2], cols[-1]) fh.close()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
* Note that sometimes you get a quoted line, instead of the title and salary. If a csv file has a comma inside a cell, the line is quoted. Thus, splitting is not the proper way to read a csv file, but it will work in a pinch. We'll learn about the csv module as well as other ways to read in tabular (excel-like) data in the second half of the class.
# Skipping lines fh = open('thingstodo.txt') for line in fh: if line.startswith('Golden'): continue print(line) fh.close()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
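As the note above points out, naive str.split(',') breaks on quoted cells, while the standard-library csv module parses them correctly. A small sketch using a made-up row for illustration:

```python
import csv
import io

# A hypothetical CSV row where one cell contains a comma and is quoted.
row = 'Emily,"Manager, Dept of Parks",90000\n'

# Naive splitting breaks the quoted cell into two pieces:
print(row.rstrip().split(','))
# ['Emily', '"Manager', ' Dept of Parks"', '90000']

# csv.reader respects the quoting and keeps the cell whole:
parsed = next(csv.reader(io.StringIO(row)))
print(parsed)
# ['Emily', 'Manager, Dept of Parks', '90000']
```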
Try, except with open If you are worried that the file might not exist, you can wrap the open in a try block: try: fh = open('i_dont_exist.txt') except: print("File does not exist") exit()
# Opening a non-existent file try: fh = open('i_dont_exist.txt') print(fh) fh.close() except: print("File does not exist") #exit()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
Writing to files You can write to files very easily. You need to give open a second parameter 'w' to indicate you want to open the file in write mode. fh_write = open('new_file.txt', 'w') Then you call the write method on the file handle. You give it the string you want to write to the file. Be careful, write doesn't add a new line character to the end of strings like print does. fh_write.write('line to write\n') Just like reading files, you need to close your file when you are done. fh_write.close()
fh = open('numbers.txt', 'w')
for i in range(10):
    fh.write(str(i) + '\n')
fh.close()

# Now let's prove that we actually made a file
fh = open('numbers.txt')
lines = fh.readlines()
print(lines)
fh.close()
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
TRY IT Create a file called 'my_favorite_cities.txt' and put your top 3 favorite cities each on its own line. Bonus check that you did it correctly by reading the lines in python With statement and opening files You can use with to open a file and it will automatically close the file at the end of the with block. This is the python preferred way to open files. (Sorry it took me so long to show you) with open('filename.txt') as file_handle: for line in file_handle: print line # You don't have to close the file
with open('thingstodo.txt') as fh:
    for line in fh:
        print(line.rstrip())
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
You can also use with statements to write files
with open('numbers2.txt', 'w') as fh:
    for i in range(5):
        fh.write(str(i) + '\n')

with open('numbers2.txt') as fh:
    for line in fh:
        print(line.rstrip())
Lesson06_Files/Files.ipynb
WomensCodingCircle/CodingCirclePython
mit
Set global flags
PROJECT = 'your-project' BUCKET = 'your-project-babyweight' REGION = 'us-central1' ROOT_DIR = 'babyweight_tft' import os os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['ROOT_DIR'] = ROOT_DIR OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET,ROOT_DIR) TRANSFORM_ARTEFACTS_DIR = os.path.join(OUTPUT_DIR,'transform') TRANSFORMED_DATA_DIR = os.path.join(OUTPUT_DIR,'transformed') TEMP_DIR = os.path.join(OUTPUT_DIR, 'tmp') MODELS_DIR = os.path.join(OUTPUT_DIR,'models')
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Import required packages and modules
import math, os import tensorflow as tf import tensorflow_transform as tft from tensorflow.keras import layers, models
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
2. Define deep and wide regression model Check features in the transformed data You can use these features (except the target feature) as an input to the model.
transformed_metadata = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR).transformed_metadata transformed_metadata.schema
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define wide and deep feature columns This is a feature engineering layer that creates new features from the transformed data.
def create_wide_and_deep_feature_columns(): deep_feature_columns = [] wide_feature_columns = [] inputs = {} categorical_columns = {} # Select features you've checked from the metadata # - Categorical features are associated with the vocabulary size (starting from 0) numeric_features = ['mother_age_log', 'mother_age_normalized', 'gestation_weeks_scaled'] categorical_features = [('is_male_index', 1), ('is_multiple_index', 1), ('mother_age_bucketized', 4), ('mother_race_index', 10)] for feature in numeric_features: deep_feature_columns.append(tf.feature_column.numeric_column(feature)) inputs[feature] = layers.Input(shape=(), name=feature, dtype='float32') for feature, vocab_size in categorical_features: categorical_columns[feature] = ( tf.feature_column.categorical_column_with_identity(feature, num_buckets=vocab_size+1)) wide_feature_columns.append(tf.feature_column.indicator_column(categorical_columns[feature])) inputs[feature] = layers.Input(shape=(), name=feature, dtype='int64') mother_race_X_mother_age_bucketized = tf.feature_column.crossed_column( [categorical_columns['mother_age_bucketized'], categorical_columns['mother_race_index']], 55) wide_feature_columns.append(tf.feature_column.indicator_column(mother_race_X_mother_age_bucketized)) mother_race_X_mother_age_bucketized_embedded = tf.feature_column.embedding_column( mother_race_X_mother_age_bucketized, 5) deep_feature_columns.append(mother_race_X_mother_age_bucketized_embedded) return wide_feature_columns, deep_feature_columns, inputs
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define a regression model
def create_model(): wide_feature_columns, deep_feature_columns, inputs = create_wide_and_deep_feature_columns() feature_layer_wide = layers.DenseFeatures(wide_feature_columns, name='wide_features') feature_layer_deep = layers.DenseFeatures(deep_feature_columns, name='deep_features') wide_model = feature_layer_wide(inputs) deep_model = layers.Dense(64, activation='relu', name='DNN_layer1')(feature_layer_deep(inputs)) deep_model = layers.Dense(32, activation='relu', name='DNN_layer2')(deep_model) wide_deep_model = layers.Dense(1, name='weight')(layers.concatenate([wide_model, deep_model])) model = models.Model(inputs=inputs, outputs=wide_deep_model) # Compile Keras model model.compile(loss='mse', optimizer='adam') # tf.keras.optimizers.Adam(learning_rate=0.0001)) return model
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define tfrecords_input_fn

This function creates a batched dataset from the transformed dataset.
def tfrecords_input_fn(files_name_pattern, batch_size=512):
    tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR)
    TARGET_FEATURE_NAME = 'weight_pounds'

    batched_dataset = tf.data.experimental.make_batched_features_dataset(
        file_pattern=files_name_pattern,
        batch_size=batch_size,
        features=tf_transform_output.transformed_feature_spec(),
        reader=tf.data.TFRecordDataset,
        label_key=TARGET_FEATURE_NAME,
        shuffle=True).prefetch(tf.data.experimental.AUTOTUNE)

    return batched_dataset
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
3. Train and export the model
def train_and_evaluate(train_pattern, eval_pattern):
    train_dataset = tfrecords_input_fn(train_pattern, batch_size=BATCH_SIZE)
    validation_dataset = tfrecords_input_fn(eval_pattern, batch_size=BATCH_SIZE)

    model = create_model()
    print(model.summary())

    print('Now training the model... hang on')
    history = model.fit(
        train_dataset,
        validation_data=validation_dataset,
        epochs=NUM_EPOCHS,
        steps_per_epoch=math.ceil(NUM_TRAIN_INSTANCES / BATCH_SIZE),
        validation_steps=math.ceil(NUM_TEST_INSTANCES / BATCH_SIZE),
        verbose=0)

    print('Evaluate the trained model.')
    print(model.evaluate(validation_dataset, steps=NUM_TEST_INSTANCES))
    return history, model

train_pattern = os.path.join(TRANSFORMED_DATA_DIR, 'train-*.tfrecords')
eval_pattern = os.path.join(TRANSFORMED_DATA_DIR, 'eval-*.tfrecords')

DATA_SIZE = 10000
BATCH_SIZE = 512
NUM_EPOCHS = 100
NUM_TRAIN_INSTANCES = math.ceil(DATA_SIZE * 0.7)
NUM_TEST_INSTANCES = math.ceil(DATA_SIZE * 0.3)

history, trained_model = train_and_evaluate(train_pattern, eval_pattern)
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Visualize the training result
from pandas import DataFrame

DataFrame({'loss': history.history['loss'],
           'val_loss': history.history['val_loss']}).plot(ylim=(0, 2.5))
print('Final RMSE for the validation set: {:f}'.format(math.sqrt(history.history['val_loss'][-1])))
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Export the trained model

The serving function serveing_fn receives raw input data and applies the transformation before making predictions with the trained model. You can also add some pre/post-processing within the serving function. In this example, it accepts a unique identifier for each instance and passes it through to the output, which is useful for batch predictions.
def export_serving_model(model, output_dir):
    tf_transform_output = tft.TFTransformOutput(TRANSFORM_ARTEFACTS_DIR)
    # The layer has to be saved to the model for keras tracking purposes.
    model.tft_layer = tf_transform_output.transform_features_layer()

    @tf.function
    def serveing_fn(uid, is_male, mother_race, mother_age, plurality, gestation_weeks):
        features = {
            'is_male': is_male,
            'mother_race': mother_race,
            'mother_age': mother_age,
            'plurality': plurality,
            'gestation_weeks': gestation_weeks
        }
        transformed_features = model.tft_layer(features)
        outputs = model(transformed_features)
        # The prediction results have multiple elements in general.
        # But we need only the first element in our case.
        outputs = tf.map_fn(lambda item: item[0], outputs)
        return {'uid': uid, 'weight': outputs}

    concrete_serving_fn = serveing_fn.get_concrete_function(
        tf.TensorSpec(shape=[None], dtype=tf.string, name='uid'),
        tf.TensorSpec(shape=[None], dtype=tf.string, name='is_male'),
        tf.TensorSpec(shape=[None], dtype=tf.string, name='mother_race'),
        tf.TensorSpec(shape=[None], dtype=tf.float32, name='mother_age'),
        tf.TensorSpec(shape=[None], dtype=tf.float32, name='plurality'),
        tf.TensorSpec(shape=[None], dtype=tf.float32, name='gestation_weeks')
    )
    signatures = {'serving_default': concrete_serving_fn}
    model.save(output_dir, save_format='tf', signatures=signatures)

EXPORT_DIR = os.path.join(MODELS_DIR, 'export')
export_serving_model(trained_model, EXPORT_DIR)
os.environ['EXPORT_DIR'] = EXPORT_DIR
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Explore the exported model
%%bash
gsutil ls -lR $EXPORT_DIR
saved_model_cli show --all --dir=$EXPORT_DIR
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
4. Use the exported model

Load the exported model
#tf.keras.backend.clear_session()
model = tf.keras.models.load_model(EXPORT_DIR)
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define a local prediction function

The serving function of the exported model accepts input data as named arguments. You can use this wrapper to pass input data as standard Python dictionaries. When you deploy the model to Vertex AI using the pre-built container, you can use the Python client library to make online predictions without the wrapper.
def predict(requests):
    uid, is_male, mother_race, mother_age, plurality, gestation_weeks = [], [], [], [], [], []
    for instance in requests:
        uid.append(instance['uid'])
        is_male.append(instance['is_male'])
        mother_race.append(instance['mother_race'])
        mother_age.append(instance['mother_age'])
        plurality.append(instance['plurality'])
        gestation_weeks.append(float(instance['gestation_weeks']))

    result = model.signatures['serving_default'](
        uid=tf.convert_to_tensor(uid),
        is_male=tf.convert_to_tensor(is_male),
        mother_race=tf.convert_to_tensor(mother_race),
        mother_age=tf.convert_to_tensor(mother_age),
        plurality=tf.convert_to_tensor(plurality),
        gestation_weeks=tf.convert_to_tensor(gestation_weeks))

    result = zip(result['uid'].numpy().tolist(), result['weight'].numpy().tolist())
    result = [{'uid': output[0].decode('ascii'), 'weight': output[1]} for output in result]
    return {'predictions': result}
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Make local predictions
instance1 = {
    'uid': 'instance1',
    'is_male': 'True',
    'mother_age': 26.0,
    'mother_race': 'Asian Indian',
    'plurality': 1.0,
    'gestation_weeks': 39
}
instance2 = {
    'uid': 'instance2',
    'is_male': 'False',
    'mother_age': 40.0,
    'mother_race': 'Japanese',
    'plurality': 2.0,
    'gestation_weeks': 37
}

predict([instance1, instance2])
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Deploy model to Vertex AI

Deployment of the model may take a few minutes. Please hang on.
%%bash
ts=$(date +%y%m%d-%H%M%S)
MODEL_NAME="babyweight-$ts"
ENDPOINT_NAME="babyweight-endpoint-$ts"

gcloud ai models upload --region=$REGION --display-name=$MODEL_NAME \
  --container-image-uri=us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest \
  --artifact-uri=$EXPORT_DIR
MODEL_ID=$(gcloud ai models list --region=$REGION \
  --filter=display_name=$MODEL_NAME --format "value(MODEL_ID)")

gcloud ai endpoints create --region=$REGION --display-name=$ENDPOINT_NAME
ENDPOINT_ID=$(gcloud ai endpoints list --region=$REGION \
  --filter=display_name=$ENDPOINT_NAME --format "value(ENDPOINT_ID)")

gcloud ai endpoints deploy-model $ENDPOINT_ID \
  --region=$REGION --model=$MODEL_ID --display-name=$MODEL_NAME \
  --machine-type=n1-standard-2 \
  --min-replica-count=1 --max-replica-count=3 --traffic-split=0=100
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Make online predictions with the Python client library
from google.cloud import aiplatform

# Use the latest endpoint that starts with 'babyweight-endpoint'
endpoints = aiplatform.Endpoint.list(
    project=PROJECT, location=REGION, order_by='create_time desc')
for item in endpoints:
    if item.display_name.startswith('babyweight-endpoint'):
        endpoint = item
        break

endpoint.predict(instances=[instance1, instance2]).predictions
blogs/babyweight_tft/babyweight_tft_keras_02.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Are the features predictive?
scatter_matrix(energy, alpha=0.2, figsize=(18, 18), diagonal='kde')
plt.show()
archive/notebook/energy_efficiency.ipynb
georgetown-analytics/machine-learning
mit
Let's focus on predicting heating load
energy_features = energy.iloc[:, 0:8]
heat_labels = energy.iloc[:, 8]
X_train, X_test, y_train, y_test = train_test_split(energy_features, heat_labels, test_size=0.2)

model = LinearRegression()
model.fit(X_train, y_train)
expected = y_test
predicted = model.predict(X_test)
print('Linear Regression model')
print('Mean Squared Error: %0.3f' % mse(expected, predicted))
print('Coefficient of Determination: %0.3f' % r2_score(expected, predicted))

model = Ridge(alpha=0.1)
model.fit(X_train, y_train)
expected = y_test
predicted = model.predict(X_test)
print('Ridge model')
print('Mean Squared Error: %0.3f' % mse(expected, predicted))
print('Coefficient of Determination: %0.3f' % r2_score(expected, predicted))

model = RandomForestRegressor()
model.fit(X_train, y_train)
expected = y_test
predicted = model.predict(X_test)
print('Random Forest model')
print('Mean squared error = %0.3f' % mse(expected, predicted))
print('R2 score = %0.3f' % r2_score(expected, predicted))
archive/notebook/energy_efficiency.ipynb
georgetown-analytics/machine-learning
mit
Capture NOAA Weather Radio

Using rtlsdr_helper, record a short complex baseband array using capture().
# From the docstring
#x = sdr.capture(Tc, fo=88700000.0, fs=2400000.0, gain=40, device_index=0)
x = sdr.capture(Tc=5, fo=162.4e6, fs=2.4e6, gain=40, device_index=0)
sdr.complex2wav('capture_162475.wav', 2400000, x)
fs, x = sdr.wav2complex('capture_162475.wav')
psd(x, 2**10, 2400);
tutorial_part3/RTL_SDR2.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
Narrowband FM Demodulator
def NBFM_demod(x, fs=2.4e6, file_name='test.wav', B1=50e3, N1=10, B2=5e3, N2=5):
    """
    Narrowband FM Demodulator
    """
    b = signal.firwin(64, 2*B1/float(fs))
    # Filter and decimate (should be polyphase)
    y = signal.lfilter(b, 1, x)
    z = ss.downsample(y, N1)
    # Apply complex baseband discriminator
    z_bb = sdr.discrim(z)
    z_bb -= mean(z_bb)
    # Design 2nd decimation lowpass filter
    bb = signal.firwin(64, 2*B2/(float(fs)/N1))
    # Filter and decimate
    zz_bb = signal.lfilter(bb, 1, z_bb)
    # Decimate by N2
    z_out = ss.downsample(zz_bb, N2)
    # Save to wave file
    ss.to_wav(file_name, 48000, z_out)
    print('Done!')
    return z_bb, z_out

z_bb, z_demod = NBFM_demod(x, file_name='NOAA_cos_demod.wav')
psd(z_demod, 2**10, 2400/50);
xlabel(r'Frequency (kHz)');
ylim([-80, -20])
Audio('NOAA_cos_demod.wav')
tutorial_part3/RTL_SDR2.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
To obtain these curves, we sort the predictions made by the classifier from the smallest to the biggest for each group and put them on a $[0, 1]$ scale on the x-axis. The value corresponding to $x=0.5$ is the median of the distribution; similarly, for each quantile level in $[0,1]$ we obtain the corresponding quantile of the distribution. In this example we observe that the median prediction for women is about 5% while the median prediction for men is about 25%. If we were to set the threshold for considering a prediction positive at 0.25, we would keep half of the men, but would reject about 90% of the women. Quantiles and Wasserstein distance. The gap between the two quantile functions corresponds to the Wasserstein distance between the two distributions. Closing the gap between the two curves is therefore equivalent to minimizing the Wasserstein distance between the two distributions. Note that in practice, we approximate the gap by the average of the distances between corresponding quantile levels. For this we need to interpolate the values over the union of the supports of the two discrete quantile maps. Let's do this on an example. We sample some points uniformly at random, assign groups to them at random, and plot the quantile functions.
N = 24
rng = jax.random.PRNGKey(1)
rng, *rngs = jax.random.split(rng, 3)
y_pred = 3 * jax.random.uniform(rngs[0], (N,))
groups = jax.random.uniform(rngs[1], (N,)) < 0.25

support_0 = jnp.linspace(0, 1, N - jnp.sum(groups))
support_1 = jnp.linspace(0, 1, jnp.sum(groups))
quantiles_0 = jnp.sort(y_pred[jnp.logical_not(groups)])
quantiles_1 = jnp.sort(y_pred[groups])

fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.plot(support_0, quantiles_0, lw=3, marker='o', markersize=10,
        label='group 0', markeredgecolor='k')
ax.plot(support_1, quantiles_1, lw=3, marker='o', markersize=10,
        label='group 1', markeredgecolor='k')
ax.set_xlabel('Quantile level', fontsize=18)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.legend(fontsize=16)
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
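To make the threshold discussion above concrete, here is a small helper (hypothetical, not part of the notebook) that computes the fraction of each group selected at a given decision threshold — a large gap between the two rates is exactly the demographic-parity violation discussed above:

```python
import numpy as np

def selection_rates(scores, groups, threshold):
    """Fraction of each group whose predicted score clears the threshold.

    Illustration only: `scores` are model predictions, `groups` the
    protected attribute values; both names are ours.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    return {g: float(np.mean(scores[groups == g] >= threshold))
            for g in np.unique(groups)}

# At a 0.25 threshold, compare selection rates across two made-up groups.
rates = selection_rates([0.05, 0.30, 0.20, 0.60], ['w', 'w', 'm', 'm'], 0.25)
```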
We can see on this figure that the supports of the two quantile functions are different, since the numbers of points in the two groups are different. In order to compute the gap between the two curves, we first interpolate both curves on the union of the supports. The Wasserstein distance corresponds to the gap between the two quantile functions. Here we show two interpolation schemes that make it easy to estimate the Wasserstein distance between two 1D measures.
import scipy.interpolate

kinds = ['linear', 'nearest']
fig, axes = plt.subplots(1, len(kinds), figsize=(8 * len(kinds), 5))
for ax, kind in zip(axes, kinds):
    q0 = scipy.interpolate.interp1d(support_0, quantiles_0, kind=kind)
    q1 = scipy.interpolate.interp1d(support_1, quantiles_1, kind=kind)
    support_01 = jnp.sort(jnp.concatenate([support_0, support_1]))
    ax.plot(support_01, q0(support_01), label='group 0', lw=3,
            marker='o', markersize=10, markeredgecolor='k')
    ax.plot(support_01, q1(support_01), label='group 1', lw=3,
            marker='o', markersize=10, markeredgecolor='k')
    ax.fill_between(support_01, q0(support_01), q1(support_01),
                    color='y', hatch='|', fc='w')
    ax.set_xlabel('Quantile level', fontsize=18)
    ax.tick_params(axis='both', which='major', labelsize=16)
    ax.legend(fontsize=16)
    ax.set_title(f'Interpolation {kind}', fontsize=20)
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
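The shaded area between the interpolated curves gives a numerical estimate of the 1D Wasserstein distance. A minimal NumPy/SciPy sketch of this estimate (hard sorting, no entropic regularization; the function name and details are ours, not OTT's):

```python
import numpy as np
from scipy.interpolate import interp1d

def wasserstein_1d(values_0, values_1, kind='linear'):
    """Approximate W1 between two 1D samples as the mean gap between
    their quantile functions, interpolated on the union of supports."""
    q0, q1 = np.sort(values_0), np.sort(values_1)
    s0 = np.linspace(0.0, 1.0, len(q0))
    s1 = np.linspace(0.0, 1.0, len(q1))
    support = np.union1d(s0, s1)
    f0 = interp1d(s0, q0, kind=kind)
    f1 = interp1d(s1, q1, kind=kind)
    return float(np.mean(np.abs(f0(support) - f1(support))))
```

Shifting one sample by a constant shifts its quantile function by the same constant, so the estimate recovers the shift exactly in that case.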
Soft Wasserstein

Computing the Wasserstein distance involves complex operations such as sorting and interpolating. Fortunately, regularized optimal transport and its implementation in OTT provide accelerator-friendly, differentiable approaches to sort according to a group (setting the weights of the outsiders to zero) while mapping onto a common support (sorting onto a fixed target of the same size, no matter what the group is). Here is an example of how to use OTT to obtain a sorted vector of a fixed size for each group. Note how simple this function is.
import functools

@functools.partial(jax.jit, static_argnums=(2,))
def sort_group(inputs: jnp.ndarray, group: jnp.ndarray, target_size: int = 16):
    a = group / jnp.sum(group)
    b = jnp.ones(target_size) / target_size
    ot = ott.tools.soft_sort.transport_for_sort(inputs, a, b, dict(epsilon=1e-3))
    return 1.0 / b * ot.apply(inputs, axis=0)
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
Note that the obtained interpolation corresponds to a smooth version of the 'nearest' interpolation.
target_sizes = [4, 16, 64]
_, axes = plt.subplots(1, len(target_sizes), figsize=(len(target_sizes) * 8, 5))
for ax, target_size in zip(axes, target_sizes):
    ax.plot(sort_group(y_pred, jnp.logical_not(groups), target_size), lw=3,
            marker='o', markersize=10, markeredgecolor='k', label='group 0')
    ax.plot(sort_group(y_pred, groups, target_size), lw=3,
            marker='o', markersize=10, markeredgecolor='k', label='group 1')
    ax.legend(fontsize=16)
    ax.tick_params(axis='both', which='major', labelsize=16)
    ax.set_title(f'Group soft sorting on support of size {target_size}', fontsize=20)
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
Training a network

In order to train our classifier with a fairness regularizer, we first turn the categorical features $x$ of the adult dataset into dense ones (using 16 dimensions) and pass the obtained vector to an MLP $f_\theta$ with 2 hidden layers of 64 neurons. We optimize a loss which is the sum of the binary cross-entropy and the Wasserstein distance between the distributions of predictions for the two groups. Since we want to work with minibatches and do not want to change the common optimization scheme, we use rather big batches of size $512$, in order to ensure that each batch contains enough predictions from both groups for the Wasserstein distance between them to make sense. We scale the Wasserstein distance by a factor $\lambda$ to control the balance between the fitness term (binary cross-entropy) and the fairness regularization term (Wasserstein distance). We run the training procedure for 100 epochs with the Adam optimizer with learning rate $10^{-4}$, an entropic regularization factor $\epsilon=10^{-3}$ and a common interpolation support of size $12$. We compare the results for $\lambda \in \{1, 10, 100, 1000\}$ in terms of demographic parity as well as accuracy.

Loss and Accuracy

Let's first compare the performance of all those classifiers.
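The combined objective described above can be sketched as follows. This is an illustration only: it uses NumPy and hard quantiles, whereas the notebook uses OTT's differentiable soft sort so the Wasserstein term can be backpropagated through; the function name and constants are ours.

```python
import numpy as np

def fairness_loss(probs, labels, groups, lam, num_levels=12):
    """Binary cross-entropy plus lam times a quantile-based 1D
    Wasserstein term between the two groups' predictions (sketch)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    groups = np.asarray(groups)
    eps = 1e-7
    # Fitness term: standard binary cross-entropy.
    bce = -np.mean(labels * np.log(probs + eps)
                   + (1.0 - labels) * np.log(1.0 - probs + eps))
    # Fairness term: mean gap between the two groups' quantile functions,
    # evaluated on a common support of num_levels quantile levels.
    levels = np.linspace(0.0, 1.0, num_levels)
    q0 = np.quantile(probs[groups == 0], levels)
    q1 = np.quantile(probs[groups == 1], levels)
    return bce + lam * np.mean(np.abs(q0 - q1))
```

When the two groups' prediction distributions coincide, the penalty vanishes and the loss reduces to the cross-entropy alone.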
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, figsize=(16, 10))
for weight, curves in result.items():
    for ax_row, metric in zip(axes, ['loss', 'accuracy']):
        for ax, phase in zip(ax_row, ['train', 'eval']):
            arr = np.array(curves[f'{phase}_{metric}'])
            ax.plot(arr[:, 0], arr[:, 1], label=f'$\lambda={weight:.0f}$',
                    lw=5, marker='o', markersize=12, markeredgecolor='k', markevery=10)
            ax.set_title(f'{metric} / {phase}', fontsize=20)
            ax.legend(fontsize=18)
            ax.set_xlabel('Epoch', fontsize=18)
            ax.tick_params(axis='both', which='major', labelsize=16)
plt.tight_layout()
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
We can see that when we increase the fairness regularization factor $\lambda$, the training accuracy slightly decreases, but the eval accuracy is not impacted much: the fairness regularizer is a rather good regularizer. For $\lambda = 1000$ the training metrics are a bit more degraded, as are the eval ones, but we also note that after 100 epochs this classifier has not converged yet, so it might still catch up in terms of eval metrics.

Demographic Parity

Now that we have seen the effect of the fairness regularizer on the classification performance, we focus on its effect on the distributions of predictions for the two groups. For this, we compute all the predictions, sort them and plot the quantile functions. The smaller the area between them, the fairer the classifier is.
num_rows = 2
num_cols = len(weights[1:]) // 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(7 * num_cols, 5 * num_rows))
for ax, w in zip(axes.ravel(), weights[1:]):
    logits, groups = get_predictions(ds_test, config, states[w])
    plot_quantiles(logits, groups, ax)
    ax.set_title(f'$\lambda = {w:.0f}$', fontsize=22)
    ax.set_ylabel('Prediction', fontsize=18)
plt.tight_layout()
docs/notebooks/fairness.ipynb
google-research/ott
apache-2.0
Loading and tokenizing the corpus
from glob import glob
import re
import string

import funcy as fp
from gensim import models
from gensim.corpora import Dictionary, MmCorpus
import nltk
import pandas as pd

# quick and dirty....
EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*")
FILTER_REGEX = re.compile(r"[^a-z '#]")
TOKEN_MAPPINGS = [(EMAIL_REGEX, "#email"), (FILTER_REGEX, ' ')]

def tokenize_line(line):
    res = line.lower()
    for regexp, replacement in TOKEN_MAPPINGS:
        res = regexp.sub(replacement, res)
    return res.split()

def tokenize(lines, token_size_filter=2):
    tokens = fp.mapcat(tokenize_line, lines)
    return [t for t in tokens if len(t) > token_size_filter]

def load_doc(filename):
    group, doc_id = filename.split('/')[-2:]
    with open(filename) as f:
        doc = f.readlines()
    return {'group': group, 'doc': doc, 'tokens': tokenize(doc), 'id': doc_id}

docs = pd.DataFrame(map(load_doc, glob('data/20news-bydate-train/*/*'))).set_index(['group', 'id'])
docs.head()
notebooks/Gensim Newsgroup.ipynb
codingafuture/pyLDAvis
bsd-3-clause