<a id="keras_metric_names"></a>
Keras metric names
In TensorFlow 2.0, Keras models are more consistent about handling metric names. When you pass a string in the list of metrics, that exact string is used as the metric's name. These names are visible in the history object returned by model.fit and in the logs passed to keras.callbacks.
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['acc', 'accuracy',
             tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")])

history = model.fit(train_data)
history.history.keys()
site/ko/guide/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
์ด์ „ ๋ฒ„์ „์€ ์ด์™€ ๋‹ค๋ฅด๊ฒŒ metrics=["accuracy"]๋ฅผ ์ „๋‹ฌํ•˜๋ฉด dict_keys(['loss', 'acc'])๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ์ผ€๋ผ์Šค ์˜ตํ‹ฐ๋งˆ์ด์ € v1.train.AdamOptimizer๋‚˜ v1.train.GradientDescentOptimizer ๊ฐ™์€ v1.train์— ์žˆ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €๋Š” tf.keras.optimizers์— ์žˆ๋Š” ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. v1.train์„ keras.optimizers๋กœ ๋ฐ”๊พธ๊ธฐ ๋‹ค์Œ์€ ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ๋ฐ”๊ฟ€ ๋•Œ ์œ ๋…ํ•ด์•ผ ํ•  ๋‚ด์šฉ์ž…๋‹ˆ๋‹ค: ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๋ฉด ์˜ˆ์ „ ์ฒดํฌํฌ์ธํŠธ์™€ ํ˜ธํ™˜์ด๋˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž…์‹ค๋ก  ๋งค๊ฐœ๋ณ€์ˆ˜ ๊ธฐ๋ณธ๊ฐ’์€ ๋ชจ๋‘ 1e-8์—์„œ 1e-7๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค(๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ํฐ ์ฐจ์ด๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค). v1.train.GradientDescentOptimizer๋Š” tf.keras.optimizers.SGD๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. v1.train.MomentumOptimizer๋Š” ๋ชจ๋ฉ˜ํ…€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” SGD ์˜ตํ‹ฐ๋งˆ์ด์ €๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: tf.keras.optimizers.SGD(..., momentum=...). v1.train.AdamOptimizer๋Š” tf.keras.optimizers.Adam๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. beta1๊ณผ beta2 ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” beta_1๊ณผ beta_2๋กœ ์ด๋ฆ„์ด ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. v1.train.RMSPropOptimizer๋Š” tf.keras.optimizers.RMSprop๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. decay ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” rho๋กœ ์ด๋ฆ„์ด ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. v1.train.AdadeltaOptimizer๋Š” tf.keras.optimizers.Adadelta๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. tf.train.AdagradOptimizer๋Š” tf.keras.optimizers.Adagrad๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. tf.train.FtrlOptimizer๋Š” tf.keras.optimizers.Ftrl๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. accum_name๊ณผ linear_name ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ์‚ญ์ œ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. tf.contrib.AdamaxOptimizer์™€ tf.contrib.NadamOptimizer๋Š” tf.keras.optimizers.Adamax์™€ tf.keras.optimizers.Nadam๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. beta1, beta2 ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” beta_1, beta_2๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. tf.keras.optimizers์˜ ์ƒˆ๋กœ์šด ๊ธฐ๋ณธ๊ฐ’ <a id="keras_optimizer_lr"></a> ์ฃผ์˜: ๋งŒ์•ฝ ๋ชจ๋ธ์ด ์ˆ˜๋ ดํ•˜๋Š”๋ฐ ๋ณ€ํ™”๊ฐ€ ์žˆ๋‹ค๋ฉด ํ•™์Šต๋ฅ  ๊ธฐ๋ณธ๊ฐ’์„ ํ™•์ธํ•ด ๋ณด์„ธ์š”. optimizers.SGD, optimizers.Adam, optimizers.RMSprop ๊ธฐ๋ณธ๊ฐ’์€ ๊ทธ๋Œ€๋กœ์ž…๋‹ˆ๋‹ค.. 
ํ•™์Šต๋ฅ  ๊ธฐ๋ณธ๊ฐ’์ด ๋ฐ”๋€ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: optimizers.Adagrad๋Š” 0.01์—์„œ 0.001๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. optimizers.Adadelta๋Š” 1.0์—์„œ 0.001๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. optimizers.Adamax๋Š” 0.002์—์„œ 0.001๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. optimizers.Nadam์€ 0.002์—์„œ 0.001๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. ํ…์„œ๋ณด๋“œ ํ…์„œํ”Œ๋กœ 2๋Š” ํ…์„œ๋ณด๋“œ(TensorBoard) ์‹œ๊ฐํ™”๋ฅผ ์œ„ํ•ด ์„œ๋จธ๋ฆฌ(summary) ๋ฐ์ดํ„ฐ๋ฅผ ์ž‘์„ฑํ•˜๋Š”๋ฐ ์‚ฌ์šฉํ•˜๋Š” tf.summary API์— ํฐ ๋ณ€ํ™”๊ฐ€์žˆ์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด tf.summary์— ๋Œ€ํ•œ ๊ฐœ๊ด„ ์†Œ๊ฐœ๋Š” TF 2 API๋ฅผ ์‚ฌ์šฉํ•œ ์‹œ์ž‘ํ•˜๊ธฐ ํŠœํ† ๋ฆฌ์–ผ์™€ ํ…์„œ๋ณด๋“œ TF 2 ์ด์ „ ๊ฐ€์ด๋“œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ์ €์žฅ๊ณผ ๋ณต์› ์ฒดํฌํฌ์ธํŠธ ํ˜ธํ™˜์„ฑ ํ…์„œํ”Œ๋กœ 2.0์€ ๊ฐ์ฒด ๊ธฐ๋ฐ˜์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด์ „ ์ด๋ฆ„ ๊ธฐ๋ฐ˜ ์Šคํƒ€์ผ์˜ ์ฒดํฌํฌ์ธํŠธ๋„ ์—ฌ์ „ํžˆ ๋ณต์›ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ ์ฃผ์˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ ๋ณ€ํ™˜ ๊ณผ์ • ๋•Œ๋ฌธ์— ๋ณ€์ˆ˜ ์ด๋ฆ„์ด ๋ฐ”๋€” ์ˆ˜ ์žˆ์ง€๋งŒ ํ•ด๊ฒฐ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์˜ ์ด๋ฆ„๊ณผ ์ฒดํฌํฌ์ธํŠธ์— ์žˆ๋Š” ์ด๋ฆ„์„ ๋‚˜์—ดํ•ด ๋ณด๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ์—ฌ์ „ํžˆ ๋ชจ๋“  ๋ณ€์ˆ˜๋Š” ์„ค์ • ๊ฐ€๋Šฅํ•œ name ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ผ€๋ผ์Šค ๋ชจ๋ธ๋„ name ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ ๋ณ€์ˆ˜ ์ด๋ฆ„์˜ ์ ‘๋‘์–ด๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. v1.name_scope ํ•จ์ˆ˜๋ฅผ ๋ณ€์ˆ˜ ์ด๋ฆ„์˜ ์ ‘๋‘์–ด๋ฅผ ์ง€์ •ํ•˜๋Š”๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” tf.variable_scope์™€๋Š” ๋งค์šฐ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ด๋ฆ„์—๋งŒ ์˜ํ–ฅ์„ ๋ฏธ์น˜๋ฉฐ ๋ณ€์ˆ˜๋ฅผ ์ถ”์ ํ•˜๊ฑฐ๋‚˜ ์žฌ์‚ฌ์šฉ์„ ๊ด€์žฅํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ์ฃผ์–ด์ง„ ์ƒํ™ฉ์— ์ž˜ ๋งž์ง€ ์•Š๋Š”๋‹ค๋ฉด v1.train.init_from_checkpoint ํ•จ์ˆ˜๋ฅผ ์‹œ๋„ํ•ด ๋ณด์„ธ์š”. ์ด ํ•จ์ˆ˜๋Š” assignment_map ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์˜ˆ์ „ ์ด๋ฆ„๊ณผ ์ƒˆ๋กœ์šด ์ด๋ฆ„์„ ๋งคํ•‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋…ธํŠธ: ์ง€์—ฐ ์ ์žฌ๊ฐ€ ๋˜๋Š” ๊ฐ์ฒด ๊ธฐ๋ฐ˜ ์ฒดํฌํฌ์ธํŠธ์™€๋Š” ๋‹ฌ๋ฆฌ ์ด๋ฆ„ ๊ธฐ๋ฐ˜ ์ฒดํฌํฌ์ธํŠธ๋Š” ํ•จ์ˆ˜๊ฐ€ ํ˜ธ์ถœ๋  ๋•Œ ๋ชจ๋“  ๋ณ€์ˆ˜๊ฐ€ ๋งŒ๋“ค์–ด ์ง‘๋‹ˆ๋‹ค. 
์ผ๋ถ€ ๋ชจ๋ธ์€ build ๋ฉ”์„œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜๊ฑฐ๋‚˜ ๋ฐฐ์น˜ ๋ฐ์ดํ„ฐ์—์„œ ๋ชจ๋ธ์„ ์‹คํ–‰ํ•  ๋•Œ๊นŒ์ง€ ๋ณ€์ˆ˜ ์ƒ์„ฑ์„ ์ง€์—ฐํ•ฉ๋‹ˆ๋‹ค. ํ…์„œํ”Œ๋กœ ์ถ”์ •๊ธฐ(Estimator) ์ €์žฅ์†Œ์—๋Š” ํ…์„œํ”Œ๋กœ 1.X์˜ ์ถ”์ •๊ธฐ์—์„œ ๋งŒ๋“  ์ฒดํฌํฌ์ธํŠธ๋ฅผ 2.0์œผ๋กœ ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๋Š” ๋ณ€ํ™˜ ๋„๊ตฌ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋น„์Šทํ•œ ๊ฒฝ์šฐ๋ฅผ ์œ„ํ•œ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” ์‚ฌ๋ก€์ž…๋‹ˆ๋‹ค. saved_model ํ˜ธํ™˜์„ฑ saved_model์—๋Š” ์‹ฌ๊ฐํ•œ ํ˜ธํ™˜์„ฑ ๋ฌธ์ œ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ํ…์„œํ”Œ๋กœ 1.x์˜ saved_model์€ ํ…์„œํ”Œ๋กœ 2.0์™€ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค. ํ…์„œํ”Œ๋กœ 2.0์˜ saved_model๋กœ ์ €์žฅํ•œ ๋ชจ๋ธ๋„ ์—ฐ์‚ฐ์ด ์ง€์›๋œ๋‹ค๋ฉด TensorFlow 1.x์—์„œ ์ž‘๋™๋ฉ๋‹ˆ๋‹ค. Graph.pb ๋˜๋Š” Graph.pbtxt ์›๋ณธ Graph.pb ํŒŒ์ผ์„ ํ…์„œํ”Œ๋กœ 2.0์œผ๋กœ ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๋Š” ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ ์—†์Šต๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ "๋™๊ฒฐ๋œ ๊ทธ๋ž˜ํ”„"(๋ณ€์ˆ˜๊ฐ€ ์ƒ์ˆ˜๋กœ ๋ฐ”๋€ tf.Graph)๋ผ๋ฉด v1.wrap_function๋ฅผ ์‚ฌ์šฉํ•ด concrete_function๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค:
def wrap_frozen_graph(graph_def, inputs, outputs):
  def _imports_graph_def():
    tf.compat.v1.import_graph_def(graph_def, name="")
  wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
  import_graph = wrapped_import.graph
  return wrapped_import.prune(
      tf.nest.map_structure(import_graph.as_graph_element, inputs),
      tf.nest.map_structure(import_graph.as_graph_element, outputs))
For example, here is a frozen graph for Inception v1, from 2016:
path = tf.keras.utils.get_file(
    'inception_v1_2016_08_28_frozen.pb',
    'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz',
    untar=True)
Load the tf.GraphDef:
graph_def = tf.compat.v1.GraphDef()
loaded = graph_def.ParseFromString(open(path, 'rb').read())
Wrap it in a concrete_function:
inception_func = wrap_frozen_graph(
    graph_def,
    inputs='input:0',
    outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0')
Pass it a tensor as input:
input_img = tf.ones([1, 224, 224, 3], dtype=tf.float32)
inception_func(input_img).shape
Estimators
Training with Estimators
Estimators are supported in TensorFlow 2.0. When you use estimators, you can continue using input_fn(), tf.estimator.TrainSpec, and tf.estimator.EvalSpec from TensorFlow 1.x. Here is an example of using input_fn with train and evaluate specs.

Creating the input_fn and train/eval specs
# Define the estimator's input_fn.
def input_fn():
  datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
  mnist_train, mnist_test = datasets['train'], datasets['test']

  BUFFER_SIZE = 10000
  BATCH_SIZE = 64

  def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label[..., tf.newaxis]

  train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
  return train_data.repeat()

# Define the train and eval specs.
train_spec = tf.estimator.TrainSpec(input_fn=input_fn,
                                    max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn,
                                  steps=STEPS_PER_EPOCH)
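The max_steps value above is just steps-per-epoch times the number of epochs. A quick sketch of the arithmetic, assuming hypothetical values (60,000 MNIST training images, batch size 64 as in the input_fn above, and an illustrative epoch count):

```python
import math

NUM_TRAIN_IMAGES = 60_000   # MNIST training-set size (assumption for illustration)
BATCH_SIZE = 64             # matches the input_fn above
NUM_EPOCHS = 5              # hypothetical value

# One step consumes one batch, so steps per epoch is the number of batches.
STEPS_PER_EPOCH = math.ceil(NUM_TRAIN_IMAGES / BATCH_SIZE)
max_steps = STEPS_PER_EPOCH * NUM_EPOCHS
print(STEPS_PER_EPOCH, max_steps)  # 938 4690
```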
์ผ€๋ผ์Šค ๋ชจ๋ธ ์ •์˜ ์‚ฌ์šฉํ•˜๊ธฐ ํ…์„œํ”Œ๋กœ 2.0์—์„œ ์ถ”์ •๊ธฐ๋ฅผ ๊ตฌ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์กฐ๊ธˆ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ผ€๋ผ์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ •์˜ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ tf.keras.model_to_estimator ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ถ”์ •๊ธฐ๋กœ ๋ฐ”๊พธ์„ธ์š”. ๋‹ค์Œ ์ฝ”๋“œ๋Š” ์ถ”์ •๊ธฐ๋ฅผ ๋งŒ๋“ค๊ณ  ํ›ˆ๋ จํ•  ๋•Œ ์ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ ์ค๋‹ˆ๋‹ค.
def make_model():
  return tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
  ])

model = make_model()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
๋…ธํŠธ: ์ผ€๋ผ์Šค์—์„œ๋Š” ๊ฐ€์ค‘์น˜๊ฐ€ ์ ์šฉ๋œ ์ง€ํ‘œ๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. model_to_estimator๋ฅผ ์‚ฌ์šฉํ•ด ์ถ”์ •๊ธฐ API์˜ ๊ฐ€์ค‘ ์ง€ํ‘œ๋กœ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. add_metrics ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ์ถ”์ •๊ธฐ ์ŠคํŽ™(spec)์— ์ง์ ‘ ์ด๋Ÿฐ ์ง€ํ‘œ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ model_fn ์‚ฌ์šฉํ•˜๊ธฐ ๊ธฐ์กด์— ์ž‘์„ฑํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ์ถ”์ •๊ธฐ model_fn์„ ์œ ์ง€ํ•ด์•ผ ํ•œ๋‹ค๋ฉด ์ด model_fn์„ ์ผ€๋ผ์Šค ๋ชจ๋ธ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ˜ธํ™˜์„ฑ ๋•Œ๋ฌธ์— ์‚ฌ์šฉ์ž ์ •์˜ model_fn์€ 1.x ์Šคํƒ€์ผ์˜ ๊ทธ๋ž˜ํ”„ ๋ชจ๋“œ๋กœ ์‹คํ–‰๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ฆ‰ ์ฆ‰์‹œ ์‹คํ–‰๊ณผ ์˜์กด์„ฑ ์ž๋™ ์ œ์–ด๊ฐ€ ์—†๋‹ค๋Š” ๋œป์ž…๋‹ˆ๋‹ค. <a name="minimal_changes"></a> ์‚ฌ์šฉ์ž ์ •์˜ model_fn์„ ์ตœ์†Œํ•œ๋งŒ ๋ณ€๊ฒฝํ•˜๊ธฐ ์‚ฌ์šฉ์ž ์ •์˜ model_fn์„ ์ตœ์†Œํ•œ์˜ ๋ณ€๊ฒฝ๋งŒ์œผ๋กœ TF 2.0๊ณผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด tf.compat.v1 ์•„๋ž˜์˜ optimizers์™€ metrics์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ model_fn์— ์ผ€๋ผ์Šค ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ์‚ฌ์šฉ์ž ์ •์˜ ํ›ˆ๋ จ ๋ฃจํ”„์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๊ณผ ๋น„์Šทํ•ฉ๋‹ˆ๋‹ค: mode ๋งค๊ฐœ๋ณ€์ˆ˜์— ๊ธฐ์ดˆํ•˜์—ฌ training ์ƒํƒœ๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ์ง€์ •ํ•˜์„ธ์š”. ์˜ตํ‹ฐ๋งˆ์ด์ €์— ๋ชจ๋ธ์˜ trainable_variables๋ฅผ ๋ช…์‹œ์ ์œผ๋กœ ์ „๋‹ฌํ•˜์„ธ์š”. ์‚ฌ์šฉ์ž ์ •์˜ ๋ฃจํ”„์™€ ํฐ ์ฐจ์ด์ ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: model.losses๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋Œ€์‹  tf.keras.Model.get_losses_for ์‚ฌ์šฉํ•˜์—ฌ ์†์‹ค์„ ์ถ”์ถœํ•˜์„ธ์š”. tf.keras.Model.get_updates_for๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ์—…๋ฐ์ดํŠธ ๊ฐ’์„ ์ถ”์ถœํ•˜์„ธ์š”. ๋…ธํŠธ: "์—…๋ฐ์ดํŠธ(update)"๋Š” ๊ฐ ๋ฐฐ์น˜๊ฐ€ ๋๋‚œ ํ›„์— ๋ชจ๋ธ์— ์ ์šฉํ•ด์•ผ ํ•  ๋ณ€ํ™”๋Ÿ‰์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด tf.keras.layers.BatchNormalization ์ธต์—์„œ ํ‰๊ท ๊ณผ ๋ถ„์‚ฐ์˜ ์ด๋™ ํ‰๊ท (moving average)์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์‚ฌ์šฉ์ž ์ •์˜ model_fn์œผ๋กœ๋ถ€ํ„ฐ ์ถ”์ •๊ธฐ๋ฅผ ๋งŒ๋“œ๋Š” ์ฝ”๋“œ๋กœ ์ด๋Ÿฐ ๊ฐœ๋…์„ ์ž˜ ๋ณด์—ฌ ์ค๋‹ˆ๋‹ค.
def my_model_fn(features, labels, mode):
  model = make_model()

  optimizer = tf.compat.v1.train.AdamOptimizer()
  loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  predictions = model(features, training=training)

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_fn(labels, predictions) + tf.math.add_n(reg_losses)

  accuracy = tf.compat.v1.metrics.accuracy(
      labels=labels,
      predictions=tf.math.argmax(predictions, axis=1),
      name='acc_op')

  update_ops = model.get_updates_for(None) + model.get_updates_for(features)
  minimize_op = optimizer.minimize(
      total_loss,
      var_list=model.trainable_variables,
      global_step=tf.compat.v1.train.get_or_create_global_step())
  train_op = tf.group(minimize_op, update_ops)

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=predictions,
      loss=total_loss,
      train_op=train_op,
      eval_metric_ops={'accuracy': accuracy})

# Create the Estimator and train.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Custom model_fn with TF 2.0 symbols
If you want to get rid of all TF 1.x symbols and upgrade your custom model_fn to TF 2.0, you need to update the optimizer and metrics to tf.keras.optimizers and tf.keras.metrics.
Besides the minimal changes mentioned above, the following also need to be upgraded in a custom model_fn:
- Use tf.keras.optimizers instead of v1.train.Optimizer.
- Explicitly pass the model's trainable_variables to the tf.keras.optimizers.
- To compute the train_op/minimize_op:
  - Use Optimizer.get_updates() if the loss is a scalar loss Tensor (not a callable). The first element in the returned list is the desired train_op/minimize_op.
  - Use Optimizer.minimize() to get the train_op/minimize_op if the loss is callable (such as a function).
- Use tf.keras.metrics instead of tf.compat.v1.metrics for evaluation.
For the my_model_fn above, the code migrated to 2.0 symbols looks like this:
def my_model_fn(features, labels, mode):
  model = make_model()

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
  predictions = model(features, training=training)

  # Get both the unconditional losses (the None part)
  # and the input-conditional losses (the features part).
  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_obj(labels, predictions) + tf.math.add_n(reg_losses)

  # Upgrade to tf.keras.metrics.
  accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
  accuracy = accuracy_obj.update_state(
      y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

  train_op = None
  if training:
    # Upgrade to tf.keras.optimizers.
    optimizer = tf.keras.optimizers.Adam()

    # Manually assign tf.compat.v1.global_step variable to optimizer.iterations
    # to make tf.compat.v1.train.global_step increase correctly.
    # This assignment is a must for any `tf.train.SessionRunHook` specified in
    # the estimator, since SessionRunHooks rely on the global step.
    optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()

    # Get both the unconditional updates (the None part)
    # and the input-conditional updates (the features part).
    update_ops = model.get_updates_for(None) + model.get_updates_for(features)

    # Compute the minimize_op.
    minimize_op = optimizer.get_updates(total_loss,
                                        model.trainable_variables)[0]
    train_op = tf.group(minimize_op, *update_ops)

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=predictions,
      loss=total_loss,
      train_op=train_op,
      eval_metric_ops={'Accuracy': accuracy_obj})

# Create the Estimator and train.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
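The rule for choosing between Optimizer.get_updates() and Optimizer.minimize() can be expressed as a small dispatch helper. This is a pure-Python sketch of the decision logic only, not the real Optimizer API:

```python
def pick_train_op_strategy(loss):
    """Return which optimizer call yields the train_op, per the rules above.

    A callable loss (e.g. a function) goes through Optimizer.minimize();
    a scalar loss Tensor goes through Optimizer.get_updates(), whose first
    returned element is the train_op/minimize_op.
    """
    if callable(loss):
        return "Optimizer.minimize(loss)"
    return "Optimizer.get_updates(loss, vars)[0]"

print(pick_train_op_strategy(lambda: 0.0))  # Optimizer.minimize(loss)
print(pick_train_op_strategy(1.25))         # Optimizer.get_updates(loss, vars)[0]
```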
ํ”„๋ฆฌ๋ฉ”์ด๋“œ ์ถ”์ •๊ธฐ tf.estimator.DNN*, tf.estimator.Linear*, tf.estimator.DNNLinearCombined* ๋ชจ๋“ˆ ์•„๋ž˜์— ์žˆ๋Š” ํ”„๋ฆฌ๋ฉ”์ด๋“œ ์ถ”์ •๊ธฐ(premade estimator)๋Š” ๊ณ„์† ํ…์„œํ”Œ๋กœ 2.0 API๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ผ๋ถ€ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค: input_layer_partitioner: 2.0์—์„œ ์‚ญ์ œ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. loss_reduction: tf.compat.v1.losses.Reduction ๋Œ€์‹ ์— tf.keras.losses.Reduction๋กœ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์ด tf.compat.v1.losses.Reduction.SUM์—์„œ tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. optimizer, dnn_optimizer, linear_optimizer: ์ด ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” tf.compat.v1.train.Optimizer ๋Œ€์‹ ์— tf.keras.optimizers๋กœ ์—…๋ฐ์ดํŠธ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์œ„ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ฐ˜์˜ํ•˜๋ ค๋ฉด: 1. Distribution Strategy๋Š” TF 2.0์„ ์ž๋™์œผ๋กœ ๋Œ€์‘ํ•˜๋ฏ€๋กœ input_layer_partitioner์— ๋Œ€ํ•ด ์ด์ „์ด ํ•„์š”์—†์Šต๋‹ˆ๋‹ค. 2. loss_reduction์˜ ๊ฒฝ์šฐ ์ง€์›๋˜๋Š” ์˜ต์…˜์„ tf.keras.losses.Reduction ํ™•์ธํ•ด ๋ณด์„ธ์š”.. 3. optimizer ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ๊ฒฝ์šฐ, optimizer, dnn_optimizer, linear_optimizer ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ optimizer ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ string์œผ๋กœ ์ง€์ •ํ–ˆ๋‹ค๋ฉด ์•„๋ฌด๊ฒƒ๋„ ๋ฐ”๊ฟ€ ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ tf.keras.optimizers๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ทธ์™ธ์˜ ๊ฒฝ์šฐ tf.compat.v1.train.Optimizer๋ฅผ ์ด์— ์ƒ์‘ํ•˜๋Š” tf.keras.optimizers๋กœ ๋ฐ”๊พธ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ ๋ณ€ํ™˜๊ธฐ <a id="checkpoint_converter"></a> keras.optimizers๋กœ ์ด์ „ํ•˜๋ฉด TF 1.X๋กœ ์ €์žฅํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์— ์ €์žฅํ•˜๋Š” tf.keras.optimizers ๋ณ€์ˆ˜๊ฐ€ ๋‹ค๋ฅด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. Tf 2.0์œผ๋กœ ์ด์ „ํ•œ ํ›„์— ์˜ˆ์ „ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ฒดํฌํฌ์ธํŠธ ๋ณ€ํ™˜๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.
! curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py
์ด ์Šคํฌ๋ฆฝํŠธ๋Š” ๋„์›€๋ง์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค:
! python checkpoint_converter.py -h
TensorShape
This class was simplified to hold ints, instead of tf.compat.v1.Dimension objects, so there is no need to call .value() to get an int.
Individual tf.compat.v1.Dimension objects are still accessible from tf.TensorShape.dims.
The following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.0.
# Create a TensorShape object, and index into it.
i = 0
shape = tf.TensorShape([16, None, 256])
shape
TF 1.x์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: python value = shape[i].value TF 2.0์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค:
value = shape[i]
value
TF 1.x์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: python for dim in shape: value = dim.value print(value) TF 2.0์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค:
for value in shape:
  print(value)
TF 1.x์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค(๋‹ค๋ฅธ Dimension ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ๋„): python dim = shape[i] dim.assert_is_compatible_with(other_dim) TF 2.0์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค:
other_dim = 16
Dimension = tf.compat.v1.Dimension

if shape.rank is None:
  dim = Dimension(None)
else:
  dim = shape.dims[i]
dim.is_compatible_with(other_dim)  # The same works with any other Dimension method

shape = tf.TensorShape(None)

if shape:
  dim = shape.dims[i]
  dim.is_compatible_with(other_dim)  # The same works with any other Dimension method
๋žญํฌ(rank)๋ฅผ ์•Œ ์ˆ˜ ์žˆ๋‹ค๋ฉด tf.TensorShape์˜ ๋ถˆ๋ฆฌ์–ธ ๊ฐ’์€ True๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด False์ž…๋‹ˆ๋‹ค.
print(bool(tf.TensorShape([])))       # Scalar
print(bool(tf.TensorShape([0])))      # 0-length vector
print(bool(tf.TensorShape([1])))      # 1-length vector
print(bool(tf.TensorShape([None])))   # Vector of unknown length
print(bool(tf.TensorShape([1, 10, 100])))        # 3D tensor
print(bool(tf.TensorShape([None, None, None])))  # 3D tensor with no known dimensions
print()
print(bool(tf.TensorShape(None)))     # A tensor with unknown rank
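The truthiness rule above can be mimicked in plain Python: a shape is "truthy" exactly when its rank is known, regardless of whether the individual dimension sizes are known. A minimal stdlib sketch, modeling a shape as either None (unknown rank) or a list whose entries may be None (unknown size):

```python
def shape_is_truthy(dims):
    """Mimic bool(tf.TensorShape): True iff the rank is known.

    dims is None for an unknown rank, otherwise a list of dimension
    sizes where individual entries may be None (unknown size).
    """
    return dims is not None

print(shape_is_truthy([]))            # True  (scalar: rank 0 is known)
print(shape_is_truthy([None, None]))  # True  (known rank, unknown sizes)
print(shape_is_truthy(None))          # False (unknown rank)
```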
You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca.
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca

# Install python dependencies
!pip install torch==1.7.1 torchvision==0.8.2
!pip install six cloudpickle
!pip install jep==3.9.0
python/orca/colab-notebook/quickstart/pytorch_lenet_mnist.ipynb
intel-analytics/BigDL
apache-2.0
Next, fit and evaluate using the Estimator.
from bigdl.orca.learn.trigger import EveryEpoch

est.fit(data=train_loader, epochs=1, validation_data=test_loader,
        checkpoint_trigger=EveryEpoch())
We'll plot all the prices at Adj Close using matplotlib, a Python 2D plotting library with a MATLAB-flavored interface. We use Adjusted Close because it is commonly used for historical pricing and accounts for all corporate actions such as stock splits, dividends/distributions, and rights offerings. This happens to be our exact use case.
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib import style
style.use('fivethirtyeight')

spy['Adj Close'].plot(figsize=(20,10))
ax = plt.subplot()
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x)))  # Y axis dollar symbols
plt.title('SPY Historical Price on Close')
plt.xlabel('')
plt.ylabel('Stock Price ($)');
Lumpsum_vs_DCA.ipynb
Elucidation/lumpsum_vs_dca
apache-2.0
Great, looks similar to the SPY chart from before. Notice how, due to the adjusted historical pricing, including things like dividend yields increases the total return over the years. We can easily see the bubble and crash around 2007-2009, as well as the long bull market since then. We can also see the small dip in September/October in the last couple of months, and barely see the drop in the first couple of days of 2016.

Calculate Lump Sum
Lump sum means investing everything available all at once; in this case we have a hypothetical $10,000 to spend on any day in our 16-year history. Then we want to know how much that investment would be worth today. Another way to look at this: we can make a chart where the X axis is the date we invest the lump sum, and the Y axis is the value of that investment today.
value_price = spy['Adj Close'][-1]  # The final value of our stock
initial_investment = 10000  # Our initial investment of $10k

num_stocks_bought = initial_investment / spy['Adj Close']
lumpsum = num_stocks_bought * value_price
lumpsum.name = 'Lump Sum'

lumpsum.plot(figsize=(20,10))
ax = plt.subplot()
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x)))  # Y axis dollar symbols
plt.title('Lump sum - Value today of $10,000 invested on date')
plt.xlabel('')
plt.ylabel('Investment Value ($)');
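The computation above boils down to shares = investment / price_on_buy_date, then value_today = shares * final_price, vectorized over every possible buy date. A minimal pure-Python sketch for a single buy date, using made-up prices:

```python
def lumpsum_value_today(investment, price_on_buy_date, final_price):
    """Value today of a one-shot investment made at price_on_buy_date."""
    shares = investment / price_on_buy_date
    return shares * final_price

# Hypothetical prices: buy at $80/share, $120/share today.
print(lumpsum_value_today(10_000, 80.0, 120.0))  # 15000.0
```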
Cool! Pandas makes it really easy to manipulate data with datetime indices. Looking at the chart we see that if we'd bought right at the bottom of the 2007-2009 crash our \$10,000 would be worth ~ $32,500. If only we had a time machine...
print("Lump sum: Investing on the 1 - Best day, 2 - Worst day in past, 3 - Worst day in all")
print("1 - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmax().strftime('%b %d, %Y'), lumpsum.max()))
print("2 - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum[:-1000].idxmin().strftime('%b %d, %Y'), lumpsum[:-1000].min()))
print("3 - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmin().strftime('%b %d, %Y'), lumpsum.min()))
What's nice to note as well is that even if we'd invested at the worst possible time, the peak of the bubble on Oct 9th, 2007, we'd still have come out net positive at \$14,593 today. The worst time to invest so far turns out to be more recent, July 20th, 2015. This is because not only was the market down, but it's so recent that the investment hasn't had time to grow. Something something the best time to plant a tree was yesterday.

Calculating Dollar Cost Averaging (DCA)
Now let's do the same experiment, but instead we'll invest the \$10,000 using Dollar Cost Averaging (DCA). For this simple test, instead of investing all at once, I'll invest in equal portions every 30 days (roughly a month), over a course of 360 days (roughly a year) total. So on day 1 I invest $10,000 / 12 ~ $833.33, on day 31 the same $833.33, and so on for 12 total investments. A special case is investing within the last year, when there isn't time to DCA all of it; as a compromise, I invest what portions I can and keep the rest as cash, since that is how reality works.
def doDCA(investment, start_date):
    # Get 12 investment dates in 30 day increments starting from start date
    investment_dates_all = pd.date_range(start_date, periods=12, freq='30D')
    # Remove those dates beyond our known data range
    investment_dates = investment_dates_all[investment_dates_all < spy.index[-1]]

    # Get closest business dates with available data
    closest_investment_dates = spy.index.searchsorted(investment_dates)

    # How much to invest on each date
    portion = investment / 12.0  # (Python 3 does implicit float division, Python 2.7 does not)

    # Get the total of all stocks purchased for each of those dates (on the Close)
    stocks_invested = sum(portion / spy['Adj Close'][closest_investment_dates])

    # Add uninvested amount back
    uninvested_dollars = portion * sum(investment_dates_all >= spy.index[-1])

    # Value of stocks today
    total_value = value_price * stocks_invested + uninvested_dollars
    return total_value

# Generate DCA series for every possible date
dca = pd.Series(spy.index.map(lambda x: doDCA(initial_investment, x)),
                index=spy.index, name='Dollar Cost Averaging (DCA)')
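The doDCA logic above (12 equal installments 30 days apart, with installments falling past the end of the data kept as cash) can be sketched without pandas. The prices here are made up for illustration:

```python
def dca_value_today(investment, prices_by_day_offset, final_price, last_day):
    """Value today of 12 installments invested 30 days apart.

    prices_by_day_offset maps day offsets 0, 30, 60, ... to a (hypothetical)
    share price; installments past last_day are held as uninvested cash.
    """
    portion = investment / 12.0
    shares = 0.0
    cash = 0.0
    for k in range(12):
        day = k * 30
        if day <= last_day:
            shares += portion / prices_by_day_offset[day]
        else:
            cash += portion  # too recent to invest: keep as cash
    return shares * final_price + cash

prices = {d: 100.0 for d in range(0, 360, 30)}  # flat hypothetical market
print(dca_value_today(10_000, prices, 100.0, last_day=330))  # ~= 10000.0 in a flat market
```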
Surprisingly straightforward, good job Pandas. Let's plot it similar to how we did with lump sum. The x axis is the date at which we start dollar cost averaging (and then continue for the next 360 days in 30 day increments from that date). The y axis is the final value of our investment today.
dca.plot(figsize=(20,10))
ax = plt.subplot()
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x)))  # Y axis dollar symbols
plt.title('Dollar Cost Averaging - Value today of $10,000 invested on date')
plt.xlabel('')
plt.ylabel('Investment Value ($)');
Interesting! DCA looks really smooth, and the graph sits really high up, so it must be better, right? Wait, no: the Y axis is different. In fact, its highest high is around \$28,000, compared to the lump sum's \$32,500. Let's look at the ideal/worst investment dates for DCA; I include the lump sum from before as well.
print("Lump sum")
print(" Crash - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmax().strftime('%b %d, %Y'), lumpsum.max()))
print("Bubble - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum[:-1500].idxmin().strftime('%b %d, %Y'), lumpsum[:-1500].min()))
print("Recent - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmin().strftime('%b %d, %Y'), lumpsum.min()))

print("\nDollar Cost Averaging")
print(" Crash - Investing $10,000 on {} would be worth ${:,.2f} today.".format(dca.idxmax().strftime('%b %d, %Y'), dca.max()))
print("Bubble - Investing $10,000 on {} would be worth ${:,.2f} today.".format(dca[:-1500].idxmin().strftime('%b %d, %Y'), dca[:-1500].min()))
print("Recent - Investing $10,000 on {} would be worth ${:,.2f} today.".format(dca.idxmin().strftime('%b %d, %Y'), dca.min()))
Looking at dollar cost averaging, the best day to start was July 12, 2002, when we were still recovering from the tech crash. The worst day to start was around the peak of the 2007 bubble on Jan 26, 2007, and the absolute worst would have been to start last year on Jan 20, 2015. We can already see some similarities between lump sum and DCA: DCA appears to have lower highs, but also higher lows. It's difficult to compare just by looking at numbers; we need to compare the two strategies visually, side by side.

Comparison of Lump Sum vs Dollar Cost Averaging
So we've just individually tested two investing strategies exhaustively on every possible day in the last 16 years. Let's plot three charts on top of each other: the raw SPY stock price over the years on top, then both lump sum and DCA plotted together in the middle, and finally the difference between them, $diff = lumpsum - DCA$, on the bottom.
# Difference between lump sum and DCA
diff = (lumpsum - dca)
diff.name = 'Difference (Lump Sum - DCA)'

fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, figsize=(20,15))

# SPY Actual
spy['Adj Close'].plot(ax=ax1)
ax1.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x)))  # Y axis in dollars
ax1.set_xlabel('')
ax1.set_title('SPY Historical Stock Price')
ax1.set_ylabel('Stock Value ($)')

# Comparison
dca.plot(ax=ax2)
lumpsum.plot(ax=ax2)
ax2.axhline(initial_investment, alpha=0.5, linestyle="--", color="black")
ax2.text(spy.index[50], initial_investment*1.1, "Initial Investment")
# ax2.axhline(conservative, alpha=0.5, linestyle="--", color="black")
# ax2.text(spy.index[-800], conservative*1.05, "Conservative Investing Strategy")
ax2.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}K'.format(x*1e-3)))  # Y axis $1,000s
ax2.legend()
ax2.set_title('Comparison Lump Sum vs. Dollar Cost Averaging - Value of $10,000 invested on date')
ax2.set_ylabel('Investment Value ($)')

# Difference
ax3.fill_between(diff.index, y1=diff, y2=0, color='blue', where=diff>0)
ax3.fill_between(diff.index, y1=diff, y2=0, color='red', where=diff<0)
ax3.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}K'.format(x*1e-3)))  # Y axis $1,000s
ax3.set_ylabel('Difference ($)')
ax3.set_title('Difference (Lump Sum - Dollar Cost Average)')
ax3.legend(["Lump Sum > DCA", "DCA > Lump Sum"]);
Before we start comparing, definitely take note of the middle chart and where the initial investment of \$10k sits. Notice that if we had invested using either strategy, at any point before two years ago, no matter which bubble or crash, we'd have made some pretty huge returns on our investments, doubling and tripling at some points. This is the power of compound interest.

Looking at the DCA curve, we do see the same two humps we saw with the lump sum, but the curve is both smoother and lags behind it. This makes perfect sense, as we're taking a kind of moving average of the stock price over a year (in 30D increments) when we buy, instead of buying on a single date. As a result, our investment with DCA is less volatile (smoother) and lags behind (averages in previous investments) the lump sum values.

The difference line shows a positive dollar value for how much more investing in one lump sum would return versus dollar cost averaging in (blue). Similarly, a negative value shows how much more dollar cost averaging would return versus a lump sum (red). The chart shows a wide swing around 2002 and 2009 between the two strategies, but elsewhere it's mostly positive (blue), suggesting lump sum tends to return a bit more overall. Let's look at the actual percentage of days where the values are positive (i.e. where lump sum returns more).
print("Lump sum returns more than DCA %.1f%% of all the days" % (100*sum(diff>0)/len(diff)))
print("DCA returns more than Lump sum %.1f%% of all the days" % (100*sum(diff<0)/len(diff)))
Remarkable! So 66.3% of the time, lump sum results in a higher final investment value than our monthly dollar cost averaging strategy. Almost dead on the 66% claimed by the Investopedia article I'd read. But maybe this isn't the whole story: perhaps lump sum returned a little better than DCA most of the time, but in the really bad times DCA did much better? One way to check is to compare the average amount by which lump sum improves when it is better, versus the average amount by which DCA improves when it is better.
print("Mean difference: Average dollar improvement lump sum returns vs. dca: ${:,.2f}".format(sum(diff) / len(diff)))
print("Mean difference when lump sum > dca: ${:,.2f}".format(sum(diff[diff>0]) / sum(diff>0)))
print("Mean difference when dca > lump sum: ${:,.2f}".format(sum(-diff[diff<0]) / sum(diff<0)))
Lumpsum_vs_DCA.ipynb
Elucidation/lumpsum_vs_dca
apache-2.0
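The mechanics of the two strategies can be reproduced in miniature on a toy price series; the numbers below are made up, purely to illustrate how lump sum and DCA diverge, and are not from the notebook's data:

```python
import numpy as np

# Toy monthly closing prices: a dip followed by a recovery (illustrative only).
prices = np.array([100.0, 90.0, 110.0, 120.0])
cash = 1200.0

# Lump sum: buy everything at the first price.
lump_value = (cash / prices[0]) * prices[-1]

# DCA: split the cash evenly and buy at each month's price.
dca_shares = (cash / len(prices)) / prices
dca_value = dca_shares.sum() * prices[-1]

print(round(lump_value, 2), round(dca_value, 2))  # 1440.0 1387.27
```

On this particular rising series the lump sum wins, because DCA's average purchase price ends up above the starting price; on a falling series the ordering flips, which is exactly the swing the difference chart shows around 2002 and 2009.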
Training examples

Training examples are defined through the Imageset class of TRIOSlib. The class defines a list of tuples with pairs of input and desired output image paths, and an (optional) binary image mask (see http://trioslib.sourceforge.net/index.html for more details). In this case, we use a training set with two training examples, and for each example, we define the mask as being the input image, that is, the operator is applied at each white pixel of the input image:
train_imgs = trios.Imageset.read('images/train_images.set')

for i in range(len(train_imgs)):
    print("sample %d:" % (i + 1))
    print("\t input: %s" % train_imgs[i][0])
    print("\t desired output: %s" % train_imgs[i][1])
    print("\t mask: %s\n" % train_imgs[i][2])

print("The first pair of input and output examples:")
fig = plt.figure(1, figsize=(15, 15))
img = mpimg.imread(train_imgs[0][0])
fig.add_subplot(121)
plt.imshow(img, cmap=cm.gray)
plt.title('Input')
img_gt = mpimg.imread(train_imgs[0][1])
fig.add_subplot(122)
plt.title('Desired output')
plt.imshow(img_gt, cmap=cm.gray)
cnn_trioslib.ipynb
fjulca-aguilar/DeepTRIOS
gpl-3.0
Training

We define a CNN architecture through the CNN_TFClassifier class. The classifier requires the input image shape and number of outputs for initialization. We define the input shape according to the patches extracted from the images, in this example, 19x19, and use a single sigmoid output unit for binary classification: text and non-text classes. Additional (optional) parameters include: learning_rate (default 1e-4), dropout_prob (default 0.5), and output_activation (default 'sigmoid').
patch_side = 19
num_outputs = 1
win = np.ones((patch_side, patch_side), np.uint8)
cnn_classifier = CNN_TFClassifier((patch_side, patch_side, 1), num_outputs,
                                  num_epochs=10, model_dir='cnn_text_segmentation')
op_tf = trios.WOperator(win, TFClassifier(cnn_classifier), RAWFeatureExtractor, batch=True)
op_tf.train(train_imgs)
cnn_trioslib.ipynb
fjulca-aguilar/DeepTRIOS
gpl-3.0
Applying the operator to a new image
test_img = sp.ndimage.imread('images/veja11.sh50.png', mode='L')
out_img = op_tf.apply(test_img, test_img)

fig = plt.figure(2, figsize=(15, 15))
fig.add_subplot(121)
plt.imshow(test_img, cmap=cm.gray)
plt.title('Input')
fig.add_subplot(122)
plt.imshow(out_img, cmap=cm.gray)
plt.title('CNN output')
cnn_trioslib.ipynb
fjulca-aguilar/DeepTRIOS
gpl-3.0
Download and process data

In this section we'll:
* Download the wine quality data directly from UCI Machine Learning
* Read it into a Pandas dataframe and preview it
* Split the data and labels into train and test sets
!wget 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv'

data = pd.read_csv('winequality-white.csv', index_col=False, delimiter=';')
data = shuffle(data, random_state=4)
data.head()

labels = data['quality']
print(labels.value_counts())
data = data.drop(columns=['quality'])

train_size = int(len(data) * 0.8)
train_data = data[:train_size]
train_labels = labels[:train_size]
test_data = data[train_size:]
test_labels = labels[train_size:]
train_data.head()
keras_sklearn_compare_caip_e2e.ipynb
PAIR-code/what-if-tool
apache-2.0
Train tf.keras model

In this section we'll:
* Build a regression model using tf.keras to predict a wine's quality score
* Train the model
* Add a layer to the model to prepare it for serving
# This is the size of the array we'll be feeding into our model for each wine example
input_size = len(train_data.iloc[0])
print(input_size)

model = Sequential()
model.add(Dense(200, input_shape=(input_size,), activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1))

model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()

model.fit(train_data.values, train_labels.values, epochs=4, batch_size=32, validation_split=0.1)
keras_sklearn_compare_caip_e2e.ipynb
PAIR-code/what-if-tool
apache-2.0
Deploy keras model to Cloud AI Platform

In this section we'll:
* Set up some global variables for our GCP project
* Add a serving layer to our model so we can deploy it on Cloud AI Platform
* Run the deploy command to deploy our model
* Generate a test prediction on our deployed model
# Update these to your own GCP project + model names
GCP_PROJECT = 'your_gcp_project'
KERAS_MODEL_BUCKET = 'gs://your_storage_bucket'
KERAS_VERSION_NAME = 'v1'

# Add the serving input layer below in order to serve our model on AI Platform
class ServingInput(tf.keras.layers.Layer):
    # the important detail in this boilerplate code is "trainable=False"
    def __init__(self, name, dtype, batch_input_shape=None):
        super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype,
                                           batch_input_shape=batch_input_shape)

    def get_config(self):
        return {
            'batch_input_shape': self._batch_input_shape,
            'dtype': self.dtype,
            'name': self.name
        }

restored_model = model

serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, input_size)))
serving_model.add(restored_model)
tf.contrib.saved_model.save_keras_model(serving_model, os.path.join(KERAS_MODEL_BUCKET, 'keras_export'))  # export the model to your GCS bucket
export_path = KERAS_MODEL_BUCKET + '/keras_export'

# Configure gcloud to use your project
!gcloud config set project $GCP_PROJECT

# Create a new model in our project, you only need to run this once
!gcloud ai-platform models create keras_wine

# Deploy the model to Cloud AI Platform
!gcloud beta ai-platform versions create $KERAS_VERSION_NAME --model keras_wine \
--origin=$export_path \
--python-version=3.5 \
--runtime-version=1.14 \
--framework='TENSORFLOW'

%%writefile predictions.json
[7.8, 0.21, 0.49, 1.2, 0.036, 20.0, 99.0, 0.99, 3.05, 0.28, 12.1]

# Test the deployed model on an example from our test set
# The correct score for this prediction is 7
prediction = !gcloud ai-platform predict --model=keras_wine --json-instances=predictions.json --version=$KERAS_VERSION_NAME
print(prediction[1])
keras_sklearn_compare_caip_e2e.ipynb
PAIR-code/what-if-tool
apache-2.0
Build and train Scikit learn model

In this section we'll:
* Train a regression model using Scikit Learn
* Save the model to a local file using pickle
SKLEARN_VERSION_NAME = 'v1'
SKLEARN_MODEL_BUCKET = 'gs://sklearn_model_bucket'

scikit_model = LinearRegression().fit(train_data.values, train_labels.values)

# Export the model to a local file using pickle
pickle.dump(scikit_model, open('model.pkl', 'wb'))
keras_sklearn_compare_caip_e2e.ipynb
PAIR-code/what-if-tool
apache-2.0
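As a quick sanity check (a standalone sketch, not part of the notebook's deployment flow), a pickled scikit-learn model can be loaded back and should reproduce the original model's predictions exactly:

```python
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

# Tiny synthetic regression problem (illustrative only, not the wine data).
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 5.0])

model = LinearRegression().fit(X, y)

# Round-trip the model through pickle, as done for deployment above.
with open('model_check.pkl', 'wb') as f:
    pickle.dump(model, f)
with open('model_check.pkl', 'rb') as f:
    restored = pickle.load(f)

print(np.allclose(model.predict(X), restored.predict(X)))  # True
```

Doing this check locally before copying `model.pkl` to Cloud Storage catches serialization problems before they show up as opaque serving errors.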
Deploy Scikit model to CAIP

In this section we'll:
* Copy our saved model file to Cloud Storage
* Run the gcloud command to deploy our model
* Generate a prediction on our deployed model
# Copy the saved model to Cloud Storage
!gsutil cp ./model.pkl gs://wine_sklearn/model.pkl

# Create a new model in our project, you only need to run this once
!gcloud ai-platform models create sklearn_wine

!gcloud beta ai-platform versions create $SKLEARN_VERSION_NAME --model=sklearn_wine \
--origin=$SKLEARN_MODEL_BUCKET \
--runtime-version=1.14 \
--python-version=3.5 \
--framework='SCIKIT_LEARN'

# Test the model using the same example instance from above
!gcloud ai-platform predict --model=sklearn_wine --json-instances=predictions.json --version=$SKLEARN_VERSION_NAME
keras_sklearn_compare_caip_e2e.ipynb
PAIR-code/what-if-tool
apache-2.0
Compare tf.keras and Scikit models with the What-if Tool

Now we're ready for the What-if Tool! In this section we'll:
* Create an array of our test examples with their ground truth values. The What-if Tool works best when we send the actual values for each example input.
* Instantiate the What-if Tool using the set_compare_ai_platform_model method. This lets us compare 2 models deployed on Cloud AI Platform.
# Create np array of test examples + their ground truth labels
test_examples = np.hstack((test_data[:200].values, test_labels[:200].values.reshape(-1, 1)))
print(test_examples.shape)

# Create a What-if Tool visualization, it may take a minute to load
# See the cell below this for exploration ideas
# We use `set_predict_output_tensor` here because our tf.keras model returns a dict with a 'sequential' key
config_builder = (WitConfigBuilder(test_examples.tolist(), data.columns.tolist() + ['quality'])
                  .set_ai_platform_model(GCP_PROJECT, 'keras_wine', KERAS_VERSION_NAME)
                  .set_predict_output_tensor('sequential')
                  .set_uses_predict_api(True)
                  .set_target_feature('quality')
                  .set_model_type('regression')
                  .set_compare_ai_platform_model(GCP_PROJECT, 'sklearn_wine', SKLEARN_VERSION_NAME))
WitWidget(config_builder, height=800)
keras_sklearn_compare_caip_e2e.ipynb
PAIR-code/what-if-tool
apache-2.0
Projections

Bipartite graphs can be projected down onto one of their two node sets. For example, we can generate a person-person graph from the person-crime graph, by declaring that two nodes that share a crime node are in fact joined by an edge.

Exercise

Find the bipartite projection function in the NetworkX bipartite module docs, and use it to obtain the unipartite projection of the bipartite graph. (5 min.)
from networkx.algorithms import bipartite

# The original cell conflated two statements; a plausible reconstruction,
# assuming the bipartite graph G labels person nodes with a 'bipartite'
# attribute set to 'person' (adjust to your data):
person_nodes = [n for n, d in G.nodes(data=True) if d['bipartite'] == 'person']
pG = bipartite.projected_graph(G, person_nodes)  # pG: the person-person projection
list(pG.nodes(data=True))[0:5]
archive/6-bipartite-graphs-student.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Exercise

Try visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections. (5 min.)

Again, recapping the Circos Plot API:

```python
c = CircosPlot(graph_object,
               node_color='metadata_key1',
               node_grouping='metadata_key2',
               node_order='metadata_key3')
c.draw()
plt.show()  # or plt.savefig('...')
```
for n, d in pG.nodes(data=True):
    ____________________

c = CircosPlot(______, node_color=_________, node_grouping=_________, node_order=__________)
_________
plt.savefig('images/crime-person.png', dpi=300)
archive/6-bipartite-graphs-student.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Exercise

Use a similar logic to extract crime links. (2 min.)
crime_nodes = _________
cG = _____________  # cG stands for "crime graph"
archive/6-bipartite-graphs-student.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Exercise

Can you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections. (5 min.)
for n in cG.nodes():
    ___________

c = CircosPlot(___________)
___________
plt.savefig('images/crime-crime.png', dpi=300)
archive/6-bipartite-graphs-student.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Exercise

NetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis.

Try the following challenges, referring to the API documentation to help you:

* Which crimes have the most people involved?
* Which people are involved in the most crimes?

Exercise total: 5 min.
# Degree Centrality
bpdc = _______________________
sorted(___________, key=lambda x: ___, reverse=True)
archive/6-bipartite-graphs-student.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Using the Meta-Dataset Data Pipeline

This notebook shows how to use meta_dataset's input pipeline to sample data for the Meta-Dataset benchmark. There are two main ways in which data is sampled:

1. episodic: Returns N-way classification episodes, which contain a support (training) set and a query (test) set. The number of classes (N) may vary from episode to episode.
2. batch: Returns batches of images and their corresponding label, sampled from all available classes.

We first import meta_dataset and other required packages, and define utility functions for visualization. We'll make use of meta_dataset.data.learning_spec and meta_dataset.data.pipeline; their purpose will be made clear later on.
#@title Imports and Utility Functions
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
from collections import Counter

import gin
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

from meta_dataset.data import config
from meta_dataset.data import dataset_spec as dataset_spec_lib
from meta_dataset.data import learning_spec
from meta_dataset.data import pipeline


def plot_episode(support_images, support_class_ids, query_images,
                 query_class_ids, size_multiplier=1, max_imgs_per_col=10,
                 max_imgs_per_row=10):
  for name, images, class_ids in zip(('Support', 'Query'),
                                     (support_images, query_images),
                                     (support_class_ids, query_class_ids)):
    n_samples_per_class = Counter(class_ids)
    n_samples_per_class = {k: min(v, max_imgs_per_col)
                           for k, v in n_samples_per_class.items()}
    id_plot_index_map = {k: i for i, k in enumerate(n_samples_per_class.keys())}
    num_classes = min(max_imgs_per_row, len(n_samples_per_class.keys()))
    max_n_sample = max(n_samples_per_class.values())
    figwidth = max_n_sample
    figheight = num_classes
    if name == 'Support':
      print('#Classes: %d' % len(n_samples_per_class.keys()))
    figsize = (figheight * size_multiplier, figwidth * size_multiplier)
    fig, axarr = plt.subplots(figwidth, figheight, figsize=figsize)
    fig.suptitle('%s Set' % name, size='20')
    fig.tight_layout(pad=3, w_pad=0.1, h_pad=0.1)
    reverse_id_map = {v: k for k, v in id_plot_index_map.items()}
    for i, ax in enumerate(axarr.flat):
      ax.patch.set_alpha(0)
      # Print the class ids, this is needed since, we want to set the x axis
      # even there is no picture.
      ax.set(xlabel=reverse_id_map[i % figheight], xticks=[], yticks=[])
      ax.label_outer()
    for image, class_id in zip(images, class_ids):
      # First decrement by one to find last spot for the class id.
      n_samples_per_class[class_id] -= 1
      # If class column is filled or not represented: pass.
      if (n_samples_per_class[class_id] < 0 or
          id_plot_index_map[class_id] >= max_imgs_per_row):
        continue
      # If width or height is 1, then axarr is a vector.
      if axarr.ndim == 1:
        ax = axarr[n_samples_per_class[class_id]
                   if figheight == 1 else id_plot_index_map[class_id]]
      else:
        ax = axarr[n_samples_per_class[class_id],
                   id_plot_index_map[class_id]]
      ax.imshow(image / 2 + 0.5)
    plt.show()


def plot_batch(images, labels, size_multiplier=1):
  num_examples = len(labels)
  figwidth = np.ceil(np.sqrt(num_examples)).astype('int32')
  figheight = num_examples // figwidth
  figsize = (figwidth * size_multiplier, (figheight + 1.5) * size_multiplier)
  _, axarr = plt.subplots(figwidth, figheight, dpi=300, figsize=figsize)
  for i, ax in enumerate(axarr.transpose().ravel()):
    # Images are between -1 and 1.
    ax.imshow(images[i] / 2 + 0.5)
    ax.set(xlabel=labels[i], xticks=[], yticks=[])
  plt.show()
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
Primers

Download your data and process it as explained in link. Set BASE_PATH to point at the processed tf-records ($RECORDS in the conversion instructions).

meta_dataset supports many different settings for sampling data. We use gin-config to control default parameters of our functions. You can go to the default gin file we point to and see the default values. You can use meta_dataset in eager or graph mode.

Let's write a generator that makes the right calls to return data from the dataset. dataset.make_one_shot_iterator() returns an iterator where each element is an episode. SPLIT is used to define which part of the meta-split is going to be used. Different splits have different classes and the details on how they are created can be found in the paper.
# 1
BASE_PATH = '/path/to/records'
GIN_FILE_PATH = 'meta_dataset/learn/gin/setups/data_config.gin'

# 2
gin.parse_config_file(GIN_FILE_PATH)

# 3
# Comment out to disable eager execution.
tf.enable_eager_execution()

# 4
def iterate_dataset(dataset, n):
  if not tf.executing_eagerly():
    iterator = dataset.make_one_shot_iterator()
    next_element = iterator.get_next()
    with tf.Session() as sess:
      for idx in range(n):
        yield idx, sess.run(next_element)
  else:
    for idx, episode in enumerate(dataset):
      if idx == n:
        break
      yield idx, episode

# 5
SPLIT = learning_spec.Split.TRAIN
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
Reading datasets

In order to sample data, we need to read the dataset_spec files for each dataset. The following snippet reads those files into a list.
ALL_DATASETS = ['aircraft', 'cu_birds', 'dtd', 'fungi', 'ilsvrc_2012',
                'omniglot', 'quickdraw', 'vgg_flower']

all_dataset_specs = []
for dataset_name in ALL_DATASETS:
  dataset_records_path = os.path.join(BASE_PATH, dataset_name)
  dataset_spec = dataset_spec_lib.load_dataset_spec(dataset_records_path)
  all_dataset_specs.append(dataset_spec)
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
(1) Episodic Mode

meta_dataset uses the tf.data.Dataset API, and creating an episodic dataset takes one call to pipeline.make_multisource_episode_pipeline(). We loaded or defined most of the variables used during this call above. The remaining parameters are explained below:

* use_bilevel_ontology_list: This is a list of booleans indicating whether the corresponding dataset in ALL_DATASETS should use bilevel ontology. Omniglot is set up with a hierarchy with two levels: the alphabet (Latin, Inuktitut...), and the character (with 20 examples per character). The flag means that each episode will contain classes from a single alphabet.
* use_dag_ontology_list: This is a list of booleans indicating whether the corresponding dataset in ALL_DATASETS should use dag_ontology. Same idea for ImageNet, except it uses the hierarchical sampling procedure described in the article.
* image_size: All images from the various datasets are down- or upsampled to the same size. This flag controls the edge length of the square.
* shuffle_buffer_size: Controls the amount of shuffling among examples from any given class.
use_bilevel_ontology_list = [False] * len(ALL_DATASETS)
use_dag_ontology_list = [False] * len(ALL_DATASETS)
# Enable ontology aware sampling for Omniglot and ImageNet.
use_bilevel_ontology_list[5] = True
use_dag_ontology_list[4] = True

variable_ways_shots = config.EpisodeDescriptionConfig(
    num_query=None, num_support=None, num_ways=None)

dataset_episodic = pipeline.make_multisource_episode_pipeline(
    dataset_spec_list=all_dataset_specs,
    use_dag_ontology_list=use_dag_ontology_list,
    use_bilevel_ontology_list=use_bilevel_ontology_list,
    episode_descr_config=variable_ways_shots,
    split=SPLIT,
    image_size=84,
    shuffle_buffer_size=300)
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
Using Dataset

The episodic dataset consists of tuples of the form (Episode, data source ID). The data source ID is an integer Tensor containing a value in the range [0, len(all_dataset_specs) - 1] signifying which of the datasets of the multisource pipeline the given episode came from.

Episodes consist of support and query sets, and we want to learn to classify the images in the query set correctly given the support images. For both support and query sets we have images, labels and class_ids. Labels are class_ids remapped to start at zero, so that labels lie in [0, N) where N is the number of classes in an episode. As one can see, the number of images in the query set and support set is different. Images are scaled and copied into 84*84*3 tensors. Labels are presented in two forms:

* *_labels are relative to the classes selected for the current episode only. They are used as targets for this episode.
* *_class_ids are the original class ids relative to the whole dataset. They are used for visualization and diagnostics.

It is easy to convert the tensors of the episode into numpy arrays and use them outside of the Tensorflow framework. Classes might have different numbers of samples in the support set, whereas each class has 10 samples in the query set.
# 1
idx, (episode, source_id) = next(iterate_dataset(dataset_episodic, 1))
print('Got an episode from dataset:', all_dataset_specs[source_id].name)

# 2
for t, name in zip(episode,
                   ['support_images', 'support_labels', 'support_class_ids',
                    'query_images', 'query_labels', 'query_class_ids']):
  print(name, t.shape)

# 3
episode = [a.numpy() for a in episode]

# 4
support_class_ids, query_class_ids = episode[2], episode[5]
print(Counter(support_class_ids))
print(Counter(query_class_ids))
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
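The relabeling described above, where global *_class_ids become episode-relative *_labels starting at zero, can be illustrated with plain NumPy (a standalone sketch, independent of the pipeline; the class ids are invented):

```python
import numpy as np

# Global class ids as they might appear in an episode's support set.
class_ids = np.array([42, 7, 42, 13, 7])

# np.unique maps them onto consecutive episode-relative labels from zero.
unique_ids, labels = np.unique(class_ids, return_inverse=True)

print(labels)  # [2 0 2 1 0]
```

With N classes in the episode, every label lands in [0, N), which is what a classifier head with N outputs expects as targets.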
Visualizing Episodes

Let's visualize the episodes. The support and query sets for each episode are plotted sequentially. Set N_EPISODES to control the number of episodes visualized. Each episode is sampled from a single dataset and includes N different classes. Each class might have a different number of samples in the support set, whereas the number of images per class in the query set is fixed. We limit the number of classes and images per class to 10 in order to create legible plots. Actual episodes might have more classes and samples. Each column represents a distinct class, and dataset-specific class ids are plotted on the x axis.
# 1
N_EPISODES = 2

# 2, 3
for idx, (episode, source_id) in iterate_dataset(dataset_episodic, N_EPISODES):
  print('Episode id: %d from source %s' % (idx, all_dataset_specs[source_id].name))
  episode = [a.numpy() for a in episode]
  plot_episode(support_images=episode[0], support_class_ids=episode[2],
               query_images=episode[3], query_class_ids=episode[5])
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
(2) Batch Mode

The second mode that the meta_dataset library provides is batch mode, where one can sample batches from the list of datasets in a non-episodic manner and use them to train baseline models. There are a couple of things to note here:

1. Each batch is sampled from a different dataset.
2. ADD_DATASET_OFFSET controls whether the class_ids returned by the iterator overlap among different datasets or not. A dataset-specific offset is added in order to make the returned ids unique.
3. make_multisource_batch_pipeline() creates a tf.data.Dataset object that returns datasets of the form (Batch, data source ID) where, similarly to the episodic case, the data source ID is an integer Tensor that identifies which dataset the given batch originates from.
4. shuffle_buffer_size controls the amount of shuffling done among examples from a given dataset (unlike for the episodic pipeline).
BATCH_SIZE = 16
ADD_DATASET_OFFSET = True

dataset_batch = pipeline.make_multisource_batch_pipeline(
    dataset_spec_list=all_dataset_specs,
    batch_size=BATCH_SIZE,
    split=SPLIT,
    image_size=84,
    add_dataset_offset=ADD_DATASET_OFFSET,
    shuffle_buffer_size=1000)

for idx, ((images, labels), source_id) in iterate_dataset(dataset_batch, 1):
  print(images.shape, labels.shape)

N_BATCH = 2
for idx, (batch, source_id) in iterate_dataset(dataset_batch, N_BATCH):
  print('Batch-%d from source %s' % (idx, all_dataset_specs[source_id].name))
  plot_batch(*map(lambda a: a.numpy(), batch), size_multiplier=0.5)
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
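The dataset-specific offset behind ADD_DATASET_OFFSET can be sketched with plain Python; the per-dataset class counts below are invented for illustration:

```python
# Number of classes in each dataset (illustrative numbers).
classes_per_dataset = [3, 5, 2]

# Each dataset's labels start at 0; a running offset makes them
# globally unique across datasets.
offsets = []
total = 0
for n in classes_per_dataset:
    offsets.append(total)
    total += n

print(offsets)  # [0, 3, 8]
# Label 1 in the third dataset becomes 8 + 1 = 9 globally.
```

Without the offset, label 1 from one dataset would collide with label 1 from every other dataset, which is fine for per-episode training but not for a single shared classifier head.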
(3) Fixing Ways and Shots

The meta_dataset library provides the option to fix the number of classes/samples per episode. There are 3 main flags you can set:

* NUM_WAYS: Fixes the # classes per episode. We would still get a variable number of samples per class in the support set.
* NUM_SUPPORT: Fixes the # samples per class in the support set.
* NUM_QUERY: Fixes the # samples per class in the query set.

If we want to use fixed num_ways, we have to disable ontology-based sampling for Omniglot and ImageNet. We advise using a single dataset when using this feature, since using multiple datasets is not supported/tested. In this notebook, we are using the Quick, Draw! dataset. We sample episodes and visualize them as we did earlier.
# 1
NUM_WAYS = 8
NUM_SUPPORT = 3
NUM_QUERY = 5
fixed_ways_shots = config.EpisodeDescriptionConfig(
    num_ways=NUM_WAYS, num_support=NUM_SUPPORT, num_query=NUM_QUERY)

# 2
# One entry per dataset in dataset_spec_list (here: just Quick, Draw!).
use_bilevel_ontology_list = [False]
use_dag_ontology_list = [False]
quickdraw_spec = [all_dataset_specs[6]]

# 3
dataset_fixed = pipeline.make_multisource_episode_pipeline(
    dataset_spec_list=quickdraw_spec,
    use_dag_ontology_list=use_dag_ontology_list,
    use_bilevel_ontology_list=use_bilevel_ontology_list,
    split=SPLIT,
    image_size=84,
    episode_descr_config=fixed_ways_shots)

N_EPISODES = 2
for idx, (episode, source_id) in iterate_dataset(dataset_fixed, N_EPISODES):
  print('Episode id: %d from source %s' % (idx, quickdraw_spec[source_id].name))
  episode = [a.numpy() for a in episode]
  plot_episode(support_images=episode[0], support_class_ids=episode[2],
               query_images=episode[3], query_class_ids=episode[5])
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
(4) Using Meta-dataset with PyTorch

As mentioned above it is super easy to consume meta_dataset as NumPy arrays. This also enables easy integration into other popular deep learning frameworks like PyTorch. TensorFlow code processes the data and passes it to PyTorch, ready to be consumed. Since the data loader and processing steps do not have any operation on the GPU, TF should not attempt to grab the GPU, and it should be available for PyTorch.

1. Let's use an episodic dataset created earlier, dataset_episodic, and build on top of it. We will transpose tensors to CHW, which is the common order used by convolutional layers of PyTorch.
2. We will use zero-indexed labels, therefore grabbing e[1] and e[4]. At the end we return a generator that consumes the tf.Dataset.
3. Using .cuda() on PyTorch tensors should distribute them to appropriate devices.
import torch

# 1
to_torch_labels = lambda a: torch.from_numpy(a.numpy()).long()
to_torch_imgs = lambda a: torch.from_numpy(np.transpose(a.numpy(), (0, 3, 1, 2)))

# 2
def data_loader(n_batches):
  for i, (e, _) in enumerate(dataset_episodic):
    if i == n_batches:
      break
    yield (to_torch_imgs(e[0]), to_torch_labels(e[1]),
           to_torch_imgs(e[3]), to_torch_labels(e[4]))

for i, batch in enumerate(data_loader(n_batches=2)):
  # 3
  data_support, labels_support, data_query, labels_query = [x.cuda() for x in batch]
  print(data_support.shape, labels_support.shape,
        data_query.shape, labels_query.shape)
Intro_to_Metadataset.ipynb
google-research/meta-dataset
apache-2.0
Add to the function to allow amplitude to be varied and add in an additional slider to vary both f and a. You may want to limit ylim.
interact(pltsin, f=(1, 10, 0.2), x=(1, 10, 0.2))

def pltsina(f, a):
    plt.plot(x, a*sin(2*pi*x*f))
    plt.ylim(-10.5, 10.5)

interact(pltsina, f=(1, 10, 0.2), a=(1, 10, 0.2))
Basemap-final.ipynb
WillRhB/PythonLesssons
mit
Climate data
f = Dataset('ncep-data/air.sig995.2013.nc')  # get individual data set out of the right folder
air = f.variables['air']  # get variable
plt.imshow(air[0,:,:])  # display first timestep

# Create function to browse through the days
def sh(time):
    plt.imshow(air[time,:,:])

# Now make it interactive
interact(sh, time=(0, 355, 1))

# Browse variable
def sh(time=0, var='air', year='2013'):
    f = Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')
    vv = f.variables[var]
    plt.imshow(vv[time,:,:])

# Give a list of variables
variabs = ['air', 'uwnd', 'vwnd', 'rhum']
year = ['2013', '2014', '2015']

# Now interact with it
interact(sh, time=(0, 355, 1), year=year, var=variabs)

help(sh)

from mpl_toolkits.basemap import Basemap

# create north polar stereographic projection
m = Basemap(projection='npstere', boundinglat=60, lon_0=0, resolution='l')
m.fillcontinents(color='gray', lake_color='gray')
m.drawparallels(arange(-80., 81., 20.))
m.drawmeridians(arange(-180., 181., 20.))
m.drawmapboundary(fill_color='white')

# Set up some variables
lon = f.variables['lon'][:]
lat = f.variables['lat'][:]
lon, lat = meshgrid(lon, lat)
x, y = m(lon, lat)

def sh(time=0, var='air', year='2013'):
    f = Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')
    vv = f.variables[var]
    tt = f.variables['time']
    dd = num2date(tt[time], tt.units)
    m.fillcontinents(color='gray', lake_color='gray')
    m.drawparallels(arange(-80., 81., 20.))
    m.drawmeridians(arange(-180., 181., 20.))
    m.drawmapboundary(fill_color='white')
    cs = m.contourf(x, y, vv[time,:,:] - 273.15)

interact(sh, year=year, time=(0, 355, 1), var=variabs)

my_map = Basemap(projection='merc', lat_0=0, lon_0=30,
                 resolution='h', area_thresh=1000.0,
                 llcrnrlon=29, llcrnrlat=-1,
                 urcrnrlon=31, urcrnrlat=1)
# area threshold states how rivers etc look - scale, resolution sets resolution, llcrnlon etc sets box,
# lat and lon decide where you look
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='coral')
my_map.drawmapboundary()
my_map.drawmeridians(arange(0, 360, 30))
my_map.drawparallels(arange(-90, 90, 30))

lon = 30
lat = 0
x, y = my_map(lon, lat)
my_map.plot(x, y, 'bo', markersize=7.2)
plt.show()  # here the function call that actually plots

# This just lets the output of the following code samples
# display inline on this page, at an appropriate size
from pylab import rcParams

# Create a simple basemap
my_map = Basemap(projection='ortho', lat_0=50, lon_0=0,
                 resolution='l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red', lake_color='gray')
plt.show()
Basemap-final.ipynb
WillRhB/PythonLesssons
mit
Plotting some live (ish) earthquake data...

Download the data first: http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_week.csv

This will download a file locally; move it into your working directory. Alternatively, use the historic dataset provided in this repo.
# Check the first few lats and longs
import csv

# Open the earthquake data file.
filename = '1.0_week.csv'

# Create empty lists for the latitudes and longitudes.
lats, lons, mags = [], [], []

# Read through the entire file, skip the first line,
# and pull out just the lats and lons.
with open(filename) as f:
    # Create a csv reader object.
    reader = csv.reader(f)

    # Ignore the header row.
    next(reader)

    # Store the latitudes and longitudes in the appropriate lists.
    for row in reader:
        lats.append(float(row[1]))
        lons.append(float(row[2]))
        mags.append(float(row[4]))

# Display the first 5 lats and lons.
print('lats', lats[0:5])
print('lons', lons[0:5])
print('mags', mags[0:5])

### And now create a plot of these on a map projection
import csv

# Open the earthquake data file.
filename = '1.0_week.csv'

# Create empty lists for the latitudes and longitudes.
lats, lons, mags = [], [], []

# Read through the entire file, skip the first line,
# and pull out just the lats and lons.
with open(filename) as f:
    # Create a csv reader object.
    reader = csv.reader(f)

    # Ignore the header row.
    next(reader)

    # Store the latitudes and longitudes in the appropriate lists.
    for row in reader:
        lats.append(float(row[1]))
        lons.append(float(row[2]))
        mags.append(float(row[4]))

# --- Build Map ---
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

eq_map = Basemap(projection='robin', resolution='l', area_thresh=1000.0,
                 lat_0=52, lon_0=0)
eq_map.drawcoastlines()
eq_map.drawcountries()
eq_map.fillcontinents(color='coral')
eq_map.drawmapboundary()
eq_map.drawmeridians(np.arange(0, 360, 30))
eq_map.drawparallels(np.arange(-90, 90, 30))

min_marker_size = 1
for lon, lat, mag in zip(lons, lats, mags):
    x, y = eq_map(lon, lat)
    msize = mag * min_marker_size
    # Colour by magnitude: red for strong quakes, green for moderate,
    # yellow for the weakest.
    if mag >= 5.0:
        eqcolor = 'ro'
    elif mag >= 1.0:
        eqcolor = 'go'
    else:
        eqcolor = 'yo'
    eq_map.plot(x, y, eqcolor, markersize=msize)

plt.show()
Basemap-final.ipynb
WillRhB/PythonLesssons
mit
This is great but one cool enhancement would be to make the size of the point represent the magnitude of the earthquake. Here's one way to do it:

* Read the magnitudes into a list along with their respective lat and long
* Loop through the list, plotting one point at a time
* As the magnitudes start at 1.0, you can just use the magnitude directly as the scale factor
* To get the marker size, multiply the magnitude by the smallest dot you want on the map.

Add an extra enhancement of colour: make small earthquakes

See if you can get similar data, perhaps for Whale sightings, and plot those on a map. You might even have some of your own data to plot...
x,y
Basemap-final.ipynb
WillRhB/PythonLesssons
mit
doc2vec
%load_ext autoreload %autoreload 2 import word2vec word2vec.doc2vec('/Users/drodriguez/Downloads/alldata.txt', '/Users/drodriguez/Downloads/vectors.bin', cbow=0, size=100, window=10, negative=5, hs=0, sample='1e-4', threads=12, iter_=20, min_count=1, verbose=True)
examples/doc2vec.ipynb
chivalrousS/word2vec
apache-2.0
Predictions It is possible to load the vectors using the same wordvectors class as for a regular word2vec binary file.
%load_ext autoreload %autoreload 2 import word2vec model = word2vec.load('/Users/drodriguez/Downloads/vectors.bin') model.vectors.shape
examples/doc2vec.ipynb
chivalrousS/word2vec
apache-2.0
We can ask for words or documents similar to document 1
indexes, metrics = model.cosine('_*1') model.generate_response(indexes, metrics).tolist()
examples/doc2vec.ipynb
chivalrousS/word2vec
apache-2.0
For each publicly available version of EXIOBASE, pymrio provides a specific parser. All EXIOBASE parsers work with the zip archive (as downloaded from the EXIOBASE webpage) or the extracted data. To parse EXIOBASE 1 use:
exio1 = pymrio.parse_exiobase1(path='/tmp/mrios/exio1/zip/121016_EXIOBASE_pxp_ita_44_regions_coeff_txt.zip')
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
The parameter 'path' needs to point to either the folder with the extracted EXIOBASE1 files or the downloaded zip archive. Similarly, EXIOBASE 2 can be parsed by:
exio2 = pymrio.parse_exiobase2(path='/tmp/mrios/exio2/zip/mrIOT_PxP_ita_coefficient_version2.2.2.zip', charact=True, popvector='exio2')
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
The additional parameter 'charact' specifies if the characterization matrix provided with EXIOBASE 2 should be used. This can be specified with True or False; in addition, a custom one can be provided. In the latter case, pass the full path to the custom characterisation file to 'charact'. The parameter 'popvector' allows passing information about the population per EXIOBASE2 country. This can either be a custom vector or, if 'exio2' is passed, the one provided with pymrio. EXIOBASE 3 can be parsed by:
exio3 = pymrio.parse_exiobase3(path='/tmp/mrios/exio3/zip/exiobase3.4_iot_2009_pxp.zip')
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Currently, no characterization or population vectors are provided for EXIOBASE 3. For the rest of the tutorial we use exio2, so we delete exio1 and exio3 to free some memory:
del exio1 del exio3
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Exploring EXIOBASE After parsing an EXIOBASE version, the handling of the database is the same as for any IO. Here we use the parsed EXIOBASE2 to explore some characteristics of the EXIOBASE system. After reading the raw files, metadata about EXIOBASE can be accessed within the meta field:
exio2.meta
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Custom points can be added to the history in the meta record. For example:
exio2.meta.note("First test run of EXIOBASE 2") exio2.meta
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
To check for sectors, regions and extensions:
exio2.get_sectors() exio2.get_regions() list(exio2.get_extensions())
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Calculating the system and extension results The following command checks for missing parts in the system and calculates them. In the case of the parsed EXIOBASE this includes A, L, multipliers M, footprint accounts, etc.
exio2.calc_all()
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Exploring the results
import matplotlib.pyplot as plt plt.figure(figsize=(15,15)) plt.imshow(exio2.A, vmax=1E-3) plt.xlabel('Countries - sectors') plt.ylabel('Countries - sectors') plt.show()
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
The available impact data can be checked with:
list(exio2.impact.get_rows())
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
And to get for example the footprint of a specific impact do:
print(exio2.impact.unit.loc['global warming (GWP100)']) exio2.impact.D_cba_reg.loc['global warming (GWP100)']
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Visualizing the data
with plt.style.context('ggplot'): exio2.impact.plot_account(['global warming (GWP100)'], figsize=(15,10)) plt.show()
doc/source/notebooks/working_with_exiobase.ipynb
konstantinstadler/pymrio
gpl-3.0
Chapter 16 - The BART model of risk taking 16.1 The BART model Balloon Analogue Risk Task (BART: Lejuez et al., 2002): Every trial in this task starts by showing a balloon representing a small monetary value. The subject can then either transfer the money to a virtual bank account, or choose to pump, which adds a small amount of air to the balloon, and increases its value. There is some probability, however, that pumping the balloon will cause it to burst, causing all the money to be lost. A trial finishes when either the subject has transferred the money, or the balloon has burst. $$ \gamma^{+} \sim \text{Uniform}(0,10) $$ $$ \beta \sim \text{Uniform}(0,10) $$ $$ \omega = -\gamma^{+} \,/\,\text{log}(1-p) $$ $$ \theta_{jk} = \frac{1} {1+e^{\beta(k-\omega)}} $$ $$ d_{jk} \sim \text{Bernoulli}(\theta_{jk}) $$
p = .15 # (Belief of) bursting probability ntrials = 90 # Number of trials for the BART Data = pd.read_csv('data/GeorgeSober.txt', sep='\t') # Data.head() cash = np.asarray(Data['cash']!=0, dtype=int) npumps = np.asarray(Data['pumps'], dtype=int) options = cash + npumps d = np.full([ntrials,30], np.nan) k = np.full([ntrials,30], np.nan) # response vector for j, ipumps in enumerate(npumps): inds = np.arange(options[j],dtype=int) k[j,inds] = inds+1 if ipumps > 0: d[j,0:ipumps] = 0 if cash[j] == 1: d[j,ipumps] = 1 indexmask = np.isfinite(d) d = d[indexmask] k = k[indexmask] with pm.Model(): gammap = pm.Uniform('gammap', lower=0, upper=10, testval=1.2) beta = pm.Uniform('beta', lower=0, upper=10, testval=.5) omega = pm.Deterministic('omega', -gammap/np.log(1-p)) thetajk = 1 - pm.math.invlogit(- beta * (k - omega)) djk = pm.Bernoulli('djk', p=thetajk, observed=d) trace = pm.sample(3e3, njobs=2) pm.traceplot(trace, varnames=['gammap', 'beta']); from scipy.stats.kde import gaussian_kde burnin=2000 gammaplus = trace['gammap'][burnin:] beta = trace['beta'][burnin:] fig = plt.figure(figsize=(15, 5)) gs = gridspec.GridSpec(1, 3) ax0 = plt.subplot(gs[0]) ax0.hist(npumps, bins=range(1, 9), rwidth=.8, align='left') plt.xlabel('Number of Pumps', fontsize=12) plt.ylabel('Frequency', fontsize=12) ax1 = plt.subplot(gs[1]) my_pdf1 = gaussian_kde(gammaplus) x1=np.linspace(.5, 1, 200) ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function plt.xlim((.5, 1)) plt.xlabel(r'$\gamma^+$', fontsize=15) plt.ylabel('Posterior Density', fontsize=12) ax2 = plt.subplot(gs[2]) my_pdf2 = gaussian_kde(beta) x2=np.linspace(0.3, 1.3, 200) ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6,) # distribution function plt.xlim((0.3, 1.3)) plt.xlabel(r'$\beta$', fontsize=15) plt.ylabel('Posterior Density', fontsize=12);
CaseStudies/TheBARTModelofRiskTaking.ipynb
junpenglao/Bayesian-Cognitive-Modeling-in-Pymc3
gpl-3.0
16.2 A hierarchical extension of the BART model $$ \mu_{\gamma^{+}} \sim \text{Uniform}(0,10) $$ $$ \sigma_{\gamma^{+}} \sim \text{Uniform}(0,10) $$ $$ \mu_{\beta} \sim \text{Uniform}(0,10) $$ $$ \sigma_{\beta} \sim \text{Uniform}(0,10) $$ $$ \gamma^{+}_i \sim \text{Gaussian}(\mu_{\gamma^{+}}, 1/\sigma_{\gamma^{+}}^2) $$ $$ \beta_i \sim \text{Gaussian}(\mu_{\beta}, 1/\sigma_{\beta}^2) $$ $$ \omega_i = -\gamma^{+}_i \,/\,\text{log}(1-p) $$ $$ \theta_{ijk} = \frac{1}{1+e^{\beta_i(k-\omega_i)}} $$ $$ d_{ijk} \sim \text{Bernoulli}(\theta_{ijk}) $$
p = .15 # (Belief of) bursting probability ntrials = 90 # Number of trials for the BART Ncond = 3 dall = np.full([Ncond,ntrials,30], np.nan) options = np.zeros((Ncond,ntrials)) kall = np.full([Ncond,ntrials,30], np.nan) npumps_ = np.zeros((Ncond,ntrials)) for icondi in range(Ncond): if icondi == 0: Data = pd.read_csv('data/GeorgeSober.txt',sep='\t') elif icondi == 1: Data = pd.read_csv('data/GeorgeTipsy.txt',sep='\t') elif icondi == 2: Data = pd.read_csv('data/GeorgeDrunk.txt',sep='\t') # Data.head() cash = np.asarray(Data['cash']!=0, dtype=int) npumps = np.asarray(Data['pumps'], dtype=int) npumps_[icondi,:] = npumps options[icondi,:] = cash + npumps # response vector for j, ipumps in enumerate(npumps): inds = np.arange(options[icondi,j],dtype=int) kall[icondi,j,inds] = inds+1 if ipumps > 0: dall[icondi,j,0:ipumps] = 0 if cash[j] == 1: dall[icondi,j,ipumps] = 1 indexmask = np.isfinite(dall) dij = dall[indexmask] kij = kall[indexmask] condall = np.tile(np.arange(Ncond,dtype=int),(30,ntrials,1)) condall = np.swapaxes(condall,0,2) cij = condall[indexmask] with pm.Model() as model2: mu_g = pm.Uniform('mu_g', lower=0, upper=10) sigma_g = pm.Uniform('sigma_g', lower=0, upper=10) mu_b = pm.Uniform('mu_b', lower=0, upper=10) sigma_b = pm.Uniform('sigma_b', lower=0, upper=10) gammap = pm.Normal('gammap', mu=mu_g, sd=sigma_g, shape=Ncond) beta = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=Ncond) omega = -gammap[cij]/np.log(1-p) thetajk = 1 - pm.math.invlogit(- beta[cij] * (kij - omega)) djk = pm.Bernoulli("djk", p=thetajk, observed=dij) approx = pm.fit(n=100000, method='advi', obj_optimizer=pm.adagrad_window ) # type: pm.MeanField start = approx.sample(draws=2, include_transformed=True) trace2 = pm.sample(3e3, njobs=2, init='adapt_diag', start=list(start)) pm.traceplot(trace2, varnames=['gammap', 'beta']); burnin=1000 gammaplus = trace2['gammap'][burnin:] beta = trace2['beta'][burnin:] ylabels = ['Sober', 'Tipsy', 'Drunk'] fig = plt.figure(figsize=(15, 12)) gs = 
gridspec.GridSpec(3, 3) for ic in range(Ncond): ax0 = plt.subplot(gs[0+ic*3]) ax0.hist(npumps_[ic], bins=range(1, 10), rwidth=.8, align='left') plt.xlabel('Number of Pumps', fontsize=12) plt.ylabel(ylabels[ic], fontsize=12) ax1 = plt.subplot(gs[1+ic*3]) my_pdf1 = gaussian_kde(gammaplus[:, ic]) x1=np.linspace(.5, 1.8, 200) ax1.plot(x1, my_pdf1(x1), 'k', lw=2.5, alpha=0.6) # distribution function plt.xlim((.5, 1.8)) plt.xlabel(r'$\gamma^+$', fontsize=15) plt.ylabel('Posterior Density', fontsize=12) ax2 = plt.subplot(gs[2+ic*3]) my_pdf2 = gaussian_kde(beta[:, ic]) x2=np.linspace(0.1, 1.5, 200) ax2.plot(x2, my_pdf2(x2), 'k', lw=2.5, alpha=0.6) # distribution function plt.xlim((0.1, 1.5)) plt.xlabel(r'$\beta$', fontsize=15) plt.ylabel('Posterior Density', fontsize=12);
CaseStudies/TheBARTModelofRiskTaking.ipynb
junpenglao/Bayesian-Cognitive-Modeling-in-Pymc3
gpl-3.0
Interact with SVG display SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
s = """ <svg width="100" height="100"> <circle cx="50" cy="50" r="20" fill="aquamarine" /> </svg> """ S = SVG(s) display(S)
assignments/assignment06/InteractEx05.ipynb
nproctor/phys202-2015-work
mit
Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.

    Parameters
    ----------
    width : int
        The width of the svg drawing area in px.
    height : int
        The height of the svg drawing area in px.
    cx : int
        The x position of the center of the circle in px.
    cy : int
        The y position of the center of the circle in px.
    r : int
        The radius of the circle in px.
    fill : str
        The fill color of the circle.
    """
    s = ('<svg width="{w}" height="{h}">'
         '<circle cx="{cx}" cy="{cy}" r="{r}" fill="{fill}" />'
         '</svg>').format(w=width, h=height, cx=cx, cy=cy, r=r, fill=fill)
    S = SVG(s)
    display(S)
    print(s)

draw_circle(cx=30, cy=30, r=30, fill='plum')

assert True # leave this to grade the draw_circle function
assignments/assignment06/InteractEx05.ipynb
nproctor/phys202-2015-work
mit
ๆณจๆ„๏ผš่ฟ™้‡Œๅฆ‚ๆžœไฝฟ็”จnp.allclose็š„่ฏไผš่ฟ‡ไธไบ†assert๏ผ›ไบ‹ๅฎžไธŠ๏ผŒไป…ไป…ๆ˜ฏๅฐ†ๆ•ฐ็ป„็š„dtypeไปŽfloat64ๅ˜ๆˆfloat32ใ€็ฒพๅบฆๅฐฑไผšไธ‹้™ๅพˆๅคš๏ผŒๆฏ•็ซŸๅท็งฏๆถ‰ๅŠๅˆฐ็š„่ฟ็ฎ—ๅคชๅคš
@nb.jit(nopython=True)
def jit_conv_kernel2(x, w, rs, n, n_channels, height, width, n_filters, filter_height, filter_width, out_h, out_w):
    for i in range(n):
        for j in range(out_h):
            for p in range(out_w):
                for q in range(n_filters):
                    for r in range(n_channels):
                        for s in range(filter_height):
                            for t in range(filter_width):
                                rs[i, q, j, p] += x[i, r, j+s, p+t] * w[q, r, s, t]

assert np.allclose(conv(x, w, jit_conv_kernel, args), conv(x, w, jit_conv_kernel2, args))
%timeit conv(x, w, jit_conv_kernel, args)
%timeit conv(x, w, jit_conv_kernel2, args)
%timeit cs231n_conv(x, w, args)
Notebooks/numba/zh-cn/CNN.ipynb
carefree0910/MachineLearning
mit
ๅฏไปฅ็œ‹ๅˆฐ๏ผŒไฝฟ็”จjitๅ’Œไฝฟ็”จ็บฏnumpy่ฟ›่กŒ็ผ–็จ‹็š„ๅพˆๅคงไธ€็‚นไธๅŒๅฐฑๆ˜ฏ๏ผŒไธ่ฆ็•ๆƒง็”จfor๏ผ›ไบ‹ๅฎžไธŠไธ€่ˆฌๆฅ่ฏด๏ผŒไปฃ็ โ€œ้•ฟๅพ—่ถŠๅƒ Cโ€ใ€้€Ÿๅบฆๅฐฑไผš่ถŠๅฟซ
def max_pool_kernel(x, rs, *args):
    n, n_channels, pool_height, pool_width, out_h, out_w = args
    for i in range(n):
        for j in range(n_channels):
            for p in range(out_h):
                for q in range(out_w):
                    window = x[i, j, p:p+pool_height, q:q+pool_width]
                    rs[i, j, p, q] += np.max(window)

@nb.jit(nopython=True)
def jit_max_pool_kernel(x, rs, *args):
    n, n_channels, pool_height, pool_width, out_h, out_w = args
    for i in range(n):
        for j in range(n_channels):
            for p in range(out_h):
                for q in range(out_w):
                    window = x[i, j, p:p+pool_height, q:q+pool_width]
                    rs[i, j, p, q] += np.max(window)

@nb.jit(nopython=True)
def jit_max_pool_kernel2(x, rs, *args):
    n, n_channels, pool_height, pool_width, out_h, out_w = args
    for i in range(n):
        for j in range(n_channels):
            for p in range(out_h):
                for q in range(out_w):
                    _max = x[i, j, p, q]
                    for r in range(pool_height):
                        for s in range(pool_width):
                            _tmp = x[i, j, p+r, q+s]
                            if _tmp > _max:
                                _max = _tmp
                    rs[i, j, p, q] += _max

def max_pool(x, kernel, args):
    n, n_channels = args[:2]
    out_h, out_w = args[-2:]
    # Pooling preserves the number of channels, so allocate n_channels maps.
    rs = np.zeros([n, n_channels, out_h, out_w], dtype=np.float32)
    kernel(x, rs, *args)
    return rs

pool_height, pool_width = 2, 2
n, n_channels, height, width = x.shape
out_h = height - pool_height + 1
out_w = width - pool_width + 1
args = (n, n_channels, pool_height, pool_width, out_h, out_w)

assert np.allclose(max_pool(x, max_pool_kernel, args), max_pool(x, jit_max_pool_kernel, args))
assert np.allclose(max_pool(x, jit_max_pool_kernel, args), max_pool(x, jit_max_pool_kernel2, args))
%timeit max_pool(x, max_pool_kernel, args)
%timeit max_pool(x, jit_max_pool_kernel, args)
%timeit max_pool(x, jit_max_pool_kernel2, args)
Notebooks/numba/zh-cn/CNN.ipynb
carefree0910/MachineLearning
mit
Confirmation that the sensors are sensitive to airflow. The outlier sensor (:F8) is still there. The spike at 18:20 is probably from me holding it while wondering about heat dissipation. One of the WiFi drop-out issues got fixed (and another discovered). Applying the same guesstimated correction to the outlier sensor from the first experiment...
downsampled_f['5C:CF:7F:33:F7:F8'] += 5.0 downsampled_f.plot();
temperature/FoamCoreExperiment.ipynb
davewsmith/notebooks
mit
Vertex AI: Vertex AI Migration: Custom Scikit-Learn model with pre-built training container <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ10%20Vertex%20SDK%20Custom%20Scikit-Learn%20with%20pre-built%20training%20container.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ10%20Vertex%20SDK%20Custom%20Scikit-Learn%20with%20pre-built%20training%20container.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Dataset The dataset used for this tutorial is the UCI Machine Learning US Census Data (1990) dataset. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The dataset predicts whether a person's income will be above $50K USD. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements.
The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python.
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set pre-built containers Set the pre-built Docker container image for training and prediction. For the latest list, see Pre-built containers for training. For the latest list, see Pre-built containers for prediction.
TRAIN_VERSION = "scikit-learn-cpu.0-23" DEPLOY_VERSION = "sklearn-cpu.0-23" TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Examine the training package Package layout Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout. PKG-INFO README.md setup.cfg setup.py trainer __init__.py task.py The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image. The file trainer/task.py is the Python script for executing the custom training job. Note, when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py). Package Assembly In the following cells, you will assemble the training package.
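The slash-to-dot rule described above is ordinary Python module naming; a tiny helper (hypothetical, not part of the Vertex SDK) spells out the conversion:

```python
def script_to_module(script_path):
    """Convert a script path like 'trainer/task.py' to the module path 'trainer.task'."""
    if not script_path.endswith(".py"):
        raise ValueError("expected a path to a .py file")
    # Drop the '.py' suffix and replace directory slashes with dots.
    return script_path[:-3].replace("/", ".")

print(script_to_module("trainer/task.py"))  # trainer.task
```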
# Make folder for Python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: US Census Data (1990) tabular binary classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py %%writefile custom/trainer/task.py # Single Instance Training for Census Income from sklearn.ensemble import RandomForestClassifier import joblib from sklearn.feature_selection import SelectKBest from sklearn.pipeline import FeatureUnion from sklearn.pipeline import Pipeline from sklearn.preprocessing import LabelBinarizer import datetime import pandas as pd from google.cloud import storage import numpy as np import argparse import os import sys parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) # Public bucket holding the census data bucket = storage.Client().bucket('cloud-samples-data') # Path to the data inside the public bucket blob = bucket.blob('ai-platform/sklearn/census_data/adult.data') # Download the data blob.download_to_filename('adult.data') # Define the format of your input data including unused columns (These are the columns from the census data files) COLUMNS = ( 'age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 
'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income-level' ) # Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn CATEGORICAL_COLUMNS = ( 'workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country' ) # Load the training census dataset with open('./adult.data', 'r') as train_data: raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS) # Remove the column we are trying to predict ('income-level') from our features list # Convert the Dataframe to a lists of lists train_features = raw_training_data.drop('income-level', axis=1).values.tolist() # Create our training labels list, convert the Dataframe to a lists of lists train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist() # Since the census data set has categorical features, we need to convert # them to numerical values. We'll use a list of pipelines to convert each # categorical column and then use FeatureUnion to combine them before calling # the RandomForestClassifier. categorical_pipelines = [] # Each categorical column needs to be extracted individually and converted to a numerical value. # To do this, each categorical column will use a pipeline that extracts one feature column via # SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one. # A scores array (created below) will select and extract the feature column. The scores array is # created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN. for i, col in enumerate(COLUMNS[:-1]): if col in CATEGORICAL_COLUMNS: # Create a scores array to get the individual categorical column. 
# Example: # data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical', # 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States'] # scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] # # Returns: [['State-gov']] # Build the scores array. scores = [0] * len(COLUMNS[:-1]) # This column is the categorical column we want to extract. scores[i] = 1 skb = SelectKBest(k=1) skb.scores_ = scores # Convert the categorical column to a numerical value lbn = LabelBinarizer() r = skb.transform(train_features) lbn.fit(r) # Create the pipeline to extract the categorical feature categorical_pipelines.append( ('categorical-{}'.format(i), Pipeline([ ('SKB-{}'.format(i), skb), ('LBN-{}'.format(i), lbn)]))) # Create pipeline to extract the numerical features skb = SelectKBest(k=6) # From COLUMNS use the features that are numerical skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0] categorical_pipelines.append(('numerical', skb)) # Combine all the features using FeatureUnion preprocess = FeatureUnion(categorical_pipelines) # Create the classifier classifier = RandomForestClassifier() # Transform the features and fit them to the classifier classifier.fit(preprocess.transform(train_features), train_labels) # Create the overall model as a single pipeline pipeline = Pipeline([ ('union', preprocess), ('classifier', classifier) ]) # Split path into bucket and subdirectory bucket = args.model_dir.split('/')[2] subdirs = args.model_dir.split('/')[3:] subdir = subdirs[0] subdirs.pop(0) for comp in subdirs: subdir = os.path.join(subdir, comp) # Write model to a local file joblib.dump(pipeline, 'model.joblib') # Upload the model to GCS bucket = storage.Client().bucket(bucket) blob = bucket.blob(subdir + '/model.joblib') blob.upload_from_filename('model.joblib')
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Store training script on your Cloud Storage bucket Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_census.tar.gz
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train a model training.create-python-pre-built-container Create and run custom training job To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training job A custom training job is created with the CustomTrainingJob class, with the following parameters: display_name: The human readable name for the custom training job. container_uri: The training container image. requirements: Package requirements for the training container image (e.g., pandas). script_path: The relative path to the training script.
job = aip.CustomTrainingJob( display_name="census_" + TIMESTAMP, script_path="custom/trainer/task.py", container_uri=TRAIN_IMAGE, requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"], ) print(job)
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
general.import-model Upload the model Next, upload your model to a Model resource using the Model.upload() method, with the following parameters: display_name: The human readable name for the Model resource. artifact_uri: The Cloud Storage location of the trained model artifacts. serving_container_image_uri: The serving container image. sync: Whether to execute the upload asynchronously or synchronously. If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
model = aip.Model.upload( display_name="census_" + TIMESTAMP, artifact_uri=MODEL_DIR, serving_container_image_uri=DEPLOY_IMAGE, sync=False, ) model.wait()
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: INFO:google.cloud.aiplatform.models:Creating Model INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/759209241365/locations/us-central1/models/925164267982815232/operations/3458372263047331840 INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/759209241365/locations/us-central1/models/925164267982815232 INFO:google.cloud.aiplatform.models:To use this Model in another session: INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/759209241365/locations/us-central1/models/925164267982815232') Make batch predictions predictions.batch-prediction Make test items You will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
INSTANCES = [ [ 25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States", ], [ 38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0, 0, 50, "United-States", ], ]
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the batch input file Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form: [ [ content_1], [content_2] ] content: The feature values of the test item as a list.
import json import tensorflow as tf gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl" with tf.io.gfile.GFile(gcs_input_uri, "w") as f: for i in INSTANCES: f.write(json.dumps(i) + "\n") ! gsutil cat $gcs_input_uri
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the batch prediction request Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters: job_display_name: The human readable name for the batch prediction job. gcs_source: A list of one or more batch request input files. gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results. instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'. predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'. machine_type: The type of machine to use for the batch prediction job. sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
MIN_NODES = 1 MAX_NODES = 1 batch_predict_job = model.batch_predict( job_display_name="census_" + TIMESTAMP, gcs_source=gcs_input_uri, gcs_destination_prefix=BUCKET_NAME, instances_format="jsonl", predictions_format="jsonl", model_parameters=None, machine_type=DEPLOY_COMPUTE, starting_replica_count=MIN_NODES, max_replica_count=MAX_NODES, sync=False, ) print(batch_predict_job)
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example Output: {'instance': [25, 'Private', 226802, '11th', 7, 'Never-married', 'Machine-op-inspct', 'Own-child', 'Black', 'Male', 0, 0, 40, 'United-States'], 'prediction': False} Make online predictions predictions.deploy-model-api Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters: deployed_model_display_name: A human readable name for the deployed model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model on the deployed endpoint. The percents must add up to 100. machine_type: The type of machine to use for serving predictions. starting_replica_count: The number of compute instances to initially provision. max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
DEPLOYED_NAME = "census-" + TIMESTAMP TRAFFIC_SPLIT = {"0": 100} MIN_NODES = 1 MAX_NODES = 1 endpoint = model.deploy( deployed_model_display_name=DEPLOYED_NAME, traffic_split=TRAFFIC_SPLIT, machine_type=DEPLOY_COMPUTE, min_replica_count=MIN_NODES, max_replica_count=MAX_NODES, )
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
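The percents in traffic_split must add up to exactly 100. A quick sanity-check helper (illustrative only, not part of the Vertex SDK; the second model ID below is hypothetical):

```python
def validate_traffic_split(traffic_split):
    """Check that endpoint traffic percentages sum to exactly 100."""
    total = sum(traffic_split.values())
    if total != 100:
        raise ValueError(f"traffic_split sums to {total}, expected 100")
    return traffic_split

# A single model takes all traffic; with a second (hypothetical) model id,
# the percents are split between the two deployed models.
validate_traffic_split({"0": 100})
validate_traffic_split({"0": 60, "1234567890": 40})
```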
Example output:

INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472

Make test item

You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
INSTANCE = [ 25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States", ]
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
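The example output below comes from sending the test instance to the deployed endpoint; the call itself is not shown in this excerpt. A minimal sketch (assuming the endpoint object returned by model.deploy above; the feature count is inferred from the instance):

```python
# The census model expects 14 features per instance, in the order used above.
INSTANCE = [25, "Private", 226802, "11th", 7, "Never-married",
            "Machine-op-inspct", "Own-child", "Black", "Male",
            0, 0, 40, "United-States"]
assert len(INSTANCE) == 14

# With a live deployment, the online prediction call would be:
# prediction = endpoint.predict(instances=[INSTANCE])
# print(prediction)
```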
Example output: Prediction(predictions=[False], deployed_model_id='7220545636163125248', explanations=None)

Undeploy the model

When you are done making predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
endpoint.undeploy_all()
notebooks/official/migration/UJ10 Vertex SDK Custom Scikit-Learn with pre-built training container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Target Configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb.
# Setup target configuration my_conf = { # Target platform and board "platform" : 'linux', "board" : 'juno', "host" : '192.168.0.1', # Folder where all the results will be collected "results_dir" : "EnergyMeter_AEP", # Define devlib modules to load "modules" : ["cpufreq"], # Required by rt-app calibration "exclude_modules" : [ 'hwmon' ], # Energy Meters Configuration for ARM Energy Probe "emeter" : { "instrument" : "aep", "conf" : { # Value of the shunt resistor in Ohm 'resistor_values' : [0.099], # Device entry assigned to the probe on the host 'device_entry' : '/dev/ttyACM0', }, 'channel_map' : { 'BAT' : 'BAT' } }, # Tools required by the experiments "tools" : [ 'trace-cmd', 'rt-app' ], # Comment this line to calibrate RTApp in your own platform # "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353}, } # Initialize a test environment using: te = TestEnv(my_conf, wipe=False, force_new=True) target = te.target
ipynb/examples/energy_meter/EnergyMeter_AEP.ipynb
arnoldlu/lisa
apache-2.0
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/radiance_fields/tiny_nerf.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/projects/radiance_fields/tiny_nerf.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Setup and imports
%pip install tensorflow_graphics import matplotlib.pyplot as plt import tensorflow as tf import tensorflow.keras.layers as layers import tensorflow_graphics.projects.radiance_fields.data_loaders as data_loaders import tensorflow_graphics.projects.radiance_fields.utils as utils import tensorflow_graphics.rendering.camera.perspective as perspective import tensorflow_graphics.geometry.representation.ray as ray import tensorflow_graphics.math.feature_representation as feature_rep import tensorflow_graphics.rendering.volumetric.ray_radiance as ray_radiance
tensorflow_graphics/projects/radiance_fields/TFG_tiny_nerf.ipynb
tensorflow/graphics
apache-2.0
Please download the data from the original repository. In this tutorial we experimented with the synthetic data (lego, ship, etc.) that can be found here. Then you can either point to the files locally (if you run a custom kernel) or upload them to Google Colab.
DATASET_DIR = '/content/nerf_synthetic/' #@title Parameters batch_size = 10 #@param {type:"integer"} n_posenc_freq = 6 #@param {type:"integer"} learning_rate = 0.0005 #@param {type:"number"} n_filters = 256 #@param {type:"integer"} num_epochs = 100 #@param {type:"integer"} n_rays = 512 #@param {type:"integer"} near = 2.0 #@param {type:"number"} far = 6.0 #@param {type:"number"} ray_steps = 64 #@param {type:"integer"}
tensorflow_graphics/projects/radiance_fields/TFG_tiny_nerf.ipynb
tensorflow/graphics
apache-2.0
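The near, far, and ray_steps parameters control how depths are sampled along each camera ray; the 'stratified' strategy draws one sample uniformly inside each of ray_steps equal bins on [near, far]. A minimal NumPy sketch of that idea (not the TF-Graphics implementation):

```python
import numpy as np

def stratified_sample_1d(near, far, n_samples, seed=None):
    """Draw one depth uniformly inside each of n_samples equal bins on [near, far]."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(near, far, n_samples + 1)
    lower, upper = edges[:-1], edges[1:]
    return lower + (upper - lower) * rng.random(n_samples)

z = stratified_sample_1d(2.0, 6.0, 64, seed=0)
print(z.shape)  # (64,)
```

Because each depth stays inside its own bin, the samples are jittered but still ordered from near to far, which is what the volumetric rendering step relies on.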
Training a NeRF network
#@title Load the lego dataset { form-width: "350px" }
dataset, height, width = data_loaders.load_synthetic_nerf_dataset(
    dataset_dir=DATASET_DIR,
    dataset_name='lego',
    split='train',
    scale=0.125,
    batch_size=batch_size)

#@title Prepare the NeRF model and optimizer { form-width: "350px" }
input_dim = n_posenc_freq * 2 * 3 + 3


def get_model():
  """Tiny NeRF network."""
  with tf.name_scope("Network/"):
    input_features = layers.Input(shape=[input_dim])
    fc0 = layers.Dense(n_filters, activation=layers.ReLU())(input_features)
    fc1 = layers.Dense(n_filters, activation=layers.ReLU())(fc0)
    fc2 = layers.Dense(n_filters, activation=layers.ReLU())(fc1)
    fc3 = layers.Dense(n_filters, activation=layers.ReLU())(fc2)
    fc4 = layers.Dense(n_filters, activation=layers.ReLU())(fc3)
    fc4 = layers.concatenate([fc4, input_features], -1)
    fc5 = layers.Dense(n_filters, activation=layers.ReLU())(fc4)
    fc6 = layers.Dense(n_filters, activation=layers.ReLU())(fc5)
    fc7 = layers.Dense(n_filters, activation=layers.ReLU())(fc6)
    rgba = layers.Dense(4)(fc7)
    return tf.keras.Model(inputs=[input_features], outputs=[rgba])


model = get_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

# @title Set up the training procedure { form-width: "350px" }
@tf.function
def network_inference_and_rendering(ray_points, model):
  """Render the 3D ray points into rgb pixels.

  Args:
    ray_points: A tensor of shape `[A, B, C, 3]` where A is the batch size,
      B is the number of rays, and C is the number of samples per ray.
    model: The NeRF model to run.

  Returns:
    A tensor of shape `[A, B, 3]` with the rendered rgb values.
  """
  features_xyz = feature_rep.positional_encoding(ray_points, n_posenc_freq)
  features_xyz = tf.reshape(features_xyz, [-1, tf.shape(features_xyz)[-1]])
  rgba = model([features_xyz])
  target_shape = tf.concat([tf.shape(ray_points)[:-1], [4]], axis=-1)
  rgba = tf.reshape(rgba, target_shape)
  rgb, alpha = tf.split(rgba, [3, 1], axis=-1)
  rgb = tf.sigmoid(rgb)
  alpha = tf.nn.relu(alpha)
  rgba = tf.concat([rgb, alpha], axis=-1)
  dists = utils.get_distances_between_points(ray_points)
  rgb_render, _, _ = ray_radiance.compute_radiance(rgba, dists)
  return rgb_render


@tf.function
def train_step(ray_origin, ray_direction, gt_rgb):
  """Training step for the tiny NeRF network.

  Args:
    ray_origin: A tensor of shape `[A, B, 3]` where A is the batch size and
      B is the number of rays.
    ray_direction: A tensor of shape `[A, B, 3]` where A is the batch size and
      B is the number of rays.
    gt_rgb: A tensor of shape `[A, B, 3]` where A is the batch size and
      B is the number of rays.

  Returns:
    A scalar loss.
  """
  with tf.GradientTape() as tape:
    ray_points, _ = ray.sample_1d(
        ray_origin,
        ray_direction,
        near=near,
        far=far,
        n_samples=ray_steps,
        strategy='stratified')
    rgb = network_inference_and_rendering(ray_points, model)
    total_loss = utils.l2_loss(rgb, gt_rgb)
  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  return total_loss


for epoch in range(0, num_epochs):
  epoch_loss = 0.0
  for image, focal, principal_point, transform_matrix in dataset:
    # Prepare the rays.
    random_rays, random_pixels_xy = perspective.random_rays(
        focal, principal_point, height, width, n_rays)
    # TF-Graphics camera rays to NeRF world rays.
    random_rays = utils.change_coordinate_system(random_rays,
                                                 (0., 0., 0.),
                                                 (1., -1., -1.))
    rays_org, rays_dir = utils.camera_rays_from_transformation_matrix(
        random_rays, transform_matrix)
    random_pixels_yx = tf.reverse(random_pixels_xy, axis=[-1])
    pixels = tf.gather_nd(image, random_pixels_yx, batch_dims=1)
    pixels_rgb, _ = tf.split(pixels, [3, 1], axis=-1)
    dist_loss = train_step(rays_org, rays_dir, pixels_rgb)
    epoch_loss += dist_loss
  print('Epoch {0} loss: {1:.3f}'.format(epoch, epoch_loss))
tensorflow_graphics/projects/radiance_fields/TFG_tiny_nerf.ipynb
tensorflow/graphics
apache-2.0
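The positional_encoding call above lifts each 3D sample point to sines and cosines at geometrically increasing frequencies, which is what makes input_dim = n_posenc_freq * 2 * 3 + 3. A NumPy sketch of the idea (the exact frequency scaling in TF-Graphics may differ):

```python
import numpy as np

def positional_encoding(points, num_freq):
    """Map [..., 3] points to [..., 3 + 3 * 2 * num_freq] features."""
    features = [points]  # keep the raw coordinates
    for i in range(num_freq):
        features.append(np.sin(2.0**i * np.pi * points))
        features.append(np.cos(2.0**i * np.pi * points))
    return np.concatenate(features, axis=-1)

print(positional_encoding(np.zeros((4, 3)), 6).shape)  # (4, 39)
```

With n_posenc_freq = 6 this yields 3 + 6 * 2 * 3 = 39 features per point, matching the network's input dimension.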
Testing
# @title Load the test data test_dataset, height, width = data_loaders.load_synthetic_nerf_dataset( dataset_dir=DATASET_DIR, dataset_name='lego', split='val', scale=0.125, batch_size=1, shuffle=False) for testimg, focal, principal_point, transform_matrix in test_dataset.take(1): testimg = testimg[0, :, :, :3] img_rays, _ = perspective.random_patches( focal, principal_point, height, width, patch_height=height, patch_width=width, scale=1.0) # Break the test image into lines, so we don't run out of memory batch_rays = tf.split(img_rays, height, axis=1) output = [] for random_rays in batch_rays: random_rays = utils.change_coordinate_system(random_rays, (0., 0., 0.), (1., -1., -1.)) rays_org, rays_dir = utils.camera_rays_from_transformation_matrix( random_rays, transform_matrix) ray_points, _ = ray.sample_1d( rays_org, rays_dir, near=near, far=far, n_samples=ray_steps, strategy='stratified') rgb = network_inference_and_rendering(ray_points, model) output.append(rgb) final_image = tf.concat(output, axis=0) fig, ax = plt.subplots(1, 2) ax[0].imshow(final_image) ax[1].imshow(testimg) plt.show() loss = tf.reduce_mean(tf.square(final_image - testimg)) psnr = -10. * tf.math.log(loss) / tf.math.log(10.) print(psnr.numpy())
tensorflow_graphics/projects/radiance_fields/TFG_tiny_nerf.ipynb
tensorflow/graphics
apache-2.0
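The test cell above reports PSNR; for images scaled to [0, 1] this is just -10 * log10(MSE). As a standalone helper:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

# A uniform error of 0.01 gives MSE = 1e-4, i.e. 40 dB.
print(psnr(np.full((2, 2), 0.5), np.full((2, 2), 0.51)))
```

Higher is better; identical images would give an infinite PSNR, so the metric is only meaningful for imperfect reconstructions.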
MSTL applied to a toy dataset

Create a toy dataset with multiple seasonalities

We create a time series with hourly frequency that has a daily and a weekly seasonality, each following a sine wave. We demonstrate a more realistic example later in the notebook.
# Imports for this example (may already be loaded by an earlier setup cell).
import numpy as np
import pandas as pd

t = np.arange(1, 1000)
daily_seasonality = 5 * np.sin(2 * np.pi * t / 24)
weekly_seasonality = 10 * np.sin(2 * np.pi * t / (24 * 7))
trend = 0.0001 * t**2
y = trend + daily_seasonality + weekly_seasonality + np.random.randn(len(t))
ts = pd.date_range(start="2020-01-01", freq="H", periods=len(t))
df = pd.DataFrame(data=y, index=ts, columns=["y"])
df.head()
examples/notebooks/mstl_decomposition.ipynb
bashtage/statsmodels
bsd-3-clause
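As a quick sanity check that the toy series really contains the two intended periods, the dominant frequencies can be recovered with an FFT after removing the quadratic trend (illustrative only; MSTL does not work this way):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 1000)
y = (0.0001 * t**2
     + 5 * np.sin(2 * np.pi * t / 24)
     + 10 * np.sin(2 * np.pi * t / (24 * 7))
     + rng.standard_normal(len(t)))

# Remove the quadratic trend, then look at the spectrum.
detrended = y - np.polyval(np.polyfit(t, y, 2), t)
freqs = np.fft.rfftfreq(len(t), d=1.0)  # cycles per hour
power = np.abs(np.fft.rfft(detrended))
top = freqs[np.argsort(power)[-2:]]     # two strongest frequencies
periods = sorted(1 / top)
print(periods)  # approximately [24, 168] hours (daily and weekly)
```

The recovered periods are only approximate because the series length is not an exact multiple of either period, but they land close to 24 and 168 hours.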
Let's plot the time series
df["y"].plot(figsize=[10, 5])
examples/notebooks/mstl_decomposition.ipynb
bashtage/statsmodels
bsd-3-clause
Decompose the toy dataset with MSTL Let's use MSTL to decompose the time series into a trend component, daily and weekly seasonal component, and residual component.
from statsmodels.tsa.seasonal import MSTL

mstl = MSTL(df["y"], periods=[24, 24 * 7])  # daily and weekly periods, in hours
res = mstl.fit()
examples/notebooks/mstl_decomposition.ipynb
bashtage/statsmodels
bsd-3-clause
If the input is a pandas DataFrame then the output for the seasonal component is a DataFrame. The period for each component is reflected in the column names.
res.seasonal.head() ax = res.plot()
examples/notebooks/mstl_decomposition.ipynb
bashtage/statsmodels
bsd-3-clause
We see that the daily and weekly seasonal components have been extracted.

Any of the STL parameters other than period and seasonal (as they are set by periods and windows in MSTL) can also be set by passing arg:value pairs as a dictionary to stl_kwargs (we will show this in an example now).

Here we show that we can still set the trend smoother of STL via trend and the order of the polynomial for the seasonal fit via seasonal_deg. We will also explicitly set the windows, seasonal_deg, and iterate parameters. We will get a worse fit but this is just an example of how to pass these parameters to the MSTL class.
mstl = MSTL( df, periods=[24, 24 * 7], # The periods and windows must be the same length and will correspond to one another. windows=[101, 101], # Setting this large along with `seasonal_deg=0` will force the seasonality to be periodic. iterate=3, stl_kwargs={ "trend":1001, # Setting this large will force the trend to be smoother. "seasonal_deg":0, # Means the seasonal smoother is fit with a moving average. } ) res = mstl.fit() ax = res.plot()
examples/notebooks/mstl_decomposition.ipynb
bashtage/statsmodels
bsd-3-clause
MSTL applied to electricity demand dataset

Prepare the data

We will use the Victoria electricity demand dataset found here: https://github.com/tidyverts/tsibbledata/tree/master/data-raw/vic_elec. This dataset is used in the original MSTL paper [1]. It is the total electricity demand at a half-hourly granularity for the state of Victoria in Australia from 2002 to the start of 2015. A more detailed description of the dataset can be found here.
url = "https://raw.githubusercontent.com/tidyverts/tsibbledata/master/data-raw/vic_elec/VIC2015/demand.csv" df = pd.read_csv(url) df.head()
examples/notebooks/mstl_decomposition.ipynb
bashtage/statsmodels
bsd-3-clause
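Before running MSTL, the raw file needs a proper DatetimeIndex; at the native half-hourly granularity the periods would be [48, 48 * 7] (or [24, 24 * 7] after resampling to hourly). A hedged sketch, assuming the csv's Date column counts days from the Excel-style origin 1899-12-30 and Period indexes the half-hour of the day, demonstrated on a tiny synthetic frame rather than the real download:

```python
import pandas as pd

def prepare_demand(df):
    """Build a DatetimeIndex from Date/Period columns and resample demand to hourly."""
    timestamp = (pd.Timestamp("1899-12-30")
                 + pd.to_timedelta(df["Date"], unit="D")
                 + pd.to_timedelta((df["Period"] - 1) * 30, unit="min"))
    demand = pd.Series(df["OperationalLessIndustrial"].to_numpy(),
                       index=timestamp, name="demand")
    return demand.resample("h").sum()

# Tiny synthetic frame with the assumed schema: one day of half-hours.
sample = pd.DataFrame({
    "Date": [37257] * 48,              # hypothetical day offset
    "Period": list(range(1, 49)),
    "OperationalLessIndustrial": [1.0] * 48,
})
hourly = prepare_demand(sample)
print(len(hourly))  # 24
```

Summing pairs of half-hours into hourly totals keeps total demand unchanged while halving the seasonal period lengths passed to MSTL.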