In SHARPpy, Profile objects have quality control checks built into them to alert the user to bad data and to prevent the program from crashing in computational routines. For example, upon construction of the Profile object, SHARPpy will check for unrealistic values (i.e. dewpoint or temperature below abso...

```python
import matplotlib.pyplot as plt

plt.plot(prof.tmpc, prof.hght, 'r-')
plt.plot(prof.dwpc, prof.hght, 'g-')
#plt.barbs(40*np.ones(len(prof.hght)), prof.hght, prof.u, prof.v)
plt.xlabel("Temperature [C]")
plt.ylabel("Height [m above MSL]")
plt.grid()
plt.show()
```
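As an illustration of the kind of check described (not SHARPpy's actual QC code), one way to flag physically impossible values is to mask anything below absolute zero with NumPy masked arrays:

```python
import numpy as np

# Illustrative sketch only: mask temperature/dewpoint readings
# below absolute zero (-273.15 C), including sentinel fill values
tmpc = np.ma.masked_invalid([25.3, 18.7, -9999.0, -275.0])
tmpc = np.ma.masked_where(tmpc < -273.15, tmpc)
print(tmpc.count())  # number of values that survive the check
```

Downstream routines can then operate on the masked array without tripping over the bad levels.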
Source: tutorials/SHARPpy_basics.ipynb (djgagne/SHARPpy, bsd-3-clause)
SHARPpy Profile objects keep track of the height grid the profile lies on. Within the Profile object, the height grid is assumed to be in meters above mean sea level (MSL). Using the example data provided, the profile heights can be converted between MSL and AGL:

```python
msl_hght = prof.hght[prof.sfc]  # Grab the surface height value
print("SURFACE HEIGHT (m MSL):", msl_hght)
agl_hght = tab.interp.to_agl(prof, msl_hght)  # Converts to AGL
print("SURFACE HEIGHT (m AGL):", agl_hght)
msl_hght = tab.interp.to_msl(prof, agl_hght)  # Converts to MSL
print("SURFACE HEIGHT (m MSL):", msl_hght)
```
Showing derived profiles: By default, Profile objects also create derived profiles such as Theta-E and Wet-Bulb when they are constructed. These profiles are accessible to the user too.
```python
plt.plot(tab.thermo.ktoc(prof.thetae), prof.hght, 'r-', label='Theta-E')
plt.plot(prof.wetbulb, prof.hght, 'c-', label='Wetbulb')
plt.xlabel("Temperature [C]")
plt.ylabel("Height [m above MSL]")
plt.legend()
plt.grid()
plt.show()
```
Lifting Parcels: In SHARPpy, parcels are lifted via the params.parcelx() routine. parcelx() takes a Profile object and a flag indicating which type of parcel should be lifted. Additional arguments allow custom/user-defined parcels to be passed to the parcelx() routin...

```python
sfcpcl = tab.params.parcelx(prof, flag=1)   # Surface Parcel
fcstpcl = tab.params.parcelx(prof, flag=2)  # Forecast Parcel
mupcl = tab.params.parcelx(prof, flag=3)    # Most-Unstable Parcel
mlpcl = tab.params.parcelx(prof, flag=4)    # 100 mb Mean Layer Parcel
```
Once your parcel attributes are computed by params.parcelx(), you can extract information about the parcel such as CAPE, CIN, LFC height, LCL height, EL height, etc.
```python
print("Most-Unstable CAPE:", mupcl.bplus)   # J/kg
print("Most-Unstable CIN:", mupcl.bminus)   # J/kg
print("Most-Unstable LCL:", mupcl.lclhght)  # meters AGL
print("Most-Unstable LFC:", mupcl.lfchght)  # meters AGL
print("Most-Unstable EL:", mupcl.elhght)    # meters AGL
print("Most-Unstable LI:", mupcl.li5)       # C
```
Other Parcel Object Attributes: Here is a list of the attributes and their units contained in each parcel object (pcl):

- pcl.pres - Parcel beginning pressure (mb)
- pcl.tmpc - Parcel beginning temperature (C)
- pcl.dwpc - Parcel beginning dewpoint (C)
- pcl.ptrace - Parcel trace pressure (mb)
- pcl.ttrace - Parcel trace tempera...

```python
# This serves as an intensive exercise of matplotlib's transforms
# and custom projection API. This example produces a so-called
# SkewT-logP diagram, which is a common plot in meteorology for
# displaying vertical profiles of temperature. As far as matplotlib is
# concerned, the complexity comes from having X and Y ax...
```
Calculating Kinematic Variables: SHARPpy also allows the user to compute kinematic variables such as shear, mean winds, and storm-relative helicity. SHARPpy will also compute storm motion vectors based on the work of Stephen Corfidi and Matthew Bunkers. Below is example code to compute the following: 1.) 0-3...

```python
sfc = prof.pres[prof.sfc]
p3km = tab.interp.pres(prof, tab.interp.to_msl(prof, 3000.))
p6km = tab.interp.pres(prof, tab.interp.to_msl(prof, 6000.))
p1km = tab.interp.pres(prof, tab.interp.to_msl(prof, 1000.))
mean_3km = tab.winds.mean_wind(prof, pbot=sfc, ptop=p3km)
sfc_6km_shear = tab.winds.wind_shear(prof, pbot=sfc, ...
```
Calculating variables based off of the effective inflow layer: The effective inflow layer concept is used to obtain the layer of buoyant parcels that feed a storm's inflow. Here are a few examples of how to compute variables that require the effective inflow layer in order to calculate them:
```python
stp_fixed = tab.params.stp_fixed(sfcpcl.bplus, sfcpcl.lclhght, srh1km[0],
                                 tab.utils.comp2vec(sfc_6km_shear[0], sfc_6km_shear[1])[1])
ship = tab.params.ship(prof)
eff_inflow = tab.params.effective_inflow_layer(prof)
ebot_hght = tab.interp.to_agl(prof, tab.interp.hght(prof, eff_inflow[0]))
etop_hght = tab.interp.to_agl(p...
```
Putting it all together into one plot:
```python
indices = {'SBCAPE': [int(sfcpcl.bplus), 'J/kg'],
           'SBCIN': [int(sfcpcl.bminus), 'J/kg'],
           'SBLCL': [int(sfcpcl.lclhght), 'm AGL'],
           'SBLFC': [int(sfcpcl.lfchght), 'm AGL'],
           'SBEL': [int(sfcpcl.elhght), 'm AGL'],
           'SBLI': [int(sfcpcl.li5), 'C'],
           'MLCAP...
```
List of functions in each module: This tutorial cannot cover all of the functions in SHARPpy. Below is a list of all of the functions accessible through SHARPTAB. To learn more about a function in this IPython Notebook, open up a new "In[]:" field and type in the path to the function (for example): tab.par...

```python
print("Functions within params.py:")
for key in tab.params.__all__:
    print("\ttab.params." + key + "()")

print("\nFunctions within winds.py:")
for key in tab.winds.__all__:
    print("\ttab.winds." + key + "()")

print("\nFunctions within thermo.py:")
for key in tab.thermo.__all__:
    print("\ttab.thermo." + key + "()")

p...
```
TensorFlow Hub and transfer learning

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab...

```python
import matplotlib.pylab as plt
import tensorflow as tf

!pip install -U tf-hub-nightly
import tensorflow_hub as hub

from tensorflow.keras import layers
```
Source: site/ko/tutorials/images/transfer_learning_with_hub.ipynb (tensorflow/docs-l10n, apache-2.0)
An ImageNet classifier

Download the classifier

Use hub.module to load a MobileNet, and tf.keras.layers.Lambda to wrap it up as a keras layer. Any TensorFlow 2 compatible image classifier URL from tfhub.dev will work here.

```python
classifier_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2" #@param {type:"string"}

IMAGE_SHAPE = (224, 224)

classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])
```
Run it on a single image

Download a single image to try the model on.

```python
import numpy as np
import PIL.Image as Image

grace_hopper = tf.keras.utils.get_file('image.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper

grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.sh...
```
Add a batch dimension, and pass the image to the model.

```python
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
```
The result is a 1001-element vector of logits, rating the probability of each class for the image. The top class ID can therefore be found with argmax:

```python
predicted_class = np.argmax(result[0], axis=-1)
predicted_class
```
์˜ˆ์ธก ํ•ด๋…ํ•˜๊ธฐ ์šฐ๋ฆฌ๋Š” ํด๋ž˜์Šค ID๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ImageNet๋ผ๋ฒจ์„ ๋ถˆ๋Ÿฌ์˜ค๊ณ , ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์ธก์„ ํ•ด๋…ํ•ฉ๋‹ˆ๋‹ค.
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt') imagenet_labels = np.array(open(labels_path).read().splitlines()) plt.imshow(grace_hopper) plt.axis('off') predicted_class_name = imagenet_labels[predicted_class] _ = plt.title("...
Simple transfer learning

Using TensorFlow Hub makes it easy to retrain the top layer of the model to recognize the classes in our dataset.

Dataset

For this example, we will use the TensorFlow flowers dataset:

```python
data_root = tf.keras.utils.get_file(
    'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
```
์šฐ๋ฆฌ์˜ ๋ชจ๋ธ์— ์ด ๋ฐ์ดํ„ฐ๋ฅผ ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ ๋กœ๋”ฉ ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ tf.keras.preprocessing.image.image.ImageDataGenerator๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด๊ณ , ๋ชจ๋“  ํ…์„œํ”Œ๋กœ ํ—ˆ๋ธŒ์˜ ์ด๋ฏธ์ง€ ๋ชจ๋“ˆ๋“ค์€ 0๊ณผ 1์‚ฌ์ด์˜ ์ƒ์ˆ˜๋“ค์˜ ์ž…๋ ฅ์„ ๊ธฐ๋Œ€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ๋งŒ์กฑ ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ImageDataGenerator์˜ rescale์ธ์ž๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๊ทธ ์ด๋ฏธ์ง€์˜ ์‚ฌ์ด์ฆˆ๋Š” ๋‚˜์ค‘์— ๋‹ค๋ค„์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค.
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255) image_data = image_generator.flow_from_directory(str(data_root), target_size=IMAGE_SHAPE)
The resulting object is an iterator that returns image_batch, label_batch pairs.

```python
for image_batch, label_batch in image_data:
    print("Image batch shape: ", image_batch.shape)
    print("Label batch shape: ", label_batch.shape)
    break
```
Run the classifier on a batch of images

Now run the classifier on the image batch.

```python
result_batch = classifier.predict(image_batch)
result_batch.shape

predicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]
predicted_class_names
```
Check how many of these predictions match the images:

```python
plt.figure(figsize=(10, 9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
    plt.subplot(6, 5, n+1)
    plt.imshow(image_batch[n])
    plt.title(predicted_class_names[n])
    plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
```
์ด๋ฏธ์ง€ ์†์„ฑ์„ ๊ฐ€์ง„ LICENSE.txt ํŒŒ์ผ์„ ๋ณด์„ธ์š”. ๊ฒฐ๊ณผ๊ฐ€ ์™„๋ฒฝ๊ณผ๋Š” ๊ฑฐ๋ฆฌ๊ฐ€ ๋ฉ€์ง€๋งŒ, ๋ชจ๋ธ์ด ("daisy"๋ฅผ ์ œ์™ธํ•œ) ๋ชจ๋“  ๊ฒƒ์„ ๋Œ€๋น„ํ•ด์„œ ํ•™์Šต๋œ ํด๋ž˜์Šค๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ๊ฒƒ์„ ๊ณ ๋ คํ•˜๋ฉด ํ•ฉ๋ฆฌ์ ์ž…๋‹ˆ๋‹ค. ํ—ค๋“œ๋ฆฌ์Šค ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜์„ธ์š” ํ…์„œํ”Œ๋กœ ํ—ˆ๋ธŒ๋Š” ๋งจ ์œ„ ๋ถ„๋ฅ˜์ธต์ด ์—†์–ด๋„ ๋ชจ๋ธ์„ ๋ถ„๋ฐฐ ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ „์ด ํ•™์Šต์„ ์‰ฝ๊ฒŒ ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. fthub.dev์˜ ํ…์„œํ”Œ๋กœ 2.0๋ฒ„์ „์˜ ์–‘๋ฆฝ ๊ฐ€๋Šฅํ•œ ์ด๋ฏธ์ง€ ํŠน์„ฑ ๋ฒกํ„ฐ URL ์€ ๋ชจ๋‘ ์ด ๊ณณ์—์„œ ์ž‘๋™ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค.
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2" #@param {type:"string"}
Create the feature extractor.

```python
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=(224, 224, 3))
```
์ด ๊ฒƒ์€ ๊ฐ๊ฐ์˜ ์ด๋ฏธ์ง€๋งˆ๋‹ค ๊ธธ์ด๊ฐ€ 1280์ธ ๋ฒกํ„ฐ๊ฐ€ ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค:
feature_batch = feature_extractor_layer(image_batch) print(feature_batch.shape)
Freeze the variables in the feature extractor layer, so that training only modifies the new classification layer.

```python
feature_extractor_layer.trainable = False
```
Attach a classification head

Now wrap the hub layer in a tf.keras.Sequential model, and add a new classification layer.

```python
model = tf.keras.Sequential([
    feature_extractor_layer,
    layers.Dense(image_data.num_classes, activation='softmax')
])

model.summary()

predictions = model(image_batch)
predictions.shape
```
๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค์„ธ์š” ํ•™์Šต ๊ณผ์ • ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜๊ธฐ ์œ„ํ•ด ์ปดํŒŒ์ผ์„ ์‚ฌ์šฉํ•˜์„ธ์š”:
model.compile( optimizer=tf.keras.optimizers.Adam(), loss='categorical_crossentropy', metrics=['acc'])
์ด์ œ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๊ธฐ ์œ„ํ•ด .fit๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์„ธ์š”. ์˜ˆ์ œ๋ฅผ ์งง๊ฒŒ ์œ ์ง€์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์˜ค๋กœ์ง€ 2์„ธ๋Œ€๋งŒ ํ•™์Šต์‹œํ‚ค์„ธ์š”. ํ•™์Šต ๊ณผ์ •์„ ์‹œ๊ฐํ™”ํ•˜๊ธฐ ์œ„ํ•ด์„œ, ๋งž์ถคํ˜• ํšŒ์‹ ์„ ์‚ฌ์šฉํ•˜๋ฉด ์†์‹ค๊ณผ, ์„ธ๋Œ€ ํ‰๊ท ์ด ์•„๋‹Œ ๋ฐฐ์น˜ ๊ฐœ๋ณ„์˜ ์ •ํ™•๋„๋ฅผ ๊ธฐ๋กํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
class CollectBatchStats(tf.keras.callbacks.Callback): def __init__(self): self.batch_losses = [] self.batch_acc = [] def on_train_batch_end(self, batch, logs=None): self.batch_losses.append(logs['loss']) self.batch_acc.append(logs['acc']) self.model.reset_metrics() steps_per_epoch = np.ceil(im...
Now, after even just a few training iterations, we can already see that the model is making progress on the task.

```python
plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0, 2])
plt.plot(batch_stats_callback.batch_losses)

plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0, 1])
plt.plot(batch_stats_callback.batch_acc)
```
์˜ˆ์ธก์„ ํ™•์ธํ•˜์„ธ์š” ์ด ์ „์˜ ๊ณ„ํš์„ ๋‹ค์‹œํ•˜๊ธฐ ์œ„ํ•ด์„œ, ํด๋ž˜์Šค ์ด๋ฆ„๋“ค์˜ ์ •๋ ฌ๋œ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ฒซ๋ฒˆ์งธ๋กœ ์–ป์œผ์„ธ์š”:
class_names = sorted(image_data.class_indices.items(), key=lambda pair:pair[1]) class_names = np.array([key.title() for key, value in class_names]) class_names
๋ชจ๋ธ์„ ํ†ตํ•ด ์ด๋ฏธ์ง€ ๋ฐฐ์น˜๋ฅผ ์‹คํ–‰์‹œํ‚ค์„ธ์š”. ๊ทธ๋ฆฌ๊ณ  ์ธ๋ฑ์Šค๋“ค์„ ํด๋ž˜์Šค ์ด๋ฆ„์œผ๋กœ ๋ฐ”๊พธ์„ธ์š”.
predicted_batch = model.predict(image_batch) predicted_id = np.argmax(predicted_batch, axis=-1) predicted_label_batch = class_names[predicted_id]
Plot the result:

```python
label_id = np.argmax(label_batch, axis=-1)

plt.figure(figsize=(10, 9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
    plt.subplot(6, 5, n+1)
    plt.imshow(image_batch[n])
    color = "green" if predicted_id[n] == label_id[n] else "red"
    plt.title(predicted_label_batch[n].title(), color=color)
    plt.axis('off')
_ = p...
```
๋‹น์‹ ์˜ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด์„ธ์š” ๋‹น์‹ ์€ ๋ชจ๋ธ์„ ํ•™์Šต์‹œ์ผœ์™”๊ธฐ ๋•Œ๋ฌธ์—, ์ €์žฅ๋œ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด์„ธ์š”:
import time t = time.time() export_path = "/tmp/saved_models/{}".format(int(t)) model.save(export_path, save_format='tf') export_path
Now confirm that we can reload it, and that it still gives the same results:

```python
reloaded = tf.keras.models.load_model(export_path)

result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)

abs(reloaded_result_batch - result_batch).max()
```
Wavelet reconstruction

Can reconstruct the sequence by
$$ \hat y = W \hat \beta. $$
The objective is the likelihood term + L1 penalty term,
$$ \frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|. $$
The L1 penalty "forces" some $\beta_i = 0$, inducing sparsity.
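For an orthonormal $W$, this objective separates per coefficient and the minimizer is soft thresholding of the wavelet coefficients. A minimal NumPy sketch (the helper name `soft_threshold` is ours, not from the lecture code):

```python
import numpy as np

def soft_threshold(beta, lam):
    """Soft-thresholding operator: shrink toward 0, zero out small coefficients."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

beta_hat = np.array([3.0, -0.4, 1.2, 0.05])
print(soft_threshold(beta_hat, 1.0))  # -> [2.  0.  0.2 0. ]
```

Coefficients with magnitude below $\lambda$ are set exactly to zero, which is the sparsity-inducing behavior described above.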
```python
plt.plot(tse_soft[:, 4])

high_idx = np.where(np.abs(tse_soft[:, 5]) > .0001)[0]
print(high_idx)

fig, axs = plt.subplots(len(high_idx) + 1, 1)
for i, idx in enumerate(high_idx):
    axs[i].plot(W[:, idx])
plt.plot(tse_den['FTSE'], c='r')
```
Source: lectures/lecture5/.ipynb_checkpoints/lecture5-checkpoint.ipynb (jsharpna/DavisSML, mit)
Non-orthogonal design

The objective is the likelihood term + L1 penalty term,
$$ \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|, $$
which does not have a closed form for $X$ that is non-orthogonal.

- it is convex
- it is non-smooth (recall $|x|$)
- it has a tuning parameter $\lambda$

Compare to best subset selection...

```python
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from sklearn import preprocessing, model_selection, linear_model

%matplotlib inline

## Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python
## which is based on the book by James et al. Intro t...
```
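In the non-orthogonal case the lasso is typically solved iteratively (e.g. coordinate descent). A small self-contained sketch using scikit-learn's `Lasso` on synthetic data (names and data are ours; note that scikit-learn scales the squared-error term by $1/(2N)$, so its `alpha` corresponds to $\lambda/N$ here):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                       # non-orthogonal design
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# Coordinate descent on (1/(2N))||y - Xb||^2 + alpha * ||b||_1
lasso = Lasso(alpha=0.5, fit_intercept=False).fit(X, y)
print(lasso.coef_)  # many coefficients are exactly 0
```

Even without a closed form, the convexity of the objective guarantees coordinate descent converges to a global minimizer.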
Loading the data

```python
data = pd.read_csv('banner_click_stat.txt', header=None, sep='\t')
data.columns = ['banner_a', 'banner_b']

data.head()

data.describe()
```
Source: statistics/ะ”ะพะฒะตั€ะธั‚ะตะปัŒะฝั‹ะต ะธะฝั‚ะตั€ะฒะฐะปั‹ ะดะปั ะดะฒัƒั… ะดะพะปะตะน stat.two_proportions_diff_test.ipynb (Diyago/Machine-Learning-scripts, apache-2.0)
Interval estimates of the proportions (the Wilson interval)

$$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$

```python
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
                                            data.shape[0],
                                            method='wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
                                            data.shape[0], ...
```
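The formula above can be implemented directly; a minimal sketch with our own helper (equivalent in spirit to statsmodels' `proportion_confint(..., method='wilson')` used in this notebook, toy counts for illustration):

```python
import numpy as np
from scipy.stats import norm

def wilson_interval(successes, n, alpha=0.05):
    """Wilson confidence interval for a binomial proportion."""
    z = norm.ppf(1 - alpha / 2.)
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    halfwidth = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - halfwidth, center + halfwidth

low, high = wilson_interval(50, 1000)  # 50 clicks out of 1000 impressions
print(low, high)
```

Unlike the normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near 0 or 1.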
Z-test for the difference of proportions (independent samples)

|        | $X_1$ | $X_2$ |
|--------|-------|-------|
| 1      | a     | b     |
| 0      | c     | d     |
| $\sum$ | $n_1$ | $n_2$ |

$$ \hat{p}_1 = \frac{a}{n_1}$$
$$ \hat{p}_2 = \frac{b}{n_2}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$$

```python
def proportions_diff_confint_ind(sample1, sample2, alpha=0.05):
    z = scipy.stats.norm.ppf(1 - alpha / 2.)

    p1 = float(sum(sample1)) / len(sample1)
    p2 = float(sum(sample2)) / len(sample2)

    left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1) / len(sample1) + p2 * (1 - p2) / len(sample2))
    ...
```
Z-test for the difference of proportions (related samples)

| $X_1$ \ $X_2$ | 1     | 0     | $\sum$ |
|---------------|-------|-------|--------|
| 1             | e     | f     | e + f  |
| 0             | g     | h     | g + h  |
| $\sum$        | e + g | f + h | n      |

$$ \hat{p}_1 = \frac{e + f}{n}$$
$$ \hat{p}_2 = \frac{e + g}{n}$$
$$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
Confidence interval for...

```python
def proportions_diff_confint_rel(sample1, sample2, alpha=0.05):
    z = scipy.stats.norm.ppf(1 - alpha / 2.)
    sample = list(zip(sample1, sample2))  # list() so the pairs can be reused below
    n = len(sample)

    f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
    g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
    ...
```
Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary

The iterative reweighted TF-MxNE solver is a distributed inverse method based on the TF-MxNE solver, which promotes focal (sparse) sources :footcite:`StrohmeierEtAl2015`. The benefits of this approach are that:

- it is spatio-temporal without ass...

```python
# Author: Mathurin Massias <mathurin.massias@gmail.com>
#         Yousra Bekhti <yousra.bekhti@gmail.com>
#         Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#         Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)

import os.path as op

import mne
from mne.datasets import somato...
```
Source: 0.22/_downloads/dfd4175ec1a2c7f21de3596573c74301/plot_multidict_reweighted_tfmxne.ipynb (mne-tools/mne-tools.github.io, bsd-3-clause)
Load somatosensory MEG data
```python
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
                    'sub-{}_task-{}_meg.fif'.format(subject, task))
fwd_fname = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
                    'sub-{}_task-{}-fwd.fif'.format(su...
```
Run iterative reweighted multidict TF-MxNE solver
```python
alpha, l1_ratio = 20, 0.05
loose, depth = 1, 0.95

# Use a multiscale time-frequency dictionary
wsize, tstep = [4, 16], [2, 4]

n_tfmxne_iter = 10
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
    evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio,
    n_tfmxne...
```
Generate stc from dipoles
```python
stc = make_stc_from_dipoles(dipoles, forward['src'])
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
                             opacity=0.1,
                             fig_name="irTF-MxNE (cond %s)" % condition)
```
Show the evoked response and the residual for gradiometers
```python
ylim = dict(grad=[-300, 300])
evoked.pick_types(meg='grad')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
            proj=True)

residual.pick_types(meg='grad')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
              proj=True)
```
To fully correct an arbitrary two-port, the device must be measured in two orientations; call these forward and reverse. Because there is no switch present, this requires the operator to physically flip the device and save the pair of measurements. In on-wafer scenarios, one could fabricate two identical devices, one...

```shell
ls three_receiver_cal/data/
```
Source: doc/source/examples/metrology/Calibration With Three Receivers.ipynb (jhillairet/scikit-rf, bsd-3-clause)
These files can be read by scikit-rf into Networks with the following.
```python
import skrf as rf
%matplotlib inline
from pylab import *
rf.stylely()

raw = rf.read_all_networks('three_receiver_cal/data/')

# list the raw measurements
raw.keys()
```
Each Network can be accessed through the dictionary raw.
```python
thru = raw['thru']
thru
```
If we look at the raw measurement of the flush thru, it can be seen that only $S_{11}$ and $S_{21}$ contain meaningful data. The other s-parameters are noise.
```python
thru.plot_s_db()
```
Create Calibration In the code that follows a TwoPortOnePath calibration is created from corresponding measured and ideal responses of the calibration standards. The measured networks are read from disk, while their corresponding ideal responses are generated using scikit-rf. More information about using scikit-rf to ...
```python
from skrf.calibration import TwoPortOnePath
from skrf.media import RectangularWaveguide
from skrf import two_port_reflect as tpr
from skrf import mil

# pull frequency information from measurements
frequency = raw['short'].frequency

# the media object
wg = RectangularWaveguide(frequency=frequency, a=120*mil, z0=50)
...
```
Apply Correction

There are two types of correction possible with a 3-receiver system:

1. Full (TwoPortOnePath)
2. Partial (EnhancedResponse)

scikit-rf uses the same Calibration object for both, but employs different correction algorithms depending on the type of the DUT. The DUT used in this example is a WR-15 shim c...

```python
Image('three_receiver_cal/pics/asymmetic DUT.jpg', width='75%')
```
Full Correction (TwoPortOnePath)

Full correction is achieved by measuring each device in both orientations, forward and reverse. To be clear, this means that the DUT must be physically removed, flipped, and re-inserted. The resulting pair of measurements is then passed to the apply_cal() function as a tuple. This ...

```python
from pylab import *

simulation = raw['simulation']
dutf = raw['wr15 shim and swg (forward)']
dutr = raw['wr15 shim and swg (reverse)']

corrected_full = cal.apply_cal((dutf, dutr))
corrected_partial = cal.apply_cal(dutf)

# plot results
f, ax = subplots(1, 2, figsize=(8, 4))

ax[0].set_title('$S_{11}$')
ax[...
```
Exercises

1. Simple regression

Given is the set of examples $\mathcal{D}=\{(x^{(i)},y^{(i)})\}_{i=1}^4 = \{(0,4),(1,1),(2,2),(4,5)\}$. Represent the examples with a matrix $\mathbf{X}$ of dimensions $N\times n$ (in this case $4\times 1$) and a vector of labels $\textbf{y}$ of dimensions $N\times 1$ (in this case $4\times 1$), as fol...

```python
X = np.array([[0], [1], [2], [4]])
y = np.array([4, 1, 2, 5])
```
Source: STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb (DominikDitoIvosevic/Uni, mit)
(a) Prouฤite funkciju PolynomialFeatures iz biblioteke sklearn i upotrijebite je za generiranje matrice dizajna $\mathbf{\Phi}$ koja ne koristi preslikavanje u prostor viลกe dimenzije (samo ฤ‡e svakom primjeru biti dodane dummy jedinice; $m=n+1$).
from sklearn.preprocessing import PolynomialFeatures Phi = PolynomialFeatures(1, False, True).fit_transform(X) print(Phi)
(b) Get acquainted with the linalg module. Compute the weights $\mathbf{w}$ of the linear regression model as $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Then convince yourself that the same result can be obtained by computing the pseudoinverse $\mathbf{\Phi}^+$ of the design matrix, i.e. $\mathbf{w}=\mathbf{...

```python
from numpy import linalg

w = np.dot(np.dot(np.linalg.inv(np.dot(np.transpose(Phi), Phi)), np.transpose(Phi)), y)
print(w)

w2 = np.dot(np.linalg.pinv(Phi), y)
print(w2)
```
For clarity, in what follows the vector $\mathbf{x}$ with the added dummy one $x_0=1$ is denoted $\tilde{\mathbf{x}}$.

(c) Plot the examples from $\mathcal{D}$ and the function $h(\tilde{\mathbf{x}})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$. Compute the training error according to $E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N(\tilde...

```python
from sklearn.metrics import mean_squared_error

h = np.dot(Phi, w)
print(h)

# note: mean_squared_error returns (1/N)*sum((y-h)^2), not the (1/2)*sum form above
error = mean_squared_error(y, h)
print(error)

plt.plot(X, y, '+', X, h, linewidth=1)
plt.axis([-3, 6, -1, 7])
```
(d) Convince yourself that for the examples from $\mathcal{D}$ the weights $\mathbf{w}$ cannot be found by solving the system $\mathbf{w}=\mathbf{\Phi}^{-1}\mathbf{y}$, and that we indeed need the pseudoinverse. Q: Why is this the case? Could the problem be solved by mapping the examples into a higher dimension? If so, would that always work, ...

```python
try:
    np.dot(np.linalg.inv(Phi), y)
except np.linalg.LinAlgError as err:  # LinAlgError lives in np.linalg
    print(err)
```
(e) Prouฤite klasu LinearRegression iz modula sklearn.linear_model. Uvjerite se da su teลพine koje izraฤunava ta funkcija (dostupne pomoฤ‡u atributa coef_ i intercept_) jednake onima koje ste izraฤunali gore. Izraฤunajte predikcije modela (metoda predict) i uvjerite se da je pogreลกka uฤenja identiฤna onoj koju ste ranije...
from sklearn.linear_model import LinearRegression lr = LinearRegression().fit(Phi, y) w2 = [lr.intercept_, lr.coef_[1]] h2 = lr.predict(Phi) error2 = mean_squared_error(y, h) print ('staro: ') print (w) print (h) print (error) print('novo: ') print (w2) print (h2) print (error2)
2. Polynomial regression and the influence of noise

(a) Let us now consider regression on a larger number of examples. Use the function make_labels(X, f, noise=0), which takes a matrix of unlabeled examples $\mathbf{X}_{N\times n}$ and generates the vector of their labels $\mathbf{y}_{N\times 1}$. The labels are generated as $y^{(i)} = f(x^{(i)})+\mathc...

```python
from numpy.random import normal

def make_labels(X, f, noise=0):
    return map(lambda x: f(x) + (normal(0, noise) if noise > 0 else 0), X)

def make_instances(x1, x2, N):
    return sp.array([np.array([x]) for x in np.linspace(x1, x2, N)])

N = 50
sigma = 200
fun = lambda x: 5 + x - 2*x**2 - 5*x**3

x = make_instances(-5, ...
```
Plot this dataset with the scatter function.

```python
plt.figure(figsize=(10, 5))
plt.plot(x, fun(x), 'r', linewidth=1)
plt.scatter(x, y)
```
(b) Train a polynomial regression model of degree $d=3$. On the same plot, show the learned model $h(\mathbf{x})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$ and the training examples. Compute the model's training error.

```python
from sklearn.preprocessing import PolynomialFeatures

Phi = PolynomialFeatures(3).fit_transform(x.reshape(-1, 1))
w = np.dot(np.linalg.pinv(Phi), y)
h = np.dot(Phi, w)

error = mean_squared_error(y, h)
print(error)

plt.figure(figsize=(10, 5))
plt.scatter(x, y)
plt.plot(x, h, 'r', linewidth=1)
```
3. Model selection

(a) On the dataset from exercise 2, train five linear regression models $\mathcal{H}_d$ of varying complexity, where $d$ is the degree of the polynomial, $d\in\{1,3,5,10,20\}$. Show on the same plot the training set and the functions $h_d(\mathbf{x})$ for all five models (we recommend using plot inside a for lo...

```python
Phi_d = []
w_d = []
h_d = []
err_d = []
d = [1, 3, 5, 10, 20]

for i in d:
    Phi_d.append(PolynomialFeatures(i).fit_transform(x.reshape(-1, 1)))

for i in range(0, len(d)):
    w_d.insert(i, np.dot(np.linalg.pinv(Phi_d[i]), y))
    h_d.insert(i, np.dot(Phi_d[i], w_d[i]))

for i in range(0, len(d)):
    err_...
```
(b) Split the example set from exercise 2 with the cross_validation.train_test_split function into a training set and a test set in a 1:1 ratio. Show on one plot the training error and the test error for polynomial regression models $\mathcal{H}_d$, with the polynomial degree $d$ in the range $d\in [1,2,\ldots,20]$....
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.5)
err_train = []; err_test = []
d = range(0, 20)
for i in d:
    Phi_train = PolynomialFeatures(i).fit_transform(X_train.reshape(-1, 1))
    Phi_test = PolynomialFeatures(i).fit_transform(X_t...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
(c) Toฤnost modela ovisi o (1) njegovoj sloลพenosti (stupanj $d$ polinoma), (2) broju primjera $N$, i (3) koliฤini ลกuma. Kako biste to analizirali, nacrtajte grafikone pogreลกaka kao u 3b, ali za sve kombinacija broja primjera $N\in{100,200,1000}$ i koliฤine ลกuma $\sigma\in{100,200,500}$ (ukupno 9 grafikona). Upotrijebit...
N2 = [100, 200, 1000]; sigma = [100, 200, 500]
X_train4c_temp = []; X_test4c_temp = []; y_train4c_temp = []; y_test4c_temp = []
x_tmp = np.linspace(-5, 5, 1000)
X_train, X_test = train_test_split(x_tmp, test_size=0.5)
for i in range(0, 3):
    y_tmp_train = list(make_labels(X_train, fun, sigma[i]))
    y_t...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
4. Regularized regression

(a) In the experiments above we did not use regularization. Let us first return to the example from exercise 1. On the examples from that exercise, compute the weights $\mathbf{w}$ for a polynomial regression model of degree $d=3$ with L2 regularization (so-called ridge regression), according to the expression $\mathbf{w}=(\mat...
lam = [0, 1, 10]
y = np.array([4, 1, 2, 5])
Phi4a = PolynomialFeatures(3).fit_transform(X)
w_L2 = []

def w_reg(lam):
    t1 = np.dot(Phi4a.T, Phi4a) + np.dot(lam, np.eye(4))
    t2 = np.dot(np.linalg.inv(t1), Phi4a.T)
    return np.dot(t2, y)

for i in range(0, 3):
    w_L2.insert(i, w_reg(lam[i]))
    print(w_reg(la...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
(b) Prouฤite klasu Ridge iz modula sklearn.linear_model, koja implementira L2-regularizirani regresijski model. Parametar $\alpha$ odgovara parametru $\lambda$. Primijenite model na istim primjerima kao u prethodnom zadatku i ispiลกite teลพine $\mathbf{w}$ (atributi coef_ i intercept_). Q: Jesu li teลพine identiฤne onima ...
from sklearn.linear_model import Ridge

for i in lam:
    w = []
    w_L22 = Ridge(alpha=i).fit(Phi4a, y)
    w.append(w_L22.intercept_)
    # skip coef_[0], which corresponds to the bias column of the design matrix
    for c in w_L22.coef_[1:]:
        w.append(c)
    print(w)
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
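One reason the sklearn weights need not match the closed-form ones from 4a is that Ridge leaves the intercept unpenalized. A minimal numpy-only sketch (the toy matrix X here is illustrative, not the lab's data) comparing the two closed forms:

```python
import numpy as np

# Toy design matrix with an explicit bias column (degree-1 features for brevity)
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([4.0, 1.0, 2.0, 5.0])
lam = 10.0

# Closed form that penalizes every weight, bias included
I_full = np.eye(X.shape[1])
w_full = np.linalg.solve(X.T @ X + lam * I_full, X.T @ y)

# Closed form that leaves the bias unpenalized (roughly what Ridge
# with fit_intercept=True amounts to)
I_nobias = np.eye(X.shape[1])
I_nobias[0, 0] = 0.0
w_nobias = np.linalg.solve(X.T @ X + lam * I_nobias, X.T @ y)

print(w_full)
print(w_nobias)
```

The two solutions differ whenever $\lambda > 0$, which is one thing to check when answering the Q above.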
5. Regularized polynomial regression

(a) Let us return to the case of $N=50$ randomly generated examples from exercise 2. Train polynomial regression models $\mathcal{H}_{\lambda,d}$ for $\lambda\in\{0,100\}$ and $d\in\{2,10\}$ (four models in total). Sketch the corresponding functions $h(\mathbf{x})$ and the examples (on one plo...
x5a = linspace(-5, 5, 50)
f = 5 + x5a - 2*(x5a**2) - 5*(x5a**3)
y5a = f + normal(0, 200, 50)
lamd = [0, 100]
dd = [2, 10]
h5a = []
for i in lamd:
    for j in dd:
        Phi5a = PolynomialFeatures(j).fit_transform(x5a.reshape(-1, 1))
        w_5a = np.dot(np.dot(np.linalg.inv(np.dot(Phi5a.T, Phi5a) + np.dot(i, n...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
(b) As in exercise 3b, split the examples into a training set and a test set in a 1:1 ratio. Plot the curves of the logarithms of the training error and the test error for the model $\mathcal{H}_{d=20,\lambda}$, varying the regularization factor $\lambda$ in the range $\lambda\in\{0,1,\dots,50\}$. Q: To which side of the plo...
X5a_train, X5a_test, y5a_train, y5a_test = train_test_split(x5a, y5a, test_size=0.5)
err5a_train = []; err5a_test = []
d = 20
lambda5a = range(0, 50)
for i in lambda5a:
    Phi5a_train = PolynomialFeatures(d).fit_transform(X5a_train.reshape(-1, 1))
    Phi5a_test = PolynomialFeatures(d).fit_transform(X5a_test.resh...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
6. L1 regularization and L2 regularization

The purpose of regularization is to push the model weights $\mathbf{w}$ toward zero, so that the model is as simple as possible. Model complexity can be characterized by the norm of the corresponding weight vector $\mathbf{w}$, typically the L2 norm or the L1 norm. For a trained model we can comput...
def nonzeroes(coef, tol=1e-6):
    return len(coef) - len(coef[sp.isclose(0, coef, atol=tol)])
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
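The norms described above are one-liners in NumPy; a quick sketch on a toy weight vector (the tolerance-based L0 count mirrors the nonzeroes helper above):

```python
import numpy as np

w = np.array([0.0, 3.0, -4.0, 1e-9])
tol = 1e-6

l0 = int(np.sum(np.abs(w) > tol))   # number of effectively non-zero weights
l1 = np.sum(np.abs(w))               # L1 norm: sum of absolute values
l2 = np.sqrt(np.sum(w ** 2))         # L2 norm: Euclidean length

print(l0, l1, l2)  # 2, ~7.0, ~5.0
```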
(a) For this exercise, use the training and test sets from exercise 3b. Train L2-regularized polynomial regression models of degree $d=20$, varying the hyperparameter $\lambda$ in the range $\{1,2,\dots,100\}$. For each trained model compute the L{0,1,2} norms of the weight vector $\mathbf{w}$ and plo...
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso

lambda6a = range(1, 100)
d6a = 20
X6a_train, X6a_test, y6a_train, y6a_test = train_test_split(x6a, y6a, test_size=0.5)
Phi6a_train = PolynomialFeatures(d6a).fit_transform(X6a_train.reshape(-1, 1))
L0 = []; L1 = []; L2 = []
L1_norm = lambd...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
(b) The main advantage of L1-regularized regression (LASSO regression) over L2-regularized regression is that L1-regularized regression produces sparse models, i.e. models in which many weights are pulled to zero. Show that this is indeed the case by repeating the above experimen...
L0 = []; L1 = []; L2 = []
for i in lambda6a:
    lass = Lasso(alpha=i, tol=0.115).fit(Phi6a_train, y6a_train)
    w6b = lass.coef_
    L0.append(nonzeroes(w6b))
    L1.append(L1_norm(w6b))
    L2.append(L2_norm(w6b))
plot(lambda6a, L0, lambda6a, L1, lambda6a, L2, linewidth=1)
legend(['L0', 'L1', 'L2'...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
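The mechanism behind LASSO's sparsity is the soft-thresholding step, which clips small weights to exactly zero. A hedged numpy-only sketch using ISTA (iterative shrinkage-thresholding) on synthetic data; the names soft_threshold and ista are illustrative, not part of the lab's API:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrinks toward zero,
    # and sets entries with |v| <= t to exactly zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam, n_iter=2000):
    # Minimizes ||y - Phi w||^2 / 2 + lam * ||w||_1 by proximal gradient steps
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - step * Phi.T @ (Phi @ w - y), step * lam)
    return w

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]           # only two truly relevant features
y = Phi @ w_true + 0.01 * rng.normal(size=50)

w_hat = ista(Phi, y, lam=5.0)
print(np.sum(np.abs(w_hat) > 1e-8))  # most weights end up exactly zero
```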
a) Plot the dependence of the target value (y-axis) on the first and on the second feature (x-axis). Draw two separate plots.
plt.figure()
plot(exam_score, grades_y, 'r+')
grid()
plt.figure()
plot(grade_in_highschool, grades_y, 'g+')
grid()
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
b) Nauฤite model L2-regularizirane regresije ($\lambda = 0.01$), na podacima grades_X i grades_y:
w = Ridge(alpha=0.01).fit(grades_X, grades_y).coef_
print(w)
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
Now repeat the above experiment, but first scale the grades_X and grades_y data and store them in the variables grades_X_fixed and grades_y_fixed. For this purpose, use StandardScaler.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(grades_X)
grades_X_fixed = scaler.transform(grades_X)
scaler2 = StandardScaler()
scaler2.fit(grades_y.reshape(-1, 1))
grades_y_fixed = scaler2.transform(grades_y.reshape(-1, 1))
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
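What StandardScaler does can be written out by hand: center each column to mean 0 and divide by its (population) standard deviation. A minimal sketch on a made-up matrix:

```python
import numpy as np

X = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 500.0]])

# Column-wise standardization, equivalent to StandardScaler's fit + transform
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # ~[0, 0]
print(X_scaled.std(axis=0))   # [1, 1]
```

After scaling, the feature magnitudes are comparable, which is what makes the ridge weights interpretable as importances.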
Q: Gledajuฤ‡i grafikone iz podzadatka (a), koja znaฤajka bi trebala imati veฤ‡u magnitudu, odnosno vaลพnost pri predikciji prosjeka na studiju? Odgovaraju li teลพine Vaลกoj intuiciji? Objasnite. 8. Multikolinearnost i kondicija matrice a) Izradite skup podataka grades_X_fixed_colinear tako ลกto ฤ‡ete u skupu grades_X_fixed ...
grades_X_fixed_colinear = [[g[0],g[1],g[1]] for g in grades_X_fixed]
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
Ponovno, nauฤite na ovom skupu L2-regularizirani model regresije ($\lambda = 0.01$).
w = Ridge(alpha=0.01).fit(grades_X_fixed_colinear, grades_y_fixed).coef_
print(w)
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
Q: Compare the weight magnitudes with those you obtained in exercise 7b. What happened?

b) Randomly sample 50% of the elements from the grades_X_fixed_colinear set and train two L2-regularized regression models, one with $\lambda=0.01$ and one with $\lambda=1000$. Repeat this experiment 10 times (each time with a different subset of 50% of the ele...
w_001s = []
w_1000s = []
for i in range(10):
    X_001, X_1000, y_001, y_1000 = train_test_split(grades_X_fixed_colinear, grades_y_fixed, test_size=0.5)
    w_001 = Ridge(alpha=0.01).fit(X_001, y_001).coef_
    w_1000 = Ridge(alpha=1000).fit(X_1000, y_1000).coef_
    w_001s.append(w_001[0])
    w_1000s.appe...
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
Q: How does regularization affect the stability of the weights? Q: Are the coefficients of the same magnitude as in the previous experiment? Explain why.

c) Using numpy.linalg.cond, compute the condition number of the matrix $\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I}$, where $\mathbf{\Phi}$ is the design matrix (grades_fixed_X_c...
lam = 0.01
phi = np.array(grades_X_fixed_colinear)
s = np.dot(phi.T, phi) + lam * np.identity(phi.shape[1])
print(np.linalg.cond(s))

lam = 10
s = np.dot(phi.T, phi) + lam * np.identity(phi.shape[1])
print(np.linalg.cond(s))
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
DominikDitoIvosevic/Uni
mit
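The effect is easy to reproduce on synthetic data: with an exactly duplicated column, $\mathbf{\Phi}^\intercal\mathbf{\Phi}$ is singular, and adding $\lambda\mathbf{I}$ lifts its smallest eigenvalue to $\lambda$, shrinking the condition number. A small sketch (data is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100)
b = rng.normal(size=100)
# Third column duplicates the second -> exact collinearity, Phi^T Phi is singular
Phi = np.column_stack([a, b, b])

conds = {}
for lam in (0.01, 10.0):
    M = Phi.T @ Phi + lam * np.eye(3)
    conds[lam] = np.linalg.cond(M)
    print(lam, conds[lam])
```

The condition number drops by orders of magnitude as $\lambda$ grows, which is exactly why regularization stabilizes the weights in the experiment above.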
Problem data

- this algorithm has the same flavor as the thing I'd like to do, but actually converges very slowly
- it will take a very long time to converge on anything other than the smallest examples
- don't worry if convergence plots look flat when dealing with 100s of rows
As, bs, x_true, x0 = make_data(4, 2, seed=0)
proj_data = list(map(factor, As, bs))
x = x0
r = []
for i in range(1000):
    z = (proj(d, x) for d in proj_data)
    x = average(*z)
    r.append(np.linalg.norm(x_true - x))
plt.semilogy(r)

As, bs, x_true, x0 = make_data(4, 100, seed=0)
proj_data = list(map(factor, As...
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
- I'll fix the test data to something large enough so that each iteration's computational task is significant
- just 10 iterations of the algorithm (along with the setup factorizations) in serial takes about a second on my laptop
As, bs, x_true, x0 = make_data(4, 1000, seed=0)

%%time
proj_data = list(map(factor, As, bs))
x = x0
r = []
for i in range(10):
    z = (proj(d, x) for d in proj_data)
    x = average(*z)
    r.append(np.linalg.norm(x_true - x))
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
parallel map
As, bs, x_true, x0 = make_data(4, 3000, seed=0)
proj_data = list(map(factor, As, bs))

%%timeit -n1 -r50
a = list(map(lambda d: proj(d, x0), proj_data))

import concurrent.futures
from multiprocessing.pool import ThreadPool

ex = concurrent.futures.ThreadPoolExecutor(2)
pool = ThreadPool(2)
%timeit -n1 -r50 list(ex.map...
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
Dask Solution

I create a few weird functions to have pretty names in dask graphs.
import dask
from dask import do, value, compute, visualize, get
from dask.imperative import Value
from dask.dot import dot_graph
from itertools import repeat

def enum_values(vals, name=None):
    """Create values with a name and a subscript"""
    if not name:
        raise ValueError('Need a name.')
    return [valu...
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
Visualize the setup step involving the matrix factorizations
lAs = enum_values(As, 'A')
lbs = enum_values(bs, 'b')
proj_data = enum_map(factor, lAs, lbs, name='proj_data')
visualize(*proj_data)
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
Visualize one iteration:
pd_val = [pd.compute() for pd in proj_data]
xk = value(x0, 'x^k')
xkk = step(pd_val, xk)
xkk.visualize()
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
The setup step along with 3 iterations gives the following dask graph. (Which I'm showing mostly because it was satisfying to make.)
x = value(x0, 'x^0')
for k in range(3):
    x = step(proj_data, x, k+1)
x.visualize()
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
Reuse dask graph

Obviously, it's not efficient to make a huge dask graph, especially if I'll be doing thousands of iterations. I really just want to create the dask graph for computing $x^{k+1}$ from $x^k$ and re-apply it at every iteration. Is it more efficient to create that dask graph once and reuse it? Maybe that's...
proj_data = enum_map(factor, As, bs, name='proj_data')
proj_data = compute(*proj_data)
x = value(0, 'x^k')
x = step(proj_data, x)
dsk_step = x.dask
dot_graph(dsk_step)

dask.set_options(get=dask.threaded.get)  # multiple threads
#dask.set_options(get=dask.async.get_sync)  # single thread

%%time
# do one-time com...
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
iterative projection algorithm thoughts

- I don't see any performance gain in using the threaded scheduler, but I don't see what I'm doing wrong here
- I don't see any difference in runtime switching between dask.set_options(get=dask.threaded.get) and dask.set_options(get=dask.async.get_sync); not sure if it's actually ch...
%%time
# do one-time computation of factorizations
proj_data = enum_map(factor, As, bs, name='proj_data')
# realize the computations, so they aren't recomputed at each iteration
proj_data = compute(*proj_data, get=dask.threaded.get, num_workers=2)
# get dask graph for reuse
x = value(x0, 'x^k')
x = step(proj_data, x)...
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
Runtime error

- As I was experimenting and switching schedulers and between my first and second dask attempts, I would very often get the following "can't start new thread" error
- I would also occasionally get a "TypeError: get_async() got multiple values for argument 'num_workers'" even though I had thought I'd set das...
%%time
# do one-time computation of factorizations
proj_data = enum_map(factor, As, bs, name='proj_data')
# realize the computations, so they aren't recomputed at each iteration
proj_data = compute(*proj_data)
# get dask graph for reuse
x = value(x0, 'x^k')
x = step(proj_data, x)
dsk_step = x.dask
K = 100
r = []
for...
ipynb/dask-play.ipynb
ajfriend/cyscs
mit
Define the HMM model:
class HMM(object):
    def __init__(self, initial_prob, trans_prob, obs_prob):
        self.N = np.size(initial_prob)
        self.initial_prob = initial_prob
        self.trans_prob = trans_prob
        self.emission = tf.constant(obs_prob)
        assert self.initial_prob.shape == (self.N, 1)
        assert self.tra...
ch06_hmm/Concept01_forward.ipynb
BinRoot/TensorFlow-Book
mit
Define the forward algorithm:
def forward_algorithm(sess, hmm, observations):
    fwd = sess.run(hmm.forward_init_op(), feed_dict={hmm.obs_idx: observations[0]})
    for t in range(1, len(observations)):
        fwd = sess.run(hmm.forward_op(), feed_dict={hmm.obs_idx: observations[t], hmm.fwd: fwd})
    prob = sess.run(tf.reduce_sum(fwd))
    retur...
ch06_hmm/Concept01_forward.ipynb
BinRoot/TensorFlow-Book
mit
Let's try it out:
if __name__ == '__main__':
    initial_prob = np.array([[0.6], [0.4]])
    trans_prob = np.array([[0.7, 0.3], [0.4, 0.6]])
    obs_prob = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
    hmm = HMM(initial_prob=initial_prob, trans_prob=trans_prob, obs_prob=obs_prob)
    observations = [0, 1, 1, 2, 1]
    with tf.Sessi...
ch06_hmm/Concept01_forward.ipynb
BinRoot/TensorFlow-Book
mit
Now you can invoke f and pass the input values, i.e. f(1,1), f(10,-3) and the result for this operation is returned.
print f(1,1)
print f(10,-3)
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Printing of the graph

You can print the graph for the above value of z. For details see:
http://deeplearning.net/software/theano/library/printing.html
http://deeplearning.net/software/theano/tutorial/printing_drawing.html

To print the graph, further libraries must be installed. In 99% of your development time you don't ...
#Graph for z
theano.printing.pydotprint(z, outfile="pics/z_graph.png", var_with_name_simple=True)
#Graph for function f (after optimization)
theano.printing.pydotprint(f, outfile="pics/f_graph.png", var_with_name_simple=True)
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
The graph for z: <img src="files/pics/z_graph.png">

The graph for f: <img src="files/pics/f_graph.png">

Simple matrix multiplications

The following types for input variables are typically used:
- byte: bscalar, bvector, bmatrix, btensor3, btensor4
- 16-bit integers: wscalar, wvector, wmatrix, wtensor3, wtensor4
- 32-bit integ...
import theano
import theano.tensor as T
import numpy as np

# Put your code here
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Next we define some NumPy arrays with data and let Theano compute the result for $f(x,W,b)$.
inputX = np.asarray([0.1, 0.2, 0.3], dtype='float32')
inputW = np.asarray([[0.1,-0.2],[-0.4,0.5],[0.6,-0.7]], dtype='float32')
inputB = np.asarray([0.1,0.2], dtype='float32')
print "inputX.shape",inputX.shape
print "inputW.shape",inputW.shape
f(inputX, inputW, inputB)
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Don't confuse x, W, b with inputX, inputW, inputB. x, W, b contain pointers to your symbols in the compute graph; inputX, inputW, inputB contain your data.

Shared Variables and Updates

See: http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables

Using shared variables, we can create an interna...
import theano
import theano.tensor as T
import numpy as np

#Define my internal state
init_value = 1
state = theano.shared(value=init_value, name='state')

#Define my operation f(x) = 2*x
x = T.lscalar('x')
z = 2*x

accumulator = theano.function(inputs=[], outputs=z, givens={x: state})
print accumulator()
print accum...
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Shared Variables

- We use theano.shared() to share a variable (i.e. make it internally available for Theano)
- Internal state variables are passed at compile time via the parameter givens
- So to compute the output z, the shared variable state is used for the input variable x

For information on the borrow=True parameter see: ht...
#New accumulator function, now with an update
# Put your code here to update the internal counter

print accumulator(1)
print accumulator(1)
print accumulator(1)
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Plan

- Some data: look at some stock price series
- devise a model for stock price series: Geometric Brownian Motion (GBM)
- Example for a contingent claim: call option
- Pricing of a call option under the assumption of GBM
- Challenges

Some data: look at some stock price series

We import data from Yahoo finance: two examples ...
aapl = data.DataReader('AAPL', 'yahoo', '2000-01-01')
print(aapl.head())
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
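The plan mentions Geometric Brownian Motion as the price model. As a sketch of what that entails, here is a minimal numpy simulation using the exact discretization $S_{t+\Delta t} = S_t \exp\big((\mu - \sigma^2/2)\Delta t + \sigma\sqrt{\Delta t}\,Z\big)$; the function name and parameter values are illustrative, not taken from this notebook:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    # Exact discretization of dS = mu*S dt + sigma*S dW:
    # S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

paths = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=1000)
print(paths.shape)  # (1000, 252)
```

With $\sigma=0$ the paths collapse to the deterministic growth $S_0 e^{\mu t}$, a handy sanity check for the discretization.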
$\Rightarrow$ various different price series
plt.plot(aapl.Close)
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
$\Longrightarrow$ There was a stock split 7:1 on 06/09/2014. As we do not want to take care of things like that, we use the Adjusted close price!
aapl = data.DataReader('AAPL', 'yahoo', '2000-1-1')
print(aapl['Adj Close'].head())
%matplotlib inline
aapl['Adj Close'].plot(figsize=(10,6))
plt.ylabel('price')
plt.xlabel('year')
plt.title('Price history of Apple stock')

ibm = data.DataReader('IBM', 'yahoo', '2000-1-1')
print(ibm['Adj Close'].head())
%matplotlib inli...
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit