Under the Hood This section contains additional information about the functions we've used and pointers to their documentation. You don't need to know anything in these sections, so if you are already feeling overwhelmed, you might want to skip them. But if you are curious, read on. State and TimeSeries objects are ba...
source_code(flip)
python/soln/chap02.ipynb
AllenDowney/ModSim
gpl-2.0
HLT2 n-body classification. Preselections applied: any sv.n, any sv.minpt, sv.nlt16 < 2. Training channels (read data): we will use just modes 11114001, 11296013, 11874042, 12103035, 13246001, 13264021.
sig_train_modes_names = [11114001, 11296013, 11874042, 12103035, 13246001, 13264021]
bck_train_mode_name = 30000000
sig_train_files = ['mod_{}.csv'.format(name) for name in sig_train_modes_names]
bck_train_files = 'mod_30000000.csv'
folder = "datasets/prepared_hlt_body/"

# concat all signal data
if not os.path.exists(...
HLT2-TreesPruning.ipynb
yandexdataschool/LHCb-topo-trigger
apache-2.0
Counting the events and SVRs that passed the L0 and GoodGenB preselection (this data was generated by skim)
print 'Signal', statistic_length(signal_data)
print 'Bck', statistic_length(bck_data)
total_bck_events = statistic_length(bck_data)['Events'] + empty_events[bck_train_mode_name]
total_signal_events_by_mode = dict()
for mode in sig_train_modes_names:
    total_signal_events_by_mode[mode] = statistic_length(signal_data[...
events distribution by mode
print 'Bck:', total_bck_events
print 'Signal:', total_signal_events_by_mode
Define variables
variables = ["n", "mcor", "chi2", "eta", "fdchi2", "minpt", "nlt16", "ipchi2", "n1trk", "sumpt"]
Counting the events and SVRs that passed pass_nbody (equivalent to Mike's preselections for the n-body selection)
# hlt2 nbody selection
signal_data = signal_data[(signal_data['pass_nbody'] == 1) & (signal_data['mcor'] <= 10e3)]
bck_data = bck_data[(bck_data['pass_nbody'] == 1) & (bck_data['mcor'] <= 10e3)]
print 'Signal', statistic_length(signal_data)
print 'Bck', statistic_length(bck_data)
total_signal_events_by_mode_presel = ...
events distribution by mode
print 'Bck:', total_bck_events_presel
print 'Signal:', total_signal_events_by_mode_presel
signal_data.head()
Prepare train/test splitting. Randomly divide the events that passed all preselections into two equal parts.
ds_train_signal, ds_train_bck, ds_test_signal, ds_test_bck = prepare_data(signal_data, bck_data, 'unique')
Train: counting events and SVRs
print 'Signal', statistic_length(ds_train_signal)
print 'Bck', statistic_length(ds_train_bck)
train = pandas.concat([ds_train_bck, ds_train_signal])
Test: counting events and SVRs
print 'Signal', statistic_length(ds_test_signal)
print 'Bck', statistic_length(ds_test_bck)
test = pandas.concat([ds_test_bck, ds_test_signal])
Define the total number of events in the test samples (those that passed just L0 and GoodGenB), also using the empty events. Suppose that events which didn't pass pass_nbody were likewise divided equally at random between the training and test samples.
total_test_bck_events = (total_bck_events - total_bck_events_presel) // 2 + statistic_length(ds_test_bck)['Events']
total_test_signal_events = dict()
for mode in sig_train_modes_names:
    total_not_passed_signal = total_signal_events_by_mode[mode] - total_signal_events_by_mode_presel[mode]
    total_test_signal_events...
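The bookkeeping above can be illustrated with hypothetical counts (the numbers below are made up, not the notebook's):

```python
# Hypothetical counts, purely for illustration: extrapolate the total number of
# test events under the assumption that events failing the preselection were
# split evenly between the train and test halves.
total_events = 10000   # events passing only L0 + GoodGenB
passed_presel = 6400   # events that also passed pass_nbody
test_passed = 3200     # preselected events that landed in the test half

# Half of the non-preselected events are assumed to belong to the test sample.
total_test_events = (total_events - passed_presel) // 2 + test_passed
```

With these numbers, (10000 - 6400) // 2 + 3200 gives 5000 total test events.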
MatrixNet training
from rep_ef.estimators import MatrixNetSkyGridClassifier
Base model with 5000 trees
ef_base = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',
                                     connection='skygrid', iterations=5000, sync=False)
ef_base.fit(train, train['signal'])
Base BBDT model
special_b = {
    'n': [2.5, 3.5],
    'mcor': [2000, 3000, 4000, 5000, 7500],  # I want to remove splits too close to the B mass as I was looking in simulation and this could distort the mass peak (possibly)
    'chi2': [1, 2.5, 5, 7.5, 10, 100],  # I also propose we add a cut to the pre-selection of chi2 < 1000. I don't want to put in ...
BBDT-5, 6
ef_base_bbdt5 = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',
                                           connection='skygrid', iterations=5000, sync=False, intervals=5)
ef_base_bbdt5.fit(train, train['signal'])

ef_base_bbdt6 = MatrixNetSkyGridClassif...
Pruning
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport

report = ClassificationReport({'base': ef_base}, LabeledDataStorage(test, test['signal']))
report.roc()
Minimize log_loss (a.k.a. BinomialDeviance)
%run pruning.py
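The objective being minimized — log loss, a.k.a. binomial deviance — can be sketched as a minimal stand-alone definition. This is an illustrative implementation (Python 3), not the notebook's pruning.py:

```python
import numpy as np

def binomial_deviance(y_true, scores):
    """Log-loss / binomial deviance for labels in {0, 1} and raw real-valued
    scores, using the logistic link: mean(log(1 + exp(-margin))), where the
    margin is +score for signal and -score for background."""
    margins = np.where(y_true == 1, scores, -scores)
    return np.mean(np.log1p(np.exp(-margins)))

y = np.array([1, 1, 0, 0])
s = np.array([2.0, 1.0, -1.5, -3.0])
loss = binomial_deviance(y, s)
```

Scaling up already-correct scores shrinks the loss toward zero, which is why tree selection can keep pruning as long as the deviance keeps dropping.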
The training sample is cut to a multiple of 8
new_trainlen = (len(train) // 8) * 8
trainX = train[ef_base.features][:new_trainlen].values
trainY = train['signal'][:new_trainlen].values
trainW = numpy.ones(len(trainY))
trainW[trainY == 0] *= sum(trainY) / sum(1 - trainY)
new_features, new_formula_mx, new_classifier = select_trees(trainX, trainY, sample_weight=tra...
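The truncation and class reweighting can be checked on a toy label array (hypothetical labels; note the sum is made an explicit float to avoid the integer-division pitfall that bites in Python 2):

```python
import numpy as np

# Hypothetical toy labels standing in for train['signal'].
labels = np.array([1, 1, 1, 0, 0, 1, 0, 1, 1, 0])

# Cut the sample to the largest multiple of 8, as the pruning code requires.
new_len = (len(labels) // 8) * 8
trainY = labels[:new_len]

# Reweight the background so that both classes carry equal total weight.
trainW = np.ones(len(trainY), dtype=float)
trainW[trainY == 0] *= trainY.sum() / float((1 - trainY).sum())
```

After reweighting, the summed background weight equals the summed signal weight, so the deviance treats the two classes symmetrically.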
Calculate thresholds on classifiers
thresholds = dict()
test_bck = test[test['signal'] == 0]
RATE = [2500., 4000.]
events_pass = dict()
for name, cl in estimators.items():
    prob = cl.predict_proba(test_bck)
    thr, result = calculate_thresholds(test_bck, prob, total_test_bck_events, rates=RATE)
    for rate, val in result.items():
        events_pass...
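The idea of choosing a score cut so that a target number of background entries passes can be sketched as follows. This is an assumption about the logic, not the notebook's actual calculate_thresholds implementation:

```python
import numpy as np

def threshold_for_rate(bck_scores, n_pass_target):
    """Pick the score cut so that roughly n_pass_target background entries
    score above it (a simplified, hypothetical stand-in for the notebook's
    calculate_thresholds helper)."""
    order = np.sort(bck_scores)[::-1]   # scores in descending order
    return order[n_pass_target - 1]     # cut at the n-th highest score

rng = np.random.RandomState(0)
scores = rng.uniform(size=1000)
thr = threshold_for_rate(scores, 100)
n_pass = int((scores >= thr).sum())
```

Cutting at the n-th highest background score lets exactly the top n background entries through (assuming no ties), which maps directly to an output rate once the sample is normalised to total background events.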
Final efficiencies for each mode
train_modes_eff, statistic = result_statistic(estimators, sig_train_modes_names, test[test['signal'] == 1],
                                              thresholds, RATE, total_test_signal_events)
from rep.plotting import BarComparePlot
xticks_labels = ['$B^0 \\to K^*\mu...
Classification report using events
plots = OrderedDict()
for key, value in estimators.items():
    plots[key] = plot_roc_events(value, test[test['signal'] == 1], test[test['signal'] == 0], key)
bbdt_plots = plots.copy()
bbdt_plots.pop('Prunned MN')
bbdt_plots.pop('Prunned MN + forest')
from rep.plotting import FunctionsPlot
FunctionsPlot(bbdt_plots).p...
All channels' efficiencies
from collections import defaultdict

all_channels = []
efficiencies = defaultdict(OrderedDict)
for mode in empty_events.keys():
    if mode in set(sig_train_modes_names) or mode == bck_train_mode_name:
        continue
    df = pandas.read_csv(os.path.join(folder, 'mod_{}.csv'.format(mode)), sep='\t')
    if len(df) <=...
Different rates
thresholds = OrderedDict()
RATE = [2000., 2500., 3000., 3500., 4000.]
for name, cl in estimators.items():
    prob = cl.predict_proba(ds_test_bck)
    thr, result = calculate_thresholds(ds_test_bck, prob, total_test_bck_events, rates=RATE)
    thresholds[name] = thr
    print name, result
train_modes_eff, statistic = ...
Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period.
sim.integrator = "whfast"
sim.dt = 1.  # in years. About 10% of Jupiter's period
sim.move_to_com()
ipython_examples/FourierSpectrum.ipynb
dchandan/rebound
gpl-3.0
The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies. Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-position...
Nout = 100000
tmax = 3.e5
Nplanets = 2
x = np.zeros((Nplanets,Nout))
ecc = np.zeros((Nplanets,Nout))
longitude = np.zeros((Nplanets,Nout))
varpi = np.zeros((Nplanets,Nout))
times = np.linspace(0.,tmax,Nout)
ps = sim.particles
for i,time in enumerate(times):
    sim.integrate(time)
    os = sim.calculate_orbits()
    ...
Let's see what the eccentricity evolution looks like with matplotlib:
%matplotlib inline
import matplotlib.pyplot as plt

labels = ["Jupiter", "Saturn"]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
plt.plot(times, ecc[0], label=labels[0])
plt.plot(times, ecc[1], label=labels[1])
ax.set_xlabel("Time (yrs)", fontsize=20)
ax.set_ylabel("Eccentricity", fontsize=20)
ax.tick_params(labels...
Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be us...
from scipy import signal

Npts = 3000
logPmin = np.log10(10.)
logPmax = np.log10(1.e5)
Ps = np.logspace(logPmin, logPmax, Npts)
ws = np.asarray([2*np.pi/P for P in Ps])
periodogram = signal.lombscargle(times, x[0], ws)

fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(Ps, np.sqrt(4*periodogram/Nout))
ax.set_xs...
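The mechanics of scipy's Lomb-Scargle call can be checked on a minimal, self-contained synthetic example: a pure sinusoid with a known 50-unit period, whose periodogram should peak at the matching grid period.

```python
import numpy as np
from scipy import signal

# Uniformly sampled sinusoid with a known period of 50 time units.
t = np.linspace(0.0, 500.0, 2000)
y = np.sin(2 * np.pi * t / 50.0)

# Trial periods on a log grid, converted to angular frequencies as above.
periods = np.logspace(np.log10(10.0), np.log10(200.0), 500)
ws = 2 * np.pi / periods
pgram = signal.lombscargle(t, y, ws)

best_period = periods[np.argmax(pgram)]
```

Note that `lombscargle` takes angular frequencies, not periods, which is why the notebook (and this sketch) builds `ws = 2*pi/P` first.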
We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, whi...
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(Ps, np.sqrt(4*periodogram/Nout))
ax.set_xscale('log')
ax.set_xlim([600,1600])
ax.set_ylim([0,0.003])
ax.set_xlabel("Period (yrs)", fontsize=20)
ax.set_ylabel("Power", fontsize=20)
ax.tick_params(labelsize=20)
This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planet...
def zeroTo360(val):
    while val < 0:
        val += 2*np.pi
    while val > 2*np.pi:
        val -= 2*np.pi
    return val*180/np.pi
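The wrapping above can also be done with a single modulo. A small equivalent sketch (it matches the loop version everywhere except exactly at positive multiples of $2\pi$, where the loops return 360 while the modulo returns 0):

```python
import numpy as np

def zeroTo360_mod(val):
    """Wrap an angle given in radians into [0, 360) degrees using modulo,
    mirroring the loop-based zeroTo360 above."""
    return np.mod(val, 2 * np.pi) * 180.0 / np.pi
```

Besides being shorter, the modulo form also works for angles arbitrarily far from the principal range without iterating.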
Now we construct $\phi_{5:2}$ and plot it over the first 5000 yrs.
phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times, phi)
ax.set_xlim([0,5.e3])
ax.set_ylim([0,360.])
ax.set_xlabel("time (yrs)", fontsize=20)
ax.set_ylabel(r"$\phi_{5:2}$", fontsize=20)
ax.tick_params(label...
We see that the resonant angle $\phi_{5:2}$ circulates, but with a long period of $\approx 900$ yrs (compared to the orbital periods of $\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our si...
phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times, phi2)
ax.set_xlim([0,5.e3])
ax.set_ylim([0,360.])
ax.set_xlabel("time (yrs)", fontsize=20)
ax.set_ylabel(r"$\phi_{2:1}$", fontsize=20)
ax.tick_params(labelsize=...
Using interact for animation with data A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution: $$ \phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} ...
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
    return 0.5*c*(1.0/np.cosh(0.5*np.sqrt(c)*(x - c*t - a)))**2

assert np.allclose(soliton(np.array([0]), 0.0, 1.0, 0.0), np.array([0.5]))
assignments/assignment05/InteractEx03.ipynb
brettavedisian/phys202-2015-work
mit
Compute a 2d NumPy array called phi: It should have a dtype of float. It should have a shape of (xpoints, tpoints). phi[i,j] should contain the value $\phi(x[i],t[j])$.
phi = np.zeros((xpoints, tpoints))  # worked with Hunter Thomas
# Loop over indices, not array values, when filling phi[i, j].
for i in range(xpoints):
    for j in range(tpoints):
        phi[i, j] = soliton(x[i], t[j], c, a)

assert phi.shape == (xpoints, tpoints)
assert phi.ndim == 2
assert phi.dtype == np.dtype(float)
assert phi[0,0] == soliton(x[0], t[0], c, a)
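The double loop above can be replaced by NumPy broadcasting, computing the whole grid at once. A self-contained sketch (soliton is redefined here with the same formula, and the grid sizes are arbitrary illustrative choices):

```python
import numpy as np

def soliton(x, t, c, a):
    """Soliton solution phi(x, t) of the KdV equation (same formula as above)."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - a)) ** 2

# Broadcasting: x varies down the rows, t across the columns, so
# phi[i, j] == soliton(x[i], t[j], c, a) without any Python loop.
c, a = 1.0, 0.0
x = np.linspace(0.0, 10.0, 50)
t = np.linspace(0.0, 10.0, 100)
phi = soliton(x[:, np.newaxis], t[np.newaxis, :], c, a)
```

The broadcast version produces the same values as the double loop but runs in vectorized NumPy code.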
Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    plt.plot(x, soliton(x, t[i], c, a))
    plt.xlabel('x')
    plt.ylabel('$\phi$')
    plt.title('Soliton wave vs. position')
    plt.tick_params(axis='x', top='off', direction='out')
    plt.tick_params(axis='y', right='off', direction='out')

plot...
Use interact to animate the plot_soliton_data function versus time.
interact(plot_soliton_data, i=(0, 50, 1));  # integer slider: i is an index into t

assert True  # leave this for grading the interact with plot_soliton_data cell
Retraining an image classifier <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_image_retraining"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td> <td> <a target="_blank" href="https://colab.research.google.com/...
import itertools
import os

import matplotlib.pylab as plt
import numpy as np

import tensorflow as tf
import tensorflow_hub as hub

print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
site/ja/hub/tutorials/tf2_image_retraining.ipynb
tensorflow/docs-l10n
apache-2.0
Select the TF2 SavedModel module to use To start, use https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4. The same URL can be used in code to identify the SavedModel, and in a browser to view its documentation. (Note that models in TF1 Hub format won't work here.) You can find more TF2 models that generate image feature vectors here. There are many possible models to try. Select a different one in the cell below and follow the notebook's instructions.
model_name = "efficientnetv2-xl-21k" # @param ['efficientnetv2-s', 'efficientnetv2-m', 'efficientnetv2-l', 'efficientnetv2-s-21k', 'efficientnetv2-m-21k', 'efficientnetv2-l-21k', 'efficientnetv2-xl-21k', 'efficientnetv2-b0-21k', 'efficientnetv2-b1-21k', 'efficientnetv2-b2-21k', 'efficientnetv2-b3-21k', 'efficientnetv2-...
Set up the Flowers dataset Inputs are resized to the size expected by the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, especially when fine-tuning.
data_dir = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

def build_dataset(subset):
    return tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        validation_split=.20,
        subset=subse...
Define the model All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module. For speed we start with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
do_fine_tuning = False  #@param {type:"boolean"}

print("Building model with", model_handle)
model = tf.keras.Sequential([
    # Explicitly define the input shape so the model can be properly
    # loaded by the TFLiteConverter
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(model_handl...
Train the model
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
    metrics=['accuracy'])

steps_per_epoch = train_size // BATCH_SIZE
validation_steps = valid_size // BATCH_SIZE
hist = model.fit(
    train_ds,
    ...
Let's try the model on an image from the validation data.
x, y = next(iter(val_ds))
image = x[0, :, :, :]
true_index = np.argmax(y[0])

plt.imshow(image)
plt.axis('off')
plt.show()

# Expand the validation image to (1, 224, 224, 3) before predicting the label
prediction_scores = model.predict(np.expand_dims(image, axis=0))
predicted_index = np.argmax(prediction_scores)
print("...
Finally, the trained model can be saved for deployment to TF Serving or TF Lite (mobile) as follows.
saved_model_path = f"/tmp/saved_flowers_model_{model_name}"
tf.saved_model.save(model, saved_model_path)
Optional: Deployment to TensorFlow Lite TensorFlow Lite lets you deploy TensorFlow models to mobile and IoT devices. The code below shows how to convert the trained model to TF Lite and apply post-training tools from the TensorFlow Model Optimization Toolkit. Finally, it runs the converted model in the TF Lite Interpreter to examine the resulting quality. Converting without optimization gives the same results as before (up to rounding error). Converting with optimization but without any data quantizes the model weights to 8 bit...
#@title Optimization settings
optimize_lite_model = False  #@param {type:"boolean"}
#@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount.
num_calibration_examples = 60  #@param {type:"slider", min:0, max:1000, step:1}
representative_data...
Review: Forward / Backward Here are solutions to the last hands-on lecture's coding problems, along with example uses with pre-defined A and B matrices. $\alpha_t(z_t) = B_{z_t,x_t} \sum_{z_{t-1}} \alpha_{t-1}(z_{t-1}) A_{z_{t-1}, z_t} $ $\beta_t(z_t) = \sum_{z_{t+1}} A_{z_t, z_{t+1}} B_{z_{t+1}, x_{t+1}} \beta_{t+1}(z_{t+1...
def forward(params, observations):
    pi, A, B = params
    N = len(observations)
    S = pi.shape[0]
    alpha = np.zeros((N, S))

    # base case
    alpha[0, :] = pi * B[observations[0], :]

    # recursive case
    for i in range(1, N):
        for s2 in range(S):
            for s1 in range(S):
                ...
handsOn_lecture19_baum-welch-pgm-inference/handsOn_lecture19_baum-welch-pgm-inference-template.ipynb
eecs445-f16/umich-eecs445-f16
mit
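As a sanity check, the forward recursion above can also be written in a compact vectorized form and verified against brute-force enumeration over all state sequences. The toy 2-state HMM below is hypothetical (Python 3), and B follows the same `B[observation, state]` indexing convention as the code above:

```python
import numpy as np

def forward_vec(pi, A, B, observations):
    """Vectorized forward pass.
    A[i, j] = p(z_t = j | z_{t-1} = i); B[x, s] = p(x_t = x | z_t = s)."""
    N, S = len(observations), pi.shape[0]
    alpha = np.zeros((N, S))
    alpha[0] = pi * B[observations[0]]           # base case
    for t in range(1, N):
        alpha[t] = B[observations[t]] * (alpha[t - 1] @ A)  # recursion
    return alpha

# Hypothetical 2-state, 2-symbol HMM for a quick check.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.2], [0.1, 0.8]])   # rows: symbols, cols: states
alpha = forward_vec(pi, A, B, [0, 1, 0])
likelihood = alpha[-1].sum()
```

Summing the final alpha row gives the observation likelihood, which should agree exactly with summing the joint probability over every possible state sequence.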
Implementing Baum-Welch With the forward and backward algorithm implementations ready, let's use them to implement Baum-Welch, EM for HMMs. In the M step, here is how the parameters are updated: $ p(z_{t-1}, z_t | \mathbf{X}, \theta) = \frac{\alpha_{t-1}(z_{t-1}) \beta_t(z_t) A_{z_{t-1}, z_t} B_{z_t, x_t}}{\sum_k \alpha_t(k)\beta_t(k...
# Some utilities for tracing our implementation below
def left_pad(i, s):
    return "\n".join(["{}{}".format(' '*i, l) for l in s.split("\n")])

def pad_print(i, s):
    print(left_pad(i, s))

def pad_print_args(i, **kwargs):
    pad_print(i, "\n".join(["{}:\n{}".format(k, kwargs[k]) for k in sorted(kwargs.keys(...
Training with examples Let's try producing updated parameters for our HMM using a few examples. How did the A and B matrices get updated with the data? Was any confidence gained in the emission probabilities of nouns? Verbs?
pi2, A2, B2 = baum_welch([
        [THE, DOG, WALKED, IN, THE, PARK, END, END],  # END -> END needs at least one transition example
        [THE, DOG, RAN, IN, THE, PARK, END],
        [THE, CAT, WALKED, IN, THE, PARK, END],
        [THE, DOG, RAN, IN, THE, PARK, END]],
    pi, A, B, 10, trace=False)
print("original A")
pr...
Tracing through the implementation Let's look at a trace of one iteration. Study the steps carefully and make sure you understand how we are updating the parameters, corresponding to these updates: $ p(z_{t-1}, z_t | \mathbf{X}, \theta) = \frac{\alpha_{t-1}(z_{t-1}) \beta_t(z_t) A_{z_{t-1}, z_t} B_{z_t, x_t}}{\sum_k \alpha_t(k...
pi3, A3, B3 = baum_welch([
        [THE, DOG, WALKED, IN, THE, PARK, END, END],
        [THE, CAT, RAN, IN, THE, PARK, END, END]],
    pi, A, B, 1, trace=True)
print("\n\n")
print_A(A3)
print_B(B3)
First, just checking that the flattening and reshaping work as expected
test = np.random.randn(11,11,4,100)
test.shape
test_flat = test.flatten()
test_flat.shape
np.savetxt('test.txt', test_flat)
test_back = np.loadtxt('test.txt').reshape((11,11,4,100))
test_back.shape
np.mean(test - test_back)
bin/calculations/human_data/old/saving.ipynb
michaelneuder/image_quality_analysis
mit
Time Series Database This notebook demonstrates the persistent behavior of the database. Initialization Clear the file system for demonstration purposes.
# database parameters
ts_length = 100
data_dir = '../db_files'
db_name = 'default'
dir_path = data_dir + '/' + db_name + '/'

# clear file system for testing
if not os.path.exists(dir_path):
    os.makedirs(dir_path)
filelist = [dir_path + f for f in os.listdir(dir_path)]
for f in filelist:
    os.remove(f)
docs/persistence_demo.ipynb
Mynti207/cs207project
mit
Load the database server.
# when running from the terminal
# python go_server_persistent.py --ts_length 100 --db_name 'demo'

# here we load the server as a subprocess for demonstration purposes
server = subprocess.Popen(['python', '../go_server_persistent.py',
                           '--ts_length', str(ts_length),
                           '--data_dir', data_dir, '-...
Load the database webserver.
# when running from the terminal
# python go_webserver.py

# here we load the server as a subprocess for demonstration purposes
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5)  # make sure it loads completely
Import the web interface and initialize it.
from webserver import *

web_interface = WebInterface()
Generate Data Let's create some dummy data to aid in our demonstration. You will need to import the timeseries package to work with the TimeSeries format. Note: the database is persistent, so it can store data between sessions, but we will start with an empty database here for demonstration purposes.
from timeseries import *

def tsmaker(m, s, j):
    '''
    Helper function: randomly generates a time series for testing.

    Parameters
    ----------
    m : float
        Mean value for generating time series data
    s : float
        Standard deviation value for generating time series data
    j : float
        ...
Insert Data Let's start by loading the data into the database, using the REST API web interface.
# check that the database is empty
web_interface.select()

# add stats trigger
web_interface.add_trigger('stats', 'insert_ts', ['mean', 'std'], None)

# insert the time series
for k in tsdict:
    web_interface.insert_ts(k, tsdict[k])

# upsert the metadata
for k in tsdict:
    web_interface.upsert_meta(k, metadict[k])...
Inspect Data Let's inspect the data, to make sure that all the previous operations were successful.
# select all database entries; all metadata fields
results = web_interface.select(fields=[])

# we have the right number of database entries
assert len(results) == num_ts

# we have all the right primary keys
assert sorted(results.keys()) == ts_keys

# check that all the time series and metadata matches
for k in tsdict...
Let's generate an additional time series for similarity searches. We'll store the time series and the results of the similarity searches, so that we can compare against them after reloading the database.
_, query = tsmaker(np.random.uniform(low=0.0, high=1.0),
                   np.random.uniform(low=0.05, high=0.4),
                   np.random.uniform(low=0.05, high=0.2))
results_vp = web_interface.vp_similarity_search(query, 1)
results_vp
results_isax = web_interface.isax_similarity_search(query)
results_isax
Finally, let's store our iSAX tree representation.
results_tree = web_interface.isax_tree()
print(results_tree)
Terminate and Reload Database Now that we know that everything is loaded, let's close the database and re-open it.
os.kill(server.pid, signal.SIGINT)
time.sleep(5)  # give it time to terminate
os.kill(webserver.pid, signal.SIGINT)
time.sleep(5)  # give it time to terminate

web_interface = None
server = subprocess.Popen(['python', '../go_server_persistent.py',
                           '--ts_length', str(ts_length),
                           '--data_dir', ...
Inspect Data Let's repeat the previous tests to check whether our persistence architecture worked.
# select all database entries; all metadata fields
results = web_interface.select(fields=[])

# we have the right number of database entries
assert len(results) == num_ts

# we have all the right primary keys
assert sorted(results.keys()) == ts_keys

# check that all the time series and metadata matches
for k in tsdict...
We have successfully reloaded all of the database components from disk!
# terminate processes before exiting
os.kill(server.pid, signal.SIGINT)
time.sleep(5)  # give it time to terminate
web_interface = None
webserver.terminate()
Above, we see 5 points. These indicate the centroids of astrocytes. We will use the origin, i.e. $(0,0)$, as the target astrocyte. Thus we will construct the Voronoi region around the origin.
plt.scatter((0), (0), color="red")
plt.scatter(xpoints, ypoints)
for pt in zip(xpoints, ypoints):
    plt.plot((0, pt[0]), (0, pt[1]), color="black")
plt.xlim((-5,5))
plt.ylim((-5,5))
Voronoi The Astrocytes.ipynb
mathnathan/notebooks
mit
The target astrocyte is drawn in red. To help us visualize how the Voronoi mesh is constructed, we drew a line connecting the target astrocyte to each of its neighbors. Next we will mark the midpoint of each line with a green dot. These will form the vertices of the Voronoi mesh.
plt.scatter((0), (0), color="red")
plt.scatter(xpoints, ypoints)
vpts = []
for pt in zip(xpoints, ypoints):
    plt.plot((0, pt[0]), (0, pt[1]), color="black")
    plt.scatter((pt[0]/2), (pt[1]/2), color="green")
plt.xlim((-5,5))
plt.ylim((-5,5))
Lastly, by connecting all of these vertices with green lines, we will have the Voronoi region around the origin marked off by the perimeter of green lines.
plt.scatter((0), (0), color="red")
plt.scatter(xpoints, ypoints)
for i, pt in enumerate(zip(xpoints, ypoints)):
    plt.plot((0, pt[0]), (0, pt[1]), color="black")
    plt.scatter((pt[0]/2), (pt[1]/2), color="green")
    plt.plot((pt[0]/2, xpoints[i-1]/2), (pt[1]/2, ypoints[i-1]/2), color="green")
plt.xlim((-5,5))
plt.ylim((-5,5))
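For comparison, scipy.spatial can compute the exact Voronoi diagram. A sketch on a hypothetical, symmetric set of neighbor points (not the notebook's data), where the origin's cell works out to the square $[-1.5, 1.5]^2$:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical points: the origin surrounded by four neighbors at distance 3.
points = np.array([[0.0, 0.0], [3.0, 0.0], [-3.0, 0.0], [0.0, 3.0], [0.0, -3.0]])
vor = Voronoi(points)

# The Voronoi cell of points[0] (the origin) is bounded by the perpendicular
# bisectors at x = +/-1.5 and y = +/-1.5, so its corners are (+/-1.5, +/-1.5).
region = vor.regions[vor.point_region[0]]
cell = vor.vertices[region]
```

In this symmetric configuration every corner of the origin's cell has coordinates of magnitude 1.5, and the region is bounded (no index of -1 appears in `region`).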
Questions 1. Run the same program on the data containing years of schooling (first column) versus salary (second column). Download the data here. This example was worked through in class on several occasions. The following items should be answered using this data. SOLUTION: File downloaded; it is in the directory...
## Show figure
# @param data Data to show in the graphic.
# @param xlabel Text to be shown in abscissa axis.
# @param ylabel Text to be shown in ordinate axis.
def show_figure(data, xlabel, ylabel):
    plt.plot(data)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
Tarefas/Regressão-Linear-Simples-do-Zero/Task 01.ipynb
italoPontes/Machine-learning
lgpl-3.0
3. What happens to the RSS over the iterations (does it increase or decrease) if you use 1000 iterations and a learning_rate (gradient step size) of 0.001? Why do you think this happens?
points = genfromtxt("income.csv", delimiter=",")
x = points[:,0]
y = points[:,1]
starting_w0 = 0
starting_w1 = 0
learning_rate = 0.001
iterations_number = 50
[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)
show_figure(rss_total, "Iteraction...
From this plot it is possible to observe that the higher the learning rate, the more iterations are needed to reach the same error.
learning_rate = 0.001
iterations_number = 1000
[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)
print("RSS na última iteração: %.2f" % rss_total[-1])

iterations_number = 10000
[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, st...
Looking at the RSS values computed as the number of iterations increases, it is possible to observe that the RSS keeps decreasing. 4. Test different values of the number of iterations and learning_rate until w0 and w1 are approximately equal to -39 and 5, respectively. Report the values of the number of iterations ...
learning_rate = 0.0025
iterations_number = 20000
[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)
print("W0: %.2f" % w0)
print("W1: %.2f" % w1)
print("RSS na última iteração: %.2f" % rss_total[-1])
5. The algorithm in the video uses the number of iterations as the stopping criterion. Change the algorithm to use a tolerance criterion compared against the size of the gradient (as in the algorithm from the slides presented in class). The methodology applied was the following: when the number of iterations is not supplied as a parame...
learning_rate = 0.0025
iterations_number = 0
[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)
print("W0: %.2f" % w0)
print("W1: %.2f" % w1)
print("RSS na última iteração: %.2f" % rss_total[-1])
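The tolerance-based stopping rule described above can be sketched as follows. This is a minimal stand-in, since the notebook's gradient_descent_runner is not shown here: the function below is an assumption about the approach, run on a hypothetical noiseless line rather than income.csv:

```python
import numpy as np

def gradient_descent_tol(x, y, lr=0.01, tol=1e-4, max_iters=1000000):
    """Simple-linear-regression gradient descent that stops once the
    gradient norm drops below tol (a hypothetical sketch, not the
    notebook's gradient_descent_runner)."""
    w0, w1 = 0.0, 0.0
    n = len(x)
    for it in range(max_iters):
        err = y - (w0 + w1 * x)
        g0 = -2.0 * err.sum() / n          # d(RSS/n)/dw0
        g1 = -2.0 * (err * x).sum() / n    # d(RSS/n)/dw1
        if np.hypot(g0, g1) < tol:
            break                          # tolerance criterion reached
        w0 -= lr * g0
        w1 -= lr * g1
    return w0, w1, it

rng = np.random.RandomState(0)
x = rng.uniform(0, 10, 200)
y = 3.0 + 2.0 * x   # noiseless line: intercept 3, slope 2
w0, w1, iters = gradient_descent_tol(x, y)
```

When the gradient norm falls below the tolerance, the parameters are necessarily close to the least-squares optimum, so on noiseless data the fit recovers the true intercept and slope.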
6. Find a tolerance value whose results approach the parameter values from item 4 above. What value was it? The value used, as described in the previous question, was 0.001. That is, when the size of the gradient is smaller than 0.001, the algorithm considers that the approximation has converged and ends the process...
import time

start_time = time.time()
[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)
gradient_time = float(time.time() - start_time)
print("Tempo para calcular os coeficientes pelo gradiente descendente: %.2f s." % gradient_time)

start_time = ...
Tarefas/Regressão-Linear-Simples-do-Zero/Task 01.ipynb
italoPontes/Machine-learning
lgpl-3.0
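For comparison with the timing code above, the closed-form least-squares coefficients for simple linear regression can be computed directly. This is a standalone sketch with hypothetical names, not the notebook's exact comparison code:

```python
def ols_coefficients(x, y):
    """Closed-form simple linear regression:
    w1 = cov(x, y) / var(x), w0 = mean(y) - w1 * mean(x)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    w1 = cov_xy / var_x
    w0 = mean_y - w1 * mean_x
    return w0, w1

# Same toy line y = 2x + 1; the closed form recovers it in one shot
w0, w1 = ols_coefficients([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The closed form is a single pass over the data, which is why it is typically much faster than iterative gradient descent for this one-feature case.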
Import pandas We are using the very handy pandas library for dataframes.
import pandas as pd
oldsitejekyll/markdown_generator/publications.ipynb
manparvesh/manparvesh.github.io
mit
Import TSV Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can mod...
publications = pd.read_csv("publications.tsv", sep="\t", header=0)
publications
oldsitejekyll/markdown_generator/publications.ipynb
manparvesh/manparvesh.github.io
mit
Escape special characters YAML is very picky about what it takes as a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
}

def html_escape(text):
    """Produce entities within text."""
    return "".join(html_escape_table.get(c, c) for c in text)
oldsitejekyll/markdown_generator/publications.ipynb
manparvesh/manparvesh.github.io
mit
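For example, applying the escaping table above to a title containing all three special characters (the table and helper are re-declared here so the snippet is self-contained):

```python
html_escape_table = {"&": "&amp;", '"': "&quot;", "'": "&apos;"}

def html_escape(text):
    """Replace &, double quotes and single quotes with HTML entities."""
    return "".join(html_escape_table.get(c, c) for c in text)

escaped = html_escape('Strunk & White\'s "Elements"')
# escaped is now safe to embed inside a double-quoted YAML string
```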
Creating the markdown files This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
import os

for row, item in publications.iterrows():
    md_filename = str(item.pub_date) + "-" + item.url_slug + ".md"
    html_filename = str(item.pub_date) + "-" + item.url_slug
    year = item.pub_date[:4]

    ## YAML variables
    md = "---\ntitle: \"" + item.title + '"\n'
    md += """collect...
oldsitejekyll/markdown_generator/publications.ipynb
manparvesh/manparvesh.github.io
mit
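The per-row logic above can be sketched in isolation: build a YAML front-matter string for one publication and write it to a markdown file. The helper name and the temp directory are just for illustration; field names follow the loop above:

```python
import os
import tempfile

def make_publication_md(pub_date, url_slug, title):
    """Build (filename, markdown) for one publication row, YAML front matter first."""
    md_filename = "{}-{}.md".format(pub_date, url_slug)
    md = "---\n"
    md += 'title: "{}"\n'.format(title)
    md += "collection: publications\n"
    md += "date: {}\n".format(pub_date)
    md += "---\n\nPaper description goes here.\n"
    return md_filename, md

out_dir = tempfile.mkdtemp()
name, md = make_publication_md("2009-10-01", "paper-title-number-1", "Paper Title Number 1")
with open(os.path.join(out_dir, name), "w") as f:
    f.write(md)
```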
These files are in the publications directory, one directory below where we're working from.
!ls ../_publications/
!cat ../_publications/2009-10-01-paper-title-number-1.md
oldsitejekyll/markdown_generator/publications.ipynb
manparvesh/manparvesh.github.io
mit
Aim The aim of this notebook is to give a brief overview of how to use the evolutionary-sampling-powered ensemble models from the EvoML research project. We will make the notebook more verbose if time permits. The priority is to showcase the flexible API of the new estimators, which encourages research and tinkerin...
from evoml.subsampling import BasicSegmenter_FEMPO, BasicSegmenter_FEGT, BasicSegmenter_FEMPT

df = pd.read_csv('datasets/ozone.csv')
df.head(2)
X, y = df.iloc[:, :-1], df['output']

print(BasicSegmenter_FEGT.__doc__)

from sklearn.tree import DecisionTreeRegressor
clf_dt = DecisionTreeRegressor(max_depth=3)
clf = Bas...
EvoML - Example Usage.ipynb
EvoML/EvoML
gpl-3.0
2. Subspacing - sampling in the domain of features - evolving and mutating columns
from evoml.subspacing import FeatureStackerFEGT, FeatureStackerFEMPO

print(FeatureStackerFEGT.__doc__)

clf = FeatureStackerFEGT(ngen=30)
clf.fit(X, y)
clf.score(X, y)

## Get the Hall of Fame individual
hof = clf.segment[0]
sampled_datasets = [eg.get_data() for eg in hof]
[data.columns.tolist() for data in sample...
EvoML - Example Usage.ipynb
EvoML/EvoML
gpl-3.0
Set up training data In the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task. We'll have two functions - a sine function ...
train_x1 = torch.rand(50)
train_x2 = torch.rand(50)

train_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2
train_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2
examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
jrg365/gpytorch
mit
Set up a Hadamard multitask model The model should be somewhat similar to the ExactGP model in the simple regression example. The differences: The model takes two inputs: the inputs (x) and indices. The indices indicate which task the observation is for. Rather than just using a RBFKernel, we're using that in conjuncti...
class MultitaskGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.RBFKernel()
        # We lear...
examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
jrg365/gpytorch
mit
Training the model In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process. See the simple regression example for more info on this step.
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iterations = 2 if smoke_test else 50

# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)  # Includes Gau...
examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
jrg365/gpytorch
mit
Make predictions with the model
# Set into eval mode
model.eval()
likelihood.eval()

# Initialize plots
f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3))

# Test points every 0.02 in [0,1]
test_x = torch.linspace(0, 1, 51)
test_i_task1 = torch.full_like(test_x, dtype=torch.long, fill_value=0)
test_i_task2 = torch.full_like(test_x, dtype=torch.lo...
examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
jrg365/gpytorch
mit
The function This equation, by Glass and Pasternack (1978), is used to model neural networks and gene-interaction networks. $$x_{t+1}=\frac{\alpha x_{t}}{1+\beta x_{t}}$$ Where $\alpha$ and $\beta$ are positive numbers and $x_{t}\geq0$.
def g(x, alpha, beta):
    assert alpha >= 0 and beta >= 0
    return (alpha*x)/(1 + (beta * x))

def plot_cobg(x, alpha, beta):
    y = np.linspace(x[0], x[1], 300)
    g_y = g(y, alpha, beta)
    cobweb(lambda x: g(x, alpha, beta), y, g_y)

# set up the interactive plot
interact(plot_cobg, x=widgets.FloatR...
Glass_Pasternack.ipynb
rgarcia-herrera/sistemas-dinamicos
gpl-3.0
Algebraic search for fixed points Next we substitute f(x) into x repeatedly until we obtain the fourth iterate of f.
# first iterate
f0 = (alpha*x)/(1+beta*x)
Eq(f(x), f0)

# second iterate
# substitute f0 into the x of f0 to generate f1
f1 = simplify(f0.subs(x, f0))
Eq(f(f(x)), f1)

# third iterate
f2 = simplify(f1.subs(x, f1))
Eq(f(f(f(x))), f2)

# fourth iterate
f3 = simplify(f2.subs(x, f2))
Eq(f(f(f(f(x)))), f3)

# fixed poi...
Glass_Pasternack.ipynb
rgarcia-herrera/sistemas-dinamicos
gpl-3.0
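The algebraic fixed points can also be checked numerically: iterating the map from a positive start drives the orbit to a fixed value. This is a standalone sketch (the map `g` is redefined here, and the parameter choice is just an example):

```python
def g(x, alpha, beta):
    """Glass-Pasternack map: x_{t+1} = alpha*x / (1 + beta*x)."""
    return (alpha * x) / (1 + beta * x)

# With alpha=2, beta=1 the nonzero fixed point solves x = 2x/(1+x), i.e. x = 1
x = 0.5
for _ in range(200):
    x = g(x, 2.0, 1.0)
```

Near that fixed point the map's slope is $g'(1) = \alpha/(1+\beta)^2 = 1/2$, so the iteration contracts toward it quickly.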
Oscillatory fixed point Setting $$\alpha, \beta$$ so that there is a fixed point, the time series reveals an oscillation between zero and the fixed point.
def solve_g(a, b):
    y = list(np.linspace(0, float(list(solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)).pop()), 2))
    for t in range(30):
        y.append(g(y[t], a, b))
    zoom = plt.plot(y)
    print("last 15 of the series:")
    pprint(y[-15:])
    print("\nfixed points:")
    return solveset(Eq(f1.subs(a...
Glass_Pasternack.ipynb
rgarcia-herrera/sistemas-dinamicos
gpl-3.0
What happens with infinitely many iterations? Everything seems to indicate that the function converges to 1 if $\alpha=1$ and $\beta=1$. Otherwise, it converges to $\frac{\alpha}{\beta}$.
# with alpha=1 and beta=1
Eq(collect(f3, x), x/(x+1))

def plot_g(x, alpha, beta):
    pprint(x)
    y = np.linspace(x[0], x[1], 300)
    g_y = g(y, alpha, beta)
    fig1 = plt.plot(y, g_y)
    fig1 = plt.plot(y, y, color='red')
    plt.axis('equal')

interact(plot_g, x=widgets.FloatRangeSlider(min=0, max=30, step...
Glass_Pasternack.ipynb
rgarcia-herrera/sistemas-dinamicos
gpl-3.0
<a id='step1'></a> 1. Making Collectors for each numpanels and xgap case
x = 2
y = 1
ygap = 0.1524  # m = 6 in
zgap = 0.002   # m, very little gap to torquetube

tubeParams = {'diameter': 0.15,
              'tubetype': 'square',
              'material': 'Metal_Grey',
              'axisofrotation': True,
              'visible': True}

ft2m = 0.3048
xgaps = [3, 4, 6, 9, 12, 15, 18, 21]
numpane...
docs/tutorials/16 - AgriPV - 3-up and 4-up collector optimization.ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step2'></a> 2. Build the Scene so it can be viewed with rvu
xgaps = np.round(np.array([3, 4, 6, 9, 12, 15, 18, 21]) * ft2m, 1)
numpanelss = [3, 4]
sensorsxs = np.array(list(range(0, 201)))

# Select CASE:
xgap = np.round(xgaps[-1], 1)
numpanels = 4

# All the rest
ft2m = 0.3048
hub_height = 8.0 * ft2m
y = 1
pitch = 0.001  # If I recall, it doesn't like when pitch is 0 even if ...
docs/tutorials/16 - AgriPV - 3-up and 4-up collector optimization.ipynb
NREL/bifacial_radiance
bsd-3-clause
To view the generated scene, you can navigate to the test folder in a terminal and use: <b>front view:</b> rvu -vf views\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 makemod.oct <b>top view:</b> rvu -vf views\front.vp -e .0265652 -vp 5 0 70 -vd 0 0.0001 -1 makemod.oct Or run it directly from Jupyter by removing the...
## Uncomment a ! line below to run rvu from the Jupyter notebook instead of your terminal.
## The simulation will pause until you close the rvu window.

#!rvu -vf views\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 makemod.oct
#!rvu -vf views\front.vp -e .0265652 -vp 5 0 70 -vd 0 0.0001 -1 makemod.oct
docs/tutorials/16 - AgriPV - 3-up and 4-up collector optimization.ipynb
NREL/bifacial_radiance
bsd-3-clause
Load cloudmlmagic extension
%load_ext cloudmlmagic
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Initialize and set up ML Engine parameters. <font color="red">Change PROJECTID and BUCKET</font> The following dict will be written into your package's setup.py, so list the packages your code needs.
%%ml_init -projectId PROJECTID -bucket BUCKET -scaleTier BASIC_GPU -region asia-east1 -runtimeVersion 1.2
{'install_requires': ['keras', 'h5py', 'Pillow']}
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Load InceptionV3 model
%%ml_code
from keras.applications.inception_v3 import InceptionV3

model = InceptionV3(weights='imagenet')
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Load dataset
%%ml_code
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input, decode_predictions
from io import BytesIO
import numpy as np
import pandas as pd
import requests

url = 'https://github.com/hayatoy/deep-learning-datasets/releases/download/v0.1/tl_opera_capitol.npz'
response ...
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Split dataset for train and test
%%ml_code
from keras.utils import np_utils
from sklearn.model_selection import train_test_split

X_dataset = preprocess_input(X_dataset)
y_dataset = np_utils.to_categorical(y_dataset)
X_train, X_test, y_train, y_test = train_test_split(
    X_dataset, y_dataset, test_size=0.2, random_state=42)
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
The code cell above won't be included in the package deployed on ML Engine. It just demonstrates that the stock InceptionV3 model cannot classify the Opera/Capitol dataset correctly.
x = X_dataset[0]
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
print('Predicted:')
for p in decode_predictions(preds, top=5)[0]:
    print("Score {}, Label {}".format(p[2], p[1]))
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Visualize last layers of InceptionV3
pd.DataFrame(model.layers).tail()

%ml_code
from keras.models import Model

# Intermediate layer
intermediate_layer_model = Model(inputs=model.input,
                                outputs=model.layers[311].output)
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Extract intermediate features
x = np.expand_dims(X_dataset[0], axis=0)
feature = intermediate_layer_model.predict(x)
pd.DataFrame(feature.reshape(-1, 1)).plot(figsize=(12, 3))
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Append dense layers at the end
%%ml_code
from keras.layers import Dense

# Append dense layers
x = intermediate_layer_model.output
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)

# Transfer learning model; all layers are trainable at this moment
transfer_model = Model(inputs=intermediate_layer_model.input, outp...
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Create Model and Version for Online Prediction
# !gcloud ml-engine models create OperaCapitol
!gcloud ml-engine versions create v1 --model OperaCapitol --runtime-version 1.2 --origin gs://BUCKET/keras-mlengine/savedmodel
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Let's classify this image! It should be class 0. <img src="opera.jpg">
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors

PROJECTID = 'PROJECTID'
projectID = 'projects/{}'.format(PROJECTID)
modelName = 'OperaCapitol'
modelID = '{}/models/{}'.format(projectID, modelName)

credentials = GoogleCredentials.get_applicatio...
examples/Keras_Fine_Tuning.ipynb
hayatoy/cloudml-magic
mit
Loading the EIA Data, the path may need to be updated... This will take a few minutes to run.
# Iterate through the directory to find all the files to import
# Modified so that it also works on macs
path = os.path.join('EIA Data', '923-No_Header')
full_path = os.path.join(path, '*.*')
eiaNames = os.listdir(path)

# Rename the keys for easier merging later
fileNameMap = {'EIA923 SCHEDULES 2_3_4_5 Final 2010.xls':...
Raw Data/Merging EPA and EIA.ipynb
gschivley/ERCOT_power
mit
The excel documents have different column names so we need to standardize them all
# Dict of values to replace to standardize column names across all dataframes
monthDict = {"JANUARY": "JAN",
             "FEBRUARY": "FEB",
             "MARCH": "MAR",
             "APRIL": "APR",
             "MAY": "MAY",
             "JUNE": "JUN",
             "JULY": "JUL",
             "AUGUST": "AUG",
             "SEPTEMBER": "SEP",
             ...
Raw Data/Merging EPA and EIA.ipynb
gschivley/ERCOT_power
mit
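The renaming pattern above can be sketched without pandas: apply the mapping dict to a list of raw column labels. The helper `standardize` and the sample labels are hypothetical, for illustration only:

```python
month_map = {"JANUARY": "JAN", "FEBRUARY": "FEB", "MARCH": "MAR", "APRIL": "APR",
             "MAY": "MAY", "JUNE": "JUN", "JULY": "JUL", "AUGUST": "AUG",
             "SEPTEMBER": "SEP", "OCTOBER": "OCT", "NOVEMBER": "NOV", "DECEMBER": "DEC"}

def standardize(columns):
    """Replace any full month name inside a column label with its abbreviation."""
    out = []
    for col in columns:
        for full, abbr in month_map.items():
            col = col.replace(full, abbr)
        out.append(col)
    return out

cols = standardize(["NETGEN_JANUARY", "NETGEN_FEBRUARY", "PLANT_ID"])
```

With pandas, the same effect would come from building a rename mapping and calling `DataFrame.rename(columns=...)` on each yearly dataframe before merging.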