Complexity? $\implies$ The function lempel_ziv_complexity_cython seems to be indeed (almost) linear in $n$, the length of the binary sequence $S$. But let us check more precisely, as it could also have a complexity of $\mathcal{O}(n \log n)$.
```python
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(context="notebook", style="darkgrid", palette="hls",
        font="sans-serif", font_scale=1.4)

import numpy as np
import timeit

sizes = np.array(np.trunc(np.logspace(1, 6, 30)), dtype=int)
times = np.array([
    timeit.timeit(
        stmt="lempel_ziv_complexity_cython(random_string({}))".format(n),
        globals=globals(),
        number=10,
    )
    for n in sizes
])

plt.figure(figsize=(15, 10))
plt.plot(sizes, times, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel("Time in seconds")  # timeit.timeit returns seconds
plt.title("Time complexity of Lempel-Ziv complexity")
plt.show()

plt.figure(figsize=(15, 10))
plt.loglog(sizes, times, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel("Time in seconds")
plt.title("Time complexity of Lempel-Ziv complexity, loglog scale")
plt.show()
```
Short_study_of_the_Lempel-Ziv_complexity.ipynb
Naereen/Lempel-Ziv_Complexity
mit
It is linear in $\log\log$ scale, so the algorithm indeed seems to have a linear complexity. To sum up, for a sequence $S$ of length $n$, it takes $\mathcal{O}(n)$ basic operations to compute its Lempel-Ziv complexity $\mathrm{Lempel{-}Ziv}(S)$. Conclusion The Lempel-Ziv complexity is not too hard to implement, and it indeed measures a certain complexity of a binary sequence, capturing its regularity and reproducibility. Using Cython was quite useful, giving a $\simeq \times 100$ speed-up over our naive manual implementation! The algorithm is not easy to analyze: we have a trivial $\mathcal{O}(n^2)$ bound, but experiments suggest it is more likely $\mathcal{O}(n \log n)$ in the worst case, and $\mathcal{O}(n)$ in practice for "not too complicated" sequences (or on average, for random sequences). (Experimental) Julia implementation I want to (quickly) see if I can use Julia to write a faster version of this function. See issue #1.
```julia
%%time
%%script julia

"""Lempel-Ziv complexity for a sequence, in simple Julia code."""
function lempel_ziv_complexity(sequence)
    sub_strings = Set()
    n = length(sequence)

    ind = 1
    inc = 1
    while true
        if ind + inc > n
            break
        end
        sub_str = sequence[ind : ind + inc]
        if sub_str in sub_strings
            inc += 1
        else
            push!(sub_strings, sub_str)
            ind += inc
            inc = 1
        end
    end
    return length(sub_strings)
end

s = "1001111011000010"
lempel_ziv_complexity(s)  # 1 / 0 / 01 / 1110 / 1100 / 0010

M = 1000; N = 10000
for _ in 1:M
    s = join(rand(0:1, N))
    lempel_ziv_complexity(s)
end
lempel_ziv_complexity(s)  # 1 / 0 / 01 / 1110 / 1100 / 0010
```
Short_study_of_the_Lempel-Ziv_complexity.ipynb
Naereen/Lempel-Ziv_Complexity
mit
And to compare it fairly, let us run the same simple Python implementation with Pypy.
```python
%%time
%%pypy

def lempel_ziv_complexity(sequence):
    """Lempel-Ziv complexity for a binary sequence, in simple Python code."""
    sub_strings = set()
    n = len(sequence)

    ind = 0
    inc = 1
    while True:
        if ind + inc > len(sequence):
            break
        sub_str = sequence[ind : ind + inc]
        if sub_str in sub_strings:
            inc += 1
        else:
            sub_strings.add(sub_str)
            ind += inc
            inc = 1
    return len(sub_strings)

s = "1001111011000010"
lempel_ziv_complexity(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010

from random import random

M = 1000
N = 10000
for _ in range(M):
    s = ''.join(str(int(random() < 0.5)) for _ in range(N))
    lempel_ziv_complexity(s)
```
Short_study_of_the_Lempel-Ziv_complexity.ipynb
Naereen/Lempel-Ziv_Complexity
mit
Load learner object Note: you don't have to do this over and over again; once you have loaded everything, you can just call learn.export() to save the learner.
```python
data_lm = load_data(data_path, bs=96)
learn = language_model_learner(data=data_lm, arch=AWD_LSTM,
                               model_dir=model_path, pretrained=False)
```
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
Load weights of trained model
learn.load('best_22zkdqlr')
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
Export Minimal Model State For Inference
```python
learn.export('trained_model_22zkdqlr.hdf')
learn.save_encoder('trained_model_encoder_22zkdqlr')
```
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
The data is very large, so if you are running this notebook it is best to release memory by deleting these objects and loading the more lightweight inference artifacts that we just saved.
```python
del learn
del data_lm
```
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
Part II: Load Minimal Model For Inference
from inference import InferenceWrapper, pass_through
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
Create an InferenceWrapper object
```python
wrapper = InferenceWrapper(model_path='/ds/lang_model/models_22zkdqlr/')

issue_string = '# hello abacadabra world \nA second line **something bold**.'

pooledfeat = wrapper.get_pooled_features(issue_string)
print(pooledfeat)
print(pooledfeat.shape)

rawfeat = wrapper.get_raw_features(issue_string)
print(rawfeat)
print(rawfeat.shape)
```
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
Predict the next 5 words We don't actually use this functionality, but for the curious it is interesting to see what the output of a language model looks like. Recall that we are using the encoder of the language model to extract features from GitHub issues.
wrapper.learn.predict('I am having trouble opening a', 5)
Issue_Embeddings/notebooks/04_Inference.ipynb
kubeflow/code-intelligence
mit
<a id='twotime'></a> Two-Time Correlation Functions With the QuTiP time-evolution functions (for example mesolve and mcsolve), a state vector or density matrix can be evolved from an initial state at $t_0$ to an arbitrary time $t$, $\rho(t)=V(t, t_0)\left\{\rho(t_0)\right\}$, where $V(t, t_0)$ is the propagator defined by the equation of motion. The resulting density matrix can then be used to evaluate the expectation values of arbitrary combinations of same-time operators. To calculate two-time correlation functions of the form $\left<A(t+\tau)B(t)\right>$, we can use the quantum regression theorem to write $$ \left<A(t+\tau)B(t)\right> = {\rm Tr}\left[A V(t+\tau, t)\left\{B\rho(t)\right\}\right] = {\rm Tr}\left[A V(t+\tau, t)\left\{BV(t, 0)\left\{\rho(0)\right\}\right\}\right] $$ We therefore first calculate $\rho(t)=V(t, 0)\left\{\rho(0)\right\}$ using one of the QuTiP evolution solvers with $\rho(0)$ as initial state, and then again use the same solver to calculate $V(t+\tau, t)\left\{B\rho(t)\right\}$ using $B\rho(t)$ as the initial state. Note that if the initial state is the steady state, then $\rho(t)=V(t, 0)\left\{\rho_{\rm ss}\right\}=\rho_{\rm ss}$ and $$ \left<A(t+\tau)B(t)\right> = {\rm Tr}\left[A V(t+\tau, t)\left\{B\rho_{\rm ss}\right\}\right] = {\rm Tr}\left[A V(\tau, 0)\left\{B\rho_{\rm ss}\right\}\right] = \left<A(\tau)B(0)\right>, $$ which is independent of $t$, so that we only have one time coordinate $\tau$. QuTiP provides a family of functions that assists in the process of calculating two-time correlation functions. The available functions and their usage are shown in the table below. Each of these functions can use one of the following evolution solvers: Master-equation, Exponential series, and Monte-Carlo. The choice of solver is defined by the optional argument solver.
<table> <tr> <th>QuTiP Function</th> <th>Correlation Function Type</th> </tr> <tr> <td>`correlation` or `correlation_2op_2t`</td> <td>$\left<A(t+\tau)B(t)\right>$ or $\left<A(t)B(t+\tau)\right>$. </td> </tr> <tr> <td>`correlation_ss` or `correlation_2op_1t`</td> <td>$\left<A(\tau)B(0)\right>$ or $\left<A(0)B(\tau)\right>$.</td> </tr> <tr> <td>`correlation_3op_1t`</td> <td>$\left<A(0)B(\tau)C(0)\right>$.</td> </tr> <tr> <td>`correlation_3op_2t`</td> <td>$\left<A(t)B(t+\tau)C(t)\right>$.</td> </tr> <tr> <td>`correlation_4op_1t` <font color='red'>(Deprecated)</font></td> <td>$\left<A(0)B(\tau)C(\tau)D(0)\right>$</td> </tr> <tr> <td>`correlation_4op_2t` <font color='red'>(Deprecated)</font></td> <td style='min-width:200px'>$\left<A(t)B(t+\tau)C(t+\tau)D(t)\right>$ </td> </tr> </table> The most common use-case is to calculate correlation functions of the kind $\left<A(\tau)B(0)\right>$, in which case we use the correlation function solvers that start from the steady state, e.g., the correlation_2op_1t function. These correlation function solvers return a vector or matrix (in general complex) with the correlations as a function of the delay times. <a id='steady'></a> Steady State Correlation Function The following code demonstrates how to calculate the $\left<x(t)x(0)\right>$ correlation for a leaky cavity with three different relaxation rates.
```python
times = np.linspace(0, 10.0, 200)
a = destroy(10)
x = a.dag() + a
H = a.dag() * a

corr1 = correlation_2op_1t(H, None, times, [np.sqrt(0.5) * a], x, x)
corr2 = correlation_2op_1t(H, None, times, [np.sqrt(1.0) * a], x, x)
corr3 = correlation_2op_1t(H, None, times, [np.sqrt(2.0) * a], x, x)

plot(times, np.real(corr1), times, np.real(corr2), times, np.real(corr3))
legend(['0.5', '1.0', '2.0'])
xlabel(r'Time $t$')
ylabel(r'Correlation $\left<x(t)x(0)\right>$')
show()
```
docs/guide/CorrelationFunctions.ipynb
qutip/qutip-notebooks
lgpl-3.0
<a id='emission'></a> Emission Spectrum Given a correlation function $\left<A(\tau)B(0)\right>$ we can define the corresponding power spectrum as $$ S(\omega) = \int_{-\infty}^{\infty} \left<A(\tau)B(0)\right> e^{-i\omega\tau} d\tau. $$ In QuTiP, we can calculate $S(\omega)$ using either spectrum, which first calculates the correlation function using the essolve solver and then performs the Fourier transform semi-analytically, or we can use the function spectrum_correlation_fft to numerically calculate the Fourier transform of a given correlation data using FFT. The following example demonstrates how these two functions can be used to obtain the emission power spectrum.
```python
N = 4                       # number of cavity fock states
wc = wa = 1.0 * 2 * np.pi   # cavity and atom frequency
g = 0.1 * 2 * np.pi         # coupling strength
kappa = 0.75                # cavity dissipation rate
gamma = 0.25                # atom dissipation rate

# Jaynes-Cummings Hamiltonian
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), destroy(2))
H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())

# collapse operators
n_th = 0.25
c_ops = [np.sqrt(kappa * (1 + n_th)) * a,
         np.sqrt(kappa * n_th) * a.dag(),
         np.sqrt(gamma) * sm]

# calculate the correlation function using the mesolve solver, and then fft to
# obtain the spectrum. Here we need to make sure to evaluate the correlation
# function for a sufficiently long time and at a sufficiently high sampling
# rate so that the discrete Fourier transform (FFT) captures all the features
# in the resulting spectrum.
tlist = np.linspace(0, 100, 5000)
corr = correlation_2op_1t(H, None, tlist, c_ops, a.dag(), a)
wlist1, spec1 = spectrum_correlation_fft(tlist, corr)

# calculate the power spectrum using spectrum, which internally uses essolve
# to solve for the dynamics (by default)
wlist2 = np.linspace(0.25, 1.75, 200) * 2 * np.pi
spec2 = spectrum(H, wlist2, c_ops, a.dag(), a)

# plot the spectra
fig, ax = subplots(1, 1)
ax.plot(wlist1 / (2 * np.pi), spec1, 'b', lw=2, label='eseries method')
ax.plot(wlist2 / (2 * np.pi), spec2, 'r--', lw=2, label='me+fft method')
ax.legend()
ax.set_xlabel('Frequency')
ax.set_ylabel('Power spectrum')
ax.set_title('Vacuum Rabi splitting')
ax.set_xlim(wlist2[0] / (2 * np.pi), wlist2[-1] / (2 * np.pi))
show()
```
docs/guide/CorrelationFunctions.ipynb
qutip/qutip-notebooks
lgpl-3.0
<a id='nonsteady'></a> Non-Steady State Correlation Function More generally, we can also calculate correlation functions of the kind $\left<A(t_1+t_2)B(t_1)\right>$, i.e., the correlation function of a system that is not in its steady state. In QuTiP, we can evaluate such correlation functions using the function correlation_2op_2t. The default behavior of this function is to return a matrix with the correlations as a function of the two time coordinates ($t_1$ and $t_2$).
```python
times = np.linspace(0, 10.0, 200)
a = destroy(10)
x = a.dag() + a
H = a.dag() * a

alpha = 2.5
rho0 = coherent_dm(10, alpha)
corr = correlation_2op_2t(H, rho0, times, times, [np.sqrt(0.25) * a], x, x)

pcolor(np.real(corr))
colorbar()
xlabel(r'Time $t_2$')
ylabel(r'Time $t_1$')
title(r'Correlation $\left<x(t)x(0)\right>$')
show()
```
docs/guide/CorrelationFunctions.ipynb
qutip/qutip-notebooks
lgpl-3.0
However, in some cases we might be interested in correlation functions of the form $\left<A(t_1+t_2)B(t_1)\right>$, but only as a function of the time coordinate $t_2$. In this case we can also use the correlation_2op_2t function, if we pass the density matrix at time $t_1$ as second argument, and None as third argument. The correlation_2op_2t function then returns a vector with the correlation values corresponding to the times in taulist (the fourth argument). Example: First-Order Optical Coherence Function This example demonstrates how to calculate a correlation function of the form $\left<A(\tau)B(0)\right>$ for a non-steady initial state. Consider an oscillator that is interacting with a thermal environment. If the oscillator initially is in a coherent state, it will gradually decay to a thermal (incoherent) state. The amount of coherence can be quantified using the first-order optical coherence function $$ g^{(1)}(\tau) = \frac{\left<a^\dagger(\tau)a(0)\right>}{\sqrt{\left<a^\dagger(\tau)a(\tau)\right>\left<a^\dagger(0)a(0)\right>}}. $$ For a coherent state $|g^{(1)}(\tau)| = 1$, and for a completely incoherent (thermal) state $g^{(1)}(\tau) = 0$. The following code calculates and plots $g^{(1)}(\tau)$ as a function of $\tau$.
```python
N = 15
taus = np.linspace(0, 10.0, 200)
a = destroy(N)
H = 2 * np.pi * a.dag() * a

# collapse operator
G1 = 0.75
n_th = 2.00  # bath temperature in terms of excitation number
c_ops = [np.sqrt(G1 * (1 + n_th)) * a, np.sqrt(G1 * n_th) * a.dag()]

# start with a coherent state
rho0 = coherent_dm(N, 2.0)

# first calculate the occupation number as a function of time
n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]

# calculate the correlation function G1 and normalize with n to obtain g1
G1 = correlation_2op_2t(H, rho0, None, taus, c_ops, a.dag(), a)
g1 = G1 / np.sqrt(n[0] * n)

plot(taus, np.real(g1), 'b')
plot(taus, n, 'r')
title('Decay of a coherent state to an incoherent (thermal) state')
xlabel(r'$\tau$')
legend((r'First-order coherence function $g^{(1)}(\tau)$',
        r'occupation number $n(\tau)$'))
show()
```
docs/guide/CorrelationFunctions.ipynb
qutip/qutip-notebooks
lgpl-3.0
For convenience, the steps for calculating the first-order coherence function have been collected in the function coherence_function_g1. Example: Second-Order Optical Coherence Function The second-order optical coherence function, with time-delay $\tau$, is defined as $$ \displaystyle g^{(2)}(\tau) = \frac{\langle a^\dagger(0)a^\dagger(\tau)a(\tau)a(0)\rangle}{\langle a^\dagger(0)a(0)\rangle^2} $$ For a coherent state $g^{(2)}(\tau) = 1$; for a thermal state $g^{(2)}(\tau=0) = 2$ and it decreases as a function of time (bunched photons, they tend to appear together); and for a Fock state with $n$ photons $g^{(2)}(\tau = 0) = n(n - 1)/n^2 < 1$ and it increases with time (anti-bunched photons, more likely to arrive separated in time). To calculate this type of correlation function with QuTiP, we could use correlation_4op_1t, which computes a correlation function of the form $\left<A(0)B(\tau)C(\tau)D(0)\right>$ (four operators, one delay-time vector). However, the middle pair of operators are evaluated at the same time $\tau$, and thus can be simplified to a single operator $E(\tau)=B(\tau)C(\tau)$, so we can instead call the correlation_3op_1t function to compute $\left<A(0)E(\tau)D(0)\right>$. This simplification is done automatically inside the deprecated correlation_4op_1t function, which calls correlation_3op_1t internally. The following code calculates and plots $g^{(2)}(\tau)$ as a function of $\tau$ for coherent, thermal and Fock states.
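As a quick arithmetic sanity check of the zero-delay values quoted above (a standalone sketch, separate from the QuTiP example below), we can evaluate $g^{(2)}(0) = n(n-1)/n^2$ for a few Fock states:

```python
def g2_zero_fock(n):
    """Zero-delay second-order coherence g2(0) = n(n-1)/n^2 for a Fock state |n>."""
    return n * (n - 1) / n**2

# g2(0) < 1 for every Fock state with n >= 1, the signature of anti-bunching
for n in [1, 2, 3]:
    print(n, g2_zero_fock(n))  # prints 0.0, 0.5, 0.666...
```

Note that $g^{(2)}(0) \to 1$ as $n \to \infty$, approaching the coherent-state value from below.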
```python
N = 25
taus = np.linspace(0, 25.0, 200)
a = destroy(N)
H = 2 * np.pi * a.dag() * a

kappa = 0.25
n_th = 2.0  # bath temperature in terms of excitation number
c_ops = [np.sqrt(kappa * (1 + n_th)) * a, np.sqrt(kappa * n_th) * a.dag()]

states = [{'state': coherent_dm(N, np.sqrt(2)), 'label': "coherent state"},
          {'state': thermal_dm(N, 2), 'label': "thermal state"},
          {'state': fock_dm(N, 2), 'label': "Fock state"}]

fig, ax = subplots(1, 1)

for state in states:
    rho0 = state['state']

    # first calculate the occupation number as a function of time
    n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]

    # calculate the correlation function G2 and normalize with n(0)n(t) to
    # obtain g2
    G2 = correlation_3op_1t(H, rho0, taus, c_ops, a.dag(), a.dag() * a, a)
    g2 = G2 / (n[0] * n)

    ax.plot(taus, np.real(g2), label=state['label'], lw=2)

ax.legend(loc=0)
ax.set_xlabel(r'$\tau$')
ax.set_ylabel(r'$g^{(2)}(\tau)$')
show()
```
docs/guide/CorrelationFunctions.ipynb
qutip/qutip-notebooks
lgpl-3.0
For convenience, the steps for calculating the second-order coherence function have been collected in the function coherence_function_g2.
```python
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/guide.css", "r").read()
    return HTML(styles)

css_styling()
```
docs/guide/CorrelationFunctions.ipynb
qutip/qutip-notebooks
lgpl-3.0
As always, let's do imports and initialize a logger and a new bundle.
```python
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt

phoebe.devel_on()  # needed to use WD-style meshing, which isn't fully supported yet

logger = phoebe.logger()

b = phoebe.default_binary()
b['q'] = 0.7
b['requiv@secondary'] = 0.7
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Datasets and Compute Options
```python
b.add_dataset('lc', times=np.linspace(0, 1, 101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0, 1, 101), dataset='rvdyn')
b.add_dataset('rv', times=np.linspace(0, 1, 101), dataset='rvnum')
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
```python
b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')
b.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd',
              eclipse_method='graham')
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now we add compute options for the 'legacy' backend.
b.add_compute('legacy', compute='phoebe1', irrad_method='none')
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
And set the two RV datasets to use the correct methods (for both compute options)
```python
b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')
b.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's use the external atmospheres available for both phoebe1 and phoebe2
b.set_value_all('atm', 'extern_planckint')
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
b.set_value_all('gridsize', 30)
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's also disable other special effects such as heating, gravity, and light-time effects.
```python
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0., 0.])
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Finally, let's compute all of our models
```python
b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')
b.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')
b.run_compute(compute='phoebe1', model='phoebe1model')
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting Light Curve
```python
colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}
afig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's plot the residuals of each phoebe2 model against the legacy model
```python
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') -
                   b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') -
                   b.get_value('fluxes@lc01@phoebe1model'), 'b-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-0.003, 0.003)
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Dynamical RVs Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
```python
artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') -
                   b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') -
                   b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1.5e-12, 1.5e-12)
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Numerical (flux-weighted) RVs
```python
afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)

artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel') -
                   b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') -
                   b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel') -
                   b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') -
                   b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-2, 1e-2)
```
2.3/examples/legacy.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
The above parameters are explained in the manuscript.
```python
hinge_points = [(0.4, 0), (1, 0.7), (0.5, 0.4)]
metric = LpNormCurve(hinge_points[0][0], hinge_points[1][1],
                     hinge_points[2][0], hinge_points[2][1])
et = EffTox(real_doses, efftox_priors, tox_cutoff, eff_cutoff,
            tox_certainty, eff_certainty, metric, trial_size, first_dose)
```
tutorials/matchpoint/Utility.ipynb
brockk/clintrials
gpl-3.0
The EffTox class is an object-oriented implementation of the trial design by Thall & Cook (Thall, P. F., & Cook, J. D. (2004). Dose-Finding Based on Efficacy-Toxicity Trade-Offs. Biometrics, 60(3), 684-693.) After observing outcomes 3NTE Outcomes for a patient are represented by a three-item tuple, where: the first item is the 1-based index of the dose given (i.e. 3 is dose-level 3); the second item is 1 if toxicity happened, else 0; the third item is 1 if efficacy happened, else 0. Outcomes for several patients are represented as lists:
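The letter codes used in cohort strings like 3NTE (N = neither, T = toxicity only, E = efficacy only, B = both) map onto these tuples mechanically. A minimal helper to do the translation might look like this (a hypothetical convenience sketch, not part of clintrials):

```python
# Map each outcome letter to a (toxicity, efficacy) pair:
# N = neither, T = toxicity only, E = efficacy only, B = both
OUTCOME_CODES = {'N': (0, 0), 'T': (1, 0), 'E': (0, 1), 'B': (1, 1)}

def parse_cohort(cohort):
    """Turn a string like '3NTE' into [(3, 0, 0), (3, 1, 0), (3, 0, 1)].

    The leading digits are the 1-based dose-level; each remaining letter
    is one patient's outcome.
    """
    i = 0
    while cohort[i].isdigit():
        i += 1
    dose = int(cohort[:i])
    return [(dose,) + OUTCOME_CODES[c] for c in cohort[i:]]

print(parse_cohort('3NTE'))  # [(3, 0, 0), (3, 1, 0), (3, 0, 1)]
```

The output for '3NTE' matches the outcomes1 list used in the next cell.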
```python
outcomes1 = [(3, 0, 0), (3, 1, 0), (3, 0, 1)]
np.random.seed(123)
et.update(outcomes1, n=10**6)
```
tutorials/matchpoint/Utility.ipynb
brockk/clintrials
gpl-3.0
In this instance, escalation to dose-level 4 is recommended.
et.tabulate()
tutorials/matchpoint/Utility.ipynb
brockk/clintrials
gpl-3.0
We see that all doses are admissible in this instance, and that the utilities of dose-levels 3 and 4 are very similar. Dose Ambivalence is the likely result, i.e. after observing 3NTE in the Matchpoint trial, the design would have recommended dose 3 or dose 4. The reason is made plain by the plot below.
et.plot_posterior_utility_density(include_doses=[3,4], boot_samps=1000)
tutorials/matchpoint/Utility.ipynb
brockk/clintrials
gpl-3.0
The posterior distributions of the utility of doses 3 and 4 largely occupy the same space so picking between them is difficult. In the Ambivalence.ipynb tutorial, we demonstrate a method for dealing with dose ambivalence. The plot above is similar (but not identical) to Figure 2 in the publication. I used the R package ggplot2 to produce the plots for the paper because the R package is more mature than the Python version. For instance, I could not get a legend to appear in Python. After observing outcomes 2NNN 3ENN 4EBE 3TEE 4NEE
```python
outcomes2 = [
    (2, 0, 0), (2, 0, 0), (2, 0, 0),
    (3, 0, 1), (3, 0, 0), (3, 0, 0),
    (4, 0, 1), (4, 1, 1), (4, 0, 1),
    (3, 1, 0), (3, 0, 1), (3, 0, 1),
    (4, 0, 0), (4, 0, 1), (4, 0, 1),
]
et.reset()
et.update(outcomes2, n=10**6)
et.tabulate()
```
tutorials/matchpoint/Utility.ipynb
brockk/clintrials
gpl-3.0
Dose 4 is now clearly the preferable dose.
et.plot_posterior_utility_density(include_doses=[3,4], boot_samps=1000)
tutorials/matchpoint/Utility.ipynb
brockk/clintrials
gpl-3.0
Model.fit์˜ ๋™์ž‘ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org์—์„œ ๋ณด๊ธฐ</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab์—์„œ ์‹คํ–‰</a></td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">๋…ธํŠธ๋ถ ๋‹ค์šด๋กœ๋“œ</a></td> </table> ์‹œ์ž‘ํ•˜๊ธฐ ๊ฐ๋… ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•  ๋•Œ fit()๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋ชจ๋“  ๊ฒƒ์ด ์›ํ™œํ•˜๊ฒŒ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒ˜์Œ๋ถ€ํ„ฐ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ, GradientTape๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ์„ธ๋ถ€ ์‚ฌํ•ญ์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ์šฉ์ž ์ •์˜ ํ›ˆ๋ จ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ํ•„์š”ํ•˜์ง€๋งŒ ์ฝœ๋ฐฑ, ๋‚ด์žฅ ๋ฐฐํฌ ์ง€์› ๋˜๋Š” ๋‹จ๊ณ„ ์œตํ•ฉ๊ณผ ๊ฐ™์€ fit()์˜ ํŽธ๋ฆฌํ•œ ํŠน์„ฑ์„ ๊ณ„์† ํ™œ์šฉํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ• ๊นŒ์š”? Keras์˜ ํ•ต์‹ฌ ์›์น™์€ ๋ณต์žก์„ฑ์˜ ์ ์ง„์ ์ธ ๊ณต๊ฐœ์ž…๋‹ˆ๋‹ค. ํ•ญ์ƒ ์ ์ง„์ ์œผ๋กœ ์ €์ˆ˜์ค€ ์›Œํฌํ”Œ๋กœ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋†’์€ ์ˆ˜์ค€์˜ ๊ธฐ๋Šฅ์ด ์ž์‹ ์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€์™€ ์ •ํ™•ํ•˜๊ฒŒ ์ผ์น˜ํ•˜์ง€ ์•Š๋‹ค๊ณ  ํ•ด์„œ ์ ˆ๋งํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ ์ ˆํ•œ ์ˆ˜์ค€์˜ ๊ณ ์ˆ˜์ค€ ํŽธ์˜๋ฅผ ์œ ์ง€ํ•˜๋ฉด์„œ ์ž‘์€ ์„ธ๋ถ€ ์‚ฌํ•ญ์„ ๋ณด๋‹ค ํšจ๊ณผ์ ์œผ๋กœ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual, and it will be running your own learning algorithm. Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models. Let's see how that works. Setup Requires TensorFlow 2.2 or later.
```python
import tensorflow as tf
from tensorflow import keras
```
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
A first simple example Let's start from a simple example: we create a new class that subclasses keras.Model and override the method train_step(self, data), which returns a dictionary mapping metric names (including the loss) to their current value. The input argument data is what gets passed to fit as training data: if you pass Numpy arrays by calling fit(x, y, ...), then data will be the tuple (x, y); if you pass a tf.data.Dataset by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch. In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss function(s) that were passed to compile(). Similarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query self.metrics at the end to retrieve their current values.
```python
class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)

        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
```
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
๋‹ค์Œ์„ ์‹œ๋„ํ•ด๋ด…์‹œ๋‹ค.
```python
import numpy as np

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)
```
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
๋” ๋‚ฎ์€ ์ˆ˜์ค€์œผ๋กœ ๊ตฌ์„ฑํ•˜๊ธฐ ๋‹น์—ฐํžˆ compile()์—์„œ ์†์‹ค ํ•จ์ˆ˜์˜ ์ „๋‹ฌ์„ ๊ฑด๋„ˆ๋›ฐ๊ณ , ๋Œ€์‹  <code>train_step</code>์—์„œ <em>์ˆ˜๋™์œผ๋กœ</em> ๋ชจ๋‘ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฉ”ํŠธ๋ฆญ๋„ ๋งˆ์ฐฌ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ๊ตฌ์„ฑํ•˜๊ธฐ ์œ„ํ•ด compile()๋งŒ ์‚ฌ์šฉํ•˜๋Š” ํ•˜์œ„ ์ˆ˜์ค€์˜ ์˜ˆ์ž…๋‹ˆ๋‹ค. ๋จผ์ € ์†์‹ค๊ณผ MAE ์ ์ˆ˜๋ฅผ ์ถ”์ ํ•˜๊ธฐ ์œ„ํ•ด Metric ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. (๋ฉ”ํŠธ๋ฆญ์— ๋Œ€ํ•œ update_state()๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ) ๋ฉ”ํŠธ๋ฆญ์˜ ์ƒํƒœ๋ฅผ ์—…๋ฐ์ดํŠธํ•˜๋Š” ์‚ฌ์šฉ์ž ์ •์˜train_step()์„ ๊ตฌํ˜„ํ•œ ๋‹ค์Œ, ์ฟผ๋ฆฌํ•˜์—ฌ(result()๋ฅผ ํ†ตํ•ด) ํ˜„์žฌ ํ‰๊ท  ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•˜์—ฌ ์ง„ํ–‰๋ฅ  ํ‘œ์‹œ์ค„์— ํ‘œ์‹œ๋˜๊ณ  ๋ชจ๋“  ์ฝœ๋ฐฑ์— ์ „๋‹ฌ๋˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ epoch ์‚ฌ์ด์˜ ๋ฉ”ํŠธ๋ฆญ์— ๋Œ€ํ•ด reset_states()๋ฅผ ํ˜ธ์ถœํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด, result()๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ํ›ˆ๋ จ ์‹œ์ž‘ ์ดํ›„๋ถ€ํ„ฐ ํ‰๊ท ์ด ๋ฐ˜ํ™˜๋˜์ง€๋งŒ, ์ผ๋ฐ˜์ ์œผ๋กœ epoch๋‹น ํ‰๊ท ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹คํ–‰ํžˆ๋„ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ์žฌ์„ค์ •ํ•˜๋ ค๋Š” ๋งคํŠธ๋ฆญ์„ ๋ชจ๋ธ์˜ metrics ์†์„ฑ์— ๋‚˜์—ดํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ๊ฐ fit() epoch๊ฐ€ ์‹œ์ž‘๋  ๋•Œ ๋˜๋Š” evaluate() ํ˜ธ์ถœ์ด ์‹œ์ž‘๋  ๋•Œ ์—ฌ๊ธฐ์— ๋‚˜์—ด๋œ ๋ชจ๋“  ๊ฐ์ฒด์— ๋Œ€ํ•ด reset_states()๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค.
```python
loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")


class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss
            loss = keras.losses.mean_squared_error(y, y_pred)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker, mae_metric]


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)

# We don't pass a loss or metrics here.
model.compile(optimizer="adam")

# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)
```
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
Supporting sample_weight & class_weight

You may have noticed that our first basic example didn't make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you'd simply do the following:

Unpack sample_weight from the data argument. Pass it to compiled_loss & compiled_metrics (of course, you could also just apply it manually if you don't rely on compile() for losses & metrics).

That's it. That's the list.
class CustomModel(keras.Model): def train_step(self, data): # Unpack the data. Its structure depends on your model and # on what you pass to `fit()`. if len(data) == 3: x, y, sample_weight = data else: sample_weight = None x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute the loss value. # The loss function is configured in `compile()`. loss = self.compiled_loss( y, y_pred, sample_weight=sample_weight, regularization_losses=self.losses, ) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the metrics. # Metrics are configured in `compile()`. self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight) # Return a dict mapping metric names to current value. # Note that it will include the loss (tracked in self.metrics). return {m.name: m.result() for m in self.metrics} # Construct and compile an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(optimizer="adam", loss="mse", metrics=["mae"]) # You can now use sample_weight argument x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) sw = np.random.random((1000, 1)) model.fit(x, y, sample_weight=sw, epochs=3)
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
์ž์‹ ๋งŒ์˜ ํ‰๊ฐ€ ๋‹จ๊ณ„ ์ œ๊ณตํ•˜๊ธฐ model.evaluate() ํ˜ธ์ถœ์— ๋Œ€ํ•ด ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ• ๊นŒ์š”? ์ •ํ™•ํžˆ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ test_step์„ ์žฌ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.
class CustomModel(keras.Model): def test_step(self, data): # Unpack the data x, y = data # Compute predictions y_pred = self(x, training=False) # Updates the metrics tracking the loss self.compiled_loss(y, y_pred, regularization_losses=self.losses) # Update the metrics. self.compiled_metrics.update_state(y, y_pred) # Return a dict mapping metric names to current value. # Note that it will include the loss (tracked in self.metrics). return {m.name: m.result() for m in self.metrics} # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(loss="mse", metrics=["mae"]) # Evaluate with our custom test_step x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.evaluate(x, y)
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
Wrapping up: an end-to-end GAN example

Let's walk through an end-to-end example that leverages everything you just learned. Let's consider: A generator network meant to generate 28x28x1 images. A discriminator network meant to classify 28x28x1 images into two classes ("fake" and "real"). One optimizer for each. A loss function to train the discriminator.
from tensorflow.keras import layers # Create the discriminator discriminator = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) # Create the generator latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), # We want to generate 128 coefficients to reshape into a 7x7x128 map layers.Dense(7 * 7 * 128), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, 128)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", )
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
๋‹ค์Œ์€ ์ž์‹ ๋งŒ์˜ ์„œ๋ช…์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด compile()์„ ์žฌ์ •์˜ํ•˜๊ณ  train_step 17์ค„๋กœ ์ „์ฒด GAN ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ๊ตฌํ˜„ํ•˜๋Š” ํŠน์„ฑ ์™„๋ฃŒํ˜• GAN ํด๋ž˜์Šค์ž…๋‹ˆ๋‹ค.
class GAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super(GAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim def compile(self, d_optimizer, g_optimizer, loss_fn): super(GAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn def train_step(self, real_images): if isinstance(real_images, tuple): real_images = real_images[0] # Sample random points in the latent space batch_size = tf.shape(real_images)[0] random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) # Decode them to fake images generated_images = self.generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0 ) # Add random noise to the labels - important trick! labels += 0.05 * tf.random.uniform(tf.shape(labels)) # Train the discriminator with tf.GradientTape() as tape: predictions = self.discriminator(combined_images) d_loss = self.loss_fn(labels, predictions) grads = tape.gradient(d_loss, self.discriminator.trainable_weights) self.d_optimizer.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) # Assemble labels that say "all real images" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! 
with tf.GradientTape() as tape: predictions = self.discriminator(self.generator(random_latent_vectors)) g_loss = self.loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, self.generator.trainable_weights) self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights)) return {"d_loss": d_loss, "g_loss": g_loss}
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
Let's test-drive it:
# Prepare the dataset. We use both the training & test MNIST digits. batch_size = 64 (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) dataset = tf.data.Dataset.from_tensor_slices(all_digits) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim) gan.compile( d_optimizer=keras.optimizers.Adam(learning_rate=0.0003), g_optimizer=keras.optimizers.Adam(learning_rate=0.0003), loss_fn=keras.losses.BinaryCrossentropy(from_logits=True), ) # To limit the execution time, we only train on 100 batches. You can train on # the entire dataset. You will need about 20 epochs to get nice results. gan.fit(dataset.take(100), epochs=1)
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
tensorflow/docs-l10n
apache-2.0
Calculate gradients <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/gradients"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Expectation values of observables do not have the luxury of having analytic gradient formulas that are always easy to write downโ€”unlike traditional machine learning transformations such as matrix multiplication or vector addition that have analytic gradient formulas which are easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
!pip install tensorflow==2.7.0
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Install TensorFlow Quantum:
!pip install tensorflow-quantum # Update package resources to account for version changes. import importlib, pkg_resources importlib.reload(pkg_resources)
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Now import TensorFlow and the module dependencies:
import tensorflow as tf import tensorflow_quantum as tfq import cirq import sympy import numpy as np # visualization tools %matplotlib inline import matplotlib.pyplot as plt from cirq.contrib.svg import SVGCircuit
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
1. Preliminary Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
qubit = cirq.GridQubit(0, 0) my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha')) SVGCircuit(my_circuit)
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Along with an observable:
pauli_x = cirq.X(qubit) pauli_x
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Looking at this operator you know that $โŸจY(\alpha)| X | Y(\alpha)โŸฉ = \sin(\pi \alpha)$
def my_expectation(op, alpha): """Compute โŸจY(alpha)| `op` | Y(alpha)โŸฉ""" params = {'alpha': alpha} sim = cirq.Simulator() final_state_vector = sim.simulate(my_circuit, params).final_state_vector return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real my_alpha = 0.3 print("Expectation=", my_expectation(pauli_x, my_alpha)) print("Sin Formula=", np.sin(np.pi * my_alpha))
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
and if you define $f_{1}(\alpha) = โŸจY(\alpha)| X | Y(\alpha)โŸฉ$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
def my_grad(obs, alpha, eps=0.01):
    # Forward finite-difference approximation of the derivative
    f_x = my_expectation(obs, alpha)
    f_x_prime = my_expectation(obs, alpha + eps)
    return (f_x_prime - f_x) / eps


print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula:   ', np.pi * np.cos(np.pi * my_alpha))
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
2. The need for a differentiator With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
expectation_calculation = tfq.layers.Expectation( differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01)) expectation_calculation(my_circuit, operators=pauli_x, symbol_names=['alpha'], symbol_values=[[my_alpha]])
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
sampled_expectation_calculation = tfq.layers.SampledExpectation( differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01)) sampled_expectation_calculation(my_circuit, operators=pauli_x, repetitions=500, symbol_names=['alpha'], symbol_values=[[my_alpha]])
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
This can quickly compound into a serious accuracy problem when it comes to gradients:
# Make input_points = [batch_size, 1] array. input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32) exact_outputs = expectation_calculation(my_circuit, operators=pauli_x, symbol_names=['alpha'], symbol_values=input_points) imperfect_outputs = sampled_expectation_calculation(my_circuit, operators=pauli_x, repetitions=500, symbol_names=['alpha'], symbol_values=input_points) plt.title('Forward Pass Values') plt.xlabel('$x$') plt.ylabel('$f(x)$') plt.plot(input_points, exact_outputs, label='Analytic') plt.plot(input_points, imperfect_outputs, label='Sampled') plt.legend() # Gradients are a much different story. values_tensor = tf.convert_to_tensor(input_points) with tf.GradientTape() as g: g.watch(values_tensor) exact_outputs = expectation_calculation(my_circuit, operators=pauli_x, symbol_names=['alpha'], symbol_values=values_tensor) analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor) with tf.GradientTape() as g: g.watch(values_tensor) imperfect_outputs = sampled_expectation_calculation( my_circuit, operators=pauli_x, repetitions=500, symbol_names=['alpha'], symbol_values=values_tensor) sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor) plt.title('Gradient Values') plt.xlabel('$x$') plt.ylabel('$f^{\'}(x)$') plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic') plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled') plt.legend()
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Here you can see that although the finite-difference formula is fast at computing gradients in the analytical case, it is far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world, sample-based case:
# A smarter differentiation scheme. gradient_safe_sampled_expectation = tfq.layers.SampledExpectation( differentiator=tfq.differentiators.ParameterShift()) with tf.GradientTape() as g: g.watch(values_tensor) imperfect_outputs = gradient_safe_sampled_expectation( my_circuit, operators=pauli_x, repetitions=500, symbol_names=['alpha'], symbol_values=values_tensor) sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor) plt.title('Gradient Values') plt.xlabel('$x$') plt.ylabel('$f^{\'}(x)$') plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic') plt.plot(input_points, sampled_param_shift_gradients, label='Sampled') plt.legend()
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations and you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observables Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
pauli_z = cirq.Z(qubit) pauli_z
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = โŸจY(\alpha)| Z | Y(\alpha)โŸฉ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
test_value = 0. print('Finite difference:', my_grad(pauli_z, test_value)) print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
It's a match (close enough). Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$. This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
sum_of_outputs = tfq.layers.Expectation( differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01)) sum_of_outputs(my_circuit, operators=[pauli_x, pauli_z], symbol_names=['alpha'], symbol_values=[[test_value]])
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
test_value_tensor = tf.convert_to_tensor([[test_value]]) with tf.GradientTape() as g: g.watch(test_value_tensor) outputs = sum_of_outputs(my_circuit, operators=[pauli_x, pauli_z], symbol_names=['alpha'], symbol_values=test_value_tensor) sum_of_gradients = g.gradient(outputs, test_value_tensor) print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value)) print(sum_of_gradients.numpy())
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usage All differentiators that exist inside of TensorFlow Quantum subclass tfq.differentiators.Differentiator. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement get_gradient_circuits, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload differentiate_analytic and differentiate_sampled; the class tfq.differentiators.Adjoint takes this route. The following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting. Recall the circuit you defined above, $|\alphaโŸฉ = Y^{\alpha}|0โŸฉ$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = โŸจ\alpha|X|\alphaโŸฉ$. Using parameter shift rules, for this circuit, you can find that the derivative is $$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{ \pi}{2} f\left(\alpha - \frac{1}{2}\right)$$ The get_gradient_circuits function returns the components of this derivative.
class MyDifferentiator(tfq.differentiators.Differentiator): """A Toy differentiator for <Y^alpha | X |Y^alpha>.""" def __init__(self): pass def get_gradient_circuits(self, programs, symbol_names, symbol_values): """Return circuits to compute gradients for given forward pass circuits. Every gradient on a quantum computer can be computed via measurements of transformed quantum circuits. Here, you implement a custom gradient for a specific circuit. For a real differentiator, you will need to implement this function in a more general way. See the differentiator implementations in the TFQ library for examples. """ # The two terms in the derivative are the same circuit... batch_programs = tf.stack([programs, programs], axis=1) # ... with shifted parameter values. shift = tf.constant(1/2) forward = symbol_values + shift backward = symbol_values - shift batch_symbol_values = tf.stack([forward, backward], axis=1) # Weights are the coefficients of the terms in the derivative. num_program_copies = tf.shape(batch_programs)[0] batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]), [num_program_copies, 1, 1]) # The index map simply says which weights go with which circuits. batch_mapper = tf.tile( tf.constant([[[0, 1]]]), [num_program_copies, 1, 1]) return (batch_programs, symbol_names, batch_symbol_values, batch_weights, batch_mapper)
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
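Before moving on, it's worth sanity-checking the parameter-shift identity above numerically. This is a minimal sketch, assuming the closed form $f(\alpha) = \sin(\pi \alpha)$ derived earlier (no TFQ needed):

```python
import numpy as np

def f(alpha):
    # Closed form of <Y^alpha| X |Y^alpha> for this circuit
    return np.sin(np.pi * alpha)

alpha = 0.3

# Parameter-shift rule: (pi/2) * (f(alpha + 1/2) - f(alpha - 1/2))
shift_grad = (np.pi / 2) * (f(alpha + 0.5) - f(alpha - 0.5))

# Exact derivative: pi * cos(pi * alpha)
exact_grad = np.pi * np.cos(np.pi * alpha)

print(shift_grad, exact_grad)
```

For sinusoidal expectation values like this one, the parameter-shift rule is exact rather than an approximation, which is part of why it remains usable under sampling noise.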
The Differentiator base class uses the components returned from get_gradient_circuits to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with existing tfq.layer objects:
custom_dif = MyDifferentiator() custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif) # Now let's get the gradients with finite diff. with tf.GradientTape() as g: g.watch(values_tensor) exact_outputs = expectation_calculation(my_circuit, operators=[pauli_x], symbol_names=['alpha'], symbol_values=values_tensor) analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor) # Now let's get the gradients with custom diff. with tf.GradientTape() as g: g.watch(values_tensor) my_outputs = custom_grad_expectation(my_circuit, operators=[pauli_x], symbol_names=['alpha'], symbol_values=values_tensor) my_gradients = g.gradient(my_outputs, values_tensor) plt.subplot(1, 2, 1) plt.title('Exact Gradient') plt.plot(input_points, analytic_finite_diff_gradients.numpy()) plt.xlabel('x') plt.ylabel('f(x)') plt.subplot(1, 2, 2) plt.title('My Gradient') plt.plot(input_points, my_gradients.numpy()) plt.xlabel('x')
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
This new differentiator can now be used to generate differentiable ops. Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
# Create a noisy sample based expectation op. expectation_sampled = tfq.get_sampled_expectation_op( cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01))) # Make it differentiable with your differentiator: # Remember to refresh the differentiator before attaching the new op custom_dif.refresh() differentiable_op = custom_dif.generate_differentiable_op( sampled_op=expectation_sampled) # Prep op inputs. circuit_tensor = tfq.convert_to_tensor([my_circuit]) op_tensor = tfq.convert_to_tensor([[pauli_x]]) single_value = tf.convert_to_tensor([[my_alpha]]) num_samples_tensor = tf.convert_to_tensor([[5000]]) with tf.GradientTape() as g: g.watch(single_value) forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value, op_tensor, num_samples_tensor) my_gradients = g.gradient(forward_output, single_value) print('---TFQ---') print('Foward: ', forward_output.numpy()) print('Gradient:', my_gradients.numpy()) print('---Original---') print('Forward: ', my_expectation(pauli_x, my_alpha)) print('Gradient:', my_grad(pauli_x, my_alpha))
site/en-snapshot/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
2. Check for errors Don't assume that the data is always good. Create tests to make sure.
# We start with the assumption that the data is news worthy.
IsNewsWorthy = True

# Are there errors in the data?
if data["Richter"] > 100:
    # An earthquake with a magnitude of 100 seems unlikely, so we ignore it.
    IsNewsWorthy = False

# We are only interested in earthquakes in Sweden.
if data["Country"] != "Sweden":
    IsNewsWorthy = False
3 News robot/Earthquake news robot.ipynb
peterdalle/mij
gpl-3.0
3. Create text Basically, it's just a lot of if-statements. How to make the code easier to read: Don't nest if-statements inside each other too much. Use elif instead. Use multiline strings, with """ around them. Use .format on strings. text = "My name is {0} and I am {1} years old" text = text.format(Name, Age) Which is the same as this: text = "My name is " + Name + " and I am " + str(Age) + " years old"
from datetime import datetime

# If the earthquake is deemed news worthy, then create a journalistic text.
text = ""

if IsNewsWorthy:
    if data["Richter"] > 6:
        # Text for large quake (6+).
        text = """BREAKING: Major earthquake in {0}

Today at {1} there was a severe earthquake in {2}, {3}, with a magnitude of {4} on the Richter scale.
"""
        text = text.format(data["City"], data["Datetime"], data["City"], data["Country"], data["Richter"])
    elif data["Richter"] >= 3:
        # Text for medium quake (3-6). Note: the original condition
        # `data["Richter"] < 6 or data["Richter"] >= 3` was always true;
        # `>= 3` inside this elif gives the intended range.
        text = """Earthquake in {0}

Today at {1} there was an earthquake in {2}, {3}, with a magnitude of {4} on the Richter scale.
"""
        text = text.format(data["City"], data["Datetime"], data["City"], data["Country"], data["Richter"])

    # Add this at the end of all texts.
    text = text + "Published " + datetime.now().strftime("%Y-%m-%d %H:%M") + " by Ada the news robot"

# Look at the text
print(text)
3 News robot/Earthquake news robot.ipynb
peterdalle/mij
gpl-3.0
4. Save the results to a text file Lets create a function that saves the text to a file.
# Function to save the text to a file. def savefile(filename, text): f = open(filename, mode="w") # Open file for writing (w = writing, a = append, r = reading) f.write(text) f.close() # Only save as text file if there is some text. if text != "": savefile("newsrobot-earthquake.txt", text) print("Text published!") else: print("Text is not published.")
3 News robot/Earthquake news robot.ipynb
peterdalle/mij
gpl-3.0
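As a side note, the same save can be written with a `with` statement (a context manager), which closes the file automatically even if `write()` raises an error. A sketch of the same function (the demo filename here is made up):

```python
def savefile(filename, text):
    # The `with` block closes the file for us, even on errors,
    # so no explicit f.close() is needed.
    with open(filename, mode="w") as f:
        f.write(text)

savefile("newsrobot-demo.txt", "Hello from Ada the news robot")
```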
5. Present the results (read from text file) Lets create a function that reads the text from the text file.
# Function that reads text from a file. def readfile(filename): f = open(filename, mode="r") # Open file for reading (w = writing, a = append, r = reading) lines = f.read() f.close() return(lines) # Read the file created earlier by the news robot. text = readfile("newsrobot-earthquake.txt") # Look at the text from the file. print(text)
3 News robot/Earthquake news robot.ipynb
peterdalle/mij
gpl-3.0
Preparation of data and model

To create a multi-label model in shogun, we'll first create an instance of MultilabelModel and initialize it with the features and labels. The labels should be MultilabelSOLabels, which are initialized with n_labels (the number of examples) and n_classes (the total number of classes); each label is then added individually using the set_sparse_label() method.
def create_features(X, constant): feats = sg.create_features( np.c_[X, constant * np.ones(X.shape[0])].T) return feats def create_labels(Y, n_classes): try: n_samples = Y.shape[0] except AttributeError: n_samples = len(Y) labels = sg.MultilabelSOLabels(n_samples, n_classes) for i, sparse_label in enumerate(Y): try: sparse_label = sorted(sparse_label) except TypeError: sparse_label = [sparse_label] labels.set_sparse_label(i, np.array(sparse_label, dtype=np.int32)) return labels def split_data(X, Y, ratio): num_samples = X.shape[0] train_samples = int(ratio * num_samples) return (X[:train_samples], Y[:train_samples], X[train_samples:], Y[train_samples:]) X_train, Y_train, X_test, Y_test = split_data(X, Y, 0.9) feats_0 = create_features(X_train, 0) feats_1 = create_features(X_train, 1) labels = create_labels(Y_train, 2) model = sg.structured_model("MultilabelModel", features=feats_0, labels=labels) model_with_bias = sg.structured_model("MultilabelModel", features=feats_1, labels=labels)
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Training and Evaluation of Structured Machines with/without Threshold In Shogun, several solvers and online solvers have been implemented for SO-Learning. Let's try to train the model using an online solver StochasticSOSVM.
import time sgd = sg.create_machine("StochasticSOSVM", model=model, labels=labels) sgd_with_bias = sg.create_machine("StochasticSOSVM", model=model_with_bias, labels=labels) start = time.process_time() sgd.train() print(">>> Time taken for SGD *without* threshold tuning = %f" % (time.process_time() - start)) start = time.process_time() sgd_with_bias.train() print(">>> Time taken for SGD *with* threshold tuning = %f" % (time.process_time() - start))
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Accuracy

For measuring accuracy in multi-label classification, the Jaccard similarity coefficient $\big(J(A, B) = \frac{|A \cap B|}{|A \cup B|}\big)$ is used:

$Accuracy = \frac{1}{p}\sum_{i=1}^{p}\frac{ |Y_i \cap h(x_i)|}{|Y_i \cup h(x_i)|}$

This is available in MultilabelAccuracy for MultilabelLabels and StructuredAccuracy for MultilabelSOLabels.
def evaluate_machine(machine, X_test, Y_test, n_classes, bias): if bias: feats_test = create_features(X_test, 1) else: feats_test = create_features(X_test, 0) test_labels = create_labels(Y_test, n_classes) out_labels = machine.apply(feats_test) evaluator = sg.create_evaluation("StructuredAccuracy") jaccard_similarity_score = evaluator.evaluate(out_labels, test_labels) return jaccard_similarity_score print(">>> Accuracy of SGD *without* threshold tuning = %f " % evaluate_machine(sgd, X_test, Y_test, 2, False)) print(">>> Accuracy of SGD *with* threshold tuning = %f " %evaluate_machine(sgd_with_bias, X_test, Y_test, 2, True))
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
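The Jaccard-based accuracy above can also be computed directly in plain Python, which is handy for sanity-checking the evaluator. A minimal sketch, independent of Shogun:

```python
def jaccard_accuracy(true_labels, pred_labels):
    # Mean of |A intersect B| / |A union B| over all samples;
    # two empty label sets count as a perfect match.
    scores = []
    for t, p in zip(true_labels, pred_labels):
        t, p = set(t), set(p)
        union = t | p
        scores.append(len(t & p) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# True labels {0, 1} vs predicted {1} scores 1/2; {1} vs {1} scores 1.
print(jaccard_accuracy([[0, 1], [1]], [[1], [1]]))  # 0.75
```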
Plotting the Data along with the Boundary
import matplotlib.pyplot as plt %matplotlib inline def get_parameters(weights): return -weights[0]/weights[1], -weights[2]/weights[1] def scatter_plot(X, y): zeros_class = np.where(y == 0) ones_class = np.where(y == 1) plt.scatter(X[zeros_class, 0], X[zeros_class, 1], c='b', label="Negative Class") plt.scatter(X[ones_class, 0], X[ones_class, 1], c='r', label="Positive Class") def plot_hyperplane(machine_0, machine_1, label_0, label_1, title, X, y): scatter_plot(X, y) x_min, x_max = np.min(X[:, 0]) - 0.5, np.max(X[:, 0]) + 0.5 y_min, y_max = np.min(X[:, 1]) - 0.5, np.max(X[:, 1]) + 0.5 xx = np.linspace(x_min, x_max, 1000) m_0, c_0 = get_parameters(machine_0.get("w")) m_1, c_1 = get_parameters(machine_1.get("w")) yy_0 = m_0 * xx + c_0 yy_1 = m_1 * xx + c_1 plt.plot(xx, yy_0, "k--", label=label_0) plt.plot(xx, yy_1, "g-", label=label_1) plt.xlim((x_min, x_max)) plt.ylim((y_min, y_max)) plt.grid() plt.legend(loc="best") plt.title(title) plt.show() fig = plt.figure(figsize=(10, 10)) plot_hyperplane(sgd, sgd_with_bias, "Boundary for machine *without* bias for class 0", "Boundary for machine *with* bias for class 0", "Binary Classification using SO-SVM with/without threshold tuning", X, Y)
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
As we can see from the above plot, sgd_with_bias produces a better classification boundary. The boundary of the model without threshold tuning passes through the origin, while the boundary of the model with threshold tuning passes through $(1, 1)$ (the constant we added earlier).
from shogun import SparseMultilabel_obtain_from_generic def plot_decision_plane(machine, title, X, y, bias): plt.figure(figsize=(24, 8)) plt.suptitle(title) plt.subplot(1, 2, 1) x_min, x_max = np.min(X[:, 0]) - 0.5, np.max(X[:, 0]) + 0.5 y_min, y_max = np.min(X[:, 1]) - 0.5, np.max(X[:, 1]) + 0.5 xx = np.linspace(x_min, x_max, 200) yy = np.linspace(y_min, y_max, 200) x_mesh, y_mesh = np.meshgrid(xx, yy) if bias: feats = create_features(np.c_[x_mesh.ravel(), y_mesh.ravel()], 1) else: feats = create_features(np.c_[x_mesh.ravel(), y_mesh.ravel()], 0) out_labels = machine.apply_structured(feats) print(out_labels) z = [] for i in range(out_labels.get_num_labels()): label = SparseMultilabel_obtain_from_generic(out_labels.get("labels")[i]).get_data() if label.shape[0] == 1: # predicted a single label z.append(label[0]) elif label.shape[0] == 2: # predicted both the classes z.append(2) elif label.shape[0] == 0: # predicted none of the class z.append(3) z = np.array(z) z = z.reshape(x_mesh.shape) c = plt.pcolor(x_mesh, y_mesh, z, cmap=plt.cm.gist_heat) scatter_plot(X, y) plt.xlim((x_min, x_max)) plt.ylim((y_min, y_max)) plt.colorbar(c) plt.title("Decision Surface") plt.legend(loc="best") plt.subplot(1, 2, 2) weights = machine.get_w() m_0, c_0 = get_parameters(weights[:3]) m_1, c_1 = get_parameters(weights[3:]) yy_0 = m_0 * xx + c_0 yy_1 = m_1 * xx + c_1 plt.plot(xx, yy_0, "r--", label="Boundary for class 0") plt.plot(xx, yy_1, "g-", label="Boundary for class 1") plt.title("Hyper planes for different classes") plt.legend(loc="best") plt.xlim((x_min, x_max)) plt.ylim((y_min, y_max)) plt.show() sgd plot_decision_plane(sgd,"Model *without* Threshold Tuning", X, Y, False) plot_decision_plane(sgd_with_bias,"Model *with* Threshold Tuning", X, Y, True)
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
As we can see from the above plots of the decision surface, the black region corresponds to the negative class (label = $0$), whereas the red region corresponds to the positive class (label = $1$). Along with these, there are some (very small) regions of white and orange surface. The white surface corresponds to regions classified into no label at all, whereas the orange regions correspond to regions classified into both labels. These surfaces exist because the boundaries of the two classes do not overlap exactly (as illustrated above): there are regions where both compatibility functions are positive, $f(x, 0) > 0$ and $f(x, 1) > 0$ (both labels predicted), and regions where both are negative, $f(x, 0) < 0$ and $f(x, 1) < 0$ (no label predicted). Experiment 2 : Multi-Label Data Loading of data from LibSVM File
def load_data(file_name):
    input_file = open(file_name)
    lines = input_file.readlines()
    n_samples = len(lines)
    n_features = len(lines[0].split()) - 1

    Y = []
    X = []
    for line in lines:
        data = line.split()
        # list(...) is needed under Python 3, where map returns a one-shot iterator
        Y.append(list(map(int, data[0].split(","))))
        feats = []
        for feat in data[1:]:
            feats.append(float(feat.split(":")[1]))
        X.append(feats)

    X = np.array(X)
    n_classes = max(max(label) for label in Y) + 1
    return X, Y, n_samples, n_features, n_classes
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Training and Evaluation of Structured Machines with/without Threshold
def test_multilabel_data(train_file, test_file):
    X_train, Y_train, n_samples, n_features, n_classes = load_data(train_file)
    X_test, Y_test, n_samples, n_features, n_classes = load_data(test_file)

    # create features and labels
    multilabel_feats_0 = create_features(X_train, 0)
    multilabel_feats_1 = create_features(X_train, 1)
    multilabel_labels = create_labels(Y_train, n_classes)

    # create multi-label models
    multilabel_model = MultilabelModel(multilabel_feats_0, multilabel_labels)
    multilabel_model_with_bias = MultilabelModel(multilabel_feats_1, multilabel_labels)

    # initializing machines for SO-learning
    multilabel_sgd = StochasticSOSVM(multilabel_model, multilabel_labels)
    multilabel_sgd_with_bias = StochasticSOSVM(multilabel_model_with_bias, multilabel_labels)

    start = time()
    multilabel_sgd.train()
    t1 = time() - start
    multilabel_sgd_with_bias.train()
    t2 = time() - start - t1

    return (evaluate_machine(multilabel_sgd, X_test, Y_test, n_classes, False),
            t1,
            evaluate_machine(multilabel_sgd_with_bias, X_test, Y_test, n_classes, True),
            t2)
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Comparison with scikit-learn's implementation
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.metrics import jaccard_similarity_score
from sklearn.preprocessing import LabelBinarizer

def sklearn_implementation(train_file, test_file):
    label_binarizer = LabelBinarizer()
    X_train, Y_train, n_samples, n_features, n_classes = load_data(train_file)
    X_test, Y_test, n_samples, n_features, n_classes = load_data(test_file)

    clf = OneVsRestClassifier(SVC(kernel='linear'))
    start = time()
    clf.fit(X_train, label_binarizer.fit_transform(Y_train))
    t1 = time() - start

    return (jaccard_similarity_score(label_binarizer.fit_transform(Y_test),
                                     clf.predict(X_test)),
            t1)

def print_table(train_file, test_file, caption):
    acc_0, t1, acc_1, t2 = test_multilabel_data(train_file, test_file)
    sk_acc, sk_t1 = sklearn_implementation(train_file, test_file)
    result = '''
\t\t%s

Machine\t\t\t\tAccuracy\tTrain-time

SGD *without* threshold tuning \t%f \t%f
SGD *with* threshold tuning \t%f \t%f
scikit-learn's implementation \t%f \t%f
''' % (caption, acc_0, t1, acc_1, t2, sk_acc, sk_t1)
    print(result)
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Yeast Multi-Label Data [2]
print_table(os.path.join(SHOGUN_DATA_DIR, "multilabel/yeast_train.svm"), os.path.join(SHOGUN_DATA_DIR, "multilabel/yeast_test.svm"), "Yeast dataset")
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Scene Multi-Label Data [2]
print_table(os.path.join(SHOGUN_DATA_DIR, "multilabel/scene_train"), os.path.join(SHOGUN_DATA_DIR, "multilabel/scene_test"), "Scene dataset")
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
geektoni/shogun
bsd-3-clause
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int.
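To make the conversion concrete, here is a tiny pure-Python sketch with made-up toy vocabularies (the words and ids are purely illustrative, not from the project's data):

```python
# Hypothetical toy vocabularies for illustration only
source_vocab = {"new": 0, "jersey": 1, "is": 2, "dry": 3}
target_vocab = {"<EOS>": 0, "new": 1, "jersey": 2, "est": 3, "sec": 4}

source_text = "new jersey is dry"
target_text = "new jersey est sec"

# one list of ids per line; the target additionally gets <EOS> appended
source_ids = [[source_vocab[w] for w in line.split()]
              for line in source_text.split("\n")]
target_ids = [[target_vocab[w] for w in line.split()] + [target_vocab["<EOS>"]]
              for line in target_text.split("\n")]

print(source_ids)  # [[0, 1, 2, 3]]
print(target_ids)  # [[1, 2, 3, 4, 0]]
```

Note that only the target sequences receive the `<EOS>` id; the source side is left unchanged.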
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    source_ids = [[source_vocab_to_int[word] for word in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[word] for word in line.split()]
                  for line in target_text.split('\n')]
    eos = target_vocab_to_int['<EOS>']
    target_ids = [line + [eos] for line in target_ids]
    return source_ids, target_ids

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
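The transformation is easiest to see on plain Python lists. Below is a sketch with a hypothetical `<GO>` id of 3 and a made-up batch of two target sequences (the TensorFlow version does the same thing with `tf.strided_slice` and `tf.concat`):

```python
# Hypothetical ids: 3 = <GO>; batch of two target sequences
GO = 3
batch = [[10, 11, 12], [20, 21, 22]]

# drop the last id of each sequence, then prepend <GO>
decoder_input = [[GO] + seq[:-1] for seq in batch]

print(decoder_input)  # [[3, 10, 11], [3, 20, 21]]
```

The last word id is dropped because the decoder never needs to consume the final token as input; it only needs to predict it.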
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    pre_data = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), end], 1)
    return pre_data

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    drop_out = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    encoder_cell = tf.contrib.rnn.MultiRNNCell([drop_out] * num_layers)
    _, output_rnn = tf.nn.dynamic_rnn(encoder_cell, rnn_inputs, dtype=tf.float32)
    return output_rnn

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
                         decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Train Logits
    """
    drop_out = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
    decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
    dynamic_rnn_decoder, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        drop_out, decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope
    )
    train_logits = output_fn(dynamic_rnn_decoder)
    return train_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, maximum_length, vocab_size, decoding_scope,
                         output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param maximum_length: The maximum allowed time steps to decode
    :param vocab_size: Size of vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Inference Logits
    """
    # fixing length issue
    size = maximum_length - 1

    decoder_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference(
        output_fn, encoder_state, dec_embeddings, start_of_sequence_id,
        end_of_sequence_id, size, vocab_size
    )
    inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, decoder_fn_inference, scope=decoding_scope
    )
    return inference_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create an RNN cell for decoding using rnn_size and num_layers. Create the output function using a lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference.
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
                   sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_state: The encoded state
    :param vocab_size: Size of vocabulary
    :param sequence_length: Sequence Length
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param keep_prob: Dropout keep probability
    :return: Tuple of (Training Logits, Inference Logits)
    """
    rnn_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers
    )

    with tf.variable_scope("decoding") as decoding_scope:
        output_fully_connected = lambda x: tf.contrib.layers.fully_connected(
            x, vocab_size, None, scope=decoding_scope
        )
        training_logits = decoding_layer_train(
            encoder_state, rnn_cell, dec_embed_input, sequence_length,
            decoding_scope, output_fully_connected, keep_prob
        )

    with tf.variable_scope("decoding", reuse=True) as decoding_scope:
        inference_logits = decoding_layer_infer(
            encoder_state, rnn_cell, dec_embeddings,
            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
            sequence_length, vocab_size, decoding_scope,
            output_fully_connected, keep_prob
        )

    return training_logits, inference_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length,
                  source_vocab_size, target_vocab_size, enc_embedding_size,
                  dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param sequence_length: Sequence Length
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training Logits, Inference Logits)
    """
    encoder_embedd_input = tf.contrib.layers.embed_sequence(
        input_data, source_vocab_size, enc_embedding_size
    )
    encoder_layer = encoding_layer(
        encoder_embedd_input, rnn_size, num_layers, keep_prob
    )

    decoder_input = process_decoding_input(
        target_data, target_vocab_to_int, batch_size
    )
    decoder_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
    decoder_embedding_input = tf.nn.embedding_lookup(decoder_embeddings, decoder_input)

    training_logits, inference_logits = decoding_layer(
        decoder_embedding_input, decoder_embeddings, encoder_layer,
        target_vocab_size, sequence_length, rnn_size, num_layers,
        target_vocab_to_int, keep_prob
    )
    return training_logits, inference_logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 196
decoding_embedding_size = 196
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.9
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id.
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    int_sentence = [vocab_to_int.get(w.lower(), vocab_to_int['<UNK>'])
                    for w in sentence.split()]
    return int_sentence

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
language-translation/dlnd_language_translation.ipynb
rajuniit/udacity
mit
Reading the data We download the data from "stooq" and only store the High value. Please note: this notebook is for showcasing tsfresh's feature extraction - not for predicting stock market prices :-)
df = web.DataReader("AAPL", 'stooq')["High"]
df.head()

plt.figure(figsize=(15, 6))
df.plot(ax=plt.gca())
plt.show()
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
We want to make the time dependency a bit clearer and add an identifier to each of the stock values (in this notebook we only have Apple though).
df_melted = pd.DataFrame({"high": df.copy()})
df_melted["date"] = df_melted.index
df_melted["Symbols"] = "AAPL"

df_melted.head()
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
Create training data sample Forecasting typically involves the following steps: * take all data up to today * do feature extraction (e.g. by running extract_features) * run a prediction model (e.g. a regressor, see below) * use the result as the forecast for tomorrow In training however, we need multiple examples to train on. If we only used the time series up until today (and waited for tomorrow's value to obtain a target), we would have a single training example. Therefore we use a trick: we replay the history. Imagine a cut-out window sliding over your data. At each time step $t$, you treat the data as if it were today. You extract the features with everything you know until today (which is all data up to and including $t$). The target for the features until time $t$ is the time series value at time $t + 1$ (which you already know, because everything has already happened). The process of window-sliding is implemented in the function roll_time_series. Our window size will be 20 (we look at most 20 days into the past) and we disregard all windows shorter than 5 days.
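The window-sliding trick can be sketched in pure Python on a toy series. This is only an illustration of the idea; the exact window-length semantics of tsfresh's roll_time_series may differ in edge cases:

```python
# Toy sketch of rolling windows: for each end point t, keep at most
# max_timeshift values of history plus the current value, and discard
# windows that are too short.
series = [1, 2, 3, 4, 5, 6]
max_timeshift = 2  # look at most 2 steps into the past
min_timeshift = 1  # discard windows with min_timeshift values or fewer

windows = {}
for t in range(len(series)):
    window = series[max(0, t - max_timeshift): t + 1]
    if len(window) > min_timeshift:
        windows[t] = window

print(windows)
# {1: [1, 2], 2: [1, 2, 3], 3: [2, 3, 4], 4: [3, 4, 5], 5: [4, 5, 6]}
```

Each key plays the role of the "ending date" id in the rolled dataframe; each value is the history the features for that id are computed from.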
df_rolled = roll_time_series(df_melted, column_id="Symbols", column_sort="date",
                             max_timeshift=20, min_timeshift=5)
df_rolled.head()
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
The resulting dataframe now consists of these "windows" stamped out of the original dataframe. For example all data with the id = (AAPL, 2020-07-14 00:00:00) comes from the original data of stock AAPL including the last 20 days until 2020-07-14:
df_rolled[df_rolled["id"] == ("AAPL", pd.to_datetime("2020-07-14"))]

df_melted[(df_melted["date"] <= pd.to_datetime("2020-07-14")) &
          (df_melted["date"] >= pd.to_datetime("2020-06-15")) &
          (df_melted["Symbols"] == "AAPL")]
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
If you now group by the new id column, each group will contain the data of one stock symbol up to and including a certain day (and including at most the last 20 days before it). Whereas we started with 1259 data samples:
len(df_melted)
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
we now have 1254 unique windows (identified by stock symbol and ending date):
df_rolled["id"].nunique()
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
We "lost" 5 windows, as we required a minimum history of more than 5 days.
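A quick sanity check of this bookkeeping: with min_timeshift=5 the first 5 windows (of lengths 1 through 5) are dropped, so the counts from the cells above should differ by exactly 5.

```python
# 1259 original samples, the 5 shortest windows dropped -> 1254 windows remain
n_samples = 1259
n_dropped = 5
n_windows = n_samples - n_dropped
print(n_windows)  # 1254
```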
df_rolled.groupby("id").size().agg([np.min, np.max])
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
The process is also shown in this image (please note that the window size is smaller for better visibility): <img src="./stocks.png"/> Extract Features The rolled (windowed) data sample is now in the correct format to use for tsfresh's feature extraction. As usual, features will be extracted using all data for a given id, which in our case is all data of a given window and a given stock symbol (one colored box in the graph above). If the feature extraction returns a row with the index (AAPL, 2020-07-14 00:00:00), you know it has been calculated using the AAPL data up to and including 2020-07-14 (and at most 20 days of history).
X = extract_features(df_rolled.drop("Symbols", axis=1),
                     column_id="id", column_sort="date", column_value="high",
                     impute_function=impute, show_warnings=False)
X.head()
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
We make the data a bit easier to work with by removing the tuple-index
X = X.set_index(X.index.map(lambda x: x[1]), drop=True)
X.index.name = "last_date"

X.head()
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
Our (AAPL, 2020-07-14 00:00:00) is also in the data again:
X.loc['2020-07-14']
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
Just to repeat: the features in this row were calculated using only the time series values of AAPL up to and including 2020-07-14 (and at most 20 days of history). Prediction We can now use the extracted features to train a regressor. But what will be our targets? The target for the row 2020-07-13 is the value of the next time step (2020-07-14 in this case). So all we need to do is go back to our original dataframe and take the stock value of tomorrow. This is done with shift:
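The effect of shift(-1) is easy to see on a tiny hand-made series (the dates and values below are illustrative, not the real stock data):

```python
import pandas as pd

# Sketch: shift(-1) pairs each day's row with the *next* day's value
s = pd.Series([10.0, 11.0, 12.0],
              index=pd.to_datetime(["2020-07-12", "2020-07-13", "2020-07-14"]))
y = s.shift(-1)

print(y.loc["2020-07-13"])  # 12.0 -> the value of 2020-07-14
print(y.iloc[-1])           # NaN  -> there is no "tomorrow" for the last day
```

The trailing NaN is exactly why the last date has to be removed from the training data later on.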
y = df_melted.set_index("date").sort_index().high.shift(-1)
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
Quick consistency test:
y["2020-07-13"], df["2020-07-14"].iloc[0]
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
However, we need to be a bit careful here: X is missing the first 5 dates (as our minimum window size was 5) and y is missing the last date (as there is nothing to predict for tomorrow). So let's make sure we have a consistent view of the data.
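The index alignment used in the next cell can be demonstrated on two small hand-made objects (the indices below are made up for illustration):

```python
import pandas as pd

# X covers indices 2..4, y covers 1..3 -> only 2 and 3 exist in both
X = pd.DataFrame({"feat": [1, 2, 3]}, index=[2, 3, 4])
y = pd.Series([9, 8, 7], index=[1, 2, 3])

# keep only the timestamps present in both objects
y_aligned = y[y.index.isin(X.index)]
X_aligned = X[X.index.isin(y_aligned.index)]

print(list(X_aligned.index))  # [2, 3]
print(list(y_aligned.index))  # [2, 3]
```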
y = y[y.index.isin(X.index)]
X = X[X.index.isin(y.index)]
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
We can now train a normal AdaBoostRegressor to predict the next time step. Let's split the data into a training and a testing sample (making sure to keep temporal consistency). We take everything up to the end of 2018 as training data and the rest as test:
X[:"2018"]

X_train = X[:"2018"]
X_test = X["2019":]

y_train = y[:"2018"]
y_test = y["2019":]
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit