Next, we do feature selection before training:
X_train_selected = select_features(X_train, y_train)
ada = LinearRegression()
ada.fit(X_train_selected, y_train)
notebooks/examples/05 Timeseries Forecasting.ipynb
blue-yonder/tsfresh
mit
Now let's check how good our prediction is:
X_test_selected = X_test[X_train_selected.columns]
y_pred = pd.Series(ada.predict(X_test_selected), index=X_test_selected.index)
The prediction is for the next day, so for plotting we need to shift it one step back:
plt.figure(figsize=(15, 6))
y.plot(ax=plt.gca())
y_pred.plot(ax=plt.gca(), legend=None, marker=".")
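To make the alignment concrete, here is a toy sketch (made-up values, independent of the notebook's data) of shifting a forecast series by one step with pandas; the sign of the shift depends on whether the series is indexed by the day the forecast was made or the day it refers to:

```python
import pandas as pd

# Toy data: forecasts indexed by the day they were made,
# each predicting the value one day ahead.
y_pred = pd.Series(
    [1.0, 2.0, 3.0],
    index=pd.date_range("2020-01-01", periods=3, freq="D"),
)

# Shift the values one step so each value sits on the day it refers to.
aligned = y_pred.shift(1)
print(aligned)
```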
Model background

Here is an example based on the Henry saltwater intrusion problem. The synthetic model is a 2-dimensional SEAWAT model (X-Z domain) with 1 row, 120 columns and 20 layers. The left boundary is a specified flux of freshwater; the right boundary is a specified head and concentration saltwater boundary. The model has two stress periods: an initial steady-state (calibration) period, then a transient period with less flux (forecast).

The inverse problem has 603 parameters: 600 hydraulic conductivity pilot points, 1 global hydraulic conductivity, 1 specified flux multiplier for history matching and 1 specified flux multiplier for forecast conditions. The inverse problem has 36 observations (21 heads and 15 concentrations) measured at the end of the steady-state calibration period. The forecasts of interest are the distance from the left model edge to the 10% seawater concentration in the basal model layer and the concentration at location 10. Both of these forecasts are "measured" at the end of the forecast stress period, and both are carried in the Jacobian matrix as zero-weight observations named pd_ten and C_obs10_2.

I previously calculated the Jacobian matrix, which is in the henry/ folder, along with the PEST control file. Unlike the Schur's complement example notebook, here we will examine the consequences of not adjusting the specified flux multiplier parameters (mult1 and mult2) during inversion, since these types of model inputs are not typically considered for adjustment.

Using pyemu
import os
import numpy as np
import pyemu
examples/errvarexample_henry.ipynb
jtwhite79/pyemu
bsd-3-clause
First create a linear_analysis object. We will use the ErrVar derived type, which replicates the behavior of the PREDVAR suite of PEST as well as the ident_par utility. We pass it the name of the Jacobian matrix file. Since we don't pass an explicit argument for parcov or obscov, pyemu attempts to build them from the parameter bounds and observation weights in a PEST control file (.pst) with the same base case name as the Jacobian. Since we are interested in forecast uncertainty as well as parameter uncertainty, we also pass the names of the forecast sensitivity vectors we are interested in, which are stored in the Jacobian as well. Note that the forecasts argument can be a mixed list of observation names, other Jacobian files or PEST-compatible ASCII matrix files. Remember you can pass a filename to the verbose argument to write a log file.

Since most groundwater model history-matching analyses focus on adjusting heterogeneous hydraulic properties and not boundary condition elements, let's identify the mult1 and mult2 parameters as omitted in the error variance analysis. We can conceptually think of this action as excluding the mult1 and mult2 parameters from the history-matching process. Later we will explicitly calculate the penalty for not adjusting these parameters.
la = pyemu.ErrVar(jco=os.path.join("henry", "pest.jcb"),
                  omitted_parameters=["mult1", "mult2"])
print(la.jco.shape)  # without the omitted parameters or the prior info
la.forecast_names
Parameter identifiability

The ErrVar derived type exposes a method to get a pandas dataframe of parameter identifiability information. Recall that parameter identifiability is expressed as $d_i = \sum_j (\mathbf{V}_{1_{ij}})^2$, where $d_i$ is the identifiability of parameter $i$, which ranges from 0 (not identified by the data) to 1 (fully identified by the data), and $\mathbf{V}_1$ contains the right singular vectors corresponding to non-(numerically) zero singular values. First let's look at the singular spectrum of $\mathbf{Q}^{\frac{1}{2}}\mathbf{J}$, where $\mathbf{Q}$ is the cofactor matrix and $\mathbf{J}$ is the Jacobian:
import pylab as plt

s = la.qhalfx.s
figure = plt.figure(figsize=(10, 5))
ax = plt.subplot(111)
ax.plot(s.x)
ax.set_title("singular spectrum")
ax.set_ylabel("power")
ax.set_xlabel("singular value")
ax.set_xlim(0, 20)
plt.show()
We see that the singular spectrum decays rapidly (not uncommon) and that we can really only support about 3 right singular vectors even though we have 600+ parameters in the inverse problem. Let's get the identifiability dataframe at 3 singular vectors:
# the method is passed the number of singular vectors to include in V_1
ident_df = la.get_identifiability_dataframe(3)
ident_df.sort_values(by="ident").iloc[0:10]
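For intuition, identifiability can be computed directly from an SVD with numpy. This toy sketch (random matrix, not pyemu's implementation) shows the quantity $d_i$ and its bounds:

```python
import numpy as np

# Toy sketch, not pyemu's implementation: identifiability of each parameter
# from the first k right singular vectors of a (weighted) Jacobian.
rng = np.random.default_rng(0)
J = rng.standard_normal((36, 10))        # 36 observations, 10 parameters
U, s, Vt = np.linalg.svd(J, full_matrices=False)

k = 3                                    # number of singular vectors retained
V1 = Vt[:k].T                            # V_1: columns are right singular vectors
ident = (V1 ** 2).sum(axis=1)            # d_i = sum_j V_1[i, j]**2

# Each d_i lies in [0, 1]; they sum to k because the columns are orthonormal.
print(ident.round(3), ident.sum())
```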
Plot the identifiability. We see that the global_k parameter has a much higher identifiability than any one of the 600 pilot points.

Forecast error variance

Now let's explore the error variance of the forecasts we are interested in. We will use an extended version of the forecast error variance equation:

$$\sigma_{s - \hat{s}}^2 = \underbrace{\mathbf{y}_i^T(\mathbf{I} - \mathbf{R})\boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}(\mathbf{I} - \mathbf{R})^T\mathbf{y}_i}_{1} + \underbrace{\mathbf{y}_i^T\mathbf{G}\boldsymbol{\Sigma}_{\boldsymbol{\epsilon}}\mathbf{G}^T\mathbf{y}_i}_{2} + \underbrace{\mathbf{p}\boldsymbol{\Sigma}_{\boldsymbol{\theta}_o}\mathbf{p}^T}_{3}$$

where term 1 is the null-space contribution, term 2 is the solution space contribution and term 3 is the model error term (the penalty for not adjusting uncertain parameters). Remember the mult1 and mult2 parameters that we marked as omitted? The consequences of that action can now be explicitly evaluated. See Moore and Doherty (2005) and White and others (2014) for more explanation of these terms. Note that if you don't have any omitted_parameters, only terms 1 and 2 contribute to the error variance.

First we need to create a list (or numpy ndarray) of the singular values we want to test. Since we have fewer than 40 observations, we only need to test up to 40 singular values because that is where the action is:
sing_vals = np.arange(40)
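The resolution matrix $\mathbf{R}$ in terms 1 and 2 comes from the truncated SVD. A minimal numpy sketch (toy matrix, not pyemu's code) of $\mathbf{R} = \mathbf{V}_1\mathbf{V}_1^T$ and its projector properties:

```python
import numpy as np

# Toy sketch: the resolution matrix R = V1 @ V1.T for a truncated SVD.
# (I - R) projects a parameter vector onto the null space -- term 1 above.
rng = np.random.default_rng(1)
J = rng.standard_normal((36, 10))
_, _, Vt = np.linalg.svd(J, full_matrices=False)

V1 = Vt[:3].T                     # keep 3 singular vectors
R = V1 @ V1.T

# R is a symmetric projector: R @ R == R, and trace(R) equals the truncation level.
print(np.allclose(R @ R, R), np.trace(R))
```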
The ErrVar derived type exposes a convenience method to get a multi-index pandas dataframe with each of the terms of the error variance equation:
errvar_df = la.get_errvar_dataframe(sing_vals)
errvar_df.iloc[0:10]
Plot the error variance components for each forecast:
fig = plt.figure(figsize=(10, 10))
ax_1, ax_2 = plt.subplot(211), plt.subplot(212)
axes = [ax_1, ax_2]
colors = {"first": 'g', "second": 'b', "third": 'c'}
max_idx = 19
idx = sing_vals[:max_idx]
for ipred, pred in enumerate(la.forecast_names):
    pred = pred.lower()
    ax = axes[ipred]
    ax.set_title(pred)
    first = errvar_df[("first", pred)][:max_idx]
    second = errvar_df[("second", pred)][:max_idx]
    third = errvar_df[("third", pred)][:max_idx]
    ax.bar(idx, first, width=1.0, edgecolor="none",
           facecolor=colors["first"], label="first", bottom=0.0)
    ax.bar(idx, second, width=1.0, edgecolor="none",
           facecolor=colors["second"], label="second", bottom=first)
    ax.bar(idx, third, width=1.0, edgecolor="none",
           facecolor=colors["third"], label="third", bottom=second + first)
    ax.set_xlim(-1, max_idx + 1)
    ax.set_xticks(idx + 0.5)
    ax.set_xticklabels(idx)
    if ipred == 1:  # only label the x-axis on the bottom subplot
        ax.set_xlabel("singular value")
    ax.set_ylabel("error variance")
    ax.legend(loc="upper right")
plt.show()
Here we see the trade-off between getting a good fit to push down the null-space (1st) term and the penalty for overfitting (the rise of the solution space (2nd) term). The sum of the first two terms is the "apparent" error variance (e.g. the uncertainty that standard analyses would yield) without considering the contribution from the omitted parameters. You can verify this by checking the prior uncertainty from the Schur's complement notebook against the zero singular value result using only terms 1 and 2. We also see the added penalty for not adjusting the mult1 and mult2 parameters (3rd term). The ability to forecast the distance from the left edge of the model to the 10% saltwater concentration and the concentration at location 10 has been compromised by not adjusting mult1 and mult2 during calibration.

Let's check the errvar results against the results from Schur's complement. This is simple with pyemu: we simply cast the ErrVar type to a Schur type:
schur = la.get(astype=pyemu.Schur)
schur_prior = schur.prior_forecast
schur_post = schur.posterior_forecast
print("{0:10s} {1:>12s} {2:>12s} {3:>12s} {4:>12s}"
      .format("forecast", "errvar prior", "errvar min",
              "schur prior", "schur post"))
for ipred, pred in enumerate(la.forecast_names):
    first = errvar_df[("first", pred)][:max_idx]
    second = errvar_df[("second", pred)][:max_idx]
    min_ev = np.min(first + second)
    prior_ev = first[0] + second[0]
    prior_sh = schur_prior[pred]
    post_sh = schur_post[pred]
    print("{0:12s} {1:12.6f} {2:12.6f} {3:12.6} {4:12.6f}"
          .format(pred, prior_ev, min_ev, prior_sh, post_sh))
Notebook Overview

In this notebook, I will construct:
- A naive model of bitcoin price prediction
- A nested time series model

What do I mean by a nested time series model? I will illustrate with a simple example. Let's say that I wish to predict the mkt_price on 2016-10-30. I could fit a Linear Regression on all the features from 2016-10-26 to 2016-10-29. However, in order to predict mkt_price on 2016-10-30 I need to have values for the features on 2016-10-30. This presents a problem, as all my features are time series! That is, I cannot simply plug in a value for all the features because I don't know what their values would be on this future date!

One possible remedy for this is to simply use the values of all the features on 2016-10-29. In fact, it is well known that the best predictor of a variable tomorrow is its current state today. However, I wish to be more rigorous. Instead of simply plugging in t-1 values for the features at time t, I construct a time series model for each feature in order to predict its value at time t based on the entire history of data that I have for the features! These predicted values are then passed as inputs to our linear regression models! Thus, if I have N features, I am creating N time series models in order to do a single prediction with Linear Regression for the mkt_price variable.

Naive Baseline Model

I will construct a naive baseline model that will most likely outperform any other model I build below. The model will work as follows: when predicting the price on Day 91, I will take the average price change between Day 0 and Day 90. Let's call this average price change alpha. I will then take the price of Day 90 and add alpha to it. This will serve as the 'predicted' price for Day 91.
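On toy numbers (made up purely for illustration), the baseline's arithmetic looks like this:

```python
# Made-up prices for illustration only.
prices = [100.0, 102.0, 101.0, 105.0]

# Average daily change over the whole history...
alpha = (prices[-1] - prices[0]) / (len(prices) - 1)

# ...added to the last observed price gives the naive forecast.
prediction = prices[-1] + alpha
print(alpha, prediction)
```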
df = unpickle_object("FINAL_DATAFRAME_PROJ_5.pkl")
df.head()

def linear_extrapolation(df, window):
    pred_lst = []
    true_lst = []
    cnt = 0
    all_rows = df.shape[0]
    while cnt < window:
        start = df.iloc[cnt:all_rows - window + cnt, :].index[0].date()
        end = df.iloc[cnt:all_rows - window + cnt, :].index[-1].date()
        predicting = df.iloc[all_rows - window + cnt, :].name.date()
        print("---- Running model from {} to {} and predicting on {} ----"
              .format(start, end, predicting))
        training_df = df.iloc[cnt:all_rows - window + cnt, :]
        testing_df = df.iloc[all_rows - window + cnt, :]
        true_val = testing_df[-1]
        first_row_value = training_df.iloc[0, :]['mkt_price']
        first_row_date = training_df.iloc[0, :].name
        last_row_value = training_df.iloc[-1, :]['mkt_price']
        last_row_date = training_df.iloc[-1, :].name
        alpha = (last_row_value - first_row_value) / 90  # 90-day training window
        prediction = last_row_value + alpha
        pred_lst.append(prediction)
        true_lst.append(true_val)
        cnt += 1
    return pred_lst, true_lst

pred_lst, true_lst = linear_extrapolation(df, 30)
r2_score(true_lst, pred_lst)
05-project-kojack/Final_Notebook.ipynb
igabr/Metis_Projects_Chicago_2017
mit
Naïve Model Caveats

We can see above that we can use this extremely basic model to obtain an $R^2$ of 0.86. In fact, this should be the baseline model score that we need to beat! Let me mention some caveats to this result:

- I only have 4 months of Bitcoin data. It should be obvious to the reader that such a naive model is NOT the appropriate way to forecast bitcoin price in general. For if it were this simple, we would all be millionaires.
- Since I have 120 days worth of data, I am choosing to subset my data in 90-day periods; as such, I will produce 30 predictions. The variability of bitcoin prices around these 30 days will significantly impact the $R^2$ score. Again, more data is needed.
- While bitcoin data itself is not hard to come by, twitter data is! It is the twitter data that is limiting a deeper analysis. I hope that this notebook serves as a starting point for further investigation into the relationship between tweets and bitcoin price fluctuations.
- Lastly, I made this notebook in Sept. 2017. The data for this project spans Oct 2016 - Feb 2017. Since that timeframe, bitcoin grew to unprecedented highs of \$4k/coin. Furthermore, media sound bites from CEOs such as Jamie Dimon of JPMorgan have sent bitcoin prices tumbling by as much as \$1k/coin. For me, this is what truly lies at the crux of the difficulty of cryptocurrency forecasting. I searched at great length for a free, searchable news API; however, I could not find one. I think a great next step for this project would be to incorporate sentiment of news headlines concerning bitcoin!

Furthermore, within the aforementioned timeframe, the overall bitcoin trend was upward. That is, there was not that much volatility in the price; as such, it is expected that the naïve model would outperform the nested time series model. The next step would again be to collect more data and re-run all the models.

Nested Time Series Model
df = unpickle_object("FINAL_DATAFRAME_PROJ_5.pkl")
df.head()
df.corr()
plot_corr_matrix(df)
beta_values, pred, true = master(df, 30)
r2_score(true, pred)  # blows our Prophet TS-only model away!
Nested TS vs. FB Prophet TS

We see from the above that our model has an $R^2$ of 0.75! This greatly outperforms our baseline model of just using Facebook Prophet to forecast the price of bitcoin! The RMSE is 1.40. This is quite impressive given that we only have 3 months of training data and are testing on one month!

The output above also shows regression output from statsmodels! The following features were significant in all 30 models:

- Gold Price
- Ethereum Price
- Positive Sentiment (Yay!)
- Average Transactions Per Block

It is important, yet again, to note that this data does NOT take into account the wild fluctuations in price that bitcoin later experienced. We would need more data to affirm the significance of the above variables.
plt.plot(pred)
plt.plot(true)
plt.legend(["Prediction", "Actual"], loc="upper left")
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - Price Prediction");

fig, ax = plt.subplots()
ax.scatter(true, pred, edgecolors=(0, 0, 0))
ax.plot([min(true), max(true)], [min(true), max(true)], 'k--', lw=3)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')

plotting_dict_1 = {"eth_price": [], "pos_sent": [], "neg_sent": [],
                   "unique_addr": [], "gold_price": [], "tot_num_trans": [],
                   "mempool_trans": [], "hash_rate": [],
                   "avg_trans_per_block": []}
for index, sub_list in enumerate(beta_values):
    for tup in sub_list:
        plotting_dict_1[tup[0]].append(tup[1])

# here we see the effect of positive sentiment through time!
plot_key(plotting_dict_1, "pos_sent")
plt.title("Positive Sentiment Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()

plot_key(plotting_dict_1, "gold_price")
plt.title("Gold Price Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()

plot_key(plotting_dict_1, "avg_trans_per_block")
plt.title("Avg. Trans per Block Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
Percent change model!

I will now run the same nested TS model as above; however, I will now make my 'target' variable the percent change in bitcoin price. In order to make this a log-log model, I will use the percentage change of all features as inputs into the TS models and thus the linear regression! Since percent change will 'shift' our dataframe by one row, I omit the first row (which is all NaN's). Thus, if we were to predict a percent change of 0.008010 on 2017-10-28, then the predicted price would be the price on 2017-10-27 multiplied by (1 + the predicted percent change).
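The round trip between prices and percent changes can be sketched on toy numbers (made up for illustration):

```python
import pandas as pd

# Made-up prices for illustration only.
prices = pd.Series([100.0, 104.0, 101.92])

pct = prices.pct_change()               # NaN, 0.04, -0.02
rebuilt = prices.shift(1) * (1 + pct)   # previous price * (1 + change)

# rebuilt matches the original prices wherever pct is defined.
print(rebuilt)
```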
df_pct = df.copy(deep=True)
df_pct = df_pct.pct_change()
df_pct.rename(columns={"mkt_price": "percent_change"}, inplace=True)
df_pct = df_pct.iloc[1:, :]  # first row is all NaN's
df_pct.head()

beta_values_p, pred_p, true_p = master(df_pct, 30)
r2_score(true_p, pred_p)  # this is expected due to the range of values on the y-axis!

# very good!
plt.plot(pred_p)
plt.plot(true_p)
plt.legend(["Prediction", "Actual"], loc="upper left")
plt.xlabel("Prediction #")
plt.ylabel("% Change")
plt.title("Nested TS - % Change Prediction");
From the above, it seems that our model is not tuned well enough to anticipate the large dip shown above. This is due to a lack of training data. However, while our model might not be the best at predicting percent change, how does it fare when we turn the percent changes back into prices?
fig, ax = plt.subplots()
ax.scatter(true_p, pred_p, edgecolors=(0, 0, 0))
ax.plot([min(true_p), max(true_p)], [min(true_p), max(true_p)], 'k--', lw=3)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted');

df.set_index('date', inplace=True)
prices_to_be_multiplied = df.loc[
    pd.date_range(start="2017-01-23", end="2017-02-21"), "mkt_price"]

forecast_price_lst = []
for index, price in enumerate(prices_to_be_multiplied):
    predicted_percent_change = 1 + float(pred_p[index])
    forecasted_price = predicted_percent_change * price
    forecast_price_lst.append(forecasted_price)

ground_truth_prices = df.loc[
    pd.date_range(start="2017-01-24", end="2017-02-22"), "mkt_price"]
ground_truth_prices = list(ground_truth_prices)

r2_score(ground_truth_prices, forecast_price_lst)
We have an $R^2$ of 0.87! This surpasses the baseline model and the nested TS model! The caveats of the baseline model also apply here; however, it seems that the addition of extra variables has helped us slightly improve with regards to the $R^2$.
plt.plot(forecast_price_lst)
plt.plot(ground_truth_prices)
plt.legend(["Prediction", "Actual"], loc="upper left")
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - % Change Prediction");
To provide an acceleration that depends on an extra parameter, we can use a closure like this one:
def constant_accel_factory(accel):
    def constant_accel(t0, u, k):
        v = u[3:]
        norm_v = (v[0]**2 + v[1]**2 + v[2]**2)**.5
        return accel * v / norm_v
    return constant_accel

constant_accel_factory(accel=1e-5)(t[0], u0, k)
help(func_twobody)
docs/source/examples/Propagation using Cowell's formulation.ipynb
anhiga/poliastro
mit
Now we set up the integrator manually using scipy.integrate.ode. We cannot provide the Jacobian, since we don't know the form of the acceleration in advance.
res = np.zeros((t.size, 6))
res[0] = u0
ii = 1
accel = 1e-5
rr = ode(func_twobody).set_integrator('dop853')  # all parameters by default
rr.set_initial_value(u0, t[0])
rr.set_f_params(k, constant_accel_factory(accel))
while rr.successful() and rr.t + dt < t[-1]:
    rr.integrate(rr.t + dt)
    res[ii] = rr.y
    ii += 1
res[:5]
And we plot the results:
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.plot(*res[:, :3].T)
ax.view_init(14, 70)
Interactivity

This is the last time we use scipy.integrate.ode directly. Instead, we can now import a convenient function from poliastro:
from poliastro.twobody.propagation import cowell

def plot_iss(thrust=0.1, mass=2000.):
    r0, v0 = iss.rv()
    k = iss.attractor.k
    t = np.linspace(0, 10 * iss.period, 500).to(u.s).value
    u0 = state_to_vector(iss)
    res = np.zeros((t.size, 6))
    res[0] = u0
    accel = thrust / mass

    # Perform the whole integration
    r0 = r0.to(u.km).value
    v0 = v0.to(u.km / u.s).value
    k = k.to(u.km**3 / u.s**2).value
    ad = constant_accel_factory(accel)
    r, v = r0, v0
    for ii in range(1, len(t)):
        r, v = cowell(k, r, v, t[ii] - t[ii - 1], ad=ad)
        x, y, z = r
        vx, vy, vz = v
        res[ii] = [x, y, z, vx, vy, vz]

    fig = plt.figure(figsize=(8, 6))
    ax = fig.add_subplot(111, projection='3d')
    ax.set_xlim(-20e3, 20e3)
    ax.set_ylim(-20e3, 20e3)
    ax.set_zlim(-20e3, 20e3)
    ax.view_init(14, 70)
    return ax.plot(*res[:, :3].T)

interact(plot_iss, thrust=(0.0, 0.2, 0.001), mass=fixed(2000.))
Error checking
rtol = 1e-13
full_periods = 2

u0 = state_to_vector(iss)
tf = ((2 * full_periods + 1) * iss.period / 2).to(u.s).value
u0, tf

iss_f_kep = iss.propagate(tf * u.s, rtol=1e-18)
r0, v0 = iss.rv()
r, v = cowell(k, r0.to(u.km).value, v0.to(u.km / u.s).value, tf, rtol=rtol)
iss_f_num = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s,
                               iss.epoch + tf * u.s)
iss_f_num.r, iss_f_kep.r

assert np.allclose(iss_f_num.r, iss_f_kep.r, rtol=rtol, atol=1e-08 * u.km)
assert np.allclose(iss_f_num.v, iss_f_kep.v, rtol=rtol, atol=1e-08 * u.km / u.s)
#assert np.allclose(iss_f_num.a, iss_f_kep.a, rtol=rtol, atol=1e-08 * u.km)
#assert np.allclose(iss_f_num.ecc, iss_f_kep.ecc, rtol=rtol)
#assert np.allclose(iss_f_num.inc, iss_f_kep.inc, rtol=rtol, atol=1e-08 * u.rad)
#assert np.allclose(iss_f_num.raan, iss_f_kep.raan, rtol=rtol, atol=1e-08 * u.rad)
#assert np.allclose(iss_f_num.argp, iss_f_kep.argp, rtol=rtol, atol=1e-08 * u.rad)
#assert np.allclose(iss_f_num.nu, iss_f_kep.nu, rtol=rtol, atol=1e-08 * u.rad)
Too bad I cannot access the internal state of the solver; I will have to do it in a black-box way.
u0 = state_to_vector(iss)
full_periods = 4
tof_vector = np.linspace(0, ((2 * full_periods + 1) * iss.period / 2).to(u.s).value,
                         num=100)
rtol_vector = np.logspace(-3, -12, num=30)
res_array = np.zeros((rtol_vector.size, tof_vector.size))
for jj, tof in enumerate(tof_vector):
    rf, vf = iss.propagate(tof * u.s, rtol=1e-12).rv()
    for ii, rtol in enumerate(rtol_vector):
        rr = ode(func_twobody).set_integrator('dop853', rtol=rtol, nsteps=1000)
        rr.set_initial_value(u0, 0.0)
        rr.set_f_params(k, constant_accel_factory(0.0))  # zero acceleration
        rr.integrate(rr.t + tof)
        if rr.successful():
            uf = rr.y
            r, v = uf[:3] * u.km, uf[3:] * u.km / u.s
            res = max(norm((r - rf) / rf), norm((v - vf) / vf))
        else:
            res = np.nan
        res_array[ii, jj] = res

fig, ax = plt.subplots(figsize=(16, 6))
xx, yy = np.meshgrid(tof_vector, rtol_vector)
cs = ax.contourf(xx, yy, res_array, levels=np.logspace(-12, -1, num=12),
                 locator=ticker.LogLocator(), cmap=plt.cm.Spectral_r)
fig.colorbar(cs)
for nn in range(full_periods + 1):
    lf = ax.axvline(nn * iss.period.to(u.s).value, color='k', ls='-')
    lh = ax.axvline((2 * nn + 1) * iss.period.to(u.s).value / 2,
                    color='k', ls='--')
ax.set_yscale('log')
ax.set_xlabel("Time of flight (s)")
ax.set_ylabel("Relative tolerance")
ax.set_title("Maximum relative difference")
ax.legend((lf, lh), ("Full period", "Half period"))
Numerical validation

According to [Edelbaum, 1961], a coplanar semimajor axis change with tangential thrust is given by:

$$\frac{\mathrm{d}a}{a_0} = 2 \frac{F}{m V_0}\,\mathrm{d}t, \qquad \frac{\Delta{V}}{V_0} = \frac{1}{2} \frac{\Delta{a}}{a_0}$$

So let's create a new circular orbit and perform the necessary checks, assuming constant mass and thrust (i.e. constant acceleration):
ss = Orbit.circular(Earth, 500 * u.km)
tof = 20 * ss.period

accel = 1e-7
ad = constant_accel_factory(accel)

r0, v0 = ss.rv()
r, v = cowell(k, r0.to(u.km).value, v0.to(u.km / u.s).value,
              tof.to(u.s).value, ad=ad)
ss_final = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s,
                              ss.epoch + tof)

da_a0 = (ss_final.a - ss.a) / ss.a
da_a0

dv_v0 = abs(norm(ss_final.v) - norm(ss.v)) / norm(ss.v)
2 * dv_v0

np.allclose(da_a0, 2 * dv_v0, rtol=1e-2)

dv = abs(norm(ss_final.v) - norm(ss.v))
dv
accel_dt = accel * u.km / u.s**2 * tof.to(u.s)
accel_dt
np.allclose(dv, accel_dt, rtol=1e-2, atol=1e-8 * u.km / u.s)
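A back-of-the-envelope version of this check, using assumed values (Earth's gravitational parameter and a rough 500 km circular orbit, not the notebook's actual variables):

```python
import math

# Assumed constants for a rough check of Edelbaum's relation.
mu = 398600.4418           # km^3 / s^2, Earth's gravitational parameter
a0 = 6378.0 + 500.0        # km, approximate 500 km circular orbit radius
V0 = math.sqrt(mu / a0)    # circular velocity, about 7.6 km/s

F_m = 1e-7                 # km/s^2, constant tangential acceleration
T = 2 * math.pi * math.sqrt(a0**3 / mu)   # orbital period
tof = 20 * T               # 20 periods of thrusting

dv = F_m * tof             # accumulated delta-V for constant acceleration
da_a0 = 2 * dv / V0        # Edelbaum: da/a0 = 2 dV/V0
print(V0, dv, da_a0)
```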
This means we successfully validated the model against an extremely simple orbit transfer with an approximate analytical solution. Notice that the final eccentricity, as originally noted by Edelbaum, is nonzero:
ss_final.ecc
Replace the variable values in the cell below:
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
BUCKET = PROJECT                  # defaults to PROJECT
REGION = "us-central1"            # Replace with your REGION
SEED = 0

%%bash
gsutil mb gs://$BUCKET
courses/machine_learning/deepdive2/text_classification/labs/automl_for_text_classification.ipynb
turbomanage/training-data-analyst
apache-2.0
Create a Dataset from BigQuery

Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.

Lab Task 1a: Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
%%bigquery --project $PROJECT
SELECT
    # TODO: Your code goes here.
FROM
    # TODO: Your code goes here.
WHERE
    # TODO: Your code goes here.
    # TODO: Your code goes here.
    # TODO: Your code goes here.
LIMIT 10
Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>.

Lab task 1b: Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex command on the url of the article. To count the number of articles you'll use a GROUP BY in SQL, and we'll also restrict our attention to only those articles whose title has greater than 10 characters.
%%bigquery --project $PROJECT
SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    # TODO: Your code goes here.
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    # TODO: Your code goes here.
GROUP BY
    # TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
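The same host-parsing logic can be sketched in plain Python (with a made-up illustrative URL) to see why the second element of the reversed, dot-split host yields the source:

```python
import re

# Illustrative URL (made up) matching the pattern described in the prose above.
url = "http://mobile.nytimes.com/2016/some-article"

# Same idea as REGEXP_EXTRACT(url, '.*://(.[^/]+)/') in BigQuery.
host = re.search(r".*://(.[^/]+)/", url).group(1)   # "mobile.nytimes.com"

# ARRAY_REVERSE(SPLIT(host, '.'))[OFFSET(1)]: second element of the
# reversed dot-separated parts.
source = host.split(".")[::-1][1]
print(source)
```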
Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
regex = '.*://(.[^/]+)/'

sub_query = """
SELECT
    title,
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
    AND LENGTH(title) > 10
""".format(regex)

query = """
SELECT
    LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
    source
FROM
    ({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)

print(query)
For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
from google.cloud import bigquery

bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
AutoML for text classification requires that
* the dataset be in CSV form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels

The dataset we pulled from BigQuery satisfies these requirements.
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
Let's make sure we have roughly the same number of labels for each of our three labels:
title_dataset.source.value_counts()
Finally we will save our data, which is currently in-memory, to disk. We will create a CSV file containing the full dataset and another containing only 1000 articles for development. Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.
DATADIR = './data/'

if not os.path.exists(DATADIR):
    os.makedirs(DATADIR)

FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)

# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))

title_dataset.to_csv(
    FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).

Lab Task 1c: Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories.
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
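As a hint, the two pandas calls the task asks for behave like this on a toy dataframe (made up, not the actual titles data):

```python
import pandas as pd

# Toy stand-in for the titles dataframe.
df = pd.DataFrame({
    "title": ["article {}".format(i) for i in range(10)],
    "source": ["github", "nytimes", "techcrunch", "github", "nytimes"] * 2,
})

# .sample draws n rows without replacement (random_state for reproducibility).
sample = df.sample(n=4, random_state=0)

# .value_counts tallies how many rows fall in each source category.
counts = sample.source.value_counts()
print(counts)
```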
Let's write the sample dataset to disk.
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)

sample_title_dataset.to_csv(
    SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')

sample_title_dataset.head()

%%bash
gsutil cp data/titles_sample.csv gs://$BUCKET
courses/machine_learning/deepdive2/text_classification/labs/automl_for_text_classification.ipynb
turbomanage/training-data-analyst
apache-2.0
The interaction between the front-of-house waiter system and the back-end systems can be implemented with the command pattern: the waiter wraps the customer's order into commands and issues them directly to the back end, which simply carries out what each command asks. The front-of-house system is built as follows:
class waiterSys(): def __init__(self): self.menu_map=dict() self.commandList=[] def setOrder(self,command): print ("WAITER:Add dish") self.commandList.append(command) def cancelOrder(self,command): print ("WAITER:Cancel order...") self.commandList.remove(command) def notify(self): print ("WAITER:Notify...") for command in self.commandList: command.execute()
DesignPattern/CommandPattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
The notify interface of the front-of-house system directly calls the execute interface of each command to run it. The command classes are built as follows:
class Command(): receiver = None def __init__(self, receiver): self.receiver = receiver def execute(self): pass class foodCommand(Command): dish="" def __init__(self,receiver,dish): self.receiver=receiver self.dish=dish def execute(self): self.receiver.cook(self.dish) class mainFoodCommand(foodCommand): pass class coolDishCommand(foodCommand): pass class hotDishCommand(foodCommand): pass
DesignPattern/CommandPattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
The Command class is fairly generic, while foodCommand is the class specific to this example, adapted somewhat from Command. Since every back-end system's execution function is cook, the execute interface is implemented directly in foodCommand; if the back-end execution functions differed, execute would have to be implemented in each of the three command subclasses instead. This way the three back-end command classes can simply inherit without modification. (Since the subsystems do not vary here, could we drop the three subclass commands and use foodCommand directly? Of course; each choice has trade-offs. Consider, based on your own development experience, which approach fits your business scenario better.) To keep the scenario lean, we also add a menu class to support the logic; in this example it is simply hard-coded.
class menuAll: menu_map=dict() def loadMenu(self):#加载菜单,这里直接写死 self.menu_map["hot"] = ["Yu-Shiang Shredded Pork", "Sauteed Tofu, Home Style", "Sauteed Snow Peas"] self.menu_map["cool"] = ["Cucumber", "Preserved egg"] self.menu_map["main"] = ["Rice", "Pie"] def isHot(self,dish): if dish in self.menu_map["hot"]: return True return False def isCool(self,dish): if dish in self.menu_map["cool"]: return True return False def isMain(self,dish): if dish in self.menu_map["main"]: return True return False dish_list=["Yu-Shiang Shredded Pork","Sauteed Tofu, Home Style","Cucumber","Rice"]#顾客点的菜 waiter_sys=waiterSys() main_food_sys=mainFoodSys() cool_dish_sys=coolDishSys() hot_dish_sys=hotDishSys() menu=menuAll() menu.loadMenu() for dish in dish_list: if menu.isCool(dish): cmd=coolDishCommand(cool_dish_sys,dish) elif menu.isHot(dish): cmd=hotDishCommand(hot_dish_sys,dish) elif menu.isMain(dish): cmd=mainFoodCommand(main_food_sys,dish) else: continue waiter_sys.setOrder(cmd) waiter_sys.notify()
DesignPattern/CommandPattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
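The back-end receiver systems (`mainFoodSys`, `coolDishSys`, `hotDishSys`) used above are defined elsewhere in the notebook; a minimal sketch of what they are assumed to look like follows. Each one only needs a `cook` method, since `foodCommand.execute` calls `self.receiver.cook(self.dish)`:

```python
class backSys():
    """Assumed generic back-end receiver: cooks whatever dish it is handed."""
    def cook(self, dish):
        msg = "BACK-END: cooking " + dish
        print(msg)
        return msg

# The three concrete back-end systems only differ by name in this sketch
class mainFoodSys(backSys):
    pass

class coolDishSys(backSys):
    pass

class hotDishSys(backSys):
    pass

# Minimal check: a receiver carries out the work a command would delegate to it
receiver = hotDishSys()
receiver.cook("Sauteed Snow Peas")
```

With these in place, `cmd = hotDishCommand(hot_dish_sys, dish)` followed by `cmd.execute()` ends up printing the cooking message through the receiver.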
Remember DRY: Don't Repeat Yourself! Let's try to apply memoization in a generic way to unmodified functions. Let's do a bit of magic to apply memoization easily.
real_fibonacci = fibonacci def fibonacci(n): res = simcache.get_key(n) if not res: res = real_fibonacci(n) simcache.set_key(n, res) return res t1_start = time.time() print fibonacci(30) t1_elapsed = time.time() - t1_start print "fibonacci time {}".format(t1_elapsed) t1_start = time.time() print real_fibonacci(30) t1_elapsed = time.time() - t1_start print "fibonacci_real time {}".format(t1_elapsed)
advanced/5_decorators.ipynb
ealogar/curso-python
apache-2.0
Let's explain the trick in slow motion
simcache.clear_keys() # Let's clean the cache # Let's define the real fibonacci computation function def fibonacci(n): if n < 2: return n print "Real fibonacci func, calling recursively to", fibonacci, n # Once the trick is done globals will contain a different function binded to 'fibonacci' return fibonacci(n - 1) + fibonacci(n - 2) print fibonacci print fibonacci(5) # Call graph of fibonacci for n=5 # # __ 4 ---- 3 ----------- 2 ---- 1 # 5 __/ \__ 2 ---- 1 \__ 1 \__ 0 # | \__ 0 # \__ 3 ---- 2 ---- 1 # \__ 1 \__ 0 # # Let's save a reference to the real function real_fibonacci = fibonacci print real_fibonacci # Points to real fibonacci calculation function # Let's create a new function which will use memoization def memoized_fibonacci(n): # Try to retrieve value from cache res = simcache.get_key(n) if not res: # If failed, call real fibonacci func print "Memoized fibonacci func, proceeding to call real func",\ real_fibonacci, n res = real_fibonacci(n) # Store real result simcache.set_key(n, res) return res print memoized_fibonacci # This is the new function with memoization # Let's replace the real function by the memoized version in module globals fibonacci = memoized_fibonacci print fibonacci(5) # Let's see what happens now print fibonacci(5) # Let's try again print fibonacci(10) # Let's try with a bigger number
advanced/5_decorators.ipynb
ealogar/curso-python
apache-2.0
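The essence of the trick is that the recursive call inside the function body looks up the name in the module globals at call time, not at definition time, so rebinding the name redirects the recursion. A minimal sketch with illustrative names:

```python
def countdown(n):
    # The name 'countdown' below is resolved in globals at call time,
    # not at definition time.
    if n == 0:
        return 0
    return countdown(n - 1) + 1

original = countdown  # keep a reference to the real function

def shortcut(n):
    return 100  # pretend this is a cached answer

countdown = shortcut  # rebind the global name

# original's body now calls 'countdown', which is bound to shortcut:
result = original(5)  # 5 -> shortcut(4) -> 100, so 100 + 1 = 101
print(result)
```

This is exactly why rebinding `fibonacci = memoized_fibonacci` makes even the recursive calls inside the real fibonacci go through the cache.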
We have applied our first hand-crafted decorator. How would you memoize any function, not just fibonacci? Do you remember functions are first-class objects? They can be used as arguments or return values... Do you remember we can declare functions inside other functions? Let's apply these concepts to find a generic way to use memoization.
def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) def memoize_any_function(func_to_memoize): """Function to return a wrapped version of input function using memoization """ print "Called memoize_any_function" def memoized_version_of_func(n): """Wrapper using memoization """ res = simcache.get_key(n) if not res: res = func_to_memoize(n) # Call the real function simcache.set_key(n, res) return res return memoized_version_of_func fibonacci = memoize_any_function(fibonacci) print fibonacci(35) # Much nice if we do: @memoize_any_function # This is the simplest decorator syntax def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) print fibonacci(150)
advanced/5_decorators.ipynb
ealogar/curso-python
apache-2.0
Python decorators: a callable which receives a function as its only argument and returns another function. Typically the resulting function wraps the first, executing some code before and/or after it is called. Used with the @ symbol before a function or method. Don't forget to deal with 'self' as the first argument of methods. The decoration is done at import / evaluation time.
def timing_decorator(decorated_func): print "Called timing_decorator" def wrapper(*args): # Use variable arguments to be compatible with any function """Wrapper for time executions """ start = time.time() res = decorated_func(*args) # Call the real function elapsed = time.time() - start print "Execution of '{0}{1}' took {2} seconds".format(decorated_func.__name__, args, elapsed) return res return wrapper @timing_decorator @memoize_any_function # We can accumulate decorators def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) simcache.clear_keys() print fibonacci(5)
advanced/5_decorators.ipynb
ealogar/curso-python
apache-2.0
It is possible to accumulate decorators. Order matters: they are run in strict top-down order.
print fibonacci # Why is the wrapper? Can we maintain the original name ? import functools def memoize_any_function(decorated_func): """Function to return a wrapped version of input function using memoization """ @functools.wraps(decorated_func) # Use functools.wraps to smooth the decoration def memoized_version_of_f(*args): """Wrapper using memoization """ res = simcache.get_key(args) if not res: res = decorated_func(*args) # Call the real function simcache.set_key(args, res) return res return memoized_version_of_f def timing_decorator(decorated_func): @functools.wraps(decorated_func) def wrapper(*args): # Use variable arguments to be compatible with any function """Wrapper for time executions """ start = time.time() res = decorated_func(*args) # Call the real function elapsed = time.time() - start print "Execution of '{0}{1}' took {2} seconds".format(decorated_func.__name__, args, elapsed) return res return wrapper @timing_decorator @memoize_any_function # We can accumulate decorators, and they are run in strict top-down order def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) print fibonacci(100)
advanced/5_decorators.ipynb
ealogar/curso-python
apache-2.0
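A natural follow-up not shown above is a decorator that itself takes arguments: `@repeat(3)` first evaluates `repeat(3)` to obtain the actual decorator, so one more level of nesting is needed. A hedged sketch (`repeat` is an illustrative name, not from the course code):

```python
import functools

def repeat(times):
    """Decorator factory: returns a decorator that calls the
    wrapped function 'times' times and returns the last result."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            res = None
            for _ in range(times):
                res = func(*args, **kwargs)
            return res
        return wrapper
    return decorator

calls = []

@repeat(3)
def greet(name):
    calls.append(name)
    return "Hello " + name

print(greet("world"))  # greet's body runs 3 times; returns "Hello world"
print(len(calls))      # 3
```

Thanks to `functools.wraps`, `greet.__name__` is still `'greet'` after decoration, just as in the memoization example above.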
CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data). Download the data files for the desired periods for the whole of Africa from CHIRPS. You can do this with FileZilla, a free app for this purpose. For access to the CHIRPS data see http://chg.ucsb.edu/data/chirps/ Next to the tiff files one can find png images on the site that can be viewed directly in your browser or imported into any application, without any processing. But of course a picture does not carry the original data. glob (unix-like file handling for python). Assuming that you have downloaded some files, use glob to get a list of them on your computer.
import glob chirps_files = glob.glob('../**/*/*.tif') pprint(chirps_files) fname = chirps_files[0]
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
gdal (working with tiff files among others, GIS). Import gdal and check that the file is present by opening it.
import gdal g = gdal.Open(fname) # gdal.Open returns None if the file cannot be opened if g is None: raise IOError("Can't open file <{}>".format(fname))
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Get some basic information from the tiff file Ok, now with g the successfully opened CHIRPS file, get some basic information from that file.
print("\nBasic information on file <{}>\n".format(fname)) print("Driver: ", g.GetDriver().ShortName, '/', g.GetDriver().LongName) print("Size : ", g.RasterXSize, 'x', g.RasterYSize, 'x', g.RasterCount) print("Projection :\n", g.GetProjection()) print() print("\nGeotransform information:\n") gt = g.GetGeoTransform() print("Geotransform :", gt) # assign the individual fields to more recognizable variables xUL, dx, xRot, yUL, yRot, dy = gt # get the size of the data and the number of bands in the tiff file (is 1) Nx, Ny, Nband = g.RasterXSize, g.RasterYSize, g.RasterCount # show what we've got: print('Nx = {}\nNy = {}\nxUL = {}\nyUL = {}\ndx = {}\ndy = {} <--- Negative !'.format(Nx, Ny, xUL, yUL, dx, dy))
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
This projection says that it's WGS1984 (same as GoogleEarth and GoogleMaps). Therefore it is in longitude (x) and latitude (y) coordinates. This allows us to immediately compute the WGS coordinates (lat/lon) from it, for instance for each pixel/cell center. It's also straightforward to compute the bounding box of this array and plot it in QGIS, for instance:
# Bounding box around the tiff data set tbb = [xUL, yUL + Ny * dy, xUL + Nx * dx, yUL] print("Bounding box of data in tiff file :", tbb) # Generate coordinates for tiff pixel centers (note that dy is negative) xm = xUL + dx * (np.arange(Nx) + 0.5) ym = yUL + dy * (np.arange(Ny) + 0.5)
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Generate a shapefile with a polyline that represents the model boundary The contour coordinates of the Erfoud/Tafilalet groundwater model happen to be in the file ErfoudModelContour.kml. Kml files come from GoogleEarth and are in WGS84 coordinates. It was obtained by digitizing the line directly in Google Earth. We extract the coordinates from that KML file and put them in a list of lists, the form needed to inject the coordinates into the shapefile. Extraction can be done in several ways, for instance with one of the XML parsers that are available on the internet. However, if you look at this file, it's clear that we may do this in a simpler way. Read the file line by line until we find the word "coordinates". Then read the next line, which contains all the coordinates. Then clean that line from tabs, put a comma between each triple of coordinate values and turn it into a list of lists, with each inner list holding the x, y and z values of one vertex of the model boundary:
with open('ErfoudModelContour.kml', 'r') as f: for s in f: # read lines from this file if s.find('coord') > 0: # word "coord" bound? # Then the next line has all coordinates. Read it and clean up. pnts_as_str = f.readline().replace(' ',',').replace('\t','').split(',') # Use a comprehension to put these coordinates in a list, where list[i] has # a sublist of the three x, y and z coordinates. points = [ [float(p) for p in p3] for p3 in [pnts_as_str[i:i+3] for i in range(0, len(pnts_as_str), 3)] ] break; # The points pnts = np.array(points) # The bounding box mbb = [np.min(pnts[:,0]), np.min(pnts[:,1]), np.max(pnts[:,0]), np.max(pnts[:,1])] #pprint(points) #print(mbb)
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Generate the shapefile holding 3 polygons: a) the bounding box around the data in the tiff file, b) the bounding box around the model contour, c) the model contour.
import shapefile as shp tb = lambda indices: [tbb[i] for i in indices] # convenience for selecting from tiff bounding box mb = lambda indices: [mbb[i] for i in indices] # same for selecting from model bounding box # open a shape file writer object w = shp.Writer(shapeType=shp.POLYGON) # add the three polylines to w.shapes # each shape has parts, each of which can contain a polyline. We have one polyline, i.e. one part # in each shape. Therefore parts is a list of one item, which is a list of points of the polyline. w.poly(parts=[points]) # only one part, therefore, put points inbetween brackets. w.poly(parts=[[ tb([0, 1]), tb([2, 1]), tb([2, 3]), tb([0, 3]), tb([0, 1])]]) # bbox of tiff file w.poly(parts=[[ mb([0, 1]), mb([2, 1]), mb([2, 3]), mb([0, 3]), mb([0, 1])]]) # bbox of model w.field("Id","C", 20) # Add one field w.field("Id2", "N") # Add another field, just to see if it works and how # Add three records to w.records (one for each shape) w.record("model contour", 1) # each record has two values, a string and a number, see fields w.record("model bbox", 2) w.record("Tiff bbox", 3) # save this to a new shapefile w.save("ErfoudModelContour") # Change False to True to see the coordinates and the records if False: print() for i, sh in enumerate(w.shapes()): pprint(sh.points) print() #w.shapes()[0].points # check if w knows about these points for r in w.records: print(r) # To verify what's been saved, read the saved file and show what's in it: if False: s = shp.Reader("ErfoudModelContour") for sh in s.shapeRecords(): pprint(sh.shape.points) print(sh.record)
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Show shapefile in QGIS Fire up QGIS and load the shape file. Set its CRS to WGS84 (same coordinates as GoogleMaps, most general LatLon). Here are the pictures taken from the screen of QGIS after the shapefile was loaded and the fill under properties was set to transparent with a solid contour line. To get the GoogleMaps image, look for it under Web in the main menu. The first image is zoomed out, so that the location of the model can be seen in the south east of this image. It's in Morocco. <figure> <IMG SRC="./EfoudModelContour2.png" WIDTH=750 ALIGN="center"> </figure> The more detailed image shows the contour of the model and its bounding box. It proves that it works. <figure> <IMG SRC="./EfoudModelContour1.png" WIDTH=750 ALIGN="center"> </figure> The next step is to select the appropriate precipitation data from the CHIRPS file. Get the precipitation data from the CHIRPS tiff file The actual data are stored in raster bands. We saw from the size above that this file has only one raster band. Raster band information is obtained one band at a time. So here we pass band number 1.
A = g.GetRasterBand(1).ReadAsArray() A[A < -9000] = 0. # replace no-data values by 0 print() print("min precipitation in mm ", np.min(A)) print("max precipitation in mm ", np.max(A))
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Select a subarea equal to the bbox of the model contour.
# define a function to get the indices of the center points between the bounding box extents of the model def between(x, a, b): """returns indices of points between a and b""" I = np.argwhere(np.logical_and(min(a, b) < x, x < max(a, b))) return [i[0] for i in I] ix = between(xm, mbb[0], mbb[2]) iy = between(ym, mbb[1], mbb[3]) print(ix) print(iy)
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Read the data again, but now only the part that covers the model in Morocco:
A = g.GetRasterBand(1).ReadAsArray(xoff=int(ix[0]), yoff=int(iy[0]), win_xsize=len(ix), win_ysize=len(iy)) print("Precipitation on the Erfoud model area in Morocco from file\n{}:\n".format(fname)) prar(A)
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Just for curiosity, show the size of the area covered and the resolution of the precipitation data.
# The extent of this area can be obtained from the latitude and longitude together with the radius of the earth. R = 6371 # km EWN = R * np.cos(np.pi/180 * mbb[3]) * np.pi/180. *(mbb[2] - mbb[0]) # mbb[3] = northern latitude EWS = R * np.cos(np.pi/180 * mbb[1]) * np.pi/180. *(mbb[2] - mbb[0]) # mbb[1] = southern latitude NS = R * np.pi/180 * (mbb[3] - mbb[1]) print("The size of the bounding box in km:") print("EW along the north boundary : ",EWN) print("EW along the south boundary : ",EWS) print("NS : ",NS) print("Size of each tile (the resolution) = {:.3f} x {:.3f} km: ".format(EWN/A.shape[1], NS/A.shape[0]))
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
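The conversion used above, arc length = R * delta-angle (in radians) with the east-west arc scaled by cos(latitude), can be checked on its own:

```python
import math

R = 6371.0  # mean Earth radius in km

def deg_to_km_ns(dlat_deg):
    """North-south distance for a latitude difference in degrees."""
    return R * math.radians(dlat_deg)

def deg_to_km_ew(dlon_deg, lat_deg):
    """East-west distance for a longitude difference, at a given latitude."""
    return R * math.cos(math.radians(lat_deg)) * math.radians(dlon_deg)

# One degree of latitude is roughly 111 km anywhere on the globe
print(deg_to_km_ns(1.0))
# One degree of longitude shrinks toward the poles
print(deg_to_km_ew(1.0, 0.0))   # at the equator: roughly the same 111 km
print(deg_to_km_ew(1.0, 60.0))  # at 60 degrees latitude: about half that
```

This is the same arithmetic as the EWN/EWS/NS expressions above, just isolated so the numbers are easy to verify.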
Let us implement a Hopfield network using images from the MNIST dataset as patterns. Initialize the dataset First we initialize the dataset:
#### Download the dataset # Get the script from internet ! wget https://raw.githubusercontent.com/sorki/python-mnist/master/get_data.sh > /dev/null 2>&1 # Run it to download all files in a local dir named 'data' ! bash get_data.sh >/dev/null 2>&1 # We do not need the script anymore, remove it ! rm get_data.sh* > /dev/null 2>&1 # Initialize the dataset variables %run utils
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
We now fill an array with the patterns. We only need a few samples; we take them from the training set. We take samples 2 and 5, representing respectively a '4' and a '2'.
# Take two rows patterns = array(mndata.train_images)[[2,5],] labels = array(mndata.train_labels)[[2,5],] # We need only the sign (transform to binary input) patterns = sign(patterns/255.0 - 0.5) # Set the number of patterns (two in out case) n_patterns = patterns.shape[0] # Number of units of the network n = img_side*img_side
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
Let us visualize our two patterns:
fig = figure(figsize = (8, 4)) for i in xrange(n_patterns): plot_img( to_mat(patterns[i]), fig, i+1, windows = 2 )
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
Learning the weights Learning of the weights happens offline at the beginning, in one shot:
# Initialize weights to zero values W = zeros([n,n]) # Accumulate outer products for pattern in patterns : W += outer(pattern, pattern) # Divide times the number of patterns W /= float(n_patterns) # Exclude the autoconnections W *= 1.0 - eye(n, n)
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
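The resulting weight matrix should be symmetric with a zero diagonal, the conditions under which the network's energy cannot increase under asynchronous updates. A quick sanity check of the same Hebbian rule on a toy pattern set (the toy patterns are illustrative, not from MNIST):

```python
import numpy as np

# Two toy binary (+1/-1) patterns of 6 units each
toy_patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1],
])
n_units = toy_patterns.shape[1]

# Same Hebbian rule as above: sum of outer products, no self-connections
W = np.zeros((n_units, n_units))
for p in toy_patterns:
    W += np.outer(p, p)
W /= float(len(toy_patterns))
W *= 1.0 - np.eye(n_units)

print(np.allclose(W, W.T))         # symmetric
print(np.allclose(np.diag(W), 0))  # zero self-connections

# Each stored pattern should be a fixed point of the update rule
for p in toy_patterns:
    print(np.array_equal(np.sign(W.dot(p)), p))
```

The two toy patterns here are orthogonal, so both are exact fixed points; with many correlated patterns (as in MNIST) the fixed points are only approximate.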
Recall: Iterating the timesteps Now we implement the recall part, in which we give an initial activation to the network and iterate the timesteps until it relaxes to a steady state.
# Number of timesteps stime = 1000 # Number of samples to store as long # as spreading goes on samples = 100 # store data at each sampling interval sample_interval = stime/samples # Init the history of spreading as a zero array, # we will fill it in at each timestep and we will # plot it at the end store_images = zeros([n_patterns, n, samples]) # Init the history of energy as a zero array, # we will fill it in at each timestep and we will # plot it at the end store_energy = zeros([n_patterns, samples]) # We simulate two iterations, each one starting # with a corrupted version of one of our two patterns for target_index in xrange(n_patterns) : # Copy the original pattern target = patterns[target_index] x = target.copy() # Then modify the second half of the image # putting random binary values x[(n/2):] = sign(randn(n/2)) # During the iterations we need to pick # one unit at random. Thus we must prepare # a random sequence of indices: # we get the sequence of indices # of the network units x_indices = arange(n) # and we shuffle it shuffle(x_indices) # the iterations for t in xrange(stime) : # Get the current index browsing # the random sequence current_x = x_indices[t%n] # Activation of a unit x[current_x] = sign(dot(W[current_x,:], x)) # Store current activations once per sampling interval if t%sample_interval == 0 : # Energy of the current state of the network store_energy[target_index, t/sample_interval] = -0.5*dot(x, dot(W, x)) # array containing samples of network activation store_images[target_index,:,t/sample_interval] = x
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
Here you can see two animations showing the network that is initially activated with one of the two patterns. The initial activation is corrupted with a lot of noise so that the bottom half of the figure is completely obscured. The network moves from this initial activation to the correct attractor state (the original uncorrupted figure). During this process the energy of the network lowers until it reaches a steady state. <img src="mnist-hopfield_4.gif" width=100%> <img src="mnist-hopfield_2.gif" width=100%> Appendix: How to build the animation We use the matplotlib.animation package for animations and the gridspec class to customize the layout of subplots.
# The matplotlib object to do animations from matplotlib import animation # This grid allows to layout subplots in a more # flexible way import matplotlib.gridspec as gridspec
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
To plot the two animations we need a function to initialize a figure with three plots: the first showing the target digit, the second showing the current activity of the network and the third showing the sum of squared errors.
def init_figure(fig) : # Init the grid and the figure gs = gridspec.GridSpec(6, 20) #------------------------------------------------- # Plot 1 - plot the target digit # Create subplot ax1 = fig.add_subplot(gs[:4,:4]) title("target") # Create the imshow and save the handler im_target = ax1.imshow(to_mat(patterns[0]), interpolation = 'none', aspect = 'auto', cmap = cm.binary) axis('off') #------------------------------------------------- # Plot 2 - plot the current state of the network # Create subplot ax2 = fig.add_subplot(gs[:4,6:10]) title("recalling") # Create the imshow and save the handler im_activation = ax2.imshow(to_mat(store_images[0,:,0]), interpolation = 'none', aspect = 'auto', cmap = cm.binary) axis('off') #------------------------------------------------- # Plot 3 - plot the current history of energy # Create subplot ax3 = fig.add_subplot(gs[:4,12:]) title("Energy") # Create the line plot and save the handler im_energy, = ax3.plot(store_energy[0,]) # Only bottom-left axes - no tics ax3.spines['top'].set_visible(False) ax3.spines['right'].set_visible(False) ax3.set_xticks([]) ax3.set_yticks([]) # return plot handlers return im_target, im_activation, im_energy
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
We also need another function that updates the figure at each animation timestep with a new sample.
# Updates images at each frame of the animation # data : list of tuples Each row contains the # arguments of update for # a frame # returns : tuple The handlers of the # images def update(data) : # unpack plot handlers and data im_A, im_B, im_C, A, B, C = data # Update data of plot 1, plot 2 and 3 im_A.set_array(to_mat(A)) im_B.set_array(to_mat(B)) im_C.set_data(arange( len(C)), C) # return plot handlers return im_A, im_B, im_C
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
Finally we use the FuncAnimation class. We first build a data list where each row is a tuple containing the plot handlers and the data for the plot updates.
for target_index in xrange(n_patterns): # Init the figure fig = figure(figsize=(8, 3.5)) im_target, im_activation, im_energy = init_figure(fig) # Build the sequence of update arguments. # each row of the list contains: # 1 the target plot handler # 2 the activation plot handler # 3 the energy plot handler # 4 the target update data # 5 the activation update data # 6 the energy update data data = [( im_target, im_activation, im_energy, patterns[target_index], squeeze(store_images[target_index,:,t]), store_energy[target_index, :t] ) for t in xrange(samples ) ] # Create and render the animation anim = animation.FuncAnimation(fig, func = update, frames = data ) # save it to file anim.save("mnist-hopfield_{:d}.gif".format(labels[target_index]), fps = 10, writer='imagemagick')
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
<br><br><br><br><br><br><br><br><br><br><br><br><br><br> <br><br><br><br><br><br><br><br><br><br><br><br><br><br> <br><br><br><br><br><br><br><br><br><br><br><br><br><br> Next cell is just for styling
from IPython.core.display import HTML def css_styling(): styles = open("../style/ipybn.css", "r").read() return HTML(styles) css_styling()
course/hopfield-MNIST-simulation.ipynb
francesco-mannella/neunet-basics
mit
In this example, we show the current year's incidence up to a given week.<br> Along with the current incidence, we present the following intensity thresholds:<br> Low activity threshold: estimated epidemic threshold based on historical levels. Minimum: incidence equivalent to 5 cases. High activity threshold: incidence considered high based on historical levels. Minimum: incidence equivalent to 10 cases. Very high activity threshold: incidence considered very high based on historical levels. Minimum: incidence equivalent to 20 cases.
df_hist = pd.read_csv('../data/historical_estimated_values.csv', encoding='utf-8') df_inci = pd.read_csv('../data/current_estimated_values.csv', encoding='utf-8') df_typi = pd.read_csv('../data/mem-typical.csv', encoding='utf-8') df_thre = pd.read_csv('../data/mem-report.csv', encoding='utf-8') prepare_keys_name(df_hist) prepare_keys_name(df_inci) prepare_keys_name(df_typi) prepare_keys_name(df_thre) level_dict = { 'L0': 'Baixa', 'L1': 'Epidêmica', 'L2': 'Alta', 'L3': 'Muito alta' } df_inci.columns
Notebooks/historical_estimated_values.ipynb
FluVigilanciaBR/fludashboard
gpl-3.0
UF: locality code (includes UFs, Regions and Country) Tipo: locality type (Estado, Regional or País) mean: estimated mean incidence 50%: estimated median 2.5%: estimation lower 95% confidence interval 97.5%: estimation upper 95% confidence interval L0: probability of being below epi. threshold (low level) L1: probability of being above epi. threshold and below high activity (epidemic level) L2: prob. of being above high activity and below very high (high level) L3: prob. of being above very high activity threshold (very high level) Situation: stable: might suffer minor changes in the future. Reliable as is; estimated: data estimated based on opportunity (i.e. notification delay) profile. Reliable within confidence interval; unknown: might suffer significant changes in the coming weeks. This is the case for locations where estimation is not possible and data is still "fresh". Unreliable.
df_inci.head(5) df_typi.head(5) df_thre.tail(5)
Notebooks/historical_estimated_values.ipynb
FluVigilanciaBR/fludashboard
gpl-3.0
Entries with dfthresholds['se típica do inicio do surto'] = NaN have activity too low for a proper epidemic threshold definition.
k = ['epiyear', 'epiweek', 'base_epiyear', 'base_epiweek'] df_inci2017 = df_inci[ (df_inci.epiyear == 2017) & # (df_inci.epiweek >= 15) & (df_inci.dado == 'srag') & (df_inci.escala == 'incidência') & (df_inci.uf == 'BR') ].copy() df_inci2017.sort_values(['epiyear', 'epiweek'], inplace=True) df_inci_chart = df_inci2017.copy() df_inci_chart.index = df_inci_chart.epiweek k = ['epiyear', 'epiweek', 'base_epiyear', 'base_epiweek'] df_hist2017 = df_hist[ (df_hist.base_epiyear == 2017) & (df_hist.base_epiweek == 23) & (df_hist.dado == 'srag') & (df_hist.escala == 'incidência') & (df_hist.uf == 'BR') ].copy() df_hist2017.sort_values(['epiyear', 'epiweek'], inplace=True) df_hist_chart = df_hist2017.copy() df_hist_chart.index = df_hist_chart.epiweek # 50% estimated cases df_inci_chart[['srag', '50%', '2.5%', '97.5%']].plot() plt.title('Incidence') plt.grid(True) plt.show() df_hist_chart[['srag', '50%', '2.5%', '97.5%']].plot() plt.title('Historial') plt.grid(True) plt.show() df_hist2017['estimated_cases'] = df_hist2017['50%'] df = pd.merge( df_inci2017[['epiweek', 'srag', '2.5%', '97.5%']], df_hist2017[['epiweek', 'estimated_cases']], on='epiweek', how='outer' ) df.set_index('epiweek', inplace=True) df.plot() plt.grid(True) plt.title('Incidence X Historial') plt.show()
Notebooks/historical_estimated_values.ipynb
FluVigilanciaBR/fludashboard
gpl-3.0
Displaying data for user selected week w<a name="_historical data display"></a> For each week w selected by the user, the notification curve will always be the one found in df_inci, while the estimates will be those stored in df_hist. df_inci only has the most recent estimates, which are based on the most recent week with data. The estimates obtained at each week are stored in df_hist. So, first of all, we will slice the historical data to week w, and limit current data to week <= w. If w=23, the historical dataset is already correctly sliced in df_hist2017, so we just have to limit the current data for the proper plot:
df_hist[ (df_hist.base_epiyear == 2017) & (df_hist.dado == 'srag') & (df_hist.escala == 'incidência') & (df_hist.uf == 'BR') ].base_epiweek.unique() # First, last keep only stable weeksfor notification curve: df_inci2017.loc[(df_inci2017.situation != 'stable'), 'srag'] = np.nan # Adapt historical dataset: df_hist.sort_values(['epiyear', 'epiweek'], inplace=True) df_hist['estimated_cases'] = df_hist['50%'] # User selected week: y = 2017 w = 23 def week_data(y, w): df_week_inci = df_inci2017[(df_inci2017.epiweek <= w)] df_week_hist = df_hist[ (df_hist.base_epiyear == y) & (df_hist.base_epiweek == w) & (df_hist.dado == 'srag') & (df_hist.escala == 'incidência') & (df_hist.uf == 'BR') ].copy() df = pd.merge( df_week_inci[['epiweek', 'srag']], df_week_hist[['epiweek', 'estimated_cases', '2.5%', '97.5%']], on='epiweek', how='outer' ) df.set_index('epiweek', inplace=True) return df df = week_data(y, w) df.plot() plt.grid(True) plt.show() w = 28 df = week_data(y, w) df.plot() plt.grid(True) plt.show() w = 33 df = week_data(y, w) df.plot() plt.grid(True) plt.show()
Notebooks/historical_estimated_values.ipynb
FluVigilanciaBR/fludashboard
gpl-3.0
Collect data
def loadMovieLens(path='./data/movielens'): #Get movie titles movies={} for line in open(path+'/u.item'): id,title=line.split('|')[0:2] movies[id]=title # Load data prefs={} for line in open(path+'/u.data'): (user,movieid,rating,ts)=line.split('\t') prefs.setdefault(user,{}) prefs[user][movies[movieid]]=float(rating) return prefs data = loadMovieLens("data/ml-100k")
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
ToqueWillot/M2DAC
gpl-2.0
Explore data
data['3']
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
ToqueWillot/M2DAC
gpl-2.0
Creation of train set and test set We want to split the data into two sets (train and test): train = 80% of the dataset, test = 20% of the dataset.
from random import random # random() is used below for the holdout draw def split_train_test(data,percent_test): test={} train={} movie={} for u in data.keys(): test.setdefault(u,{}) train.setdefault(u,{}) for movie in data[u]: if (random()<percent_test): test[u][movie]=data[u][movie] else: train[u][movie]=data[u][movie] return train, test percent_test=0.2 train,test=split_train_test(data,percent_test)
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
ToqueWillot/M2DAC
gpl-2.0
Cleaning the train and test sets We don't want users in the test set that do not appear in the train set, and likewise for movies, so we delete them
def get_moove(data): moove = {} for u in data: for m in data[u]: moove[m]=0 return moove def get_youser(data): youser = {} for u in data: youser[u]=0 return youser def clean(d1,d2): to_erase = {} for i in d1: try: d2[i] except KeyError: to_erase[i]=0 for i in d2: try: d1[i] except KeyError: to_erase[i]=0 return to_erase def _remove_users(test,rem): for i in rem: try: del test[i] except KeyError: pass def _remove_movies(test,rem): for i in test: for j in rem: try: del test[i][j] except KeyError: pass mooveToRemoove = clean(get_moove(train),get_moove(test)) youserToRemoove = clean(get_youser(train),get_youser(test)) _remove_users(test,youserToRemoove) _remove_movies(test,mooveToRemoove)
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
ToqueWillot/M2DAC
gpl-2.0
Collaboritive Filtering classes
class BaselineMeanUser: def __init__(self): self.users={} self.movies={} def fit(self,train): for user in train: note=0 for movie in train[user]: note+=train[user][movie] note=note/len(train[user]) self.users[user]=round(note) def predict(self,user,movie): return self.users[user] def score(self,X): nb_ratings = sum(len(X[u]) for u in X) score = 0 for user in X: for movie in X[user]: if(self.predict(user,movie)==X[user][movie]): score+=1 return float(score)/nb_ratings class BaselineMeanMovie: def __init__(self): self.users={} self.movies={} def fit(self,train): movies = get_moove(train) for movie in movies: note=0 cpt=0 for user in train: try: note+=train[user][movie] cpt+=1 except KeyError: pass note=note/cpt self.movies[movie]=round(note) def predict(self,user,movie): return self.movies[movie] def score(self,X): nb_ratings = sum(len(X[u]) for u in X) score = 0 for user in X: for movie in X[user]: if(self.predict(user,movie)==X[user][movie]): score+=1 return float(score)/nb_ratings baseline_mu= BaselineMeanUser() baseline_mm= BaselineMeanMovie() baseline_mu.fit(train) baseline_mm.fit(train) print("score baseline mean user ",baseline_mu.score(test)) print("score baseline mean movie ",baseline_mm.score(test))
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
ToqueWillot/M2DAC
gpl-2.0
import pandas as pd import numpy as np import matplotlib.pyplot as plt tag_headers = ['user_id', 'movie_id', 'tag', 'timestamp'] tags = pd.read_table('data/ml-10M/tags.dat', sep='::', header=None, names=tag_headers) rating_headers = ['user_id', 'movie_id', 'rating', 'timestamp'] ratings = pd.read_table('data/ml-10M/ratings.dat', sep='::', header=None, names=rating_headers) movie_headers = ['movie_id', 'title', 'genres'] movies = pd.read_table('data/ml-10M/movies.dat', sep='::', header=None, names=movie_headers) movie_titles = movies.title.tolist() movies.head() ratings.head() tags.head() df = movies.join(ratings, on=['movie_id'], rsuffix='_r').join(tags, on=['movie_id'], rsuffix='_t') del df['movie_id_r'] del df['user_id_t'] del df['movie_id_t'] del df['timestamp_t'] df.head() rp = df.pivot_table(columns=['movie_id'],index=['user_id'],values='rating') rp.head() rp = rp.fillna(0); # Replace NaN rp.head() Q = rp.values Q Q.shape W = Q>0.5 W[W == True] = 1 W[W == False] = 0 # To be consistent with our Q matrix W = W.astype(np.float64, copy=False) W lambda_ = 0.1 n_factors = 100 m, n = Q.shape n_iterations = 20 X = 5 * np.random.rand(m, n_factors) Y = 5 * np.random.rand(n_factors, n) def get_error(Q, X, Y, W): return np.sum((W * (Q - np.dot(X, Y)))**2) errors = [] for ii in range(n_iterations): X = np.linalg.solve(np.dot(Y, Y.T) + lambda_ * np.eye(n_factors), np.dot(Y, Q.T)).T Y = np.linalg.solve(np.dot(X.T, X) + lambda_ * np.eye(n_factors), np.dot(X.T, Q)) if ii % 100 == 0: print('{}th iteration is completed'.format(ii)) errors.append(get_error(Q, X, Y, W)) Q_hat = np.dot(X, Y) print('Error of rated movies: {}'.format(get_error(Q, X, Y, W))) %matplotlib inline plt.plot(errors); plt.ylim([0, 20000]); def print_recommendations(W=W, Q=Q, Q_hat=Q_hat, movie_titles=movie_titles): #Q_hat -= np.min(Q_hat) #Q_hat[Q_hat < 1] *= 5 Q_hat -= np.min(Q_hat) Q_hat *= float(5) / np.max(Q_hat) movie_ids = np.argmax(Q_hat - 5 * W, axis=1) for jj, movie_id in zip(range(m), movie_ids): 
#if Q_hat[jj, movie_id] < 0.1: continue print('User {} liked {}\n'.format(jj + 1, ', '.join([movie_titles[ii] for ii, qq in enumerate(Q[jj]) if qq > 3]))) print('User {} did not like {}\n'.format(jj + 1, ', '.join([movie_titles[ii] for ii, qq in enumerate(Q[jj]) if qq < 3 and qq != 0]))) print('\n User {} recommended movie is {} - with predicted rating: {}'.format( jj + 1, movie_titles[movie_id], Q_hat[jj, movie_id])) print('\n' + 100 * '-' + '\n') #print_recommendations() weighted_errors = [] for ii in range(n_iterations): for u, Wu in enumerate(W): X[u] = np.linalg.solve(np.dot(Y, np.dot(np.diag(Wu), Y.T)) + lambda_ * np.eye(n_factors), np.dot(Y, np.dot(np.diag(Wu), Q[u].T))).T for i, Wi in enumerate(W.T): Y[:,i] = np.linalg.solve(np.dot(X.T, np.dot(np.diag(Wi), X)) + lambda_ * np.eye(n_factors), np.dot(X.T, np.dot(np.diag(Wi), Q[:, i]))) weighted_errors.append(get_error(Q, X, Y, W)) print('{}th iteration is completed'.format(ii)) weighted_Q_hat = np.dot(X,Y) #print('Error of rated movies: {}'.format(get_error(Q, X, Y, W))) weighted_Q_hat = np.dot(X,Y) plt.plot(weighted_errors); plt.xlabel('Iteration Number'); plt.ylabel('Mean Squared Error'); print_recommendations(Q_hat=weighted_Q_hat)
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
ToqueWillot/M2DAC
gpl-2.0
$$y=\frac{1}{1+\exp(-x^T \beta)}$$
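Before building this function up piece by piece in the plots below, a quick numerical sanity check of the logistic sigmoid it is based on (a small illustrative snippet; the properties verified here follow directly from the formula):

```python
import numpy as np

def sigmoid(z):
    # the logistic sigmoid: 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5 at the origin
# symmetry around the origin: sigmoid(-z) == 1 - sigmoid(z)
z = np.linspace(-5.0, 5.0, 11)
print(np.allclose(sigmoid(-z), 1.0 - sigmoid(z)))  # True
```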
x = np.linspace(-5.0,5.0,200) y = -x plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = np.exp(-x) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = 1.0+np.exp(-x) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Logistic sigmoid function
y = 1/(1.0+np.exp(-x)) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = 1 - 1/(1.0+np.exp(-x)) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Odds
y = np.exp(x) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Log odds
y = x plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Derivatives Derivative of a quotient: \begin{eqnarray} \left(\frac{1}{f(x)}\right)'&=&\lim_{h \rightarrow 0}\frac{\frac{1}{f(x+h)}-\frac{1}{f(x)}}{h}\\ &=&\lim_{h \rightarrow 0}\frac{f(x)-f(x+h)}{hf(x)f(x+h)}\\ &=&\lim_{h \rightarrow 0}-\frac{1}{f(x)f(x+h)}\frac{f(x+h)-f(x)}{h}\\ &=&-\frac{f'(x)}{{f(x)}^2} \end{eqnarray} $$\{1+\exp(-x)\}'=-\exp(-x)$$ Derivative of the logistic function $$f(x)=\frac{1}{1+\exp(-x)}$$ \begin{eqnarray} f'(x)&=&-\frac{\{1+\exp(-x)\}'}{\{1+\exp(-x)\}^2}\\ &=&\frac{\exp(-x)}{\{1+\exp(-x)\}^2}\\ &=&\frac{1}{1+\exp(-x)}\cdot\frac{\exp(-x)}{1+\exp(-x)}\\ &=&\frac{1}{1+\exp(-x)}\left(\frac{1+\exp(-x)}{1+\exp(-x)}-\frac{1}{1+\exp(-x)}\right)\\ &=&\frac{1}{1+\exp(-x)}\left(1-\frac{1}{1+\exp(-x)}\right)\\ &=&f(x)(1-f(x)) \end{eqnarray}
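The closed form $f'(x)=f(x)(1-f(x))$ can be checked against a central finite difference (a small verification sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-5.0, 5.0, 101)
h = 1e-5
# central difference approximation of the derivative
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
# closed form derived above: f'(x) = f(x) * (1 - f(x))
analytic = sigmoid(x) * (1.0 - sigmoid(x))
print(np.max(np.abs(numeric - analytic)))  # tiny; the two agree
```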
y = 1/(1.0+np.exp(-x)) * (1 - 1/(1.0+np.exp(-x))) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
The shape is close to the probability density function of the normal distribution, and the maximum is 0.25. Intuitively, this suggests the linear combination behaves as if it were roughly normally distributed. Normal distribution $$f(x)= \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(- \frac{(x-\mu)^2 }{2 \sigma^2}\right) $$
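The claim that the maximum is 0.25 can be confirmed directly: $g(x)=f(x)(1-f(x))$ is largest at $x=0$, where $f(0)=0.5$ (a quick check):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-10.0, 10.0, 2001)  # grid includes x = 0
g = sigmoid(x) * (1.0 - sigmoid(x))
print(g.max())           # 0.25
print(x[np.argmax(g)])   # ~0: the maximum is attained at the origin
```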
y = x plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = x*x plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = -x*x plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = np.exp(x) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = np.exp(-x) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = np.exp(-x*x) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = np.exp(-x*x/2) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y) y = np.exp(-x*x/2)/np.sqrt(2*np.pi) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Comparison
y1 = np.exp(-x*x/2)/np.sqrt(2*np.pi) y2 = 1/(1.0+np.exp(-x)) * (1 - 1/(1.0+np.exp(-x))) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y1) plt.plot(x,y2) sigma = 1.6 y1 = np.exp(-x*x/2/sigma)/np.sqrt(2*np.pi)/sigma y2 = 1/(1.0+np.exp(-x)) * (1 - 1/(1.0+np.exp(-x))) plt.figure(figsize=(10,6)) plt.grid(True) plt.plot(x,y1) plt.plot(x,y2)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Random data
from sklearn.linear_model import LogisticRegression t = np.random.randint(low=0,high=2,size=50) t feature = np.random.normal(loc=0.0,scale=1.0,size=(50,1)) feature m = LogisticRegression(penalty='l2',C=10000,fit_intercept=True) m.fit(feature,t) plt.figure(figsize=(10,6)) plt.grid(True) plt.scatter(feature,t) predict = m.predict(x.reshape((200,1))) y = 1/(1.0+np.exp(-x)) plt.figure(figsize=(10,6)) plt.grid(True) plt.scatter(feature,t) plt.scatter(x,predict) plt.plot(x,y)
s10_logistic_regression.ipynb
ryo8128/study_python
mit
Neural network classes for testing The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions. About the code: This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization. It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
class NeuralNet: def __init__(self, initial_weights, activation_fn, use_batch_norm): """ Initializes this object, creating a TensorFlow graph using the given parameters. :param initial_weights: list of NumPy arrays or Tensors Initial values for the weights for every layer in the network. We pass these in so we can create multiple networks with the same starting weights to eliminate training differences caused by random initialization differences. The number of items in the list defines the number of layers in the network, and the shapes of the items in the list define the number of nodes in each layer. e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would create a network with 784 inputs going into a hidden layer with 256 nodes, followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activation function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param use_batch_norm: bool Pass True to create a network that uses batch normalization; False otherwise Note: this network will not use batch normalization on layers that do not have an activation function. """ # Keep track of whether or not this network uses batch normalization. self.use_batch_norm = use_batch_norm self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm" # Batch normalization needs to do different calculations during training and inference, # so we use this placeholder to tell the graph which behavior to use. self.is_training = tf.placeholder(tf.bool, name="is_training") # This list is just for keeping track of data we want to plot later. # It doesn't actually have anything to do with neural nets or batch normalization. 
self.training_accuracies = [] # Create the network graph, but it will not actually have any real values until after you # call train or test self.build_network(initial_weights, activation_fn) def build_network(self, initial_weights, activation_fn): """ Build the graph. The graph still needs to be trained via the `train` method. :param initial_weights: list of NumPy arrays or Tensors See __init__ for description. :param activation_fn: Callable See __init__ for description. """ self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]]) layer_in = self.input_layer for weights in initial_weights[:-1]: layer_in = self.fully_connected(layer_in, weights, activation_fn) self.output_layer = self.fully_connected(layer_in, initial_weights[-1]) def fully_connected(self, layer_in, initial_weights, activation_fn=None): """ Creates a standard, fully connected layer. Its number of inputs and outputs will be defined by the shape of `initial_weights`, and its starting weight values will be taken directly from that same parameter. If `self.use_batch_norm` is True, this layer will include batch normalization, otherwise it will not. :param layer_in: Tensor The Tensor that feeds into this layer. It's either the input to the network or the output of a previous layer. :param initial_weights: NumPy array or Tensor Initial values for this layer's weights. The shape defines the number of nodes in the layer. e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 outputs. :param activation_fn: Callable or None (default None) The non-linearity used for the output of the layer. If None, this layer will not include batch normalization, regardless of the value of `self.use_batch_norm`. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. """ # Since this class supports both options, only use batch normalization when # requested. 
However, do not use it on the final layer, which we identify # by its lack of an activation function. if self.use_batch_norm and activation_fn: # Batch normalization uses weights as usual, but does NOT add a bias term. This is because # its calculations include gamma and beta variables that make the bias term unnecessary. # (See later in the notebook for more details.) weights = tf.Variable(initial_weights) linear_output = tf.matmul(layer_in, weights) # Apply batch normalization to the linear combination of the inputs and weights batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) # Now apply the activation function, *after* the normalization. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None): """ Trains the model on the MNIST training dataset. :param session: Session Used to run training graph operations. :param learning_rate: float Learning rate used during gradient descent. :param training_batches: int Number of batches to train. :param batches_per_sample: int How many batches to train before sampling the validation accuracy. :param save_model_as: string or None (default None) Name to use if you want to save the trained model. 
""" # This placeholder will store the target labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define loss and optimizer cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer)) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) if self.use_batch_norm: # If we don't include the update ops as dependencies on the train step, the # tf.layers.batch_normalization layers won't update their population statistics, # which will cause the model to fail at inference time with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) # Train for the appropriate number of batches. (tqdm is only for a nice timing display) for i in tqdm.tqdm(range(training_batches)): # We use batches of 60 just because the original paper did. You can use any size batch you like. batch_xs, batch_ys = mnist.train.next_batch(60) session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) # Periodically test accuracy against the 5k validation images and store it for plotting later. 
if i % batches_per_sample == 0: test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images, labels: mnist.validation.labels, self.is_training: False}) self.training_accuracies.append(test_accuracy) # After training, report accuracy against validation data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images, labels: mnist.validation.labels, self.is_training: False}) print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy)) # If you want to use this model later for inference instead of having to retrain it, # just construct it with the same parameters and then pass this file to the 'test' function if save_model_as: tf.train.Saver().save(session, save_model_as) def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None): """ Tests a trained model on the MNIST testing dataset. :param session: Session Used to run the testing graph operations. :param test_training_accuracy: bool (default False) If True, perform inference with batch normalization using batch mean and variance; if False, perform inference with batch normalization using estimated population mean and variance. Note: in real life, *always* perform inference using the population mean and variance. This parameter exists just to support demonstrating what happens if you don't. :param include_individual_predictions: bool (default False) This function always performs an accuracy test against the entire test set. But if this parameter is True, it performs an extra test, doing 200 predictions one at a time, and displays the results and accuracy. :param restore_from: string or None (default None) Name of a saved model if you want to test with previously saved weights. 
""" # This placeholder will store the true labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # If provided, restore from a previously saved model if restore_from: tf.train.Saver().restore(session, restore_from) # Test against all of the MNIST test data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images, labels: mnist.test.labels, self.is_training: test_training_accuracy}) print('-'*75) print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy)) # If requested, perform tests predicting individual values rather than batches if include_individual_predictions: predictions = [] correct = 0 # Do 200 predictions, 1 at a time for i in range(200): # This is a normal prediction using an individual test case. However, notice # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`. # Remember that will tell it whether it should use the batch mean & variance or # the population estimates that were calucated while training the model. pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy], feed_dict={self.input_layer: [mnist.test.images[i]], labels: [mnist.test.labels[i]], self.is_training: test_training_accuracy}) correct += corr predictions.append(pred[0]) print("200 Predictions:", predictions) print("Accuracy on 200 samples:", correct/200)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines. We add batch normalization to layers inside the fully_connected function. Here are some important points about that code: 1. Layers with batch normalization do not include a bias term. 2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.) 3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later. 4. We add the normalization before calling the activation function. In addition to that code, the training step is wrapped in the following with statement: python with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference. Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization. Batch Normalization Demos<a id='demos'></a> This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights. 
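The per-feature transform that tf.layers.batch_normalization applies during training can be sketched in NumPy (a simplified illustration: it omits the moving-average population statistics used at inference time, and the epsilon value here is just a typical choice):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-3):
    """Normalize each feature over the mini batch, then scale and shift.

    x:     (batch_size, num_features) linear outputs of a layer
    gamma: (num_features,) learned scale
    beta:  (num_features,) learned shift, which is why the layer
           needs no separate bias term
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# With gamma=1 and beta=0 each feature ends up with ~zero mean, ~unit variance
rng = np.random.RandomState(0)
x = rng.normal(loc=3.0, scale=5.0, size=(60, 4))  # a batch of 60, as in the demos
out = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0))  # ~[0 0 0 0]
print(out.std(axis=0))   # ~[1 1 1 1]
```

TensorFlow's layer additionally maintains exponential moving averages of the batch means and variances for use at inference time, which is what the UPDATE_OPS dependency keeps current.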
Code to support testing The following two functions support the demos we run in the notebook. The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots. The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
def plot_training_accuracies(*args, **kwargs): """ Displays a plot of the accuracies calculated during training to demonstrate how many iterations it took for the model(s) to converge. :param args: One or more NeuralNet objects You can supply any number of NeuralNet objects as unnamed arguments and this will display their training accuracies. Be sure to call `train` on the NeuralNets before calling this function. :param kwargs: You can supply any named parameters here, but `batches_per_sample` is the only one we look for. It should match the `batches_per_sample` value you passed to the `train` function. """ fig, ax = plt.subplots() batches_per_sample = kwargs['batches_per_sample'] for nn in args: ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample), nn.training_accuracies, label=nn.name) ax.set_xlabel('Training steps') ax.set_ylabel('Accuracy') ax.set_title('Validation Accuracy During Training') ax.legend(loc=4) ax.set_ylim([0,1]) plt.yticks(np.arange(0, 1.1, 0.1)) plt.grid(True) plt.show() def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500): """ Creates two networks, one with and one without batch normalization, then trains them with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies. :param use_bad_weights: bool If True, initialize the weights of both networks to wildly inappropriate weights; if False, use reasonable starting weights. :param learning_rate: float Learning rate used during gradient descent. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activation function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param training_batches: (default 50000) Number of batches to train. 
:param batches_per_sample: (default 500) How many batches to train before sampling the validation accuracy. """ # Use identical starting weights for each network to eliminate differences in # weight initialization as a cause for differences seen in training performance # # Note: The networks will use these weights to define the number of and shapes of # its layers. The original batch normalization paper used 3 hidden layers # with 100 nodes in each, followed by a 10 node output layer. These values # build such a network, but feel free to experiment with different choices. # However, the input size should always be 784 and the final output should be 10. if use_bad_weights: # These weights should be horrible because they have such a large standard deviation weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,10), scale=5.0).astype(np.float32) ] else: # These weights should be good because they have such a small standard deviation weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,10), scale=0.05).astype(np.float32) ] # Just to make sure the TensorFlow's default graph is empty before we start another # test, because we don't bother using different graphs or scoping and naming # elements carefully in this sample code. 
tf.reset_default_graph() # build two versions of same network, 1 without and 1 with batch normalization nn = NeuralNet(weights, activation_fn, False) bn = NeuralNet(weights, activation_fn, True) # train and test the two models with tf.Session() as sess: tf.global_variables_initializer().run() nn.train(sess, learning_rate, training_batches, batches_per_sample) bn.train(sess, learning_rate, training_batches, batches_per_sample) nn.test(sess) bn.test(sess) # Display a graph of how validation accuracies changed during training # so we can compare how the models trained and when they converged plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
Comparisons between identical networks, with and without batch normalization The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
train_and_test(False, 0.01, tf.nn.relu)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations. If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.) The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80%; the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.) In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations. The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
train_and_test(False, 0.01, tf.nn.sigmoid)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
train_and_test(False, 1, tf.nn.relu)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate. The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
train_and_test(False, 1, tf.nn.relu)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast. The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
train_and_test(False, 1, tf.nn.sigmoid)
batch-norm/Batch_Normalization_Lesson.ipynb
JasonNK/udacity-dlnd
mit
In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy. The cell below shows a similar pair of networks trained for only 2000 iterations.
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced. The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
train_and_test(False, 2, tf.nn.relu)
With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all. The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
train_and_test(False, 2, tf.nn.sigmoid)
Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization. However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
train_and_test(True, 0.01, tf.nn.relu)
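The effect of those deliberately large weights can be sketched in plain NumPy (a toy single layer for illustration, independent of the notebook's actual `train_and_test` models): with a standard deviation of 5, a layer's pre-activations become so large that a sigmoid saturates almost everywhere, which is part of why the networks without batch normalization struggle to learn.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 100))        # a batch of 256 inputs, 100 features

w_small = rng.normal(0, 0.05, (100, 100))  # the usual small starting weights
w_bad = rng.normal(0, 5.0, (100, 100))     # the "bad" stddev-5 weights used here

z_small = x @ w_small
z_bad = x @ w_bad

print(z_small.std())  # roughly 0.5 - sigmoid stays in its sensitive range
print(z_bad.std())    # roughly 50  - far outside the sigmoid's useful range

sig = 1 / (1 + np.exp(-z_bad))
saturated = np.mean((sig < 0.01) | (sig > 0.99))
print(saturated)      # the vast majority of units are pinned near 0 or 1
```

Saturated units pass almost no gradient back through the sigmoid, so without batch normalization the early layers barely update at all.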
As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
train_and_test(True, 0.01, tf.nn.sigmoid)
Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
train_and_test(True, 1, tf.nn.relu)
The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere. The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
train_and_test(True, 1, tf.nn.sigmoid)
Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy. The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
train_and_test(True, 2, tf.nn.relu)
We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is much too high, so the fact that this worked at all is a bit of luck. The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
train_and_test(True, 2, tf.nn.sigmoid)
In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%. Full Disclosure: Batch Normalization Doesn't Fix Everything Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run. This section includes two examples that show runs when batch normalization did not help at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
train_and_test(True, 1, tf.nn.relu)
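For reference, the core training-time computation that gives these networks their resilience can be sketched in plain NumPy (a simplified forward pass, not the notebook's actual TensorFlow implementation): each layer's pre-activations are normalized using the batch's own mean and variance, then rescaled by learned parameters gamma and beta.

```python
import numpy as np

def batch_norm_forward(z, gamma, beta, eps=1e-5):
    """Normalize a batch of pre-activations, then scale and shift.

    z: (batch_size, num_units) layer outputs before the activation
    gamma, beta: learned per-unit scale and shift parameters
    """
    mu = z.mean(axis=0)                    # per-unit mean over the batch
    var = z.var(axis=0)                    # per-unit variance over the batch
    z_hat = (z - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * z_hat + beta

rng = np.random.default_rng(1)
z = rng.normal(3.0, 50.0, (256, 10))       # wildly scaled, like the bad-weight runs
out = batch_norm_forward(z, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(), out.std())               # close to 0 and 1, regardless of input scale
```

Because the normalized values always land in a well-behaved range, the next layer sees stable inputs even when the weights behind them are badly scaled - which is exactly what the bad-starting-weights experiments above demonstrate.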