The unique characters in a set

By turning the string into a set, we get the set of its unique characters:
set(s1)
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
The number of occurrences of each non-whitespace character in the file

To count the frequency of each character, we can use the characters from the set as keys in a dict. We can generate the dict of frequencies with a dict comprehension that combines each unique character as a key with the count() method applied to s1, the string without whitespace:
```python
ccnt = {c: s1.count(c) for c in set(s1)}
pprint(ccnt)
```
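An equivalent and often faster approach uses collections.Counter from the standard library; a small sketch, with a made-up string standing in for s1:

```python
from collections import Counter

# a small stand-in for s1, the whitespace-stripped text
s1 = "hello world".replace(" ", "")

# Counter does the same counting in one pass over the string
ccnt = dict(Counter(s1))
print(ccnt["l"])   # the letter 'l' occurs 3 times
```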
Let's order the letters by their frequency of occurrence in the file. We can do so in one line, but this needs some explanation. First we generate a list from the dict in which each item is a list of 2 items, namely [char, number]. Second, we apply sorted on that list to get a sorted list. But we don't want it sorted on the character; we want it sorted on the number. Therefore, we use the key argument, which tells sorted that each item has to be compared on its second value (lambda x: x[1]). Finally, this yields the list that we want, but with the largest frequency at the bottom. So we turn this list upside down by using the slice [::-1] at the end. Here it is:
```python
sorted([[k, ccnt[k]] for k in ccnt.keys()], key=lambda x: x[1])[::-1]
```
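The same ranking can be written without the trailing [::-1] by passing reverse=True to sorted; a sketch with a toy frequency dict standing in for ccnt:

```python
# a toy frequency dict standing in for ccnt
ccnt = {'a': 2, 'b': 5, 'c': 1}

# sorted(..., reverse=True) does the work of the trailing [::-1] directly
ranked = sorted(ccnt.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)   # [('b', 5), ('a', 2), ('c', 1)]
```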
Reading the file and returning a list of strings, one per line

For this we call reader.readlines() instead of reader.read():
```python
with open(os.path.join(apth, fname), 'r') as reader:
    s = reader.readlines()
type(s)
pprint(s)
```
From this point onward, you can analyse each line in sequence, pick out lines, etc.

Reading a single line, and lines one by one

Often you don't want to read the entire file into memory (into a single string) at once. It could exhaust the computer's memory if the file size were gigabytes, as can easily be the case with the output of some models. And even if it doesn't exhaust the memory, your PC may still become very slow with large files. So a better and more generally applied way is to read the file line by line, based on the newline characters embedded in it. In that case you read the file one line at a time, using not reader.read() or reader.readlines() but reader.readline():
```python
with open(os.path.join(apth, fname), 'r') as reader:
    s = reader.readline()
type(s)
print(s)
```
This yields a string, the first line of the file in this case. The problem now is that no more lines can be read from this file, because with the with statement the file closes automatically as soon as Python reaches the end of its block:
s = reader.readline()
Therefore, we should either not use the with statement and close the file by hand when we're done, or put everything that we do with the strings that we read inside the with block. We may be tempted to put the reader in a while loop like so:

```python
s = []
while True:
    s.append(reader.readline())
```

But don't do that, because this while loop never ends: at the end of the file, readline() just keeps returning an empty string, so we have to break out of the loop ourselves:
```python
with open(os.path.join(apth, fname), 'r') as reader:
    lines = []
    while True:
        s = reader.readline()
        if s == "":
            break
        lines.append(s)
pprint(lines)
reader.readline?
```
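For completeness: a Python file object is itself an iterator over its lines, so the explicit readline() loop above is usually written as a plain for loop. A sketch using io.StringIO as a stand-in for an open file:

```python
import io

# io.StringIO stands in here for a file opened with open(...)
reader = io.StringIO("first line\nsecond line\nthird line\n")

# a file object is itself an iterator over its lines
lines = []
for line in reader:
    lines.append(line)
print(len(lines))   # 3
```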
$$\frac{\partial\phi}{\partial t}+\nabla\cdot\left(-D\left(\phi_{0}\right)\nabla \phi\right)+\nabla\cdot\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi\right) =\nabla\cdot\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi_{0,face}\right)$$
```python
import numpy as np
from fipy import Grid1D, CellVariable, TransientTerm, DiffusionTerm, Viewer

L = 1.0      # domain length
Nx = 100
dx_min = L / Nx

# build a geometrically stretched (nonuniform) grid
x = np.array([0.0, dx_min])
while x[-1] < L:
    x = np.append(x, x[-1] + 1.05 * (x[-1] - x[-2]))
x[-1] = L
dx = np.diff(x)  # cell widths of the nonuniform grid

mesh = Grid1D(dx=dx)

phi = CellVariable(mesh=mesh, name="phi", hasOld=True, value=0.0)
phi.constrain(5.0, mesh.facesLeft)
phi.constrain(0., mesh.facesRight)

# D(phi) = D0*(1.0 + phi**2)
# dD(phi) = 2.0*D0*phi
D0 = 1.0
dt = 0.01 * L * L / D0  # a proper time step for a diffusion process

eq = TransientTerm(var=phi) - DiffusionTerm(var=phi, coeff=D0 * (1 + phi.faceValue**2))

for i in range(4):
    for j in range(5):
        c_res = eq.sweep(dt=dt)
    phi.updateOld()

Viewer(vars=phi, datamax=5.0, datamin=0.0)
# viewer.plot()
```
python/test averaging methods.ipynb
simulkade/peteng
mit
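As a quick sanity check of the linearization noted in the code comments (dD/dphi = 2*D0*phi), a central finite difference of D(phi) = D0*(1 + phi**2) should match the analytic derivative; phi0 and h below are arbitrary choices:

```python
D0 = 1.0

def D(phi):
    # the nonlinear diffusion coefficient from the code comments
    return D0 * (1.0 + phi**2)

# central finite difference of D at an arbitrary point
phi0 = 0.7
h = 1e-6
dD_fd = (D(phi0 + h) - D(phi0 - h)) / (2 * h)

# analytic derivative used in the linearization: dD/dphi = 2*D0*phi
print(abs(dD_fd - 2 * D0 * phi0) < 1e-6)   # True
```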
$$\frac{\partial\phi}{\partial t}+\nabla\cdot\left(-D\left(\phi_{0}\right)\nabla \phi\right)+\nabla\cdot\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi\right) =\nabla\cdot\left(-\nabla \phi_{0}\left(\frac{\partial D}{\partial \phi}\right)_{\phi_{0,face}}\phi_{0,face}\right)$$
```python
phi2 = CellVariable(mesh=mesh, name="phi", hasOld=True, value=0.0)
phi2.constrain(5.0, mesh.facesLeft)
phi2.constrain(0., mesh.facesRight)

# D(phi) = D0*(1.0 + phi**2)
# dD(phi) = 2.0*D0*phi
D0 = 1.0
dt = 0.01 * L * L / D0  # a proper time step for a diffusion process

eq2 = TransientTerm(var=phi2) - DiffusionTerm(var=phi2, coeff=D0 * (1 + phi2.faceValue**2)) + \
      UpwindConvectionTerm(var=phi2, coeff=-2 * D0 * phi2.faceValue * phi2.faceGrad) == \
      (-2 * D0 * phi2.faceValue * phi2.faceGrad * phi2.faceValue).divergence

for i in range(4):
    for j in range(5):
        c_res = eq2.sweep(dt=dt)
    phi2.updateOld()

viewer = Viewer(vars=[phi, phi2], datamax=5.0, datamin=0.0)
```
The above figure shows how the upwind convection term is not consistent with the linear averaging.
```python
phi3 = CellVariable(mesh=mesh, name="phi", hasOld=True, value=0.0)
phi3.constrain(5.0, mesh.facesLeft)
phi3.constrain(0., mesh.facesRight)

# D(phi) = D0*(1.0 + phi**2)
# dD(phi) = 2.0*D0*phi
D0 = 1.0
dt = 0.01 * L * L / D0  # a proper time step for a diffusion process

u = -2 * D0 * phi3.faceValue * phi3.faceGrad
eq3 = TransientTerm(var=phi3) - DiffusionTerm(var=phi3, coeff=D0 * (1 + phi3.faceValue**2)) + \
      UpwindConvectionTerm(var=phi3, coeff=-2 * D0 * phi3.faceValue * phi3.faceGrad) == \
      (-2 * D0 * phi3.faceValue * phi3.faceGrad * phi3.faceValue).divergence

for i in range(4):
    for j in range(5):
        c_res = eq3.sweep(dt=dt)
        # upwindValues is a user-defined helper returning face values
        # taken from the upwind side of each face
        phi_face = FaceVariable(mesh, upwindValues(mesh, phi3, u))
        u = -2 * D0 * phi_face * phi3.faceGrad
        eq3 = TransientTerm(var=phi3) - DiffusionTerm(var=phi3, coeff=D0 * (1 + phi3.faceValue**2)) + \
              UpwindConvectionTerm(var=phi3, coeff=u) == \
              (u * phi_face).divergence
    phi3.updateOld()

viewer = Viewer(vars=[phi, phi3], datamax=5.0, datamin=0.0)
```
Class 13: Introduction to Business Cycle Modeling

Empirical evidence of TFP fluctuation
```python
import pandas as pd

# Import actual and trend production data
data = pd.read_csv('http://www.briancjenkins.com/teaching/winter2017/econ129/data/Econ129_Rbc_Data.csv', index_col=0)
print(data.head())
```
winter2017/econ129/python/Econ129_Class_13_Complete.ipynb
letsgoexploring/teaching
mit
Recall: \begin{align} \frac{X_t - X_t^{trend}}{X_t^{trend}} & \approx \log\left(X_t/X_t^{trend}\right) = \log X_t - \log X_t^{trend} \end{align}
```python
import numpy as np
import matplotlib.pyplot as plt

# Create new DataFrame of percent deviations from trend
data_cycles = pd.DataFrame({
    'gdp': 100*(np.log(data.gdp/data.gdp_trend)),
    'consumption': 100*(np.log(data.consumption/data.consumption_trend)),
    'investment': 100*(np.log(data.investment/data.investment_trend)),
    'hours': 100*(np.log(data.hours/data.hours_trend)),
    'capital': 100*(np.log(data.capital/data.capital_trend)),
    'tfp': 100*(np.log(data.tfp/data.tfp_trend)),
})

# Plot all percent deviations from trend
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(3,2,1)
ax.plot_date(data_cycles.index, data_cycles['gdp'], '-', lw=3, alpha=0.7)
ax.grid()
ax.set_title('GDP per capita')
ax.set_ylabel('% dev from trend')

ax = fig.add_subplot(3,2,2)
ax.plot_date(data_cycles.index, data_cycles['consumption'], '-', lw=3, alpha=0.7)
ax.plot_date(data_cycles.index, data_cycles['gdp'], '-k', lw=3, alpha=0.2)
ax.grid()
ax.set_title('Consumption per capita (GDP in light gray)')

ax = fig.add_subplot(3,2,3)
ax.plot_date(data_cycles.index, data_cycles['investment'], '-', lw=3, alpha=0.7)
ax.plot_date(data_cycles.index, data_cycles['gdp'], '-k', lw=3, alpha=0.2)
ax.grid()
ax.set_title('Investment per capita (GDP in light gray)')
ax.set_ylabel('% dev from trend')

ax = fig.add_subplot(3,2,4)
ax.plot_date(data_cycles.index, data_cycles['hours'], '-', lw=3, alpha=0.7)
ax.plot_date(data_cycles.index, data_cycles['gdp'], '-k', lw=3, alpha=0.2)
ax.grid()
ax.set_title('Hours per capita (GDP in light gray)')

ax = fig.add_subplot(3,2,5)
ax.plot_date(data_cycles.index, data_cycles['capital'], '-', lw=3, alpha=0.7)
ax.plot_date(data_cycles.index, data_cycles['gdp'], '-k', lw=3, alpha=0.2)
ax.grid()
ax.set_title('Capital per capita (GDP in light gray)')
ax.set_ylabel('% dev from trend')

ax = fig.add_subplot(3,2,6)
ax.plot_date(data_cycles.index, data_cycles['tfp'], '-', lw=3, alpha=0.7, label='TFP')
ax.plot_date(data_cycles.index, data_cycles['gdp'], '-k', lw=3, alpha=0.2, label='GDP')
ax.grid()
ax.set_title('TFP per capita (GDP in light gray)')

plt.tight_layout()

# Add a column of lagged tfp values
data_cycles['tfp_lag'] = data_cycles['tfp'].shift()
data_cycles = data_cycles.dropna()
data_cycles.head()

plt.scatter(data_cycles.tfp_lag, data_cycles.tfp, s=50, alpha=0.7)
plt.grid()
plt.xlabel('TFP lagged one period (% dev from trend)')
plt.ylabel('TFP (% dev from trend)')
```
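The approximation recalled above is easy to verify numerically: for small deviations from trend, the log difference and the exact percent deviation nearly coincide. A sketch with made-up numbers:

```python
import numpy as np

# a 3% deviation from trend
X, X_trend = 103.0, 100.0

pct_dev = (X - X_trend) / X_trend      # exact percent deviation (as a fraction)
log_dev = np.log(X / X_trend)          # the log approximation used above

# for small deviations the two agree to well under 0.1 percentage point
print(abs(pct_dev - log_dev) < 1e-3)   # True
```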
Since there appears to be a strong correlation between the lagged cyclical component of TFP and the current cyclical component of TFP, let's estimate the following AR(1) model using the statsmodels package:

\begin{align}
\hat{a}_t & = \rho \hat{a}_{t-1} + \epsilon_t
\end{align}
```python
import statsmodels.api as sm

model = sm.OLS(data_cycles.tfp, data_cycles.tfp_lag)
results = model.fit()
print(results.summary())

# Store the estimated autoregressive parameter
rhoA = results.params['tfp_lag']

# Compute the predicted values
tfp_pred = results.predict()

# Compute the standard deviation of the residuals of the regression
sigma = np.std(results.resid)

print('rho:  ', np.round(rhoA, 5))
print('sigma (in percent):', np.round(sigma, 5))

# Scatter plot of data with fitted regression line
plt.scatter(data_cycles.tfp_lag, data_cycles.tfp, s=50, alpha=0.7)
plt.plot(data_cycles.tfp_lag, tfp_pred, 'r')
plt.grid()
plt.xlabel('TFP lagged one period (% dev from trend)')
plt.ylabel('TFP (% dev from trend)')
```
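Before running the regression on the data, it can help to see that OLS without a constant recovers rho from a simulated AR(1). A sketch with assumed values rho = 0.75 and sigma = 0.01:

```python
import numpy as np

rng = np.random.default_rng(0)
rho_true, sigma, T = 0.75, 0.01, 5000

# simulate a_t = rho*a_{t-1} + eps_t
a = np.zeros(T)
for t in range(T - 1):
    a[t + 1] = rho_true * a[t] + sigma * rng.standard_normal()

# OLS of a_t on a_{t-1} without a constant reduces to this ratio
rho_hat = (a[1:] @ a[:-1]) / (a[:-1] @ a[:-1])
print(round(rho_hat, 2))
```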
A Baseline Real Business Cycle Model

Consider the following business cycle model:

\begin{align}
Y_t & = A_t K_t^{\alpha} \tag{1}\\
C_t & = (1-s)Y_t \tag{2}\\
I_t & = K_{t+1} - (1-\delta)K_t \tag{3}\\
Y_t & = C_t + I_t \tag{4}
\end{align}

where:

\begin{align}
\log A_{t+1} & = \rho \log A_t + \epsilon_t, \tag{5}
\end{align}

reflects exogenous fluctuation in TFP. The endogenous variables in the model are $K_t$, $Y_t$, $C_t$, $I_t$, and $A_t$, and $\epsilon_t$ is an exogenous white noise shock process with standard deviation $\sigma$. $K_t$ and $A_t$ are called state variables because their values in period $t$ affect the equilibrium of the model in period $t+1$.

Non-stochastic steady state

The non-stochastic steady state equilibrium for the model is an equilibrium in which the exogenous shock process $\epsilon_t = 0$ for all $t$ and $K_{t+1} = K_t$ and $A_{t+1} = A_t$ for all $t$.

Find the non-stochastic steady state of the model analytically. That is, use pencil and paper to find values for capital $\bar{K}$, output $\bar{Y}$, consumption $\bar{C}$, and investment $\bar{I}$ in terms of the model parameters $\alpha$, $s$, and $\delta$.

Suppose that $\alpha = 0.35$, $\delta = 0.025$, and $s = 0.1$. Use your answers to the previous exercise to compute numerical values for consumption, output, capital, and investment. Use the variable names kss, yss, css, and iss to store the computed steady state values.
```python
# Define parameters
s = 0.1
delta = 0.025
alpha = 0.35

# Compute the steady state values of the endogenous variables
kss = (s/delta)**(1/(1-alpha))
yss = kss**alpha
css = (1-s)*yss
iss = yss - css

print('Steady states:\n')
print('capital:    ', round(kss, 5))
print('output:     ', round(yss, 5))
print('consumption:', round(css, 5))
print('investment: ', round(iss, 5))
```
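The steady-state formulas used in the code above follow directly from equations (1)-(4); a compact derivation (with $\bar{A} = 1$):

```latex
% In steady state, (3) gives I = delta*K, and (2) with (4) give I = s*Y:
\begin{align}
\bar{I} &= \delta\bar{K} = s\bar{Y} = s\bar{K}^{\alpha}
  \quad\Rightarrow\quad \bar{K} = \left(\frac{s}{\delta}\right)^{\frac{1}{1-\alpha}},\\
\bar{Y} &= \bar{K}^{\alpha}, \qquad \bar{C} = (1-s)\bar{Y}, \qquad \bar{I} = \bar{Y}-\bar{C}.
\end{align}
```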
Impulse responses

In this part, you will simulate the model directly in response to a 1 percent shock to aggregate technology. The simulation will run for $T+1$ periods from $t = 0,\ldots,T$ and the shock arrives at $t = 1$. Suppose that $T = 12$.

1. Use equations (1) through (4) to solve for $K_{t+1}$, $Y_t$, $C_t$, and $I_t$ in terms of only $K_t$, $A_t$, and the model parameters $\alpha$, $\delta$, and $s$.
2. Initialize an array for $\epsilon_t$ called eps_ir that is equal to a $T\times 1$ array of zeros. Set the first element of this array equal to 0.01.
3. Initialize an array for $\log A_t$ called log_a_ir that is equal to a $(T+1)\times 1$ array of zeros. Set $\rho = 0.75$ and compute the impulse response of $\log A_t$ to the shock.
4. Use the simulated values for $\log A_t$ to compute $A_t$ and save the values in a variable called a_ir (note: $A_t = e^{\log A_t}$). Plot $\log A_t$ and $A_t$.
5. Initialize an array for $K_t$ called k_ir that is a $(T+1)\times 1$ array of zeros. Set the first value in the array equal to steady state capital. Then compute the subsequent values for $K_t$ using the computed values for $A_t$. Plot $K_t$.
6. Initialize $(T+1)\times 1$ arrays for $Y_t$, $C_t$, and $I_t$ called y_ir, c_ir, and i_ir. Use the computed values for $K_t$ to compute simulated values for $Y_t$, $C_t$, and $I_t$.
7. Construct a $2\times2$ grid of subplots of the impulse responses of capital, output, consumption, and investment to a one percent shock to aggregate technology.
8. Compute the percent deviation of each variable from its steady state, \begin{align} 100(\log(X_t) - \log(\bar{X})), \end{align} and store the results in variables called k_ir_dev, y_ir_dev, c_ir_dev, and i_ir_dev.
9. Construct a $2\times2$ grid of subplots of the impulse responses of capital, output, consumption, and investment to the technology shock with each variable expressed as a percent deviation from steady state.
```python
# Set number of simulation periods (minus 1)
T = 12

# Initialize eps_ir as a T x 1 array of zeros and set first value to 0.01
eps_ir = np.zeros(T)
eps_ir[0] = 0.01

# Set coefficient of autocorrelation for log A
rho = 0.75

# Initialize log_a_ir as a (T+1) x 1 array of zeros and compute
log_a_ir = np.zeros(T+1)
for t in range(T):
    log_a_ir[t+1] = rho*log_a_ir[t] + eps_ir[t]

# Plot log_a_ir
plt.plot(log_a_ir, lw=3, alpha=0.7)
plt.title('$\log A_t$')
plt.grid()

# Compute a_ir
a_ir = np.exp(log_a_ir)

# Plot a_ir
plt.plot(a_ir, lw=3, alpha=0.7)
plt.title('$A_t$')
plt.grid()

# Initialize k_ir as a (T+1) x 1 array of zeros and compute
k_ir = np.zeros(T+1)
k_ir[0] = kss
for t in range(T):
    k_ir[t+1] = s*a_ir[t]*k_ir[t]**alpha + (1-delta)*k_ir[t]

# Plot k_ir
plt.plot(k_ir, lw=3, alpha=0.7)
plt.title('$K_t$')
plt.grid()

# Compute y_ir, c_ir, i_ir
y_ir = a_ir*k_ir**alpha
c_ir = (1-s)*y_ir
i_ir = s*y_ir

# Create a 2x2 plot of y_ir, c_ir, i_ir, and k_ir
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(2,2,1)
ax.plot(y_ir, lw=3, alpha=0.7)
ax.set_title('$Y_t$')
ax.grid()

ax = fig.add_subplot(2,2,2)
ax.plot(c_ir, lw=3, alpha=0.7)
ax.set_title('$C_t$')
ax.grid()

ax = fig.add_subplot(2,2,3)
ax.plot(i_ir, lw=3, alpha=0.7)
ax.set_title('$I_t$')
ax.grid()

ax = fig.add_subplot(2,2,4)
ax.plot(k_ir, lw=3, alpha=0.7)
ax.set_title('$K_t$')
ax.grid()

plt.tight_layout()

# Compute y_ir_dev, c_ir_dev, i_ir_dev, and k_ir_dev: the log deviations
# from steady state of the respective variables
y_ir_dev = np.log(y_ir) - np.log(yss)
c_ir_dev = np.log(c_ir) - np.log(css)
i_ir_dev = np.log(i_ir) - np.log(iss)
k_ir_dev = np.log(k_ir) - np.log(kss)

# Create a 2x2 plot of y_ir_dev, c_ir_dev, i_ir_dev, and k_ir_dev
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(2,2,1)
ax.plot(y_ir_dev, lw=3, alpha=0.7)
ax.set_title('$\hat{y}_t$')
ax.grid()
ax.set_ylabel('% dev from steady state')

ax = fig.add_subplot(2,2,2)
ax.plot(c_ir_dev, lw=3, alpha=0.7)
ax.set_title('$\hat{c}_t$')
ax.grid()

ax = fig.add_subplot(2,2,3)
ax.plot(i_ir_dev, lw=3, alpha=0.7)
ax.set_title('$\hat{\imath}_t$')
ax.grid()
ax.set_ylabel('% dev from steady state')

ax = fig.add_subplot(2,2,4)
ax.plot(k_ir_dev, lw=3, alpha=0.7)
ax.set_title('$\hat{k}_t$')
ax.grid()

plt.tight_layout()
```
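A quick check that the steady state computed earlier is indeed a fixed point of the law of motion implied by equations (1)-(4), $K_{t+1} = s A_t K_t^{\alpha} + (1-\delta)K_t$:

```python
# parameter values from the exercise
s, delta, alpha = 0.1, 0.025, 0.35

kss = (s / delta) ** (1 / (1 - alpha))

# law of motion implied by (1)-(4): K_{t+1} = s*A_t*K_t**alpha + (1-delta)*K_t
k_next = s * 1.0 * kss**alpha + (1 - delta) * kss   # A_t = 1 in steady state

print(abs(k_next - kss) < 1e-9)   # True: kss is a fixed point
```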
"Top-K" Filtering A common analytical pattern involves subsetting based on some method of ranking. For example, "the 5 most frequently occurring widgets in a dataset". By choosing the right metric, you can obtain the most important or least important items from some dimension, for some definition of important. To carry out the pattern by hand involves the following Choose a ranking metric Aggregate, computing the ranking metric, by the target dimension Order by the ranking metric and take the highest K values Use those values as a set filter (either with semi_join or isin) in your next query For example, let's look at the TPC-H tables and find the 5 or 10 customers who placed the most orders over their lifetime:
```python
orders = con.table('tpch_orders')

top_orders = (orders
              .group_by('o_custkey')
              .size()
              .sort_by(('count', False))
              .limit(5))
top_orders
```
docs/source/notebooks/tutorial/6-Advanced-Topics-TopK-SelfJoins.ipynb
deepfield/ibis
apache-2.0
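The same four steps can be sketched in plain pandas (a toy stand-in for the TPC-H orders table, not the Ibis API):

```python
import pandas as pd

# toy stand-in for the TPC-H orders table
orders = pd.DataFrame({
    "o_custkey":     [1, 1, 1, 2, 2, 3],
    "o_orderstatus": ["F", "O", "F", "O", "F", "O"],
})

# steps 1-3: rank customers by order count and take the top 2
top_custkeys = orders["o_custkey"].value_counts().head(2).index

# step 4: use those keys as a set filter, then aggregate
analysis = (orders[orders["o_custkey"].isin(top_custkeys)]
            .groupby("o_orderstatus")
            .size())
print(analysis["F"])   # 3
```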
Now, we could use these customer keys as a filter in some other analysis:
```python
# Among the top 5 most frequent customers, what's the histogram of their order statuses?
analysis = (orders[orders.o_custkey.isin(top_orders.o_custkey)]
            .group_by('o_orderstatus')
            .size())
analysis
```
This is such a common pattern that Ibis supports a high level primitive topk operation, which can be used immediately as a filter:
```python
top_orders = orders.o_custkey.topk(5)

orders[top_orders].group_by('o_orderstatus').size()
```
This goes a little further. Suppose now we want to rank customers by their total spending instead of the number of orders, perhaps a more meaningful metric:
```python
total_spend = orders.o_totalprice.sum().name('total')

top_spenders = (orders
                .group_by('o_custkey')
                .aggregate(total_spend)
                .sort_by(('total', False))
                .limit(5))
top_spenders
```
To use another metric, just pass it to the by argument in topk:
```python
top_spenders = orders.o_custkey.topk(5, by=total_spend)

orders[top_spenders].group_by('o_orderstatus').size()
```
Self joins

If you're a relational data guru, you may have wondered how it's possible to join tables with themselves, because join clauses involve column references back to the original table. Consider the SQL:

```sql
SELECT t1.key, sum(t1.value - t2.value) AS metric
FROM my_table t1
  JOIN my_table t2
    ON t1.key = t2.subkey
GROUP BY 1
```

Here, we have an unambiguous way to refer to each of the tables through aliasing.

Let's consider the TPC-H database, and suppose we want to compute year-over-year change in total order amounts by region using joins.
```python
region = con.table('tpch_region')
nation = con.table('tpch_nation')
customer = con.table('tpch_customer')
orders = con.table('tpch_orders')

orders.limit(5)
```
First, let's join all the things and select the fields we care about:
```python
fields_of_interest = [region.r_name.name('region'),
                      nation.n_name.name('nation'),
                      orders.o_totalprice.name('amount'),
                      orders.o_orderdate.cast('timestamp').name('odate')  # these are strings
                      ]

joined_all = (region.join(nation, region.r_regionkey == nation.n_regionkey)
              .join(customer, customer.c_nationkey == nation.n_nationkey)
              .join(orders, orders.o_custkey == customer.c_custkey)
              [fields_of_interest])
```
Okay, great, let's have a look:
joined_all.limit(5)
Sweet, now let's aggregate by year and region:
```python
year = joined_all.odate.year().name('year')
total = joined_all.amount.sum().cast('double').name('total')

annual_amounts = (joined_all
                  .group_by(['region', year])
                  .aggregate(total))
annual_amounts
```
Looking good so far. Now, we need to join this table on itself, by subtracting 1 from one of the year columns. We do this by creating a "joinable" view of a table that is considered a distinct object within Ibis. To do this, use the view function:
```python
current = annual_amounts
prior = annual_amounts.view()

yoy_change = (current.total - prior.total).name('yoy_change')

results = (current.join(prior, ((current.region == prior.region) &
                                (current.year == (prior.year - 1))))
           [current.region, current.year, yoy_change])

df = results.execute()
df['yoy_pretty'] = df.yoy_change.map(lambda x: '$%.2fmm' % (x / 1000000.))
df
```
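The self-join logic is perhaps easiest to see in plain pandas, where the "view" is an explicit second copy of the table shifted by one year; toy numbers below, not TPC-H output:

```python
import pandas as pd

# toy stand-in for the annual_amounts table
annual = pd.DataFrame({
    "region": ["A", "A", "A"],
    "year":   [1995, 1996, 1997],
    "total":  [100.0, 110.0, 121.0],
})

# the "view": a second copy of the table, shifted so each row
# lines up with the following year
prior = annual.rename(columns={"total": "prior_total"}).copy()
prior["year"] = prior["year"] + 1

yoy = annual.merge(prior[["region", "year", "prior_total"]],
                   on=["region", "year"])
yoy["yoy_change"] = yoy["total"] - yoy["prior_total"]
print(list(yoy["yoy_change"]))   # [10.0, 11.0]
```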
If you're being fastidious and want to consider the first year occurring in the dataset for each region to have 0 for the prior year, you will instead need to do an outer join and treat nulls in the prior side of the join as zero:
```python
yoy_change = (current.total - prior.total.zeroifnull()).name('yoy_change')

results = (current.outer_join(prior, ((current.region == prior.region) &
                                      (current.year == (prior.year - 1))))
           [current.region, current.year,
            current.total,
            prior.total.zeroifnull().name('prior_total'),
            yoy_change])
results.limit(10)
```
1) Take a first look at the data

Run the next code cell to load in the libraries and dataset you'll use to complete the exercise.
```python
# modules we'll use
import pandas as pd
import numpy as np

# read in all our data
sf_permits = pd.read_csv("../input/building-permit-applications-data/Building_Permits.csv")

# set seed for reproducibility
np.random.seed(0)
```
notebooks/data_cleaning/raw/ex1.ipynb
Kaggle/learntools
apache-2.0
Use the code cell below to print the first five rows of the sf_permits DataFrame.
```python
# TODO: Your code here!

#%%RM_IF(PROD)%%
sf_permits.head()
```
Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.
```python
# Check your answer (Run this code cell to receive credit!)
q1.check()

# Line below will give you a hint
#_COMMENT_IF(PROD)_ q1.hint()
```
2) How many missing data points do we have?

What percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. (If 1/4 of the values in the dataset are missing, the answer is 25.)
```python
# TODO: Your code here!
percent_missing = ____

# Check your answer
q2.check()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ q2.hint()
#_COMMENT_IF(PROD)_ q2.solution()

#%%RM_IF(PROD)%%
# get the number of missing data points per column
percent_missing = sf_permits.isnull().sum().sum()
q2.assert_check_failed()

#%%RM_IF(PROD)%%
# get the number of missing data points per column
missing_values_count = sf_permits.isnull().sum()

# how many total missing values do we have?
total_cells = np.product(sf_permits.shape)
total_missing = missing_values_count.sum()

# percent of data that is missing
percent_missing = (total_missing/total_cells) * 100
q2.assert_check_passed()
```
3) Figure out why the data is missing

Look at the columns "Street Number Suffix" and "Zipcode" from the San Francisco Building Permits dataset. Both of these contain missing values.

- Which, if either, are missing because they don't exist?
- Which, if either, are missing because they weren't recorded?

Once you have an answer, run the code cell below.
```python
# Check your answer (Run this code cell to receive credit!)
q3.check()

# Line below will give you a hint
#_COMMENT_IF(PROD)_ q3.hint()
```
4) Drop missing values: rows

If you removed all of the rows of sf_permits with missing values, how many rows are left?

Note: Do not change the value of sf_permits when checking this.
```python
# TODO: Your code here!

#%%RM_IF(PROD)%%
sf_permits.dropna()
```
Once you have an answer, run the code cell below.
```python
# Check your answer (Run this code cell to receive credit!)
q4.check()

# Line below will give you a hint
#_COMMENT_IF(PROD)_ q4.hint()
```
5) Drop missing values: columns

Now try removing all the columns with empty values.

- Create a new DataFrame called sf_permits_with_na_dropped that has all of the columns with empty values removed.
- How many columns were removed from the original sf_permits DataFrame? Use this number to set the value of the dropped_columns variable below.
```python
# TODO: Your code here
sf_permits_with_na_dropped = ____
dropped_columns = ____

# Check your answer
q5.check()

#%%RM_IF(PROD)%%
# remove all columns with at least one missing value
sf_permits_with_na_dropped = sf_permits.dropna(axis=1)

# calculate number of dropped columns
cols_in_original_dataset = sf_permits.shape[1]
cols_in_na_dropped = sf_permits_with_na_dropped.shape[1]
dropped_columns = cols_in_original_dataset - cols_in_na_dropped
q5.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ q5.hint()
#_COMMENT_IF(PROD)_ q5.solution()
```
6) Fill in missing values automatically

Try replacing all the NaN's in the sf_permits data with the one that comes directly after it, and then replacing any remaining NaN's with 0. Set the result to a new DataFrame sf_permits_with_na_imputed.
```python
# TODO: Your code here
sf_permits_with_na_imputed = ____

# Check your answer
q6.check()

#%%RM_IF(PROD)%%
sf_permits_with_na_imputed = sf_permits_with_na_dropped.fillna(method='bfill', axis=0).fillna(0)
q6.assert_check_failed()

#%%RM_IF(PROD)%%
sf_permits_with_na_imputed = sf_permits.fillna(method='bfill', axis=0).fillna(0)
q6.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ q6.hint()
#_COMMENT_IF(PROD)_ q6.solution()
```
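The backfill-then-zero imputation asked for here behaves like this on a tiny example (`.bfill()` is the modern spelling of `fillna(method='bfill')`):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [np.nan, 2.0, np.nan]})

# backward-fill along each column, then fill whatever remains with 0
imputed = df.bfill().fillna(0)
print(list(imputed["x"]))   # [2.0, 2.0, 0.0]
```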
Relevant Parameters

An l3_mode parameter exists for each LC dataset, and it determines whether third light will be provided in flux units or as a fraction of the total flux. Since this is passband dependent and only used for flux measurements, it does not yet exist for a new, empty Bundle.
b.filter(qualifier='l3_mode')
2.3/tutorials/l3.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
So let's add an LC dataset:
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.
print(b.filter(qualifier='l3*'))
l3_mode = 'flux'

When l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset.
```python
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3'))
```
To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options. Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
print(b.compute_l3s())
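Assuming, as stated above, that the intrinsic system flux is the sum of the extrinsic passband luminosities divided by $4\pi$, the flux-to-fraction conversion is simple arithmetic. The numbers below are made up for illustration, not PHOEBE output:

```python
import math

# hypothetical numbers: suppose the extrinsic passband luminosities
# sum to 4*pi, so the intrinsic system flux comes out to 1.0
pblum_sum = 4 * math.pi
system_flux = pblum_sum / (4 * math.pi)

l3_flux = 0.5                                  # third light in flux units
l3_frac = l3_flux / (l3_flux + system_flux)    # fraction of the *total* flux
print(round(l3_frac, 4))
```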
l3_mode = 'fraction'

When l3_mode is set to 'fraction', the l3 parameter is replaced by an l3_frac parameter.
```python
b.set_value('l3_mode', 'fraction')
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3_frac'))
```
Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s. Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
print(b.compute_l3s())
Influence on Light Curves (Fluxes)

"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.

To see this we'll compare a light curve with and without "third" light.
```python
b.run_compute(irrad_method='none', model='no_third_light')

b.set_value('l3_mode', 'flux')
b.set_value('l3', 5)

b.run_compute(irrad_method='none', model='with_third_light')
```
As expected, adding 5 W/m^2 of third light simply shifts the light curve up by that exact same amount.
```python
afig, mplfig = b['lc01'].plot(model='no_third_light')
afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)
```
Influence on Meshes (Intensities)

"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.

NOTE: this is different than pblums, which DO affect the relative intensities. Again, see the pblum tutorial for more details.

To see this we can run both of our models again and look at the values of the intensities in the mesh.
```python
b.add_dataset('mesh', times=[0], dataset='mesh01',
              columns=['intensities@lc01', 'abs_intensities@lc01'])

b.set_value('l3', 0.0)
b.run_compute(irrad_method='none', model='no_third_light', overwrite=True)

b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light', overwrite=True)

print("no_third_light abs_intensities:   ",
      np.nanmean(b.get_value(qualifier='abs_intensities', component='primary',
                             dataset='lc01', model='no_third_light')))
print("with_third_light abs_intensities: ",
      np.nanmean(b.get_value(qualifier='abs_intensities', component='primary',
                             dataset='lc01', model='with_third_light')))
print("no_third_light intensities:   ",
      np.nanmean(b.get_value(qualifier='intensities', component='primary',
                             dataset='lc01', model='no_third_light')))
print("with_third_light intensities: ",
      np.nanmean(b.get_value(qualifier='intensities', component='primary',
                             dataset='lc01', model='with_third_light')))
```
In this exercise, we'll use the Boston Housing dataset to predict house prices from characteristics like the number of rooms and distance to employment centers.
```python
import pandas as pd

boston_housing_data = pd.read_csv('../datasets/boston.csv')
```
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
ffmmjj/intro_to_data_science_workshop
apache-2.0
Pandas allows reading our data from different file formats and sources. See this link for a list of supported operations.
boston_housing_data.head() boston_housing_data.info() boston_housing_data.describe()
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
ffmmjj/intro_to_data_science_workshop
apache-2.0
Visualizing data After reading our data into a pandas DataFrame and getting a broader view of the dataset, we can build charts to visualize the "shape" of the data. We'll use Python's Matplotlib library to create these charts. An example Suppose you're given the following information about four datasets:
datasets = pd.read_csv('../datasets/anscombe.csv')

for i in range(1, 5):
    dataset = datasets[datasets.Source == i]
    print('Dataset {} (X, Y) mean: {}'.format(i, (dataset.x.mean(), dataset.y.mean())))
print('\n')

for i in range(1, 5):
    dataset = datasets[datasets.Source == i]
    print('Dataset {} (X, Y) std deviation: {}'.format(i, (dataset.x.std(), dataset.y.std())))
print('\n')

for i in range(1, 5):
    dataset = datasets[datasets.Source == i]
    print('Dataset {} correlation between X and Y: {}'.format(i, dataset.x.corr(dataset.y)))
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
ffmmjj/intro_to_data_science_workshop
apache-2.0
They all have roughly the same means, standard deviations and correlations. How similar are they? This dataset is known as Anscombe's Quartet and it's used to illustrate how tricky it can be to trust only summary statistics to characterize a dataset.
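To see this numerically without depending on the `../datasets/anscombe.csv` file, here is a self-contained sketch with the classic quartet values hard-coded: all four datasets share a mean of roughly (9.0, 7.5) and a correlation near 0.816, yet their scatterplots look nothing alike.

```python
import numpy as np

# Hard-coded values of Anscombe's quartet (Anscombe, 1973)
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (xs, ys) in quartet.items():
    xa, ya = np.array(xs, dtype=float), np.array(ys, dtype=float)
    r = np.corrcoef(xa, ya)[0, 1]
    print("{}: mean=({:.2f}, {:.2f}) corr={:.3f}".format(name, xa.mean(), ya.mean(), r))
```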
import matplotlib.pyplot as plt # This line makes the graphs appear as cell outputs rather than in a separate window or file. %matplotlib inline # Extract the house prices and average number of rooms to two separate variables prices = boston_housing_data.medv rooms = boston_housing_data.rm # Create a scatterplot of these two properties using plt.scatter() plt.scatter(rooms, prices) # Specify labels for the X and Y axis plt.xlabel('Number of rooms') plt.ylabel('House price') # Show graph plt.show() # Extract the house prices and average number of rooms to two separate variables prices = boston_housing_data.medv nox = boston_housing_data.nox # Create a scatterplot of these two properties using plt.scatter() plt.scatter(nox, prices) # Specify labels for the X and Y axis plt.xlabel('Nitric oxide concentration') plt.ylabel('House price') # Show graph plt.show()
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
ffmmjj/intro_to_data_science_workshop
apache-2.0
Predicting house prices We could see in the previous graphs that some features have a roughly linear relationship to the house prices. We'll use Scikit-Learn's LinearRegression to model this data and predict house prices from other information. The example below builds a LinearRegression model using the average number of rooms to predict house prices:
from sklearn.linear_model import LinearRegression

x = boston_housing_data.rm.values.reshape(-1, 1)
y = boston_housing_data.medv.values.reshape(-1, 1)

lr = LinearRegression().fit(x, y)
# predict() expects a 2D array of samples, so wrap the single value
lr.predict([[6]])
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
ffmmjj/intro_to_data_science_workshop
apache-2.0
We'll now use all the features in the dataset to predict house prices. Let's start by splitting our data into a training set and a validation set. The training set will be used to train our linear model; the validation set, on the other hand, will be used to assess how accurate our model is.
X = boston_housing_data.drop('medv', axis=1)
t = boston_housing_data.medv.values.reshape(-1, 1)

# Use sklearn's train_test_split() method to split our data into two sets.
# See http://scikit-learn.org/0.17/modules/generated/sklearn.cross_validation.train_test_split.html#sklearn.cross_validation.train_test_split
from sklearn.cross_validation import train_test_split
Xtr, Xts, ytr, yts = train_test_split(X, t)

# Use the training set to build a LinearRegression model
lr = LinearRegression().fit(Xtr, ytr)

# Use the validation set to assess the model's performance.
# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html
from sklearn.metrics import mean_squared_error
mean_squared_error(yts, lr.predict(Xts))
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
ffmmjj/intro_to_data_science_workshop
apache-2.0
Part V: Training an LSTM extraction model In the intro tutorial, we automatically featurized the candidates and trained a linear model over these features. Here, we'll train a more complicated model for relation extraction: an LSTM network. You can read more about LSTMs here or here. An LSTM is a type of recurrent neural network and automatically generates a numerical representation for the candidate based on the sentence text, so there's no need to featurize explicitly as in the intro tutorial. LSTMs take longer to train, and Snorkel doesn't currently support hyperparameter searches for them. We'll train a single model here, but feel free to try out other parameter sets. Just make sure to use the development set - and not the test set - for model selection. Note: Again, training for more epochs than below will greatly improve performance; try it out!
from snorkel.annotations import load_marginals train_marginals = load_marginals(session, split=0) from snorkel.annotations import load_gold_labels L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1) from snorkel.learning import reRNN train_kwargs = { 'lr': 0.01, 'dim': 100, 'n_epochs': 20, 'dropout': 0.5, 'rebalance': 0.25, 'print_freq': 5 } lstm = reRNN(seed=1701, n_threads=None) lstm.train(train, train_marginals, X_dev=dev, Y_dev=L_gold_dev, **train_kwargs)
tutorials/cdr/CDR_Tutorial_3.ipynb
jasontlam/snorkel
apache-2.0
Scoring on the test set Finally, we'll evaluate our performance on the blind test set of 500 documents. We'll load labels similar to how we did for the development set, and use the score function of our extraction model to see how we did.
from load_external_annotations import load_external_labels load_external_labels(session, ChemicalDisease, split=2, annotator='gold') L_gold_test = load_gold_labels(session, annotator_name='gold', split=2) L_gold_test lstm.score(test, L_gold_test)
tutorials/cdr/CDR_Tutorial_3.ipynb
jasontlam/snorkel
apache-2.0
Would actually like to know what kind of score this model gets on the check_test_score script.
%run check_test_score.py run_settings/replicate_8aug.json
notebooks/model_modifications/Tuning Learning Rate.ipynb
Neuroglycerin/neukrill-net-work
mit
So we can guess that the log loss score we're seeing is in fact correct. There are definitely some bugs in the ListDataset code. The other model that we've run using it is the following:
run_settings = neukrill_net.utils.load_run_settings( "run_settings/online_manyaug.json", settings, force=True) model = pylearn2.utils.serial.load(run_settings['alt_picklepath']) plot_monitor(c="valid_objective") plot_monitor(c="train_objective")
notebooks/model_modifications/Tuning Learning Rate.ipynb
Neuroglycerin/neukrill-net-work
mit
Setup a python function that specifies the dynamics
def SIR(U,t,p):
    x,y,z=U
    yNew= p["alpha"] * y * x  # new infections per unit time
    zNew= p["beta"] * y       # new recoveries per unit time
    dx = -yNew
    dy = yNew - zNew
    dz = zNew
    return dx, dy, dz
SIRmodel.ipynb
brujonildo/randomNonlinearDynamics
cc0-1.0
The function SIR above takes three arguments, $U$, $t$, and $p$ that represent the states of the system, the time and the parameters, respectively. Outbreak condition The condition \begin{equation} \frac{\alpha}{\beta}x(t)>1 , \quad y>0 \end{equation} defines a threshold for a full epidemic outbreak. An equivalent condition is \begin{equation} x>\frac{\beta}{\alpha }, \quad y>0 \end{equation} Therefore, with the parameters $(\alpha,\beta)$=(0.5,0.1), there will be an outbreak if the initial condition for $x(t)>1/5$ with $y>0$. Notice that the initial value for $z$ can be interpreted as the initial proportion of immune individuals within the population. The dynamics related to the outbreak condition can be studied by defining a variable $B(t) = x(t) \alpha/\beta$, called by some authors "effective reproductive number". If $x(t)\approx 1$, the corresponding $B(t)$ is called "basic reproductive number", or $R_o$. Let's define a python dictionary containing parameters and initial conditions to perform simulations.
p={"alpha": 0.15, "beta":0.1, "timeStop":300.0, "timeStep":0.01 } p["Ro"]=p["alpha"]/p["beta"] p["sampTimes"]= sc.arange(0,p["timeStop"],p["timeStep"]) N= 1e4; i0= 1e1; r0=0; s0=N-i0-r0 x0=s0/N; y0=i0/N; z0=r0/N; p["ic"]=[x0,y0,z0] print("N=%g with initial conditions (S,I,R)=(%g,%g,%g)"%(N,s0,i0,r0)) print("Initial conditions: ", p["ic"]) print("B(0)=%g"%(p["ic"][0]*p["Ro"]))
SIRmodel.ipynb
brujonildo/randomNonlinearDynamics
cc0-1.0
Integrate numerically and plot the results
# Numerical integration xyz= sc.integrate.odeint(SIR, p["ic"], p["sampTimes"], args=(p,)).transpose() # Calculate the outbreak indicator B= xyz[0]*p["alpha"]/p["beta"] # Figure fig=gr.figure(figsize=(11,5)) gr.ioff() rows=1; cols=2 ax=list() for n in sc.arange(rows*cols): ax.append(fig.add_subplot(rows,cols,n+1)) ax[0].plot(p["sampTimes"], xyz[0], 'k', label=r"$(t,x(t))$") ax[0].plot(p["sampTimes"], xyz[1], 'g', lw=3, label=r"$(t,y(t))$") ax[0].plot(p["sampTimes"], xyz[2], 'b', label=r"$(t,z(t))$") ax[0].plot(p["sampTimes"], B, 'r', label=r"$(t,B(t))$") ax[0].plot([0, p["timeStop"]], [1,1], 'k--', alpha=0.4) ax[1].plot(xyz[0], xyz[1], 'g', lw=3, label=r"$(x(t),y(t))$") ax[1].plot(xyz[0], xyz[2], 'b', label=r"$(x(t),z(t))$") ax[1].plot(xyz[0], B, 'r', label=r"$(x(t),B(t))$") ax[1].plot([0, 1], [1,1], 'k--', alpha=0.4) ax[0].legend(); ax[1].legend(loc="upper left") gr.ion(); gr.draw()
SIRmodel.ipynb
brujonildo/randomNonlinearDynamics
cc0-1.0
Preparing the data Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this. Read the data Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
reviews = pd.read_csv('reviews.txt', header=None) labels = pd.read_csv('labels.txt', header=None)
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Counting word frequency To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class. Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
from collections import Counter total_counts = Counter() for _, row in reviews.iterrows(): total_counts.update(row[0].split(' ')) print("Total words in data set: ", len(total_counts))
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000] print(vocab[:60])
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
print(vocab[-1], ': ', total_counts[vocab[-1]])
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie. Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension. Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
word2idx = {word: i for i, word in enumerate(vocab)}
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Text to vector function Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this: Initialize the word vector with np.zeros; it should be the length of the vocabulary. Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here. For each word in that list, increment the element at the index associated with that word, which you get from word2idx. Note: Since not all words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default return value when the key is missing. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
def text_to_vector(text):
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    for word in text.split(' '):
        idx = word2idx.get(word, None)
        if idx is not None:
            word_vector[idx] += 1
    return np.array(word_vector)
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
If you do this right, the following code should return ``` text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]) ```
text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65]
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Now, run through our entire review data set and convert each review to a word vector.
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_) for ii, (_, text) in enumerate(reviews.iterrows()): word_vectors[ii] = text_to_vector(text[0]) # Printing out the first 5 word vectors word_vectors[:5, :23]
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Train, Validation, Test sets Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Y = (labels=='positive').astype(np.int_)
records = len(labels)

shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9  # fraction of the records used for training

train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)

trainY
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000-element-long vectors to encode our input data, so we need 10000 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units). Output layer The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax. net = tflearn.fully_connected(net, 2, activation='softmax') Training To set how you train the network, use net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. 
The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with the categorical cross-entropy. Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like net = tflearn.input_data([None, 10]) # Input net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden net = tflearn.fully_connected(net, 2, activation='softmax') # Output net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') model = tflearn.DNN(net) Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
# Network building def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() # Inputs net = tflearn.input_data([None, 10000]) # Hidden layer(s) net = tflearn.fully_connected(net, 200, activation='ReLU') net = tflearn.fully_connected(net, 25, activation='ReLU') # Output layer net = tflearn.fully_connected(net, 2, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') model = tflearn.DNN(net) return model
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Initializing the model Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want. Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
model = build_model()
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors. You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
# Training model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Testing After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_) test_accuracy = np.mean(predictions == testY[:,0], axis=0) print("Test accuracy: ", test_accuracy)
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Try out your own text!
# Helper function that uses your model to predict sentiment def test_sentence(sentence): positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1] print('Sentence: {}'.format(sentence)) print('P(positive) = {:.3f} :'.format(positive_prob), 'Positive' if positive_prob > 0.5 else 'Negative') sentence = "Moonlight is by far the best movie of 2016." test_sentence(sentence) sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful" test_sentence(sentence)
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
Hyperparticle/deep-learning-foundation
mit
Pandigital products Problem 32 We shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through 5 pandigital. The product 7254 is unusual, as the identity, 39 × 186 = 7254, containing multiplicand, multiplier, and product is 1 through 9 pandigital. Find the sum of all products whose multiplicand/multiplier/product identity can be written as a 1 through 9 pandigital. HINT: Some products can be obtained in more than one way so be sure to only include it once in your sum.
from math import sqrt from euler import Seq, timer def isPandigital(n): return (range(2, int(sqrt(n))) >> Seq.filter(lambda x: n%x==0) >> Seq.map (lambda x: (str(x) + str(n/x) + str(n)) >> Seq.toSet) >> Seq.exists (lambda x: x == {'1','2','3','4','5','6','7','8','9'})) def p032(): return range(1000, 10000) >> Seq.filter(isPandigital) >> Seq.sum timer(p032)
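As a cross-check of the Seq-pipeline solution above, here is an equivalent sketch in plain Python 3 (not using the notebook's custom euler module). A factor a = 1 would force b = n and duplicate digits, so the search can safely start at 2:

```python
from math import isqrt

def is_pandigital_product(n):
    # n qualifies if some factor pair (a, b) with a*b == n has the digits
    # of a, b and n together forming exactly the set {1..9}
    for a in range(2, isqrt(n) + 1):
        if n % a == 0:
            digits = str(a) + str(n // a) + str(n)
            if len(digits) == 9 and set(digits) == set("123456789"):
                return True
    return False

# digit counts force the product to have exactly 4 digits
total = sum(n for n in range(1000, 10000) if is_pandigital_product(n))
print(total)  # 45228
```

Summing over products (rather than factor pairs) automatically counts each product once.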
euler_031_040.ipynb
mndrake/PythonEuler
mit
Digit canceling fractions Problem 33 The fraction 49/98 is a curious fraction, as an inexperienced mathematician in attempting to simplify it may incorrectly believe that 49/98 = 4/8, which is correct, is obtained by cancelling the 9s. We shall consider fractions like, 30/50 = 3/5, to be trivial examples. There are exactly four non-trivial examples of this type of fraction, less than one in value, and containing two digits in the numerator and denominator. If the product of these four fractions is given in its lowest common terms, find the value of the denominator.
from euler import Seq, GCD, fst, snd, timer def p033(): def is_cancelling(a,b): a_str, b_str = str(a), str(b) for i in range(2): for j in range(2): if a_str[i] == b_str[j]: return float(a_str[not i]) / float(b_str[not j]) == float(a) / float(b) return False def numbers(n): return range(n,100) >> Seq.filter(lambda x: (x%10 != 0) & (x%10 != x/10)) fraction = (numbers(10) >> Seq.collect(lambda x: numbers(x+1) >> Seq.map(lambda y: (x,y))) >> Seq.filter(lambda (x,y): is_cancelling(x,y)) >> Seq.reduce(lambda x,y: (fst(x)*fst(y), snd(x)*snd(y)))) # then define the denominator by the greatest common divisor return snd(fraction) / GCD(fst(fraction), snd(fraction)) timer(p033)
euler_031_040.ipynb
mndrake/PythonEuler
mit
Digit factorials Problem 34 145 is a curious number, as 1! + 4! + 5! = 1 + 24 + 120 = 145. Find the sum of all numbers which are equal to the sum of the factorial of their digits. Note: as 1! = 1 and 2! = 2 are not sums they are not included.
from math import factorial from euler import Seq, fst, timer def p034(): def factsum(n): acc = 0 while n >= 1: acc += factorial(n%10) n /= 10 return acc max_n = (fst(Seq.initInfinite(lambda x: (x, x * factorial(9))) >> Seq.find(lambda (a,b): (10 ** a - 1) > b)) - 1) * factorial(9) def nums(): for i in range(3, max_n + 1): if i == factsum(i): yield i return nums() >> Seq.sum timer(p034)
euler_031_040.ipynb
mndrake/PythonEuler
mit
Circular primes Problem 35 The number, 197, is called a circular prime because all rotations of the digits: 197, 971, and 719, are themselves prime. There are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, and 97. How many circular primes are there below one million?
from euler import Seq, primes, timer def p035(): def contains_even(n): return str(n) >> Seq.map(int) >> Seq.exists(lambda x: x%2==0) def shift(n): str_n = str(n) return int(str_n[1:] + str_n[0]) def circle(n): yield n m = shift(n) while m != n: yield m m = shift(m) p = (primes() >> Seq.filter(lambda n: not(contains_even(n))) >> Seq.takeWhile(lambda x: x<1000000) >> Seq.toList) def next_p(n): return p >> Seq.find(lambda m: m > n) n = 2 while n is not None: if not(all((i in p) for i in circle(n))): for i in circle(n): if i in p: p.remove(i) n = next_p(n) return (p >> Seq.length) + 1 timer(p035)
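Because the solution above mutates the prime list while walking it, a plain-Python cross-check with a simple sieve is a useful sanity test (again, this sketch avoids the custom euler module). Any rotation of a number below one million has the same digit count, so the sieve never needs to grow:

```python
def circular_prime_count(limit=1000000):
    # Sieve of Eratosthenes
    sieve = bytearray([1]) * limit
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, limit, i)))
    count = 0
    for n in range(2, limit):
        if sieve[n]:
            s = str(n)
            # every rotation of the digits must also be prime
            if all(sieve[int(s[k:] + s[:k])] for k in range(1, len(s))):
                count += 1
    return count

count = circular_prime_count()
print(count)  # 55
```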
euler_031_040.ipynb
mndrake/PythonEuler
mit
Double-base palindromes Problem 36 The decimal number, $585 = 1001001001_2$ (binary), is palindromic in both bases. Find the sum of all numbers, less than one million, which are palindromic in base 10 and base 2. (Please note that the palindromic number, in either base, may not include leading zeros.)
from euler import Seq, timer def p036(): def dec_is_palindrome(n): return str(n)[::-1] == str(n) def bin_is_palindrome(n): a = (Seq.unfold(lambda x: (x%2, x/2) if (x != 0) else None, n) >> Seq.toList) return a == list(reversed(a)) return ( range(1,1000001) >> Seq.filter(dec_is_palindrome) >> Seq.filter(bin_is_palindrome) >> Seq.sum) timer(p036)
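Python's bin() makes the base-2 check a one-liner, so a euler-module-free cross-check is very short; bin(n)[2:] strips the '0b' prefix and, since n > 0, has no leading zeros:

```python
def is_palindrome(s):
    return s == s[::-1]

total = sum(n for n in range(1, 1000000)
            if is_palindrome(str(n)) and is_palindrome(bin(n)[2:]))
print(total)  # 872187
```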
euler_031_040.ipynb
mndrake/PythonEuler
mit
Truncatable primes Problem 37 The number 3797 has an interesting property. Being prime itself, it is possible to continuously remove digits from left to right, and remain prime at each stage: 3797, 797, 97, and 7. Similarly we can work from right to left: 3797, 379, 37, and 3. Find the sum of the only eleven primes that are both truncatable from left to right and right to left. NOTE: 2, 3, 5, and 7 are not considered to be truncatable primes.
from euler import Seq, primes, is_prime, timer def p037(): def is_truncatable_prime(n): x = str(n) for i in range(1,len(x)): if not(is_prime(int(x[i:])) & is_prime(int(x[:i]))): return False return True return ( primes() >> Seq.skipWhile(lambda x: x <= 7) >> Seq.filter(is_truncatable_prime) >> Seq.take(11) >> Seq.sum) timer(p037)
euler_031_040.ipynb
mndrake/PythonEuler
mit
Pandigital multiples Problem 38 Take the number 192 and multiply it by each of 1, 2, and 3: 192 × 1 = 192 192 × 2 = 384 192 × 3 = 576 By concatenating each product we get the 1 to 9 pandigital, 192384576. We will call 192384576 the concatenated product of 192 and (1,2,3) The same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5, giving the pandigital, 918273645, which is the concatenated product of 9 and (1,2,3,4,5). What is the largest 1 to 9 pandigital 9-digit number that can be formed as the concatenated product of an integer with (1,2, ... , n) where n > 1?
from euler import Seq, timer # largest integer to test is 9876 (2*x concat x) def p038(): def get_pandigital(num): i = 0 concat_num = '' while len(concat_num) < 9: i += 1 concat_num += str(num * i) if (len(concat_num) == 9) and (sorted(map(int, concat_num)) == range(1,10)): return int(concat_num) else: return None return max(get_pandigital(n) for n in range(9876,0,-1)) timer(p038)
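The same search reads a little more directly in plain Python (no euler module): build the concatenated product until it reaches 9 digits, then accept it only if it is exactly 9 digits and pandigital. For any starting number below 10000 the concatenation needs at least two multiples, so n > 1 holds automatically:

```python
def concat_product(num):
    # concatenate num*1, num*2, ... until at least 9 digits accumulate
    s, i = "", 0
    while len(s) < 9:
        i += 1
        s += str(num * i)
    if i > 1 and len(s) == 9 and set(s) == set("123456789"):
        return int(s)
    return None

candidates = [concat_product(n) for n in range(1, 10000)]
best = max(c for c in candidates if c is not None)
print(best)  # 932718654
```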
euler_031_040.ipynb
mndrake/PythonEuler
mit
Integer right triangles Problem 39 If $p$ is the perimeter of a right angle triangle with integral length sides, ${a,b,c}$, there are exactly three solutions for $p = 120$. ${20,48,52}, {24,45,51}, {30,40,50}$ For which value of $p ≤ 1000$, is the number of solutions maximised?
from euler import Seq, timer def p039(): def sols(p): return sum(1 for a in range(1,p-1) for b in range(a, p-a) if (p - a - b) ** 2 == a ** 2 + b ** 2) return range(3, 1001) >> Seq.maxBy(sols) timer(p039)
euler_031_040.ipynb
mndrake/PythonEuler
mit
Champernowne's constant Problem 40 An irrational decimal fraction is created by concatenating the positive integers: 0.123456789101112131415161718192021... It can be seen that the 12th digit of the fractional part is 1. If $d_n$ represents the nth digit of the fractional part, find the value of the following expression. $d_1 × d_{10} × d_{100} × d_{1000} × d_{10000} × d_{100000} × d_{1000000}$
from euler import Seq, timer

def p040():
    s = "".join(range(1,500001) >> Seq.map(str))
    return (
        Seq.init(7, lambda i: int(s[10 ** i - 1]))
        >> Seq.reduce(lambda x,y: x*y))

timer(p040)
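Without the Seq helpers this is a few lines of plain Python; concatenating 1..200000 already yields more than 1,000,000 digits, so the string is long enough for every index we need:

```python
s = "".join(str(i) for i in range(1, 200001))
assert len(s) > 1000000  # enough digits for d_1000000

product = 1
for k in range(7):
    product *= int(s[10**k - 1])  # d_1, d_10, ..., d_1000000 (1-indexed)
print(product)  # 210
```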
euler_031_040.ipynb
mndrake/PythonEuler
mit
Now, you need the robot and the V-REP time.
from poppy.creatures import PoppyHumanoid poppy = PoppyHumanoid(simulator='vrep') import time as real_time class time: def __init__(self,robot): self.robot=robot def time(self): t_simu = self.robot.current_simulation_time return t_simu def sleep(self,t): t0 = self.robot.current_simulation_time while (self.robot.current_simulation_time - t0) < t-0.01: real_time.sleep(0.001) time = time(poppy) print time.time() time.sleep(0.025) #0.025 is the minimum step according to the V-REP defined dt print time.time()
tutorials-education/poppy-humanoid_balance_leg_math.ipynb
poppy-project/community-notebooks
lgpl-3.0
It is now possible to define a mobility in percentage, according to the angle limits of the ankle.
class leg_move(leg_angle):
    def __init__(self,motor_limit,knee=0):
        self.ankle_limit_front=radians(motor_limit.angle_limit[1])
        self.ankle_limit_back=radians(motor_limit.angle_limit[0])
        leg_angle.__init__(self,knee)

    def update_foot_gap_percent(self,foot_gap_percent):
        # compute foot_gap_max to convert foot_gap_percent into a value
        if foot_gap_percent>=0: # if the foot_gap is positive
            if acos(self.high/(self.shin+self.thigh)) > self.ankle_limit_front:
                # construction 1, knee != 0
                gap1 = sin(self.ankle_limit_front)*self.shin
                high1 = cos(self.ankle_limit_front)*self.shin
                high2 = self.high - high1
                gap2 = sqrt(self.thigh**2-high2**2)
                foot_gap_max = gap1 + gap2
                foot_gap = foot_gap_percent * foot_gap_max / 100
                self.update_foot_gap(foot_gap)
            else:
                # construction 2, knee = 0
                foot_gap_max = sqrt((self.shin+self.thigh)**2-self.high**2)
                foot_gap = foot_gap_percent * foot_gap_max / 100
                self.update_foot_gap(foot_gap)
        if foot_gap_percent<0:
            if -acos((self.high-self.thigh)/self.shin) < self.ankle_limit_back:
                # construction 1, knee != 0
                print degrees(self.ankle_limit_back)
                print degrees(-acos((self.high-self.thigh)/self.shin))
                gap1 = sin(self.ankle_limit_back)*self.shin
                high1 = cos(self.ankle_limit_back)*self.shin
                high2 = self.high - high1
                print gap1,high1,high2
                gap2 = sqrt(self.thigh**2-high2**2)
                print gap1,gap2,high1,high2
                foot_gap_max = gap1 + gap2
                foot_gap = -foot_gap_percent * foot_gap_max / 100
                self.update_foot_gap(foot_gap)
            else:
                # construction 2, knee = 0
                foot_gap_max = sqrt((self.shin+self.thigh)**2-self.high**2)
                foot_gap = foot_gap_percent * foot_gap_max / 100
                self.update_foot_gap(foot_gap)

    def update_high_percent(self,high_percent,high_min,high_max):
        high_var = high_max-high_min
        high = (high_percent*high_var/100)+high_min
        self.update_high(high)

    def high_limit(self):
        high_max = sqrt((self.shin+self.thigh)**2-self.foot_gap**2)
        high1_min = cos(self.ankle_limit_back)*self.shin
        gap2 = self.foot_gap-sin(self.ankle_limit_back)*self.shin
        # if gap2 is greater than thigh, the ankle flexion is no longer
        # the limiting factor; in that case set the height to zero
        if gap2 <= self.thigh:
            high2_min = sqrt(self.thigh**2-gap2**2)
            high_min = high1_min + high2_min
        else:
            high_min = 0
        return [high_min,high_max]
tutorials-education/poppy-humanoid_balance_leg_math.ipynb
poppy-project/community-notebooks
lgpl-3.0
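Before wiring this into a primitive, the geometry behind the maximum foot gap can be sketched in isolation. The following is a minimal sketch with hypothetical segment lengths (not Poppy's real dimensions), showing the two constructions: either the front ankle limit is reached first, or the straight leg (knee = 0) is the only constraint.

```python
from math import acos, cos, sin, sqrt, radians

def foot_gap_max(shin, thigh, high, ankle_limit_front):
    """Largest forward foot gap reachable at a given hip height."""
    if acos(high / (shin + thigh)) > ankle_limit_front:
        # construction 1: the ankle limit is reached before the leg is straight
        gap1 = sin(ankle_limit_front) * shin
        high2 = high - cos(ankle_limit_front) * shin
        return gap1 + sqrt(thigh**2 - high2**2)
    # construction 2: the straight leg (knee = 0) is the only constraint
    return sqrt((shin + thigh)**2 - high**2)

# Hypothetical lengths: shin = thigh = 1 (arbitrary units)
print(foot_gap_max(1.0, 1.0, 1.9, radians(30)))  # construction 2 (leg almost straight)
print(foot_gap_max(1.0, 1.0, 1.0, radians(30)))  # construction 1 (ankle limit reached)
```

At high = 1.9 the knee barely bends, so the straight-leg circle limits the gap; at high = 1.0 the 30° ankle limit is hit first.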
Finally, a primitive can set the high and the foot gap of Poppy.
from pypot.primitive import Primitive

class leg_primitive(Primitive):
    def __init__(self, robot, speed, knee=0):
        # r_ankle_y should be used for the right leg, but its angle limits
        # seem wrong (they are the opposite), so l_ankle_y is used for both
        self.right = leg_move(robot.l_ankle_y, knee)
        self.left = leg_move(robot.l_ankle_y, knee)
        self.robot = robot
        Primitive.__init__(self, robot)
        self.high_percent = 100
        self.r_foot_gap_percent = 0
        self.l_foot_gap_percent = 0
        self.speed = speed

    def run(self):
        if self.high_percent != -1:
            high_limit = (max([self.right.high_limit()[0], self.left.high_limit()[0]]),
                          min([self.right.high_limit()[1], self.left.high_limit()[1]]))
            self.right.update_high_percent(self.high_percent, high_limit[0], high_limit[1])
            self.left.update_high_percent(self.high_percent, high_limit[0], high_limit[1])
        if self.r_foot_gap_percent != -1:
            self.right.update_foot_gap_percent(self.r_foot_gap_percent)
        if self.l_foot_gap_percent != -1:
            self.left.update_foot_gap_percent(self.l_foot_gap_percent)
        print "left - ankle", degrees(self.left.ankle), 'knee', degrees(self.left.knee), \
            'hip', degrees(self.left.hip), 'high', self.left.high, 'foot_gap', self.left.foot_gap
        print "right - ankle", degrees(self.right.ankle), 'knee', degrees(self.right.knee), \
            'hip', degrees(self.right.hip), 'high', self.right.high, 'foot_gap', self.right.foot_gap
        self.robot.l_ankle_y.goto_position(degrees(self.left.ankle), self.speed)
        self.robot.r_ankle_y.goto_position(degrees(self.right.ankle), self.speed)
        self.robot.l_knee_y.goto_position(degrees(self.left.knee), self.speed)
        self.robot.r_knee_y.goto_position(degrees(self.right.knee), self.speed)
        self.robot.l_hip_y.goto_position(degrees(self.left.hip), self.speed)
        self.robot.r_hip_y.goto_position(degrees(self.right.hip), self.speed, wait=True)
tutorials-education/poppy-humanoid_balance_leg_math.ipynb
poppy-project/community-notebooks
lgpl-3.0
It is now possible to set the high and the foot gap using the leg_primitive.
leg = leg_primitive(poppy, speed=3)
leg.start()
time.sleep(1)
time.sleep(1)

leg.speed = 3
leg.high_percent = 50
leg.r_foot_gap_percent = 20
leg.l_foot_gap_percent = -20
leg.start()
time.sleep(3)

leg.high_percent = 100
leg.r_foot_gap_percent = -1
leg.l_foot_gap_percent = -1
leg.start()
time.sleep(3)

leg.high_percent = 0
leg.start()
time.sleep(3)

leg.high_percent = 80
leg.r_foot_gap_percent = -20
leg.l_foot_gap_percent = 20
leg.start()
time.sleep(3)

leg.r_foot_gap_percent = -1
leg.l_foot_gap_percent = -1
leg.high_percent = 0
leg.start()
time.sleep(3)

leg.high_percent = 100
leg.r_foot_gap_percent = 0
leg.l_foot_gap_percent = 0
leg.start()
time.sleep(3)
tutorials-education/poppy-humanoid_balance_leg_math.ipynb
poppy-project/community-notebooks
lgpl-3.0
Parameters are given in the order (r, k).
times = np.linspace(0, 100, 100)
r = 0.1
k = 50
values = model.simulate((r, k), times)

plt.figure(figsize=(15, 2))
plt.xlabel('t')
plt.ylabel('y (Population)')
plt.plot(times, values)
plt.show()
examples/toy/model-logistic.ipynb
martinjrobins/hobo
bsd-3-clause
We can see that, starting from $p_0 = 2$, the model quickly approaches the carrying capacity $k = 50$. We can test that, if we wait long enough, we get very close to $k$:
print(model.simulate((r, k), [40]))
print(model.simulate((r, k), [80]))
print(model.simulate((r, k), [120]))
print(model.simulate((r, k), [160]))
print(model.simulate((r, k), [200]))
print(model.simulate((r, k), [240]))
print(model.simulate((r, k), [280]))
examples/toy/model-logistic.ipynb
martinjrobins/hobo
bsd-3-clause
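The same convergence can be checked against the closed-form solution of the logistic ODE, $p(t) = \frac{k p_0 e^{rt}}{k + p_0 (e^{rt} - 1)}$, using the values above ($p_0 = 2$, $r = 0.1$, $k = 50$). This sketch is independent of the model class:

```python
from math import exp

def logistic(t, p0=2.0, r=0.1, k=50.0):
    """Closed-form solution of dp/dt = r * p * (1 - p / k)."""
    e = exp(r * t)
    return k * p0 * e / (k + p0 * (e - 1))

for t in (40, 80, 120, 160, 200):
    print(t, logistic(t))  # approaches the carrying capacity k = 50
```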
This model also provides sensitivities: derivatives $\frac{\partial y}{\partial p}$ of the output $y$ with respect to the parameters $p$.
values, sensitivities = model.simulateS1((r, k), times)
examples/toy/model-logistic.ipynb
martinjrobins/hobo
bsd-3-clause
We can plot these sensitivities, to see where the model is sensitive to each of the parameters:
plt.figure(figsize=(15, 7))
plt.subplot(3, 1, 1)
plt.ylabel('y (Population)')
plt.plot(times, values)
plt.subplot(3, 1, 2)
plt.ylabel(r'$\partial y/\partial r$')
plt.plot(times, sensitivities[:, 0])
plt.subplot(3, 1, 3)
plt.xlabel('t')
plt.ylabel(r'$\partial y/\partial k$')
plt.plot(times, sensitivities[:, 1])
plt.show()
examples/toy/model-logistic.ipynb
martinjrobins/hobo
bsd-3-clause
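Assuming the toy model follows the standard logistic closed form (an assumption about the model, not verified here), these sensitivities can be sanity-checked with central finite differences on that closed form: $\partial y/\partial r$ and $\partial y/\partial k$ are both positive during growth, and $\partial y/\partial k \to 1$ once the population saturates at $k$.

```python
from math import exp

def logistic(t, p0=2.0, r=0.1, k=50.0):
    e = exp(r * t)
    return k * p0 * e / (k + p0 * (e - 1))

def sens_r(t, h=1e-6):
    """Central finite difference of the population w.r.t. r."""
    return (logistic(t, r=0.1 + h) - logistic(t, r=0.1 - h)) / (2 * h)

def sens_k(t, h=1e-6):
    """Central finite difference of the population w.r.t. k."""
    return (logistic(t, k=50.0 + h) - logistic(t, k=50.0 - h)) / (2 * h)

print(sens_r(10), sens_k(10))  # both positive during growth
print(sens_k(200))             # tends to 1 once the population saturates at k
```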
Compute power spectrum densities of the sources with dSPM Returns an STC file containing the PSD (in dB) of each of the sources.
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)

import matplotlib.pyplot as plt

import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd

print(__doc__)
0.17/_downloads/5c761b4eaf61d9e6642d568c8bc535a2/plot_source_power_spectrum.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' fname_label = data_path + '/MEG/sample/labels/Aud-lh.label' # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, verbose=False) events = mne.find_events(raw, stim_channel='STI 014') inverse_operator = read_inverse_operator(fname_inv) raw.info['bads'] = ['MEG 2443', 'EEG 053'] # picks MEG gradiometers picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True, stim=False, exclude='bads') tmin, tmax = 0, 120 # use the first 120s of data fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2 label = mne.read_label(fname_label) stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM", tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, pick_ori="normal", n_fft=n_fft, label=label, dB=True) stc.save('psd_dSPM')
0.17/_downloads/5c761b4eaf61d9e6642d568c8bc535a2/plot_source_power_spectrum.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
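What "PSD in dB" means can be illustrated on a synthetic signal with plain NumPy. This is a bare periodogram sketch with hypothetical parameters, not MNE's spectral machinery: a 10 Hz tone in noise shows up as a peak in the dB spectrum.

```python
import numpy as np

fs = 600.0                     # sampling frequency in Hz (hypothetical)
t = np.arange(0, 2.0, 1 / fs)  # 2 s of data
rng = np.random.RandomState(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.randn(t.size)  # 10 Hz tone + noise

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * t.size)  # one-sided periodogram
psd_db = 10 * np.log10(psd)    # dB conversion, as requested by dB=True above

print(freqs[np.argmax(psd)])   # the spectral peak sits at the 10 Hz tone
```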
View PSD of sources in label
plt.plot(1e3 * stc.times, stc.data.T) plt.xlabel('Frequency (Hz)') plt.ylabel('PSD (dB)') plt.title('Source Power Spectrum (PSD)') plt.show()
0.17/_downloads/5c761b4eaf61d9e6642d568c8bc535a2/plot_source_power_spectrum.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int.
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    x = [[source_vocab_to_int.get(word, 0) for word in sentence.split()]
         for sentence in source_text.split('\n')]
    y = [[target_vocab_to_int.get(word, 0) for word in sentence.split()]
         for sentence in target_text.split('\n')]

    source_id_text = []
    target_id_text = []
    for i in range(len(x)):
        source_id_text.append(x[i])
        target_id_text.append(y[i] + [target_vocab_to_int['<EOS>']])

    return (source_id_text, target_id_text)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
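The behaviour can be illustrated with a stripped-down re-implementation (`to_ids`, a hypothetical helper mirroring the logic above) on a tiny hand-built vocabulary; the words and ids below are made up, not from the dataset:

```python
source_vocab_to_int = {'<PAD>': 0, 'hello': 2, 'world': 3}
target_vocab_to_int = {'<PAD>': 0, '<EOS>': 1, 'bonjour': 2, 'monde': 3}

def to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # one id list per line; unknown words map to 0; targets get <EOS> appended
    source_id_text = [[source_vocab_to_int.get(w, 0) for w in s.split()]
                      for s in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int.get(w, 0) for w in s.split()]
                      + [target_vocab_to_int['<EOS>']]
                      for s in target_text.split('\n')]
    return source_id_text, target_id_text

src, tgt = to_ids('hello world\nworld hello', 'bonjour monde\nmonde bonjour',
                  source_vocab_to_int, target_vocab_to_int)
print(src)  # [[2, 3], [3, 2]]
print(tgt)  # [[2, 3, 1], [3, 2, 1]] -- each sentence ends with the <EOS> id
```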
Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU
""" DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
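`LooseVersion` is used rather than a plain string comparison because version strings compare numerically, not lexicographically. A minimal sketch of the idea (not the actual `LooseVersion` implementation):

```python
def version_tuple(v):
    """Turn a dotted version string into a tuple of ints for comparison."""
    return tuple(int(p) for p in v.split('.'))

print(version_tuple('1.10') > version_tuple('1.9'))  # True: 10 > 9 numerically
print('1.10' > '1.9')                                # False: '1' < '9' as characters
```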
Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
    input_text = tf.placeholder(tf.int32, [None, None], name="input")
    target_text = tf.placeholder(tf.int32, [None, None], name="targets")
    learning_rate = tf.placeholder(tf.float32, name="learning_rate")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    return input_text, target_text, learning_rate, keep_prob


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # Drop the last word id of each sequence, then prepend the <GO> id
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
    return dec_input


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
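The slice-and-prepend can be pictured with NumPy on a toy batch (the `<GO>` and `<EOS>` ids below are hypothetical): the trailing `<EOS>` of each sequence is dropped and a `<GO>` id is placed in front, so the decoder input is the target shifted right by one step.

```python
import numpy as np

GO, EOS = 1, 2
target_data = np.array([[10, 11, 12, EOS],
                        [20, 21, 22, EOS]])

ending = target_data[:, :-1]  # drop the last word id of each sequence
dec_input = np.concatenate([np.full((target_data.shape[0], 1), GO), ending], axis=1)
print(dec_input)
# [[ 1 10 11 12]
#  [ 1 20 21 22]]
```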
Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
    # Build independent cells; a list comprehension avoids repeating the same
    # cell object across layers, which `[cell] * num_layers` would do
    enc_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
    enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
    _, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32)
    return enc_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
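In TF ≥ 1.1, building stacked cells with `[cell] * num_layers` fails because list multiplication repeats the same object, so every layer would share one cell; a list comprehension creates independent cells. Plain Python shows the difference:

```python
class Cell(object):
    pass

shared = [Cell()] * 3                     # one object, repeated three times
independent = [Cell() for _ in range(3)]  # three distinct objects

print(shared[0] is shared[2])            # True
print(independent[0] is independent[2])  # False
```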
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
                         decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Train Logits
    """
    train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, train_decoder_fn, dec_embed_input, sequence_length,
        scope=decoding_scope)
    # Apply dropout to the decoder outputs before the output layer
    train_logits = output_fn(tf.nn.dropout(train_pred, keep_prob))
    return train_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
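`tf.nn.dropout` implements "inverted" dropout: kept activations are rescaled by `1 / keep_prob`, so the expected output is unchanged and nothing needs rescaling at inference time. A NumPy sketch of that behaviour:

```python
import numpy as np

def dropout(x, keep_prob, rng):
    """Inverted dropout: zero out units, rescale the survivors."""
    mask = rng.uniform(size=x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.RandomState(0)
x = np.ones(10000)
y = dropout(x, 0.5, rng)

print(y.mean())       # close to 1.0: the expectation is preserved
print(np.unique(y))   # values are either 0.0 or 2.0
```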
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, maximum_length, vocab_size, decoding_scope,
                         output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS ID
    :param maximum_length: Maximum length of a sequence
    :param vocab_size: Size of vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Inference Logits
    """
    infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
        output_fn, encoder_state, dec_embeddings, start_of_sequence_id,
        end_of_sequence_id, maximum_length, vocab_size)
    # Dropout is not applied at inference time, so keep_prob is unused here
    inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, infer_decoder_fn, scope=decoding_scope)
    return inference_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output function using lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference.
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
                   sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_state: The encoded state
    :param vocab_size: Size of vocabulary
    :param sequence_length: Sequence Length
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param keep_prob: Dropout keep probability
    :return: Tuple of (Training Logits, Inference Logits)
    """
    dec_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
    dec_cell_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)

    with tf.variable_scope("decoding") as decoding_scope:
        # Output layer, shared between training and inference
        output_fn = lambda x: tf.contrib.layers.fully_connected(
            x, vocab_size, None, scope=decoding_scope)
        train_logits = decoding_layer_train(encoder_state, dec_cell_drop, dec_embed_input,
                                            sequence_length, decoding_scope, output_fn,
                                            keep_prob)

    with tf.variable_scope("decoding", reuse=True) as decoding_scope:
        infer_logits = decoding_layer_infer(encoder_state, dec_cell_drop, dec_embeddings,
                                            target_vocab_to_int['<GO>'],
                                            target_vocab_to_int['<EOS>'],
                                            sequence_length, vocab_size, decoding_scope,
                                            output_fn, keep_prob)

    return train_logits, infer_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
language-translation/dlnd_language_translation_23.ipynb
blua/deep-learning
mit
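The `output_fn` lambda can reference `decoding_scope` even though the scope is created around it, because Python lambdas look names up at call time, not at definition time. The same late binding is easy to demonstrate in plain Python:

```python
late = [lambda: i for i in range(3)]      # all closures see the final value of i
frozen = [lambda i=i: i for i in range(3)]  # default argument freezes each value

print([f() for f in late])    # [2, 2, 2]
print([f() for f in frozen])  # [0, 1, 2]
```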