The plot above shows the points in the embedded space (red dots) in comparison to the real space (blue dots). We can see that the location in the embedded space is basically a projection onto the basis function (and it makes sense mathematically - can you tell why?). What is a good Embedded Space Now the part that I prom...
f, ax = plt.subplots(1, 1, figsize=(9, 9)) ax.scatter(ps[:, 0], ps[:, 1], s=80, alpha=.55) # Reconstruction of the data from the latent-space representation # With both dimensions - if you want to see that one again. # Xhat = np.dot(W, V) + mu # DON'T FORGET TO ADD THE MU # Only one vector Xhat = np.dot(W[:, 1].re...
week9/svd.ipynb
sameersingh/ml-discussions
apache-2.0
In this plot, points that were far from each other are now close to one another. Faces Data Set In the HW assignment you'll basically repeat what I've done with the 2D example, using the Faces data set. To help you with the assignment I'm going to do some of these things and explain what they mean.
X = np.genfromtxt("data/faces.txt", delimiter=None)  # load face dataset
# Running the SVD using the exact same steps.
# Step 1
mu = np.mean(X, axis=0)
X0 = X - mu
# Step 2
U, s, V = svd(X0, full_matrices=False)
# Step 3
W = np.dot(U, np.diag(s))
print('Shape of W = (%d, %d)' % W.shape, 'Shape of V = (%d, %d)' % V....
V and W The part where you have to show the "directions" learned I will keep for the HW assignment; instead I'll just show you the W part -- using the code from the HW assignment. In it, you are going to plot the faces as a function of their W values using only two dimensions (it's hard to plot 3d stuff and impossible to ...
f, ax = plt.subplots(1, 1, figsize=(12, 8))
idx = np.random.choice(X.shape[0], 15, replace=False)
coord, params = ml.transforms.rescale(W[:, 0:2])  # normalize scale of "W" locations
for i in idx:
    loc = (coord[i, 0], coord[i, 0] + 0.5, coord[i, 1], coord[i, 1] + 0.5)  # where to place the image & size
    img = np.r...
Hopefully you had a good random seed here and got faces that are close to each other in the embedded space and are also similar images. If not, it is understandable. Our embedded space in this example has 576 dimensions, which means that even if the images don't look alike in the first 2, it is possible that adding...
print('The original images')
f, ax = plt.subplots(1, 2, figsize=(12, 15))
idx = [88, 14]
for j in range(len(idx)):
    i = idx[j]
    img = np.reshape(X[i, :], (24, 24))  # reshape flattened data into a 24*24 patch
    # We've seen the imshow method in the previous discussion :)
    ax[j].imshow(img.T, cmap="gray")...
The more dimensions we use, the lower the loss is. Why use a smaller K then? If the higher the K we use, the less information we lose, then why use a small number of dimensions? As we said in the discussions, this value is directly related to complexity, both in time and in space. I also showed you a different reason, that is relat...
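To see the trade-off concretely, here is a minimal sketch on synthetic random data (not the faces or movie data): the rank-K reconstruction error shrinks as K grows and hits (numerically) zero once K equals the full dimensionality.

```python
# Sketch: reconstruction MSE as a function of K on random data.
# The names X, mu, W follow the notebook's convention; the data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = U * s  # equivalent to np.dot(U, np.diag(s))

errors = []
for K in (1, 2, 5, 10, 20):
    Xhat = W[:, :K] @ Vt[:K, :] + mu  # rank-K reconstruction
    errors.append(np.mean((X - Xhat) ** 2))
print(errors)  # non-increasing; essentially zero at K = 20
```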
import pickle

# If Python 2.x
sdata = pickle.load(open('./data/data.pkl', 'r'))
movies_info = pickle.load(open('./data/movies_info.pkl', 'r'))

# If Python 3.x (note: pickles must be opened in binary mode)
# sdata = pickle.load(open('./data/data.pkl', 'rb'), encoding='latin1')
# movies_info = pickle.load(open('./data/movies_info.pkl', 'rb'), encoding='latin1')
SVD on the Movie Dataset We are going to do exactly the same thing, with one difference: we will run svds instead of svd. svds is a version of svd that is suitable for sparse data. It has one extra parameter -- k. In svd() we got all the dimensions and took only the first 2; here we are going to specify directly tha...
from scipy.sparse.linalg import svds

mu = sdata.mean(axis=0)
sdata = sdata - mu
U, s, V = svds(sdata, 2)
W = np.dot(U, np.diag(s))
print(W.shape, V.shape)
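As a quick sanity check (a toy matrix, not the movie data), svds returns only the k requested singular triplets -- and, unlike svd, it typically returns the singular values in ascending order:

```python
# Sketch: svds on a small sparse matrix returns exactly k singular triplets.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

A = csr_matrix(np.arange(12, dtype=float).reshape(4, 3))
U, s, Vt = svds(A, k=2)
print(U.shape, s.shape, Vt.shape)  # (4, 2) (2,) (2, 3)
```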
Embedded Space If the embedded space in the geometric example showed directions in 2D, what would it mean for the movies? We can't really plot it, so let's just print the top and bottom 10 movies in each direction. This will give us a clue.
arg_sort_x = np.argsort(V[0])
print('Top 10 on the X direction')
print('---------------------\n')
for i in arg_sort_x[:10]:
    print(movies_info[i])
print('\n\n==================\n\n')
print('Bottom 10 on the X direction')
print('---------------------\n')
for i in arg_sort_x[-10:]:
    print(movies_info[i])
And the second dimension...
arg_sort_y = np.argsort(V[1])
print('Top 10 on the Y direction')
print('---------------------\n')
for i in arg_sort_y[:10]:
    print(movies_info[i])
print('\n\n==================\n\n')
print('Bottom 10 on the Y direction')
print('---------------------\n')
for i in arg_sort_y[-10:]:
    print(movies_info[i])
plot=True only plots the simulation so you can review that it is set up correctly. However, the core and cladding indices of the fiber are very close to 1.44, so it's hard to see. You can also use plot_contour=True to plot only the contour of the simulation shapes.
df20 = gm.write_sparameters_grating()  # fiber_angle_deg = 20
sim.plot.plot_sparameters(df20)
df = gm.write_sparameters_grating(fiber_angle_deg=15)
sim.plot.plot_sparameters(df)
docs/notebooks/plugins/meep/002_gratings.ipynb
gdsfactory/gdsfactory
mit
A NetworkSet can also be constructed directly from:
- a directory containing Touchstone files: NetworkSet.from_dir(),
- a zipfile of Touchstone files: NetworkSet.from_zip(),
- a dictionary of s-parameters: NetworkSet.from_s_dict(),
- a (G)MDIF (.mdf) file: NetworkSet.from_mdif(),
- a CITI (.cti) file: NetworkSe...
ro_ns[0]
doc/source/tutorials/NetworkSet.ipynb
jhillairet/scikit-rf
bsd-3-clause
More info on this can be found in the function skrf.io.general.network_2_spreadsheet. Named Parameters If all the Network objects of a NetworkSet have a params property containing a dictionary of the named parameters and values associated with each Network, it is possible to select the Networks corresponding to a subset...
# dummy named parameters and values 'a', 'X' and 'c'
import numpy as np

params = [
    {'a': 0, 'X': 10, 'c': 'A'},
    {'a': 1, 'X': 10, 'c': 'A'},
    {'a': 2, 'X': 10, 'c': 'A'},
    {'a': 1, 'X': 20, 'c': 'A'},
    {'a': 0, 'X': 20, 'c': 'A'},
]
# create a NetworkSet made of dummy Networks, each defi...
Selecting the sub-NetworkSet matching scalar parameters can be made as:
ns.sel({'a': 1})
ns.sel({'a': 0, 'X': 10})
Selecting the sub-NetworkSet matching a range of parameters also works:
ns.sel({'a': 0, 'X': [10, 20]})
ns.sel({'a': [0, 1], 'X': [10, 20]})
The various named parameter keys and values of the NetworkSet can be retrieved using the dims and coords properties:
ns.dims
ns.coords
Interpolating between the Networks of a NetworkSet It is possible to create new Networks interpolated from the Networks contained in a NetworkSet. If no params properties have been defined for each Network of the NetworkSet, the interpolate_from_network() method can be used to specify an interpolating parameter.
param_x = [1, 2, 3]  # a parameter associated to each Network
x0 = 1.5             # parameter value to interpolate for
interp_ntwk = ro_ns.interpolate_from_network(param_x, x0)
print(interp_ntwk)
An illustrated example is given in the Examples section of the documentation. It is also possible to interpolate using a named parameter when they have been defined:
# Interpolated Network for a=1.2 within X=10 Networks: ns.interpolate_from_params('a', 1.2, {'X': 10})
Agent-type object creation The most basic way of creating a HARK object is to call its constructor (in OOP, the method which creates the object, called by the class name). For $\texttt{PerfForesightConsumerType}$ we need to set: - $T+1$: a consumer's lifespan, called $\texttt{cycles}$ in the code; if $T= \infty$, set $\texttt{cyc...
Example_agent_1 = PerfForesightConsumerType(cycles=0, CRRA = 2.0, Rfree = 1.03, DiscFac = 0.99, LivPrb = 1.0, PermGroFac = 1.0)
examples/Journeys/Quickstart_tutorial/Quick_start_with_solution.ipynb
econ-ark/HARK
apache-2.0
Because we did not assume growth in $Y$ or survival uncertainty, we set these values to 1. The second method involves creating a dictionary: a list of parameter names and values. Here we define the dictionary with the same values as in the first example.
First_dictionary = {
    'CRRA': 2.0,
    'DiscFac': 0.99,
    'Rfree': 1.03,
    'cycles': 0,
    'LivPrb': [1.00],
    'PermGroFac': [1.00],
}
To create an object with a dictionary, use the constructor with the previously defined dictionary as an argument:
Example_agent_2 = PerfForesightConsumerType(**First_dictionary)
Although the first method is easier, we recommend defining a dictionary whenever you create a HARK object. First, it makes your code cleaner. Second, it enables you to create multiple objects from the same dictionary (which will be important when it comes to creating macro classes). The methods presented here work ...
Example_agent_3 = deepcopy(Example_agent_2)
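The dictionary-reuse point can be sketched without HARK at all (using the same dummy parameter values as above): build variants from one base dictionary instead of repeating every constructor argument.

```python
# Sketch: reuse one base dictionary to create several configurations.
base = {
    'CRRA': 2.0,
    'DiscFac': 0.99,
    'Rfree': 1.03,
    'cycles': 0,
    'LivPrb': [1.00],
    'PermGroFac': [1.00],
}
patient = dict(base, DiscFac=0.999)  # identical except for the discount factor
print(base['DiscFac'], patient['DiscFac'])  # 0.99 0.999
```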
Do not use the assignment operator (=), because it does not create a new object. For example, the command:
Example_agent_4 = Example_agent_2
does not create a new object; it only gives a new name to the object Example_agent_2 (this object simply gets two names: Example_agent_2 and Example_agent_4). Modifying parameter values You can easily change a parameter value of the object using the "." operator. For example, to change the discount factor value of the object...
Example_agent_3.DiscFac = 0.95
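Why deepcopy matters can be shown with a minimal sketch in plain Python (a hypothetical Agent class, not HARK's): assignment creates an alias, so mutating through one name is visible through the other, while a deep copy stays independent.

```python
from copy import deepcopy

class Agent:  # stand-in for a HARK agent type
    def __init__(self, DiscFac):
        self.DiscFac = DiscFac

a = Agent(0.99)
alias = a            # '=' only adds a second name for the same object
clone = deepcopy(a)  # an independent copy

alias.DiscFac = 0.95
print(a.DiscFac)      # 0.95 -- changed through the alias
print(clone.DiscFac)  # 0.99 -- the deep copy is unaffected
```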
Solving an agent-type problem To solve the agent-type problem presented in the example, you need to find the value function from the Bellman equations and the policy functions. In our case, the only policy function is the consumption function: a function that, for each age t and cash-on-hand $M_t$, specifies the optimal consumpt...
Example_agent_2.solve()
Solution elements The solve method finds the value function and consumption function for each period t of the consumer's life (in the case of infinite T it specifies only one set of functions: because all the parameters are stable and the lifespan is always infinite, the functions are the same regardless of t). Besides consumption ...
Example_agent_2.solution[0].vFunc
Example_agent_2.solution[0].cFunc
Example_agent_2.solution[0].mNrmMin
As you can see, only mNrmMin can be printed as a value; the value and consumption functions, however, can be plotted. Plotting the solution After the $\texttt{solve}$ method is used, the value and consumption functions can be plotted. HARK's dedicated function for doing so is plot_funcs. As arguments, you need to give a function ...
min_v = Example_agent_2.solution[0].mNrmMin
max_v = -Example_agent_2.solution[0].mNrmMin
print("Consumption function")
plot_funcs([Example_agent_2.solution[0].cFunc], min_v, max_v)
print("Value function")
plot_funcs([Example_agent_2.solution[0].vFunc], min_v, max_v)
Simulation The next step is to simulate the agent's behavior. To do so, you first need to set a few parameters for the simulation: $\texttt{AgentCount}$: number of simulated agents; $\texttt{T_cycle}$: logical parameter which governs the time flow during the simulation (whether it moves forward or backward); $\te...
Simulation_dictionary = { 'AgentCount': 1, 'aNrmInitMean' : 0.0, 'aNrmInitStd' : 0.0, 'pLvlInitMean' : 0.0, 'pLvlInitStd' : 0.0, 'PermGroFacAgg' : 1.0, 'T_cy...
Next, you need to update the object. To do so, we use the setattr function, which adds the parameter values to the object.
for key,value in Simulation_dictionary.items(): setattr(Example_agent_2,key,value)
Finally, you can start the simulation. First, you need to decide which variables you want to track; we choose the asset level and consumption level, which in the code are called $\texttt{aNrmNow}$ and $\texttt{cNrmNow}$. Next, you need to initialize the simulation with the $\texttt{initialize_sim}$ method. Lastly, run the ...
Example_agent_2.track_vars = ['aNrm', 'cNrm']
Example_agent_2.initialize_sim()
Example_agent_2.simulate()
Plotting the simulation Plotting the simulation is a little more complicated than plotting the solution, as you cannot use a dedicated function. Instead, use the matplotlib library. To see the consumption and asset history, use the objects created by the simulation, which contain the history of every agent in each of the simul...
periods= np.linspace(0,1000,1000) asset_level = np.mean(Example_agent_2.history['aNrm'][0:1000], axis = 1) cons_level = np.mean(Example_agent_2.history['cNrm'][0:1000], axis = 1) plt.figure(figsize=(5,5)) plt.plot(periods,asset_level,label='Assets level') plt.plot(periods,cons_level,label='Consumption level') plt.lege...
Now, let's plot the mean asset and consumption increase:
increase_assets = asset_level[1:1000] / asset_level[0:999]
increase_cons = cons_level[1:1000] / cons_level[0:999]
plt.figure(figsize=(5, 5))
plt.plot(periods[1:1000], increase_assets, label='Assets increase')
plt.plot(periods[1:1000], increase_cons, label='Consumption increase')
plt.legend(loc=2)
plt.show()
Exercise Congratulations! You've just learned the basics of the agent-type class in HARK. It is time for some exercises: Exercise 1: create the agent-type object Define a dictionary and then use it to create the agent-type object with the parameters: $\beta = 0.96$ $\rho = 2.0$ $T = \infty$ Risk free interest rate $R=...
#Write your solution here # fill the dictionary and then use it to create the object #First_dictionary = { # 'CRRA' : , # 'DiscFac' : , # 'Rfree' : , # 'cycles' : , # 'LivPrb' : [], # 'PermGroFac' : [], #} #Exercise_agent =
Solution: click on the box on the left to expand the solution
#Solution
First_dictionary = {
    'CRRA': 2.0,
    'DiscFac': 0.96,
    'Rfree': 1.05,
    'cycles': 0,
    'LivPrb': [1.0],
    'PermGroFac': [1.0],
}
Exercise_agent = PerfForesightConsumerType(**First_dictionary)
Exercise 2: Solve the model and plot the value function
#Write your solution here, use methods from "solving the model" subsection
Solution: click on the box on the left to expand the solution
#Solution
Exercise_agent.solve()
min_v = Exercise_agent.solution[0].mNrmMin
max_v = -Exercise_agent.solution[0].mNrmMin
print("Value function")
plot_funcs([Exercise_agent.solution[0].vFunc], min_v, max_v)
Exercise 3: Prepare the simulation Next, prepare the simulation. Assume that there exists initial assets and income heterogeneity. Assume the initial income and asset distributions are log-normal with mean 1 and std = 1. Simulate 1000 agents for 1000 periods. Add the new parameters to the object:
#Write your solution here. #Fill the dictionary #Simulation_dictionary = { 'AgentCount': , # 'aNrmInitMean' : , # 'aNrmInitStd' : , # 'pLvlInitMean' : , # 'pLvlInitStd' : , # 'PermGroFacA...
Solution: click on the box on the left to expand the solution
#Solution Simulation_dictionary = { 'AgentCount': 1000, 'aNrmInitMean' : 0.0, 'aNrmInitStd' : 1.0, 'pLvlInitMean' : 0.0, 'pLvlInitStd' : 1.0, 'PermGroFacAgg' : 1.0, ...
Exercise 4: Simulate
#Write your solution here. Use the commands from "simulation" subsection, track consumption values
Solution: click on the box on the left to expand the solution
#Solution
Exercise_agent.track_vars = ['aNrm', 'cNrm']
Exercise_agent.initialize_sim()
Exercise_agent.simulate()
Exercise 5: Plot the simulations Plot mean consumption level and consumption increase:
#Write your solution here. #Firstly prepare the vectors which you would like to plot: #periods= np.linspace(0,1000,1000) #cons_level = np.mean(Exercise_agent.cNrmNow_hist[0:1000], axis = 1) #increase_cons = cons_level[1:1000]/cons_level[0:999] #next plot your solution
Solution: click on the box on the left to expand the solution
#Solution periods= np.linspace(0,1000,1000) cons_level = np.mean(Exercise_agent.history['cNrm'][0:1000], axis = 1) increase_cons = cons_level[1:1000]/cons_level[0:999] plt.figure(figsize=(5,5)) plt.plot(periods,cons_level,label='Consumption level') plt.legend(loc=2) plt.show() plt.figure(figsize=(5,5)) plt.plot(perio...
PART II: advanced methods for the perfect foresight agent In this part we focus on more complicated cases of the deterministic agent model. In the previous example the survival probability (LivPrb in the code) and the income growth factor (PermGroFac in the code) were...
LifeCycle_dictionary = {
    'CRRA': 2.0,
    'Rfree': 1.04,
    'DiscFac': 0.98,
    'LivPrb': [0.99, 0.98, 0.97, 0.96, 0.95, 0.94, 0.93, 0.92, 0.91, 0.90],
    'PermGroFac': [1.01, 1.01, 1.01, 1.02, 1.00, 0.99, 0.5, 1.0, 1.0, 1.0],
    'cycles': 1,
}
LC_agent = PerfForesightConsumerType(**LifeCycle_dictionary)
Understanding the solution As mentioned in the first part of the tutorial, the solve method finds the value and consumption functions. In case of $\Gamma_t \neq 1.0$, these functions are defined on the space of cash-on-hand arguments normalized by $Y_t$. It is important to remember this when you plot them. N...
LC_agent.solve()
Quick exercise Consider the case when the consumer lives 40 periods with certainty. However, her income performs cycles: during each cycle she experiences two periods of income increase and two of decrease. Create a HARK agent-type object with the income cycles described above. Solve the model. Assume that for each ...
#Write your solution here
Solution: click on the box on the left to expand the solution
#Solution
Cyc_dictionary = {
    'CRRA': 2.0,
    'Rfree': 1.03,
    'DiscFac': 0.96,
    'LivPrb': 4*[1.0],                      # she survives each period with certainty
    'PermGroFac': [1.05, 1.1, 0.95, 0.92],  # two periods of income increase, two of decrease
    'cycles': 10,                           # 10 repetitions of the 4-period cycle = 40 periods
}
Cyc_agent = PerfForesightConsumerType(**Cyc_dictionary)
Cyc_agent.solve()
Methods of plotting the solution $\texttt{plot_funcs()}$ enables you to plot many functions on the same graph. You need to pass them as a vector of functions. To see this, just follow the example: we plot the consumption functions for each age $t$ of the consumer. To get better access to the consumption functions, you ca...
LC_agent.unpack('cFunc')
Next, we set the minimal value of the grid such that at least one of the consumption functions is defined.
min_v = min(LC_agent.solution[t].mNrmMin for t in range(11))
max_v = -min_v
print("Consumption functions")
plot_funcs(LC_agent.cFunc[:], min_v, max_v)
If you want to compare a few functions (e.g. value functions), you can also construct the vector yourself, for example:
print("Value functions")
plot_funcs([LC_agent.solution[0].vFunc, LC_agent.solution[5].vFunc, LC_agent.solution[9].vFunc], min_v, max_v)
Advanced simulation techniques Here we present more advanced simulation techniques with mortal agents and income dynamics. We will also show how to plot the asset distribution of the agents. First, as in part 1 of the tutorial, you need to define a simulation dictionary. However, you need to be careful with T_ag...
Simulation_dictionary = { 'AgentCount': 1000, 'aNrmInitMean' : -10.0, 'aNrmInitStd' : 0.0, 'pLvlInitMean' : 0.0, 'pLvlInitStd' : 1.0, 'PermGroFacAgg' : 1.0, ...
Next, we simulate the economy and plot the mean asset level. However, be careful! $\texttt{aNrmNow}$ gives the asset levels normalized by income. To get the original asset level we need to use $\texttt{aLvlNow}$ (unfortunately, cLvlNow is not implemented).
LC_agent.track_vars = ['aNrm', 'cNrm', 'aLvl']
LC_agent.initialize_sim()
LC_agent.simulate()

periods = np.linspace(0, 200, 200)
assets_level = np.mean(LC_agent.history['aLvl'][0:200], axis=1)
plt.figure(figsize=(5, 5))
plt.plot(periods, assets_level, label='assets level')
plt.legend(loc=2)
plt.show()
As you can see, for the first 10 periods the asset level fluctuates much more. This is because in the first periods the agents born in period 0 strictly dominate the population (as only a small fraction die in the first periods of life). You can simply cut the first observations to get asset levels for more b...
after_burnout = np.mean(LC_agent.history['aLvl'][10:200], axis=1)
plt.figure(figsize=(5, 5))
plt.plot(periods[10:200], after_burnout, label='assets level')
plt.legend(loc=2)
plt.show()
Plotting the distribution of assets When you run similar simulations, often the main interest is not the exact asset/consumption levels during the simulation but rather the general distribution of assets. In our case, we plot the asset distribution. First, get one vector of the asset levels:
sim_wealth = np.reshape(LC_agent.history['aLvl'],-1)
Next, we plot a simple histogram of the asset levels using the standard hist function from the matplotlib library.
print("Wealth distribution histogram")
n, bins, patches = plt.hist(sim_wealth, 100, density=True, range=[0.0, 10.0])
With HARK, you can also easily plot the Lorenz curve. To do so, import some HARK utilities which help us plot the Lorenz curve:
from HARK.utilities import get_lorenz_shares, get_percentiles
Then, use $\texttt{get_lorenz_shares}$ to plot the Lorenz curve.
pctiles = np.linspace(0.001,0.999,15) #SCF_Lorenz_points = get_lorenz_shares(SCF_wealth,weights=SCF_weights,percentiles=pctiles) sim_Lorenz_points = get_lorenz_shares(sim_wealth,percentiles=pctiles) plt.figure(figsize=(5,5)) plt.title('Lorenz curve') plt.plot(pctiles,sim_Lorenz_points,'-b',label='Lorenz curve') plt.pl...
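What a Lorenz share is can be sketched in pure NumPy (with a toy, made-up wealth vector, independent of HARK's get_lorenz_shares): the cumulative fraction of total wealth held below each point of the sorted distribution.

```python
# Sketch: Lorenz ordinates of a toy wealth distribution.
import numpy as np

wealth = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
shares = np.cumsum(np.sort(wealth)) / wealth.sum()  # cumulative wealth shares
print(shares)  # [0.05 0.15 0.3  0.5  1.  ]
```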
Exercise Let's make a model with slightly more realistic assumptions. In the file 'life_table.csv' you find the death probabilities for Americans aged 25-105 in 2017, from the Human Mortality Database. The age-dependent income for American males in the file 'productivity_profile.csv' is deduced from Heathcote et al. (2010). Try...
# Write your solution here
# Firstly import data, you can use this part of code (however, there are other ways to do this)
import sys
import os
sys.path.insert(0, os.path.abspath('..'))

prob_dead = np.genfromtxt('life_table.csv', delimiter=',', skip_header=1)
prob_surv = 1 - prob_dead
# The HARK argument need to...
Solution: click on the box on the left to expand the solution
import sys
import os
sys.path.insert(0, os.path.abspath('..'))

prob_dead = np.genfromtxt('life_table.csv', delimiter=',', skip_header=1)
prob_surv = 1 - prob_dead
prob_surv_list = np.ndarray.tolist(prob_surv[:80])
income_profile = np.genfromtxt('productivity_profile.csv', delimiter=',', skip_header=1)
income_prof...
Load the data from our JSON file. The data is stored as a dictionary of dictionaries in the JSON file. We store it that way because it's easy to add data to the existing master data file. Also, I haven't figured out how to get it into a database yet.
with open('/Users/mac28/CLCrawler/MasterApartmentData.json') as f:
    my_dict = json.load(f)
dframe = DataFrame(my_dict)
dframe = dframe.T
dframe
analysis/First_Analysis.ipynb
rileyrustad/pdxapartmentfinder
mit
Clean up the data a bit Right now 'shared' and 'split' are included in the number of bathrooms. If I were to convert that to a number, I would consider a shared/split bathroom to be half, or 0.5, of a bathroom.
dframe.bath = dframe.bath.replace('shared', 0.5)
dframe.bath = dframe.bath.replace('split', 0.5)
Get rid of null values I haven't figured out the best way to clean this up yet. For now I'm going to drop any rows that have a null value, though I recognize that this is not a good analysis practice. We ended up dropping 2,014 data points, which is a little less than 16% of the data. 😬 Also there were some CRAZY outl...
df = dframe[dframe.price < 10000][['bath', 'bed', 'feet', 'price']].dropna()
df
df.describe()
sns.distplot(df.price)
Let's simplify our data I have a hunch that bedrooms, bathrooms, and square footage have the greatest effect on housing price. Before we get too complicated, let's see how accurate we can be with just a simple set of data
features = df[['bath', 'bed', 'feet']].values
price = df[['price']].values
Split data into Training and Testing Data
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn

features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1, random_state=42)

from sklearn import tree
from sklearn.metrics import r2_score

clf = tree.DecisionTreeRegressor()
clf = clf.fit(features_train, price_train)
...
72% Woot! Wait, is that even good? I think that for the most part, it's pretty bad, but for our first run through, with super simple data, I'm willing to go with it. What does it look like?
plt.scatter(pred,price_test)
Ok, so we can see that there's at least a relationship, which we already knew from the R^2 score. Visually it looks like there is more variation in the prices, and that we're better at predicting on the higher end, but it could very well be that we just have WAY more data on the lower end. Remember our plot from before?...
sns.distplot(df.price)
Ok, so we tried decision trees. Let's try decision trees on steroids. Random Forest!
from sklearn.ensemble import RandomForestRegressor

reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
r2_score(price_test, forest_pred)  # note: r2_score expects (y_true, y_pred)
Hmmm, the accuracy isn't actually that much higher? Maybe we're overfitting the data? Maybe not enough features? Since the dataset is relatively small, let's try upping the number of "trees" that we use. We'll up it from the default 10 to 100.
reg = RandomForestRegressor(n_estimators=100)
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
r2_score(price_test, forest_pred)
Still no difference. OK, I can take a hint. Let's look into over-fitting.
reg = RandomForestRegressor(min_samples_split=20)
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
r2_score(price_test, forest_pred)
Shoot, we got worse. Feel free to play with it; we get better at predicting as min_samples_split goes down. Let's try a more complicated set of data.
df2 = dframe[dframe.price < 10000][['bath', 'bed', 'feet', 'dog', 'cat', 'content', 'getphotos', 'hasmap', 'price']].dropna()
df2
features = df2[['bath', 'bed', 'feet', 'dog', 'cat', 'content', 'getphotos', 'hasmap']].values
price = df2[['price']].values
features_train, features_test, price_train, price_test = train_test_split(...
Ahhhhh! We're just getting worse!
plt.scatter(pred,price_test)
Let's try tuning the number of trees one more time:
reg = RandomForestRegressor(n_estimators=50)
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
r2_score(price_test, forest_pred)
plt.scatter(forest_pred, price_test)
While conditionals: return to the discussion. Follow-up on yesterday's discussion on conditionals. More on conditionals: * is and is not * the difference between is and ==
4 is 2**2
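A minimal sketch of the difference: `is` tests object identity while `==` tests value equality. (The small-integer caching that makes `4 is 2**2` come out True is a CPython implementation detail, so always use `==` for value comparison.)

```python
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)  # True: same contents
print(a is b)  # False: two distinct objects
c = a
print(a is c)  # True: two names for one object
```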
wk0/notebooks/wk0.1.ipynb
nsrchemie/code_guild
mit
Comparison priority: not has the highest priority and or the lowest (and binds tighter than or), so that A and not B or C is equivalent to (A and (not B)) or C. Short circuit operators
#1000 == 10**3 #1000 is 10**3 #4 is 2**2 #python stores small values, whereas with 10**3 it has to calculate #a = [1,2,3] #c =[1,2,3] #b =a #a is c #a[:] is not b #for x in range(10): # print (x) #10 <= x and x < 20 #x = 1.5 #if x < 10 and x > 0: # print ("x is between ten and zero") #x = 'a' #if x == 'a...
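The precedence claim is easy to verify by brute force over all eight truth assignments:

```python
# Check that `A and not B or C` parses as `(A and (not B)) or C`.
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            assert (A and not B or C) == ((A and (not B)) or C)
print("precedence confirmed for all 8 cases")
```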
wk0/notebooks/wk0.1.ipynb
nsrchemie/code_guild
mit
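Both rules above can be checked directly. The first assertion exercises the not/and/or precedence; the boom helper (a throwaway name for illustration) proves the right operand is never evaluated when the left one already decides the result:

```python
A, B, C = True, True, False
# not binds tightest, or loosest
assert (A and not B or C) == ((A and (not B)) or C)

def boom():
    raise RuntimeError("should never be evaluated")

# short-circuiting: the right operand is skipped when the answer is known
assert (False and boom()) is False
assert (True or boom()) is True
```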
More string and list practice Add elements from a list L to the end of another list in at least three different ways.
a = [1, 2, 3]
L = [4, 5, 6]
a[len(a):] = L  # slice assignment appends L in place
a
wk0/notebooks/wk0.1.ipynb
nsrchemie/code_guild
mit
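Three ways, sketched side by side:

```python
a1, a2, a3 = [1, 2, 3], [1, 2, 3], [1, 2, 3]
L = [4, 5, 6]
a1.extend(L)       # 1. extend
a2 += L            # 2. augmented assignment (extends in place)
a3[len(a3):] = L   # 3. slice assignment at the end
assert a1 == a2 == a3 == [1, 2, 3, 4, 5, 6]
```

A fourth option, a1 + L, builds a new list instead of modifying one in place.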
Split a sentence into a list of words (no whitespace). How would you split on a comma or semicolon? Read up on how to format strings.
#0.
a = ['x',2,3]
b = [4,5,6]
f = a.pop()
b.append(f)
g = a.pop()
b.append(g)
h = a.pop()
b.append(h)
b
w = [1,2,3]
w.insert(3,4)
w
'Hello my name is Nick'.split()
a.index('x')
a.count('x')
'hello'.capitalize()
'hello'.upper()
a = [1,2,3]
L = [4,5,6]
a.extend(L)
a
a = [1,2,3]
L = [4,5,6]
a + L
a = [1,2,3]
L ...
wk0/notebooks/wk0.1.ipynb
nsrchemie/code_guild
mit
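str.split with no argument handles whitespace; splitting on either a comma or a semicolon is easiest with a regular-expression character class (the example strings are my own):

```python
import re

s = "Hello my name is Nick"
assert s.split() == ["Hello", "my", "name", "is", "Nick"]

# splitting on comma OR semicolon with re.split and a character class
shopping = "apples, pears; plums"
assert re.split(r"[,;]\s*", shopping) == ["apples", "pears", "plums"]
```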
Intro to pins assignment. Intro to try/except and error handling. Syntax errors vs. exceptions.
10 * (1/0)   # ZeroDivisionError
4 + spam*3   # NameError (spam is not defined)
'2' + 2      # TypeError
wk0/notebooks/wk0.1.ipynb
nsrchemie/code_guild
mit
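Each of the three expressions above raises a different exception type; a quick sketch that catches and records them (using eval on the literal expression strings):

```python
caught = []
for expr in ["10 * (1/0)", "4 + spam*3", "'2' + 2"]:
    try:
        eval(expr)
    except Exception as exc:
        caught.append(type(exc).__name__)

assert caught == ["ZeroDivisionError", "NameError", "TypeError"]
```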
Handling exceptions
while True:
    try:
        x = int(input("Please enter a number: "))
        break
    except ValueError:
        print("Oops! That was no valid number. Try again...")
b == a
b = a[:]
wk0/notebooks/wk0.1.ipynb
nsrchemie/code_guild
mit
Rayleigh Quotient MarkII We want to mix the last two functions we saw in the exercise: the shape associated with a load applied to the tip and the shape associated with a uniformly distributed load. We start by defining a number of variables that point to Symbol objects,
z, h, r0, dr, t, E, rho, zeta = symbols('z H r_0 Delta t E rho zeta')
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
We define the tip-load function starting from the expression of the bending moment, just a linear function that is 0 for $z=H$... we integrate twice and get the displacements, bar the constants of integration, which both happen to be zero due to the clamped end at $z=0$, implying that $\psi...
f12 = h - z
f11 = integrate(f12, z)
f10 = integrate(f11, z)
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
We have no scaling in place... we have to scale our function correctly by evaluating it for $z=H$
scale_factor = f10.subs(z,h)
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
Dividing our shape function (and its derivatives) by this particular scale factor, we have, of course, a unit value of the tip displacement.
f10 /= scale_factor
f11 /= scale_factor
f12 /= scale_factor
f10, f11, f12, f10.subs(z, h)
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
We repeat the same procedure to compute the shape function for a constant distributed load, here the constraint on the bending moment is that both the moment and the shear are zero for $z=H$, so the non-normalized expression for $M_b\propto \psi_2''$ is
f22 = h*h/2 - h*z + z*z/2
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
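The quadratic above satisfies both free-tip conditions: the moment and its derivative (the shear) vanish at $z=H$. A quick plain-Python check (the value of H is an arbitrary choice for the test):

```python
H = 32.0
moment = lambda z: H*H/2 - H*z + z*z/2   # proportional to psi_2''
shear = lambda z: z - H                  # d(moment)/dz

assert abs(moment(H)) < 1e-9   # zero bending moment at the tip
assert abs(shear(H)) < 1e-9    # zero shear at the tip
```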
The rest of the derivation is the same
f21 = integrate(f22, z)
f20 = integrate(f21, z)
scale_factor = f20.subs(z, h)
f20 /= scale_factor
f21 /= scale_factor
f22 /= scale_factor
f20, f21, f22, f20.subs(z, h)
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
To combine the two shapes in the right way we write $$\psi = \alpha\,\psi_1+(1-\alpha)\,\psi_2$$ so that $\psi(H)=1$. Note that the shape function depends on one parameter, $\alpha$, and we can minimize the Rayleigh Quotient with respect to $\alpha$.
a = symbols('alpha')
f0 = a*f10 + (1-a)*f20
f2 = diff(f0, z, 2)
f0.expand().collect(z), f2.expand().collect(z), f0.subs(z, h)
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
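Because both shapes are normalized to 1 at the tip, $\psi(H)=\alpha+(1-\alpha)=1$ for every $\alpha$. A plain-Python check using the closed forms the integrations above produce (coefficients worked out by hand, so treat them as my own derivation):

```python
def psi1(z, H):
    # tip-load shape: integrate (H - z) twice, scaled so psi1(H) == 1
    return (3*H*z**2 - z**3) / (2*H**3)

def psi2(z, H):
    # distributed-load shape: integrate the quadratic moment twice, same scaling
    return (6*H**2*z**2 - 4*H*z**3 + z**4) / (3*H**4)

def psi(z, H, alpha):
    return alpha*psi1(z, H) + (1 - alpha)*psi2(z, H)

H = 32.0
for alpha in (-1.0, 0.0, 0.5, 1.7, 3.0):
    assert abs(psi(H, H, alpha) - 1.0) < 1e-9  # unit tip displacement, any alpha
```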
Working with symbols, we don't need to formally define a Python function; it suffices to bind a name to a symbolic expression. That's done for the different variable quantities that model our problem, and using these named expressions we can compute the denominator and the numerator of the Rayleigh Quotient.
re = r0 - dr * z/h
ri = re - t
A = pi*(re**2 - ri**2)
J = pi*(re**4 - ri**4)/4
fm = rho*A*f0**2
fs = E*J*f2**2
mstar = 80000 + integrate(fm, (z, 0, h))
kstar = integrate(fs, (z, 0, h))
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
Our problem is characterized by a set of numerical values for the different basic variables:
values = {E: 30000000000, h: 32, rho: 2500,
          t: Rational(1,4), r0: Rational(18,10), dr: Rational(6,10)}
values
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
We can substitute these values in the numerator and denominator of the RQ
display(mstar.subs(values))
display(kstar.subs(values))
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
Let's look at the RQ as a function of $\alpha$, with successive refinements
rq = (kstar/mstar).subs(values)
plot(rq, (a, -3, 3));
plot(rq, (a, 1, 3));
plot(rq, (a, 1.5, 2.0));
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
Here we do the following: differentiate the RQ and obtain a numerical function (rather than a symbolic expression) using the lambdify function. Using a root-finding function (here bisect from the scipy.optimize collection) we find the location of the minimum of the RQ. Display the location of the minimum. Display the shape functi...
rqdiff = lambdify(a, rq.diff(a))
from scipy.optimize import bisect
a_0 = bisect(rqdiff, 1.6, 1.9)
display(a_0)
display(f0.expand().subs(a, a_0).subs(z, zeta*h))
rq.subs(a, a_0).evalf()
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
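The bisect call above can be sketched in pure Python; this is the textbook bisection idea under the assumption that the two endpoints bracket a sign change, not the scipy implementation (which adds tolerances and iteration limits):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# cos changes sign between 1 and 2, so this converges to pi/2
root = bisect(math.cos, 1.0, 2.0)
assert abs(root - math.pi / 2) < 1e-9
```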
Oh, we have (re)discovered the Ritz method! And we have the best solution so far...
# usual incantation
from IPython.display import HTML
HTML(open('00_custom.css').read())
dati_2015/ha03/04_Rayleigh_MkII.ipynb
boffi/boffi.github.io
mit
Model background This example is based on the synthetic classroom model of Freyberg(1988). The model is a 2-dimensional MODFLOW model with 1 layer, 40 rows, and 20 columns. The model has 2 stress periods: an initial steady-state stress period used for calibration, and a 5-year transient stress period. The calibrat...
import flopy
# load the model
model_ws = os.path.join("Freyberg", "extra_crispy")
ml = flopy.modflow.Modflow.load("freyberg.nam", model_ws=model_ws)
# Because this model is old -- it predates flopy's modelgrid implementation.
# And because modelgrid has been implemented without backward compatibility,
# the modelgrid ...
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
The plot shows the Freyberg (1988) model domain. The colorflood is the hydraulic conductivity $\left(\frac{m}{d}\right)$. Red and green cells correspond to well-type and river-type boundary conditions. Blue dots show the locations of water levels used for calibration. Using pyemu
import pyemu
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
First we need to create a linear_analysis object of the schur derived type, which replicates the behavior of the PREDUNC suite of PEST for calculating posterior parameter covariance. We pass it the name of the Jacobian matrix file. Since we don't pass an explicit argument for parcov or obscov, pyemu attempts to buil...
# just set the path and filename for the jco file
jco = os.path.join("Freyberg", "freyberg.jcb")
# use the jco name with extension "pst" for the control file
pst = pyemu.Pst(jco.replace(".jcb", ".pst"))
# get the list of forecast names from the pest++ argument
la = pyemu.Schur(jco=jco, pst=pst, verbose="schur_examp...
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
General parameter uncertainty analysis--evaluating posterior parameter covariance Let's calculate and save the posterior parameter covariance matrix. In this linear analysis, the posterior covariance represents the updated covariance following notional calibration as represented by the Jacobian matrix and both prior pa...
# writes posterior covariance to a text file
la.posterior_parameter.to_ascii(jco+"_post.cov")
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
You can open this file (it will be called freyberg.jcb_post.cov) in a text editor to examine it. The diagonal of this matrix is the posterior variance of each parameter. Since we already calculated the posterior parameter covariance matrix, additional calls to the posterior_parameter decorated methods only require access...
# easy to read in the notebook
la.posterior_parameter.to_dataframe().sort_index().\
    sort_index(axis=1).iloc[0:3, 0:3]
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
We can see the posterior variance of each parameter along the diagonal of this matrix. Now, let's make a simple plot of prior vs posterior uncertainty for the 761 parameters. The .get_parameter_summary() method is the easy way:
# get the parameter uncertainty dataframe and sort it
par_sum = la.get_parameter_summary().\
    sort_values("percent_reduction", ascending=False)
# make a bar plot of the percent reduction
par_sum.loc[par_sum.index[:20], "percent_reduction"].\
    plot(kind="bar", figsize=(10,4), edgecolor="none")
# echo the first 10 entries ...
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
We can see that calibrating the model to the 12 water levels reduces the uncertainty of the calibration-period recharge parameter (rch_0) by 43%. The uncertainty of the hydraulic conductivity in many model cells is also reduced. Now let's look at the other end of the parameter uncertainty summary -- the values with the le...
# sort in increasing order without 'ascending=False'
par_sum = la.get_parameter_summary().sort_values("percent_reduction")
# plot the first 20
par_sum.loc[par_sum.index[:20], "percent_reduction"].\
    plot(kind="bar", figsize=(10,4), edgecolor="none")
# echo the first 20
par_sum.iloc[0:20, :]
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
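The percent_reduction column is just the relative drop from prior to posterior variance; a minimal sketch of the formula (my own helper, not a pyemu API):

```python
def percent_reduction(prior_var, post_var):
    # 100 * (prior - posterior) / prior
    return 100.0 * (prior_var - post_var) / prior_var

# a 1.0 -> 0.57 drop in variance is a 43% reduction, like rch_0 above
assert abs(percent_reduction(1.0, 0.57) - 43.0) < 1e-9
assert percent_reduction(2.0, 2.0) == 0.0   # calibration learned nothing
```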
We see that several parameters are unaffected by calibration - these are mostly parameters that represent forecast period uncertainty (parameters that end with _2), but there are also some hydraulic conductivities that are uninformed by the 12 water level observations. The naming conventions for the hydraulic conductiv...
ml.modelgrid.extent, ml.modelgrid.xoffset, ml.modelgrid.xvertices
hk_pars = par_sum.loc[par_sum.groupby(lambda x: "hk" in x).groups[True], :]
hk_pars.loc[:, "names"] = hk_pars.index
names = hk_pars.names
# use the parameter names to parse out row and column locations
hk_pars.loc[:, "i"] = names.apply(lambda x: int(x[3:5])...
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
As expected, most of the information in the observations reduces uncertainty for the hydraulic conductivity parameters near the observations themselves. Areas farther from the observations experience less reduction in uncertainty from calibration. Forecast uncertainty Now let's examine the prior and posterior varianc...
# get the forecast summary then make a bar chart of the percent_reduction column
fig = plt.figure(figsize=(4,4))
ax = plt.subplot(111)
ax = la.get_forecast_summary().percent_reduction.plot(kind='bar', ax=ax, grid=True)
ax.set_ylabel("percent uncertainy\nreduction fro...
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
Notice the spread on the uncertainty reduction: some forecasts benefit more from calibration than others. For example, or28c05_0, the calibration-period water level forecast, benefits from calibration since its uncertainty is reduced by about 75%, while sw_gw_1, the forecast-period surface-water groundwater exchange f...
df = la.get_par_group_contribution()
df
# calc the percent reduction in posterior
df_percent = 100.0 * (df.loc["base", :] - df) / df.loc["base", :]
# drop the base column
df_percent = df_percent.iloc[1:, :]
# transpose and plot
ax = df_percent.T.plot(kind="bar", ylim=[0, 100], figsize=(8, 5))
ax.grid()
plt....
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
We see some interesting results here. The sw-gw flux during calibration (sw_gw_0) is influenced by both recharge and hk uncertainty, but the forecast period sw-gw flux is influenced most by recharge uncertainty. For the water level forecasts (or28c05_0 and or28c05_1), the results are similar: the forecast of water lev...
pnames = la.pst.par_names
fore_names = [pname for pname in pnames if pname.endswith("_2")]
props = [pname for pname in pnames
         if pname[:2] in ["hk", "ss", "sy", "rc"] and "rch" not in pname]
cal_names = [pname for pname in pnames if pname.endswith("_1")]
pdict = {'forecast forcing': fore_names, "properties": props, ...
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause
These results are also intuitive. For both forecasts originating from the second model stress period (the "forecast" period), the forecast-period forcings (which represent future recharge and future water use) play a role in reducing forecast uncertainty for the forecast period. Calibration forcings (current recharge a...
df_worth = la.get_removed_obs_importance()
df_worth
examples/Schurexample_freyberg.ipynb
jtwhite79/pyemu
bsd-3-clause